Paper Contents
Abstract
Conventional assistive technologies rarely support real-time interaction, forcing users to switch between several devices to complete a given task. This paper proposes a solution: smart glasses that integrate object detection, sign language recognition, and text-to-speech (TTS) technology into a single wearable device. Using state-of-the-art AI models, sensor fusion, and mechatronic processing on embedded hardware, the system improves real-time environmental awareness and communication for people with disabilities. Practical implementations illustrate how the proposed solution provides greater accessibility and autonomy for people with visual impairments, speech or hearing disabilities, and mobility challenges.
Copyright
Copyright © 2025 Sahil Sanjay Gurav. This is an open access article distributed under the Creative Commons Attribution License.