A Machine Learning Framework and Method to Translate Speech to Real-time Sign Language for AR glasses
Rahul Solleti
Abstract
Communication barriers persist for individuals with hearing disabilities because sign language proficiency remains uncommon in the general population. This research develops an approach to reduce those barriers by translating speech into a configurable Sign Language (cSL). Since acquiring sign language skills is demanding, this study proposes a practical solution that combines speech recognition, image processing, and machine learning. Sign languages have substantially improved communication accessibility for the deaf and hard-of-hearing communities. We propose a real-time system that recognizes speech using Recurrent Neural Networks (RNNs) and applies sequence-to-sequence learning to convert the recognized speech into text. The text is then translated into cSL with a machine learning framework and rendered as a sequence of images displayed on augmented reality glasses.
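To make the proposed pipeline concrete, the following Python sketch traces the stages the abstract describes: speech recognition, text conversion, translation into cSL, and image display on AR glasses. This is a minimal illustration, not the paper's implementation; every function name, the gloss lexicon, and the image paths are hypothetical placeholders, and the RNN/seq2seq models are stubbed out.

```python
# Hypothetical sketch of the speech -> text -> cSL -> AR display pipeline.
# The model internals (RNN acoustic model, seq2seq decoder, ML translation
# step) are placeholders; the paper does not publish source code.

from dataclasses import dataclass
from typing import List


@dataclass
class SignFrame:
    """One image in the rendered cSL sequence."""
    image_path: str


# Hypothetical word-level gloss lexicon for the configurable Sign Language.
CSL_LEXICON = {
    "hello": SignFrame("signs/hello.png"),
    "thank": SignFrame("signs/thank.png"),
    "you": SignFrame("signs/you.png"),
}


def recognize_speech(audio_features: List[float]) -> str:
    """Stand-in for the RNN recognizer plus seq2seq decoder.

    A real system would run an RNN over audio features and decode the
    output token sequence into text; here we return a fixed transcript.
    """
    return "hello thank you"


def translate_to_csl(text: str) -> List[SignFrame]:
    """Approximate the ML translation step with a lexicon lookup.

    Words without a known sign are skipped in this simplified sketch.
    """
    return [CSL_LEXICON[w] for w in text.lower().split() if w in CSL_LEXICON]


def render_on_ar_glasses(frames: List[SignFrame]) -> None:
    """Stand-in for the AR display layer: show each sign image in order."""
    for frame in frames:
        print(f"display {frame.image_path}")


if __name__ == "__main__":
    transcript = recognize_speech(audio_features=[])
    render_on_ar_glasses(translate_to_csl(transcript))
```

In a deployed system, the lexicon lookup would be replaced by the learned translation model and the `print` call by the glasses' rendering API, but the data flow between stages would remain as shown.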
Copyright
Copyright © 2023 Rahul Solleti. This is an open access article distributed under the Creative Commons Attribution License.