Abstract
Sign language is a crucial means of communication for individuals with hearing and speech impairments, enabling them to express thoughts, emotions, and needs without spoken words. In real-life situations such as education, healthcare, customer service, and public interactions, sign language bridges the gap between the hearing and non-hearing communities. However, many people do not understand sign language, which creates a communication barrier. To address this, we developed a real-time Sign Language Recognition System that combines hand gestures with facial expressions, making sign interpretation more accurate and emotionally expressive.

The system uses MediaPipe to extract 21 landmarks from each hand and 468 facial landmarks, capturing both physical gestures and subtle facial cues, while OpenCV handles real-time webcam capture and frame processing. The extracted features form a dataset of 1530 values per sample, which is then processed and used to train a Random Forest classification model. This algorithm was chosen for its high accuracy, its ability to handle large feature sets, and its resistance to overfitting. The trained model recognizes multiple predefined gestures and outputs the detected sign, with optional audio feedback for better user interaction.

The proposed system has practical applications in schools, hospitals, customer support counters, and everyday communication where an interpreter may not be available. By combining facial and hand movement analysis, it brings a more human-like understanding of sign language to computers. Overall, this project demonstrates how computer vision and machine learning can be used together to build an intelligent, real-time tool that promotes accessibility and inclusion in society.
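The 1530-value feature vector follows directly from the landmark counts in the abstract, assuming each MediaPipe landmark contributes three coordinates (x, y, z) — the convention used by MediaPipe Hands and Face Mesh, though the exact per-landmark layout is not spelled out in the paper. A minimal sketch of the arithmetic:

```python
# Feature-vector size implied by the abstract's landmark counts.
# Assumption: each landmark yields 3 coordinates (x, y, z), as in
# MediaPipe's Hands and Face Mesh solutions.

HAND_LANDMARKS = 21      # landmarks per hand (MediaPipe Hands)
NUM_HANDS = 2            # left and right hand
FACE_LANDMARKS = 468     # MediaPipe Face Mesh landmarks
COORDS_PER_LANDMARK = 3  # x, y, z

hand_features = NUM_HANDS * HAND_LANDMARKS * COORDS_PER_LANDMARK  # 126
face_features = FACE_LANDMARKS * COORDS_PER_LANDMARK              # 1404
total_features = hand_features + face_features

print(total_features)  # → 1530
```

The total matches the 1530 values per sample reported above, which supports the x-y-z interpretation of each landmark.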
Copyright
Copyright © 2025 Mrs. B. Divya. This is an open access article distributed under the Creative Commons Attribution License.