Paper ID: 2406.03729
Enhancing Sign Language Detection through Mediapipe and Convolutional Neural Networks (CNN)
Aditya Raj Verma, Gagandeep Singh, Karnim Meghwal, Banawath Ramji, Praveen Kumar Dadheech
This research combines MediaPipe and convolutional neural networks (CNNs) for efficient and accurate real-time detection of American Sign Language (ASL). The system presented here captures and processes hand gestures in real time; the intended purpose is to provide an easy, accurate, and fast way of entering commands without the need to touch anything. MediaPipe provides a powerful framework for real-time hand tracking, capturing and preprocessing hand movements and thereby increasing the accuracy of the gesture recognition system. Integrating a CNN with MediaPipe further improves the efficiency of real-time processing. The model was tested on ASL datasets, where it achieved an accuracy of 99.12%, and the results were compared with those of existing methods using established evaluation techniques. The system has applications in the communication, education, and accessibility domains; improving systems such as the one described in this paper will assist people with hearing impairments and make information more accessible to them. We evaluated recognition and translation performance on an ASL dataset and achieved better accuracy than previous work. The aim of the research is to identify the characters of American Sign Language from hand images captured by a web camera, based on MediaPipe and CNNs.
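As a rough illustration of the pipeline the abstract describes (MediaPipe hand tracking feeding a CNN classifier), the following is a minimal sketch, not the authors' implementation. It assumes a hypothetical pretrained Keras model file "asl_cnn.h5" that classifies 64x64 RGB hand crops into a hypothetical A-Z label set; MediaPipe Hands supplies the landmarks used to crop the hand region from each webcam frame.

```python
# Sketch: MediaPipe hand tracking -> crop hand region -> CNN classification.
# "asl_cnn.h5" and LABELS are assumptions for illustration, not from the paper.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # hypothetical label set
model = tf.keras.models.load_model("asl_cnn.h5")          # hypothetical model file

mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)  # default webcam

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = hands.process(rgb)  # MediaPipe real-time hand tracking
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            h, w, _ = frame.shape
            # Bounding box around the 21 hand landmarks, with a small margin.
            xs = [int(p.x * w) for p in lm]
            ys = [int(p.y * h) for p in lm]
            x1, x2 = max(min(xs) - 20, 0), min(max(xs) + 20, w)
            y1, y2 = max(min(ys) - 20, 0), min(max(ys) + 20, h)
            crop = rgb[y1:y2, x1:x2]
            if crop.size:
                # Resize and normalize the crop, then classify with the CNN.
                inp = cv2.resize(crop, (64, 64)).astype(np.float32) / 255.0
                probs = model.predict(inp[None, ...], verbose=0)[0]
                pred = LABELS[int(np.argmax(probs))]
                cv2.putText(frame, pred, (x1, y1 - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("ASL detection", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

cap.release()
cv2.destroyAllWindows()
```

Cropping on MediaPipe's landmarks rather than running the CNN on the full frame is what makes the touch-free, real-time setup practical: the classifier only ever sees a tight, normalized hand region.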
Submitted: Jun 6, 2024