Sign Language Recognition
Sign language recognition (SLR) aims to automatically interpret sign language videos, bridging communication gaps for the Deaf community. Current research relies heavily on deep learning, employing architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Transformers, and Graph Convolutional Networks (GCNs), often combined with techniques like transfer learning and self-supervised learning to improve accuracy and efficiency. A key focus is mitigating biases in datasets and models to ensure equitable access to the technology, while the field also expands toward continuous sign recognition and translation, often incorporating multimodal data such as hand and facial features. Advances in this area have significant implications for accessibility, education, and healthcare, providing tools for improved communication and inclusion.
Papers
Sign Language Recognition without frame-sequencing constraints: A proof of concept on the Argentinian Sign Language
Franco Ronchetti, Facundo Manuel Quiroga, César Estrebou, Laura Lanzarini, Alejandro Rosete
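The idea of recognition "without frame-sequencing constraints" can be illustrated with an order-free video descriptor: if per-frame features are pooled by averaging, the resulting representation is invariant to the temporal order of frames. The sketch below is a minimal illustration of that property, not the authors' method; the random-projection "feature extractor" and all names are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a per-frame feature extractor (e.g. a CNN):
# a fixed random projection of flattened 32x32 grayscale frames.
W_feat = rng.standard_normal((64, 32 * 32))

def frame_features(frames):
    """frames: array of shape (T, 32, 32) -> per-frame features (T, 64)."""
    return frames.reshape(len(frames), -1) @ W_feat.T

def order_free_descriptor(frames):
    """Mean-pooling over time discards frame ordering entirely,
    mirroring a bag-of-frames (sequencing-free) representation."""
    return frame_features(frames).mean(axis=0)  # shape (64,)

# A clip and a temporally shuffled copy yield (numerically) the same descriptor.
clip = rng.standard_normal((20, 32, 32))
shuffled = clip[rng.permutation(20)]
assert np.allclose(order_free_descriptor(clip), order_free_descriptor(shuffled))
```

A descriptor like this can then feed any static classifier (e.g. a linear model), trading temporal modeling for robustness to variations in signing speed and frame alignment.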
LSA64: An Argentinian Sign Language Dataset
Franco Ronchetti, Facundo Manuel Quiroga, César Estrebou, Laura Lanzarini, Alejandro Rosete