Sign Language
Sign language research focuses on developing technologies to improve communication for deaf and hard-of-hearing individuals, primarily through automated sign language recognition and translation. Current research emphasizes mitigating biases in datasets and models, improving the accuracy and temporal consistency of sign language video generation, and incorporating both manual and non-manual features (facial expressions, body language) for more comprehensive understanding. This work leverages deep learning architectures, including transformers, convolutional neural networks, and recurrent neural networks, often combined with techniques like multi-stream processing and attention mechanisms, to achieve higher accuracy and robustness across diverse sign languages and environments. The ultimate goal is to create accessible and inclusive communication tools, impacting both the scientific understanding of sign languages and the daily lives of sign language users.
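As a concrete illustration of the kind of architecture this summary describes, the sketch below combines a per-frame convolutional feature extractor with a transformer encoder that applies self-attention across time, followed by a sign classifier. It is a minimal, hypothetical example assuming PyTorch; the class name `SignRecognizer`, the layer sizes, and the 64-class output (chosen only because the LSA64 dataset listed below contains 64 signs) are illustrative assumptions, not a model from the cited papers.

```python
# Minimal sketch (assumption: PyTorch) of a frame-level CNN + transformer encoder
# for isolated sign recognition. All names and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    def __init__(self, num_classes: int = 64, d_model: int = 256):
        super().__init__()
        # Per-frame spatial feature extractor (one "stream"; a second stream,
        # e.g. hand crops or pose keypoints, could be fused the same way).
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Temporal modelling with self-attention over the frame sequence.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.frame_cnn(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        feats = self.temporal(feats)           # attention across time
        return self.classifier(feats.mean(1))  # pool over frames, then classify


# Usage: a batch of 2 clips, 16 frames each, scored against 64 sign classes.
logits = SignRecognizer(num_classes=64)(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 64])
```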
Papers
Sign Language Recognition without frame-sequencing constraints: A proof of concept on the Argentinian Sign Language
Franco Ronchetti, Facundo Manuel Quiroga, César Estrebou, Laura Lanzarini, Alejandro Rosete
LSA64: An Argentinian Sign Language Dataset
Franco Ronchetti, Facundo Manuel Quiroga, César Estrebou, Laura Lanzarini, Alejandro Rosete