Sign Language

Sign language research focuses on developing technologies that improve communication for deaf and hard-of-hearing individuals, primarily through automated sign language recognition and translation. Current work emphasizes mitigating biases in datasets and models, improving the accuracy and temporal consistency of sign language video generation, and incorporating both manual features (hand shape and movement) and non-manual features (facial expressions, body posture) for more comprehensive understanding. This research leverages deep learning architectures such as transformers, convolutional neural networks, and recurrent neural networks, often combined with multi-stream processing and attention mechanisms, to improve accuracy and robustness across diverse sign languages and recording environments. The ultimate goal is to create accessible and inclusive communication tools, advancing both the scientific understanding of sign languages and the daily lives of sign language users.
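To make the multi-stream idea concrete, the following is a minimal, untrained sketch of how a recognizer might fuse a manual (hand) stream and a non-manual (face) stream with per-stream temporal attention before classification. All names, dimensions, and the overall design are illustrative assumptions, not taken from any specific paper in this area.

```python
import numpy as np

# Hypothetical dimensions (assumptions for illustration only):
# T frames, 42-dim hand keypoints, 24-dim face features, 10 sign classes.
T, D_HAND, D_FACE, D_MODEL, N_CLASSES = 16, 42, 24, 32, 10

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class TwoStreamRecognizer:
    """Toy two-stream sign recognizer: each stream is projected to a shared
    model dimension, pooled over time with attention, then concatenated."""

    def __init__(self):
        # Randomly initialized (untrained) weights.
        self.W_hand = rng.standard_normal((D_HAND, D_MODEL)) * 0.1
        self.W_face = rng.standard_normal((D_FACE, D_MODEL)) * 0.1
        self.w_attn = rng.standard_normal(D_MODEL) * 0.1
        self.W_out = rng.standard_normal((2 * D_MODEL, N_CLASSES)) * 0.1

    def forward(self, hand_seq, face_seq):
        h = np.tanh(hand_seq @ self.W_hand)  # (T, D_MODEL) manual stream
        f = np.tanh(face_seq @ self.W_face)  # (T, D_MODEL) non-manual stream
        # Per-stream temporal attention: one scalar weight per frame.
        a_h = softmax(h @ self.w_attn)       # (T,)
        a_f = softmax(f @ self.w_attn)       # (T,)
        # Attention-weighted pooling over time, then stream fusion.
        pooled = np.concatenate([a_h @ h, a_f @ f])  # (2 * D_MODEL,)
        return softmax(pooled @ self.W_out)  # class probabilities

model = TwoStreamRecognizer()
probs = model.forward(rng.standard_normal((T, D_HAND)),
                      rng.standard_normal((T, D_FACE)))
print(probs.shape)
```

Real systems would replace the linear projections with CNN or transformer encoders and learn the weights from data; the sketch only shows where the stream separation and attention-based fusion sit in the pipeline.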

Papers