Sign Language
Sign language research focuses on developing technologies to improve communication for deaf and hard-of-hearing individuals, primarily through automated sign language recognition and translation. Current research emphasizes mitigating biases in datasets and models, improving the accuracy and temporal consistency of sign language video generation, and incorporating both manual and non-manual features (facial expressions, body language) for more comprehensive understanding. This work leverages deep learning architectures, including transformers, convolutional neural networks, and recurrent neural networks, often combined with techniques like multi-stream processing and attention mechanisms, to achieve higher accuracy and robustness across diverse sign languages and environments. The ultimate goal is to create accessible and inclusive communication tools, impacting both the scientific understanding of sign languages and the daily lives of sign language users.
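To make the overview concrete, the sketch below illustrates two of the techniques mentioned above, multi-stream processing and attention, with a minimal NumPy example: features from two hypothetical streams (hand keypoints and facial expressions) are fused via scaled dot-product attention. The stream names, shapes, and fusion scheme are illustrative assumptions, not the method of any specific paper listed here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    # q: (T_q, d) queries; k, v: (T_k, d) keys/values.
    # Returns attended values and the attention weight matrix.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ v, weights

# Toy multi-stream setup (hypothetical shapes): per-frame features for
# a manual stream (hands) and a non-manual stream (face).
rng = np.random.default_rng(0)
T, d = 6, 8                      # frames, feature dimension
hand = rng.normal(size=(T, d))   # e.g. hand-keypoint features
face = rng.normal(size=(T, d))   # e.g. facial-expression features

# Cross-stream fusion: hand queries attend over face keys/values,
# so each hand frame aggregates relevant facial context.
fused, w = scaled_dot_product_attention(hand, face, face)
print(fused.shape)                        # (6, 8)
print(np.allclose(w.sum(axis=-1), 1.0))   # each row of weights sums to 1
```

Real systems stack such attention layers inside transformer encoders and learn separate projections per stream, but the core fusion step reduces to this weighted aggregation.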
Papers
Signs as Tokens: An Autoregressive Multilingual Sign Language Generator
Ronglai Zuo, Rolandos Alexandros Potamias, Evangelos Ververas, Jiankang Deng, Stefanos Zafeiriou
DiffSLT: Enhancing Diversity in Sign Language Translation via Diffusion Model
JiHwan Moon, Jihoon Park, Jungeun Kim, Jongseong Bae, Hyeongwoo Jeon, Ha Young Kim
Leveraging the Power of MLLMs for Gloss-Free Sign Language Translation
Jungeun Kim, Hyeongwoo Jeon, Jongseong Bae, Ha Young Kim
SHuBERT: Self-Supervised Sign Language Representation Learning via Multi-Stream Cluster Prediction
Shester Gueuwou, Xiaodan Du, Greg Shakhnarovich, Karen Livescu, Alexander H. Liu
Signformer is all you need: Towards Edge AI for Sign Language
Eta Yang
AzSLD: Azerbaijani Sign Language Dataset for Fingerspelling, Word, and Sentence Translation with Baseline Software
Nigar Alishzade, Jamaladdin Hasanov
Enhanced Sign Language Translation between American Sign Language (ASL) and Indian Sign Language (ISL) Using LLMs
Malay Kumar, S. Sarvajit Visagan, Tanish Sarang Mahajan, Anisha Natarajan