Sign Language
Sign language research focuses on developing technologies to improve communication for deaf and hard-of-hearing individuals, primarily through automated sign language recognition and translation. Current research emphasizes mitigating biases in datasets and models, improving the accuracy and temporal consistency of sign language video generation, and incorporating both manual and non-manual features (facial expressions, body language) for more comprehensive understanding. This work leverages deep learning architectures, including transformers, convolutional neural networks, and recurrent neural networks, often combined with techniques like multi-stream processing and attention mechanisms, to achieve higher accuracy and robustness across diverse sign languages and environments. The ultimate goal is to create accessible and inclusive communication tools, impacting both the scientific understanding of sign languages and the daily lives of sign language users.
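To make the "multi-stream processing and attention" idea concrete, here is a minimal, illustrative sketch (not taken from any of the papers below): per-frame features from two streams — one for manual features such as hand shape, one for non-manual features such as facial expression — are each pooled over time with a simple attention weighting, then concatenated into a clip-level representation. The feature dimensions, the random "query" vectors, and the function names are all hypothetical; a real system would learn these parameters end to end.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(frames, query):
    # frames: (T, D) per-frame features; query: (D,) scoring vector
    # (random here; learned in a real model).
    scores = softmax(frames @ query / np.sqrt(frames.shape[1]))  # (T,)
    return scores @ frames  # (D,) attention-weighted temporal average

rng = np.random.default_rng(0)
T, D = 16, 8                               # hypothetical clip length / feature size
manual = rng.standard_normal((T, D))       # stand-in for hand-shape stream features
non_manual = rng.standard_normal((T, D))   # stand-in for facial-expression stream

# Fuse the two streams by concatenating their pooled representations.
fused = np.concatenate([
    attention_pool(manual, rng.standard_normal(D)),
    attention_pool(non_manual, rng.standard_normal(D)),
])
print(fused.shape)  # (2*D,) clip-level representation for a downstream classifier
```

With an all-zero query the attention weights are uniform and the pooling reduces to a plain temporal mean, which is why attention pooling is often described as a learned generalization of average pooling.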
Papers
An Efficient Sign Language Translation Using Spatial Configuration and Motion Dynamics with LLMs
Eui Jun Hwang, Sukmin Cho, Junmyeong Lee, Jong C. Park
BAUST Lipi: A BdSL Dataset with Deep Learning Based Bangla Sign Language Recognition
Md Hadiuzzaman, Mohammed Sowket Ali, Tamanna Sultana, Abdur Raj Shafi, Abu Saleh Musa Miah, Jungpil Shin
Event Stream based Sign Language Translation: A High-Definition Benchmark Dataset and A New Algorithm
Xiao Wang, Yao Rong, Fuling Wang, Jianing Li, Lin Zhu, Bo Jiang, Yaowei Wang
Modelling the Distribution of Human Motion for Sign Language Assessment
Oliver Cory, Ozge Mercanoglu Sincan, Matthew Vowels, Alessia Battisti, Franz Holzknecht, Katja Tissi, Sandra Sidler-Miserez, Tobias Haug, Sarah Ebling, Richard Bowden
C²RL: Content and Context Representation Learning for Gloss-free Sign Language Translation and Retrieval
Zhigang Chen, Benjia Zhou, Yiqing Huang, Jun Wan, Yibo Hu, Hailin Shi, Yanyan Liang, Zhen Lei, Du Zhang