Sign Recognition
Sign recognition research aims to automatically interpret sign language, bridging communication gaps for deaf and hard-of-hearing communities. Current work focuses on improving accuracy and efficiency with deep learning models such as CNNs, RNNs (including GRUs and LSTMs), and graph convolutional networks. These models often incorporate multi-modal cues (e.g., handshape, location, movement, and facial expressions) and employ techniques such as curriculum learning and contrastive learning to boost performance. Progress is supported by large-scale annotated datasets and new approaches to data augmentation and annotation, improving recognition systems and enabling applications in assistive technology and digital accessibility.
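A common instance of the recurrent approach mentioned above is to run a GRU over a sequence of per-frame hand keypoints and classify the sign from the final hidden state. The sketch below is purely illustrative: the dimensions (21 landmarks, 32 hidden units, a 10-sign vocabulary), the random weights, and the dummy input sequence are all assumptions, not taken from any of the papers listed here.

```python
import numpy as np

# Hypothetical dimensions for an isolated-sign classifier over 2D hand keypoints.
NUM_KEYPOINTS = 21                 # e.g., one hand's landmarks per frame
FEAT_DIM = NUM_KEYPOINTS * 2       # (x, y) per landmark
HIDDEN = 32                        # GRU hidden size
NUM_CLASSES = 10                   # assumed vocabulary of isolated signs

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUSignClassifier:
    """Minimal GRU that encodes a keypoint sequence and classifies the sign."""

    def __init__(self, feat_dim, hidden, num_classes):
        s = 0.1  # untrained random weights, for illustration only
        self.Wz = rng.normal(0, s, (hidden, feat_dim + hidden))  # update gate
        self.Wr = rng.normal(0, s, (hidden, feat_dim + hidden))  # reset gate
        self.Wh = rng.normal(0, s, (hidden, feat_dim + hidden))  # candidate state
        self.Wo = rng.normal(0, s, (num_classes, hidden))        # output head

    def forward(self, seq):
        h = np.zeros(HIDDEN)
        for x in seq:                               # one keypoint frame at a time
            xh = np.concatenate([x, h])
            z = sigmoid(self.Wz @ xh)               # how much to update
            r = sigmoid(self.Wr @ xh)               # how much history to keep
            h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
            h = (1 - z) * h + z * h_tilde
        logits = self.Wo @ h                        # classify from final state
        return int(np.argmax(logits))

model = GRUSignClassifier(FEAT_DIM, HIDDEN, NUM_CLASSES)
seq = rng.normal(size=(30, FEAT_DIM))               # 30 frames of dummy keypoints
pred = model.forward(seq)
print(pred)
```

In practice the keypoints would come from a pose estimator, the weights would be trained with cross-entropy loss, and multi-modal variants would concatenate face and body features into each frame vector.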
Papers
The NGT200 Dataset: Geometric Multi-View Isolated Sign Recognition
Oline Ranum, David R. Wessels, Gomer Otterspeer, Erik J. Bekkers, Floris Roelofsen, Jari I. Andersen
3D-LEX v1.0: 3D Lexicons for American Sign Language and Sign Language of the Netherlands
Oline Ranum, Gomer Otterspeer, Jari I. Andersen, Robert G. Belleman, Floris Roelofsen
Word level Bangla Sign Language Dataset for Continuous BSL Recognition
Md Shamimul Islam, A. J. M. Akhtarujjaman Joha, Md Nur Hossain, Sohaib Abdullah, Ibrahim Elwarfalli, Md Mahedi Hasan
Semi-Supervised Approach for Early Stuck Sign Detection in Drilling Operations
Andres Hernandez-Matamoros, Kohei Sugawara, Tatsuya Kaneko, Ryota Wada, Masahiko Ozaki