Hand Gesture Recognition
Hand gesture recognition aims to enable intuitive human-computer interaction by translating hand movements into digital commands. Current research focuses on improving accuracy and robustness across diverse data modalities such as ultrasound, radar, surface electromyography (sEMG), and RGB-D cameras. To this end, it employs deep learning architectures including convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, and spiking neural networks, often combined with multimodal data fusion and techniques such as incremental learning and channel ablation. The field is significant for its potential applications in assistive technologies, human-robot interaction, virtual/augmented reality, and other areas requiring natural and efficient interfaces.
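To make the typical pipeline concrete, the following is a minimal sketch of a CNN-style gesture classifier over sEMG windows, written in plain NumPy. All shapes and names here are hypothetical (8 channels, 200-sample windows, 5 gestures, one temporal convolution layer); real systems use trained deep networks, not random weights, and this only illustrates the forward pass from signal window to gesture probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1D convolution: x (in_ch, time), w (out_ch, in_ch, k), b (out_ch,)."""
    out_ch, in_ch, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((out_ch, t_out))
    for o in range(out_ch):
        for t in range(t_out):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(window, params):
    # Temporal convolution over the sEMG channels, ReLU nonlinearity,
    # global average pooling over time, then a linear gesture head.
    h = np.maximum(conv1d(window, params["w1"], params["b1"]), 0.0)
    pooled = h.mean(axis=1)
    logits = params["w2"] @ pooled + params["b2"]
    return softmax(logits)

# Hypothetical sizes: 8 electrodes, 200-sample window, 5 gesture classes.
N_CHANNELS, WINDOW, N_GESTURES = 8, 200, 5
params = {
    "w1": rng.normal(0, 0.1, (16, N_CHANNELS, 11)),  # 16 filters, kernel 11
    "b1": np.zeros(16),
    "w2": rng.normal(0, 0.1, (N_GESTURES, 16)),
    "b2": np.zeros(N_GESTURES),
}
probs = classify(rng.normal(size=(N_CHANNELS, WINDOW)), params)
```

The same window-to-probabilities structure carries over to the recurrent, transformer, and spiking variants surveyed above; they differ mainly in how the temporal feature extractor is built.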
Papers
EMGTFNet: Fuzzy Vision Transformer to decode Upperlimb sEMG signals for Hand Gestures Recognition
Joseph Cherre Córdova, Christian Flores, Javier Andreu-Perez
A Deep Learning Sequential Decoder for Transient High-Density Electromyography in Hand Gesture Recognition Using Subject-Embedded Transfer Learning
Golara Ahmadi Azar, Qin Hu, Melika Emami, Alyson Fletcher, Sundeep Rangan, S. Farokh Atashzar