Hand Gesture Recognition
Hand gesture recognition aims to enable intuitive human-computer interaction by translating hand movements into digital commands. Current research focuses on improving accuracy and robustness across diverse sensing modalities such as ultrasound, radar, surface electromyography (sEMG), and RGB-D cameras. These systems employ deep learning architectures including convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, and spiking neural networks, and often incorporate multimodal data fusion along with techniques such as incremental learning and channel ablation to enhance performance. The field is significant for its potential applications in assistive technologies, human-robot interaction, virtual/augmented reality, and other areas requiring natural and efficient interfaces.
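To make the 3D CNN approach from the first paper below concrete, here is a minimal PyTorch sketch of a spatiotemporal classifier for short ultrasound video snippets. This is an illustration of the general technique, not the authors' architecture: the layer sizes, snippet shape, and five-class setup are all assumptions.

```python
# Minimal sketch (illustrative, not the papers' exact model): a small 3D CNN
# that classifies short single-channel ultrasound video snippets into gestures.
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, num_classes: int = 5):  # class count is an assumption
        super().__init__()
        # 3D convolutions slide over (time, height, width), letting the
        # network learn spatiotemporal features of tissue motion across frames.
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse to one feature vector
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels=1, frames, height, width)
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of two 8-frame, 64x64 snippets (shapes are illustrative).
model = Gesture3DCNN(num_classes=5)
logits = model(torch.randn(2, 1, 8, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```

Treating the snippet as a volume rather than independent frames is what distinguishes a 3D CNN from a per-frame 2D CNN: temporal dynamics of the forearm tissue contribute directly to the learned features.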
Papers
Hand Gesture Classification Based on Forearm Ultrasound Video Snippets Using 3D Convolutional Neural Networks
Keshav Bimbraw, Ankit Talele, Haichong K. Zhang
Improving Intersession Reproducibility for Forearm Ultrasound based Hand Gesture Classification through an Incremental Learning Approach
Keshav Bimbraw, Jack Rothenberg, Haichong K. Zhang
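The second paper targets intersession reproducibility through incremental learning. As a generic, hedged sketch of that idea (not the authors' procedure), one common form is to fine-tune a previously trained classifier on a small labeled calibration set recorded at the start of each new session. The placeholder model, feature dimensionality, and hyperparameters below are all illustrative assumptions.

```python
# Minimal sketch of incremental per-session adaptation (an illustration of
# the general idea, not the paper's exact method).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def adapt_to_new_session(model: nn.Module,
                         calib_x: torch.Tensor,
                         calib_y: torch.Tensor,
                         epochs: int = 3,
                         lr: float = 1e-4) -> nn.Module:
    """Briefly fine-tune a pretrained gesture classifier on a small labeled
    calibration set from a new recording session."""
    loader = DataLoader(TensorDataset(calib_x, calib_y),
                        batch_size=8, shuffle=True)
    # A small learning rate nudges the pretrained weights toward the new
    # session's signal characteristics without overwriting prior knowledge.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Example: a placeholder classifier over 64-dimensional features, adapted on
# 16 labeled examples from a new session (all shapes are illustrative).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))
model = adapt_to_new_session(model, torch.randn(16, 64),
                             torch.randint(0, 5, (16,)))
```

The motivation is that probe placement and tissue state vary between sessions, so a model frozen after one session tends to degrade; a short adaptation pass on new-session data is one way to recover accuracy without retraining from scratch.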