Gesture Recognition
Gesture recognition aims to enable computers to understand and interpret human gestures, facilitating more natural and intuitive human-computer interaction. Current research focuses on improving accuracy and robustness across diverse modalities (vision, ultrasound, sEMG, radar, and even Wi-Fi), employing various deep learning architectures like convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, and spiking neural networks (SNNs), often incorporating techniques like multimodal fusion and continual learning. This field is crucial for advancing human-robot interaction, accessibility technologies for people with disabilities, and creating more immersive and intuitive interfaces for virtual and augmented reality applications.
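As a rough illustration of the CNN-based pipelines mentioned above, the sketch below runs a tiny 1D convolutional classifier over a windowed multichannel signal (e.g. sEMG) and outputs gesture-class probabilities. It is a minimal numpy mock-up, not any specific paper's method: the channel count, window length, filter sizes, and number of gesture classes are all hypothetical, and the weights are random rather than trained.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1D convolution: x (C_in, T), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for t in range(t_out):
        # sum over input channels and kernel taps for every output filter
        out[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return out

def classify_window(x, w, b, head_w, head_b):
    """CNN sketch: conv -> ReLU -> global average pool -> linear -> softmax."""
    h = np.maximum(conv1d(x, w, b), 0.0)   # ReLU activation
    pooled = h.mean(axis=1)                # global average pooling over time
    logits = head_w @ pooled + head_b      # linear classification head
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()                     # per-class probabilities

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 200))          # hypothetical 8-channel window, 200 samples
w = rng.standard_normal((16, 8, 5)) * 0.1  # 16 filters, kernel size 5 (untrained)
b = np.zeros(16)
head_w = rng.standard_normal((5, 16)) * 0.1  # 5 hypothetical gesture classes
head_b = np.zeros(5)
probs = classify_window(x, w, b, head_w, head_b)
```

In practice such a stack would be trained end to end (and the 3D-CNN ultrasound work above convolves over video volumes rather than 1D windows), but the conv → pool → softmax shape of the computation is the same.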
Papers
Hand Gesture Classification Based on Forearm Ultrasound Video Snippets Using 3D Convolutional Neural Networks
Keshav Bimbraw, Ankit Talele, Haichong K. Zhang
Improving Intersession Reproducibility for Forearm Ultrasound based Hand Gesture Classification through an Incremental Learning Approach
Keshav Bimbraw, Jack Rothenberg, Haichong K. Zhang
FastTalker: Jointly Generating Speech and Conversational Gestures from Text
Zixin Guo, Jian Zhang