Hand Gesture
Hand gesture research focuses on accurately recognizing and generating human hand movements, with the goal of making human-computer interaction more natural. Current work emphasizes developing robust models, often built on deep learning architectures such as convolutional neural networks (CNNs), transformers, and diffusion models, that can handle diverse sensing modalities (e.g., RGB video, radar, ultrasound, surface electromyography (sEMG)) and challenging conditions such as occlusion and varying lighting. These advances have significant implications for applications ranging from assistive technologies for motor-impaired users and robotic control to virtual reality interfaces and medical procedures, driving progress in both computer vision and human-computer interaction.
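To make the recognition side of this pipeline concrete, here is a minimal sketch (not the method of any paper listed below) of classifying a static hand pose from 3D landmark coordinates. It assumes a 21-landmark hand skeleton, following the common MediaPipe convention, and uses random placeholder weights where a real system would use a trained network:

```python
import numpy as np

# Toy gesture classifier over 21 3-D hand landmarks (MediaPipe-style
# skeleton). Weights are random placeholders, not trained values.
rng = np.random.default_rng(0)

N_LANDMARKS, N_DIMS, N_GESTURES = 21, 3, 5  # e.g. fist, open palm, ...

def featurize(landmarks: np.ndarray) -> np.ndarray:
    """Translate to the wrist origin and scale-normalize, a common
    preprocessing step that makes the classifier invariant to hand
    position and size in the frame."""
    centered = landmarks - landmarks[0]        # landmark 0 = wrist
    scale = np.linalg.norm(centered, axis=1).max()
    return (centered / (scale if scale > 0 else 1.0)).ravel()  # 63-D

# Placeholder linear layer; a real recognizer would be a trained
# CNN or transformer over image frames or landmark sequences.
W = rng.normal(size=(N_LANDMARKS * N_DIMS, N_GESTURES))
b = np.zeros(N_GESTURES)

def classify(landmarks: np.ndarray) -> np.ndarray:
    """Return a softmax distribution over the gesture classes."""
    logits = featurize(landmarks) @ W + b
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()

probs = classify(rng.uniform(size=(N_LANDMARKS, N_DIMS)))
```

The normalization step is the load-bearing part: without it, the same gesture performed closer to or farther from the camera would produce very different feature vectors.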
Papers
Robust and Context-Aware Real-Time Collaborative Robot Handling via Dynamic Gesture Commands
Rui Chen, Alvin Shek, Changliu Liu
OO-dMVMT: A Deep Multi-view Multi-task Classification Framework for Real-time 3D Hand Gesture Classification and Segmentation
Federico Cunico, Federico Girella, Andrea Avogaro, Marco Emporio, Andrea Giachetti, Marco Cristani