Gesture Recognition
Gesture recognition aims to enable computers to understand and interpret human gestures, facilitating more natural and intuitive human-computer interaction. Current research focuses on improving accuracy and robustness across diverse sensing modalities (vision, ultrasound, surface electromyography (sEMG), radar, and even Wi-Fi), employing deep learning architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, and spiking neural networks (SNNs), often combined with techniques like multimodal fusion and continual learning. This field is crucial for advancing human-robot interaction, accessibility technologies for people with disabilities, and more immersive, intuitive interfaces for virtual and augmented reality applications.
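One of the techniques mentioned above, multimodal fusion, is often realized in its simplest form as late fusion: each modality's model produces class scores for the gesture classes, and the per-modality probabilities are combined by a weighted average. The sketch below is a minimal, hypothetical illustration of this idea (the modality names, logit values, and weights are invented for the example, not taken from the papers listed here):

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores to probabilities (numerically stable)."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(logits_by_modality, weights=None):
    """Weighted average of per-modality class probabilities.

    logits_by_modality: list of (batch, n_classes) score arrays,
    one per sensing modality (e.g. vision, sEMG).
    """
    probs = [softmax(l) for l in logits_by_modality]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(axis=-1), fused

# Hypothetical logits for 3 gesture classes from two modalities.
vision_logits = np.array([[2.0, 0.1, -1.0]])
semg_logits = np.array([[0.5, 1.5, -0.5]])
pred, fused = late_fusion([vision_logits, semg_logits], weights=[0.6, 0.4])
```

Here the vision branch is weighted more heavily (0.6 vs. 0.4), a common choice when one modality is more reliable; in practice the weights, or a learned fusion layer, are tuned on validation data.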
Papers
Boosting Gesture Recognition with an Automatic Gesture Annotation Framework
Junxiao Shen, Xuhai Xu, Ran Tan, Amy Karlson, Evan Strasnick
Towards Open-World Gesture Recognition
Junxiao Shen, Matthias De Lange, Xuhai "Orson" Xu, Enmin Zhou, Ran Tan, Naveen Suda, Maciej Lazarewicz, Per Ola Kristensson, Amy Karlson, Evan Strasnick