Gesture Recognition
Gesture recognition aims to enable computers to understand and interpret human gestures, facilitating more natural and intuitive human-computer interaction. Current research focuses on improving accuracy and robustness across diverse sensing modalities, including vision, ultrasound, surface electromyography (sEMG), radar, and even Wi-Fi. Common approaches employ deep learning architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, and spiking neural networks (SNNs), often combined with techniques such as multimodal fusion and continual learning. This field is crucial for advancing human-robot interaction, accessibility technologies for people with disabilities, and more immersive and intuitive interfaces for virtual and augmented reality applications.
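As a concrete illustration of one formulation named below (multi-label gesture classification, as in the first listed paper), here is a minimal sketch in NumPy. It is not the paper's method: it assumes a hypothetical setup where windowed sEMG activity is summarized as an 8-dimensional feature vector and each gesture component (e.g., wrist direction, grasp state) gets its own independent sigmoid output, so several labels can be active at once, unlike a single softmax over mutually exclusive gestures. The label names and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 sEMG-derived features per window, and four
# independently predictable gesture components (labels), so a window
# can express e.g. wrist flexion AND grasp closing at the same time.
N_FEATURES = 8
LABELS = ["wrist_flex", "wrist_ext", "grasp_open", "grasp_close"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_multilabel(X, Y, lr=0.5, epochs=500):
    """Binary-relevance baseline: one logistic regression per label,
    trained jointly with the gradient of the per-label BCE loss."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    b = np.zeros(Y.shape[1])
    for _ in range(epochs):
        P = sigmoid(X @ W + b)
        grad = P - Y                      # dBCE/dlogits for sigmoid outputs
        W -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean(axis=0)
    return W, b

def predict(X, W, b, thresh=0.5):
    # Each label is thresholded independently, so multiple can fire.
    return (sigmoid(X @ W + b) >= thresh).astype(int)

# Synthetic data: each label is driven by one dedicated feature channel.
X = rng.normal(size=(400, N_FEATURES))
true_W = np.zeros((N_FEATURES, len(LABELS)))
for j in range(len(LABELS)):
    true_W[2 * j, j] = 2.0
Y = (sigmoid(X @ true_W) >= 0.5).astype(int)

W, b = train_multilabel(X, Y)
acc = (predict(X, W, b) == Y).mean()
```

The design point is the per-label sigmoid head: replacing it with a single softmax would force exactly one gesture per window, which is what the multi-label formulation is meant to avoid.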
Papers
A Multi-label Classification Approach to Increase Expressivity of EMG-based Gesture Recognition
Niklas Smedemark-Margulies, Yunus Bicer, Elifnur Sunger, Stephanie Naufel, Tales Imbiriba, Eugene Tunik, Deniz Erdoğmuş, Mathew Yarossi
User Training with Error Augmentation for Electromyogram-based Gesture Classification
Yunus Bicer, Niklas Smedemark-Margulies, Basak Celik, Elifnur Sunger, Ryan Orendorff, Stephanie Naufel, Tales Imbiriba, Deniz Erdoğmuş, Eugene Tunik, Mathew Yarossi