Emotion Recognition
Emotion recognition research aims to automatically identify and interpret human emotions from sources such as facial expressions, speech, physiological signals (e.g., EEG, fNIRS), and body language. Current work focuses on improving accuracy and robustness across diverse modalities and datasets, employing techniques such as multimodal fusion, contrastive learning, and large language models (LLMs) for richer feature extraction and classification. The field matters for its applications in healthcare (e.g., mental-health diagnostics), human-computer interaction, and virtual reality, where it can enable personalized experiences and support well-being.
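Because several of the papers below combine modalities, a minimal sketch of the late-fusion idea may help: encode each modality separately, concatenate the features, and classify. Everything here (the class name, feature dimensions, the seven-class label set) is an illustrative assumption, not drawn from any listed paper.

```python
import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    """Toy late-fusion model: one encoder per modality, concatenated
    features feed a shared classification head. All sizes are
    hypothetical placeholders for illustration."""
    def __init__(self, audio_dim=40, video_dim=512, hidden_dim=128, num_emotions=7):
        super().__init__()
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.video_encoder = nn.Sequential(nn.Linear(video_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, audio_feats, video_feats):
        # Fuse by concatenating per-modality embeddings, then classify.
        fused = torch.cat([self.audio_encoder(audio_feats),
                           self.video_encoder(video_feats)], dim=-1)
        return self.classifier(fused)  # logits over emotion classes

# Example: a batch of 4 utterances with precomputed per-modality features.
model = LateFusionEmotionClassifier()
audio = torch.randn(4, 40)    # e.g., pooled MFCC vectors
video = torch.randn(4, 512)   # e.g., pooled face-frame embeddings
logits = model(audio, video)  # shape: (4, 7)
```

Late fusion is only one design point; many current systems instead use cross-modal attention or contrastively pretrained encoders, but the concatenate-and-classify baseline above is the usual starting comparison.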
Papers
From Chatter to Matter: Addressing Critical Steps of Emotion Recognition Learning in Task-oriented Dialogue
Shutong Feng, Nurul Lubis, Benjamin Ruppik, Christian Geishauser, Michael Heck, Hsien-chin Lin, Carel van Niekerk, Renato Vukovic, Milica Gašić
Hybrid Models for Facial Emotion Recognition in Children
Rafael Zimmer, Marcos Sobral, Helio Azevedo
Efficient Labelling of Affective Video Datasets via Few-Shot & Multi-Task Contrastive Learning
Ravikiran Parameshwara, Ibrahim Radwan, Akshay Asthana, Iman Abbasnejad, Ramanathan Subramanian, Roland Goecke
Capturing Spectral and Long-term Contextual Information for Speech Emotion Recognition Using Deep Learning Techniques
Samiul Islam, Md. Maksudul Haque, Abu Jobayer Md. Sadat