Emotion Recognition
Emotion recognition research aims to automatically identify and interpret human emotions from sources such as facial expressions, speech, physiological signals (e.g., EEG, fNIRS), and body language. Current work focuses on improving accuracy and robustness across diverse modalities and datasets, using techniques such as multimodal fusion, contrastive learning, and large language models (LLMs) for richer feature extraction and classification. The field is significant for its potential applications in healthcare (e.g., mental health diagnostics), human-computer interaction, and virtual reality, where it can enable personalized experiences and support well-being.
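To make the multimodal-fusion idea concrete, below is a minimal sketch of concatenation-based (late) fusion of audio and visual features in PyTorch. All names and dimensions here (LateFusionClassifier, audio_dim=128, visual_dim=512, seven emotion classes) are illustrative assumptions, not drawn from any specific paper listed below.

```python
# Minimal late-fusion sketch for multimodal emotion recognition.
# Dimensions and the two-branch design are illustrative assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=512, hidden_dim=256, num_emotions=7):
        super().__init__()
        # Each modality gets its own encoder before fusion.
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.visual_encoder = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        # Fused representation -> emotion-class logits.
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, audio_feats, visual_feats):
        a = self.audio_encoder(audio_feats)
        v = self.visual_encoder(visual_feats)
        fused = torch.cat([a, v], dim=-1)  # concatenation-based fusion
        return self.classifier(fused)

# Usage with random stand-in features for a batch of 4 utterances.
model = LateFusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 7])
```

Concatenation is the simplest fusion strategy; the papers below explore richer alternatives such as recurrence, attention, and multi-task objectives.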
Papers
Neural Architecture Search for Speech Emotion Recognition
Xixin Wu, Shoukang Hu, Zhiyong Wu, Xunying Liu, Helen Meng
M-MELD: A Multilingual Multi-Party Dataset for Emotion Recognition in Conversations
Sreyan Ghosh, S Ramaneswaran, Utkarsh Tyagi, Harshvardhan Srivastava, Samden Lepcha, S Sakshi, Dinesh Manocha
MMER: Multimodal Multi-task Learning for Speech Emotion Recognition
Sreyan Ghosh, Utkarsh Tyagi, S Ramaneswaran, Harshvardhan Srivastava, Dinesh Manocha
Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition
Vincent Karas, Mani Kumar Tellamekala, Adria Mallol-Ragolta, Michel Valstar, Björn W. Schuller
Multitask Emotion Recognition Model with Knowledge Distillation and Task Discriminator
Euiseok Jeong, Geesung Oh, Sejoon Lim
Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3
Su Zhang, Ruyi An, Yi Ding, Cuntai Guan
Multiple Emotion Descriptors Estimation at the ABAW3 Challenge
Didan Deng