Emotion Recognition
Emotion recognition research aims to automatically identify and interpret human emotions from sources such as facial expressions, speech, physiological signals (EEG, fNIRS), and body language. Current work focuses on improving accuracy and robustness across diverse modalities and datasets, employing techniques such as multimodal fusion, contrastive learning, and large language models (LLMs) for enhanced feature extraction and classification. The field matters for its applications in healthcare (e.g., mental health diagnostics), human-computer interaction, and virtual reality, where it can enable more personalized and responsive systems.
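To make the multimodal fusion idea mentioned above concrete, here is a minimal sketch of late fusion for emotion classification in PyTorch. Everything in it is an illustrative assumption rather than the method of any listed paper: the class name, the feature dimensions, the four-class emotion label set, and the choice of simple concatenation over learned attention-based fusion.

```python
# Minimal sketch of late multimodal fusion for emotion classification.
# All module names, dimensions, and the four-class label set are
# illustrative assumptions, not taken from any of the listed papers.
import torch
import torch.nn as nn

class FusionEmotionClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=512, video_dim=512,
                 hidden_dim=256, num_emotions=4):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        # Classify from the concatenated (fused) representation.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(3 * hidden_dim, num_emotions),
        )

    def forward(self, text_feat, audio_feat, video_feat):
        # Late fusion: encode each modality separately, then concatenate.
        fused = torch.cat([
            self.text_proj(text_feat),
            self.audio_proj(audio_feat),
            self.video_proj(video_feat),
        ], dim=-1)
        return self.classifier(fused)  # logits over emotion classes

# Example: a batch of 8 utterances with precomputed per-modality embeddings.
model = FusionEmotionClassifier()
logits = model(torch.randn(8, 768), torch.randn(8, 512), torch.randn(8, 512))
print(logits.shape)  # torch.Size([8, 4])
```

Concatenation is the simplest fusion strategy; papers in this area often replace it with cross-modal attention or alignment objectives to model interactions between modalities.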
Papers
Mamba-Enhanced Text-Audio-Video Alignment Network for Emotion Recognition in Conversations
Xinran Li, Xiaomao Fan, Qingyang Wu, Xiaojiang Peng, Ye Li
Better Spanish Emotion Recognition In-the-wild: Bringing Attention to Deep Spectrum Voice Analysis
Elena Ortega-Beltrán, Josep Cabacas-Maso, Ismael Benito-Altamirano, Carles Ventura
Improving Multimodal Emotion Recognition by Leveraging Acoustic Adaptation and Visual Alignment
Zhixian Zhao, Haifeng Chen, Xi Li, Dongmei Jiang, Lei Xie