Emotion Recognition
Emotion recognition research aims to automatically identify and interpret human emotions from sources such as facial expressions, speech, physiological signals (e.g., EEG and fNIRS), and body language. Current work focuses on improving accuracy and robustness across diverse modalities and datasets, employing techniques such as multimodal fusion, contrastive learning, and large language models (LLMs) for richer feature extraction and classification. The field matters for applications in healthcare (e.g., mental-health diagnostics), human-computer interaction, and virtual reality, where it can enable more personalized experiences and support well-being.
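To make the multimodal-fusion idea concrete, the sketch below shows a minimal late-fusion classifier in PyTorch: embeddings from two modalities (stand-ins for, say, a speech encoder and a facial-expression encoder) are projected, concatenated, and mapped to emotion logits. The layer sizes, the choice of two modalities, and the seven-class label set are illustrative assumptions, not details taken from any paper listed here.

# A minimal late-fusion sketch (assumed architecture, not from any listed
# paper): two modality encoders yield fixed-size embeddings that are
# projected, concatenated, and classified into emotion categories.
import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    def __init__(self, audio_dim=128, face_dim=256, hidden=64, n_emotions=7):
        super().__init__()
        # Per-modality projection heads (placeholders for real encoders,
        # e.g. an SSL speech model and a facial-expression CNN).
        self.audio_proj = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.face_proj = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        # Fuse by concatenation, then map to class logits.
        self.classifier = nn.Linear(2 * hidden, n_emotions)

    def forward(self, audio_feat, face_feat):
        fused = torch.cat([self.audio_proj(audio_feat),
                           self.face_proj(face_feat)], dim=-1)
        return self.classifier(fused)  # unnormalized emotion logits

# Usage with random stand-in features for a batch of 4 clips.
model = LateFusionEmotionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 7])

Late fusion of this kind is only one design point; many of the papers below instead fuse earlier in the pipeline or use attention-based pooling across modalities.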
Papers
Emotional Images: Assessing Emotions in Images and Potential Biases in Generative Models
Maneet Mehta, Cody Buntain
Smile upon the Face but Sadness in the Eyes: Emotion Recognition based on Facial Expressions and Eye Behaviors
Yuanyuan Liu, Lin Wei, Kejun Liu, Yibing Zhan, Zijing Chen, Zhe Chen, Shiguang Shan
Revise, Reason, and Recognize: LLM-Based Emotion Recognition via Emotion-Specific Prompts and ASR Error Correction
Yuanchao Li, Yuan Gong, Chao-Han Huck Yang, Peter Bell, Catherine Lai
CA-MHFA: A Context-Aware Multi-Head Factorized Attentive Pooling for SSL-Based Speaker Verification
Junyi Peng, Ladislav Mošner, Lin Zhang, Oldřich Plchot, Themos Stafylakis, Lukáš Burget, Jan Černocký
Improving Emotion Recognition Accuracy with Personalized Clustering
Laura Gutierrez-Martin, Celia Lopez Ongil, Jose M. Lanza-Gutierrez, Jose A. Miranda Calero