Emotion Recognition
Emotion recognition research aims to automatically identify and interpret human emotions from sources such as facial expressions, speech, physiological signals (EEG, fNIRS), and body language. Current work focuses on improving accuracy and robustness across diverse modalities and datasets, employing techniques such as multimodal fusion, contrastive learning, and large language models (LLMs) for feature extraction and classification. The field matters for applications in healthcare (e.g., mental health diagnostics), human-computer interaction, and virtual reality, where it can enable more personalized experiences and support user well-being.
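To make the multimodal fusion idea concrete, the following is a minimal sketch of feature-level fusion for emotion classification in PyTorch. It assumes pre-extracted audio and text embeddings; the class name, embedding dimensions, and four-class label set are illustrative choices, not taken from any of the listed papers.

```python
# Minimal sketch: feature-level (concatenation) fusion of two modalities.
# All dimensions and the number of emotion classes are illustrative assumptions.
import torch
import torch.nn as nn


class FusionEmotionClassifier(nn.Module):
    def __init__(self, audio_dim=128, text_dim=768, hidden_dim=256, num_classes=4):
        super().__init__()
        # Project each modality into a shared hidden space before concatenation.
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Classify the concatenated (fused) representation into emotion classes.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, audio_feat, text_feat):
        fused = torch.cat(
            [self.audio_proj(audio_feat), self.text_proj(text_feat)], dim=-1
        )
        return self.classifier(fused)


if __name__ == "__main__":
    model = FusionEmotionClassifier()
    audio = torch.randn(8, 128)  # batch of utterance-level acoustic embeddings
    text = torch.randn(8, 768)   # batch of transcript embeddings
    logits = model(audio, text)  # shape: (8, 4) emotion class scores
    print(logits.shape)
```

This concatenation-based fusion is only one of several strategies surveyed in the papers below; attention-based or decision-level fusion follows the same pattern of combining per-modality representations before classification.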
Papers
Acoustic and linguistic representations for speech continuous emotion recognition in call center conversations
Manon Macary, Marie Tahon, Yannick Estève, Daniel Luzzati
In the Blink of an Eye: Event-based Emotion Recognition
Haiwei Zhang, Jiqing Zhang, Bo Dong, Pieter Peers, Wenwei Wu, Xiaopeng Wei, Felix Heide, Xin Yang
InstructERC: Reforming Emotion Recognition in Conversation with a Retrieval Multi-task LLMs Framework
Shanglin Lei, Guanting Dong, Xiaoping Wang, Keheng Wang, Sirui Wang
Personalization of Affective Models to Enable Neuropsychiatric Digital Precision Health Interventions: A Feasibility Study
Ali Kargarandehkordi, Matti Kaisti, Peter Washington
Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations
Théo Deschamps-Berger, Lori Lamel, Laurence Devillers
A Comparison of Personalized and Generalized Approaches to Emotion Recognition Using Consumer Wearable Devices: Machine Learning Study
Joe Li, Peter Washington