Emotion Recognition
Emotion recognition research aims to automatically identify and interpret human emotions from sources such as facial expressions, speech, physiological signals (EEG, fNIRS), and body language. Current work focuses on improving accuracy and robustness across diverse modalities and datasets, employing techniques such as multimodal fusion, contrastive learning, and large language models (LLMs) for richer feature extraction and classification. The field is significant for its applications in healthcare (e.g., mental-health diagnostics), human-computer interaction, and virtual reality, where it can enable personalized experiences and support well-being.
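To make the multimodal-fusion idea concrete, below is a minimal late-fusion sketch in PyTorch: each modality is encoded separately and the embeddings are concatenated before classification. It is an illustration under stated assumptions, not the method of any paper listed here; the class name, feature dimensions, and number of emotion classes are all illustrative choices.

import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    """Illustrative late-fusion model: encode each modality separately,
    concatenate the embeddings, then classify into emotion categories.
    Dimensions below are assumptions, not taken from any cited paper."""

    def __init__(self, audio_dim=128, visual_dim=512, hidden_dim=256, num_emotions=7):
        super().__init__()
        # Per-modality encoders mapping raw feature vectors to a shared size.
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.visual_encoder = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        # Fusion head over the concatenated modality embeddings.
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, audio_feats, visual_feats):
        fused = torch.cat([self.audio_encoder(audio_feats),
                           self.visual_encoder(visual_feats)], dim=-1)
        return self.classifier(fused)  # unnormalized emotion logits

# Usage with random features standing in for a batch of 4 utterances.
model = LateFusionEmotionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 7])

Late fusion is only one design point; many of the papers below instead fuse earlier (at the feature level) or learn cross-modal interactions with attention, trading simplicity for tighter coupling between modalities.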
Papers
TED: Turn Emphasis with Dialogue Feature Attention for Emotion Recognition in Conversation
Junya Ono, Hiromi Wakaki
Learning Discriminative Features from Spectrograms Using Center Loss for Speech Emotion Recognition
Dongyang Dai, Zhiyong Wu, Runnan Li, Xixin Wu, Jia Jia, Helen Meng
Is It Still Fair? Investigating Gender Fairness in Cross-Corpus Speech Emotion Recognition
Shreya G. Upadhyay, Woan-Shiuan Chien, Chi-Chun Lee
Bridge then Begin Anew: Generating Target-relevant Intermediate Model for Source-free Visual Emotion Adaptation
Jiankun Zhu, Sicheng Zhao, Jing Jiang, Wenbo Tang, Zhaopan Xu, Tingting Han, Pengfei Xu, Hongxun Yao
Spatio-Temporal Fuzzy-oriented Multi-Modal Meta-Learning for Fine-grained Emotion Recognition
Jingyao Wang, Yuxuan Yang, Wenwen Qiang, Changwen Zheng, Hui Xiong