Speech Emotion Recognition
Speech emotion recognition (SER) aims to automatically identify human emotions from speech, with research concentrated on improving accuracy and robustness across diverse languages and contexts. Current work emphasizes self-supervised learning models, particularly transformer-based architectures, alongside techniques such as cross-lingual adaptation, multi-modal fusion (combining speech with text or visual data), and model compression for resource-constrained environments. Advances in SER support applications including mental health monitoring, human-computer interaction, and personalized healthcare by enabling more natural and empathetic interaction between humans and machines.
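As a concrete illustration of the self-supervised feature pipeline described above, the sketch below attaches a mean-pooling classifier head to a wav2vec 2.0 encoder with a frozen convolutional front-end. It assumes the Hugging Face transformers library and the facebook/wav2vec2-base checkpoint; the four-way emotion label set and the pooling choice are illustrative defaults, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor


class SSLEmotionClassifier(nn.Module):
    """Mean-pools wav2vec 2.0 hidden states and maps them to emotion logits.

    A minimal sketch: real SER systems often use layer-weighted features,
    attentive pooling, or full fine-tuning instead of this simple setup.
    """

    def __init__(self, num_emotions: int = 4, ssl_name: str = "facebook/wav2vec2-base"):
        super().__init__()
        self.ssl = Wav2Vec2Model.from_pretrained(ssl_name)
        # Keep the CNN feature encoder frozen, a common choice when
        # adapting SSL speech models on small emotion corpora.
        self.ssl.freeze_feature_encoder()
        self.head = nn.Linear(self.ssl.config.hidden_size, num_emotions)

    def forward(self, input_values: torch.Tensor) -> torch.Tensor:
        # input_values: (batch, samples) raw 16 kHz waveform
        hidden = self.ssl(input_values).last_hidden_state  # (batch, frames, hidden)
        pooled = hidden.mean(dim=1)                        # average over time frames
        return self.head(pooled)                           # (batch, num_emotions)


# Usage with dummy audio; a real pipeline would load labeled emotional speech.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
waveform = torch.randn(16000)  # 1 second of placeholder audio at 16 kHz
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
model = SSLEmotionClassifier(num_emotions=4)
logits = model(inputs.input_values)
print(logits.shape)  # torch.Size([1, 4])
```

Freezing the front-end and pooling over time is a lightweight baseline; cross-lingual adaptation or multi-modal fusion would extend this by swapping the checkpoint or concatenating text/visual embeddings before the classifier head.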
Papers
End-to-End Integration of Speech Emotion Recognition with Voice Activity Detection using Self-Supervised Learning Features
Natsuo Yamashita, Masaaki Yamamoto, Yohei Kawaguchi
Investigating Effective Speaker Property Privacy Protection in Federated Learning for Speech Emotion Recognition
Chao Tan, Sheng Li, Yang Cao, Zhao Ren, Tanja Schultz
SeQuiFi: Mitigating Catastrophic Forgetting in Speech Emotion Recognition with Sequential Class-Finetuning
Sarthak Jain, Orchid Chetia Phukan, Swarup Ranjan Behera, Arun Balaji Buduru, Rajesh Sharma
Enhancing Speech Emotion Recognition through Segmental Average Pooling of Self-Supervised Learning Features
Jonghwan Hyeon, Yung-Hwan Oh, Ho-Jin Choi
Stimulus Modality Matters: Impact of Perceptual Evaluations from Different Modalities on Speech Emotion Recognition System Performance
Huang-Cheng Chou, Haibin Wu, Chi-Chun Lee
Personalized Speech Emotion Recognition in Human-Robot Interaction using Vision Transformers
Ruchik Mishra, Andrew Frye, Madan Mohan Rayguru, Dan O. Popa
TBDM-Net: Bidirectional Dense Networks with Gender Information for Speech Emotion Recognition
Vlad Striletchi, Cosmin Striletchi, Adriana Stan