Speech Emotion Recognition
Speech emotion recognition (SER) aims to automatically identify human emotions from speech, with current work focused on improving accuracy and robustness across diverse languages and contexts. Research increasingly leverages self-supervised learning models, particularly transformer-based architectures, alongside techniques such as cross-lingual adaptation, multi-modal fusion (combining speech with text or visual data), and model compression for resource-constrained environments. Advances in SER matter for applications such as mental health monitoring, human-computer interaction, and personalized healthcare, where they enable more natural and empathetic interactions between humans and machines.
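To make the self-supervised approach described above concrete, here is a minimal sketch of an SER classifier built on a pretrained wav2vec 2.0 transformer encoder with a mean-pooled linear head. The checkpoint name and the four-way emotion label set are illustrative assumptions, not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor


class SERClassifier(nn.Module):
    """Pretrained self-supervised encoder plus a lightweight emotion head."""

    def __init__(self, num_emotions: int = 4):
        super().__init__()
        # Illustrative backbone; any wav2vec 2.0-style checkpoint could be used.
        self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.head = nn.Linear(self.encoder.config.hidden_size, num_emotions)

    def forward(self, input_values: torch.Tensor) -> torch.Tensor:
        # (batch, time) raw waveform -> (batch, frames, hidden) frame features
        hidden = self.encoder(input_values).last_hidden_state
        # Mean-pool over time, then project to per-emotion logits.
        return self.head(hidden.mean(dim=1))


extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
# Assumed label set for illustration: angry / happy / neutral / sad.
model = SERClassifier(num_emotions=4)

# One second of silence at 16 kHz stands in for a real utterance.
waveform = [0.0] * 16000
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
logits = model(inputs.input_values)
print(logits.shape)  # torch.Size([1, 4])
```

In practice the head (and optionally the encoder) would be fine-tuned on labeled emotional speech; freezing the encoder and training only the head is a common low-resource variant, consistent with the compact-model direction some of the papers below pursue.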
Papers
Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations
Théo Deschamps-Berger, Lori Lamel, Laurence Devillers
Time-Frequency Transformer: A Novel Time Frequency Joint Learning Method for Speech Emotion Recognition
Yong Wang, Cheng Lu, Yuan Zong, Hailun Lian, Yan Zhao, Sunan Li
Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages
Ephrem Afele Retta, Richard Sutcliffe, Jabar Mahmood, Michael Abebe Berwo, Eiad Almekhlafi, Sajjad Ahmed Khan, Shehzad Ashraf Chaudhry, Mustafa Mhamed, Jun Feng
Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition
Weidong Chen, Xiaofen Xing, Peihao Chen, Xiangmin Xu
MFSN: Multi-perspective Fusion Search Network For Pre-training Knowledge in Speech Emotion Recognition
Haiyang Sun, Fulin Zhang, Yingying Gao, Zheng Lian, Shilei Zhang, Junlan Feng
Exploring Attention Mechanisms for Multimodal Emotion Recognition in an Emergency Call Center Corpus
Théo Deschamps-Berger, Lori Lamel, Laurence Devillers