Speech Emotion Recognition
Speech emotion recognition (SER) aims to automatically identify human emotions from speech, primarily focusing on improving accuracy and robustness across diverse languages and contexts. Current research emphasizes leveraging self-supervised learning models, particularly transformer-based architectures, and exploring techniques like cross-lingual adaptation, multi-modal fusion (combining speech with text or visual data), and efficient model compression for resource-constrained environments. Advances in SER have significant implications for various applications, including mental health monitoring, human-computer interaction, and personalized healthcare, by enabling more natural and empathetic interactions between humans and machines.
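The pipeline shared by most of these systems maps a waveform to time-frequency features, encodes them into an utterance-level embedding, and classifies the embedding into an emotion label. The sketch below illustrates that structure with NumPy only; the label set, window sizes, and the random (untrained) linear head are illustrative assumptions, not a model from any of the papers listed here, and a real system would replace the pooling step with a trained encoder such as a self-supervised transformer.

```python
import numpy as np

# Toy SER pipeline sketch: waveform -> log spectrogram -> pooled
# embedding -> emotion logits. Everything below is illustrative;
# the "classifier" weights are random, not trained.

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed label set

def frame_signal(x, frame_len=400, hop=160):
    """Slice a 1-D waveform into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def log_spectrogram(x, frame_len=400, hop=160):
    """Log-magnitude spectrogram, a common front-end before a neural encoder."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    mags = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(mags + 1e-8)  # shape: (n_frames, frame_len // 2 + 1)

def classify_emotion(x, rng=np.random.default_rng(0)):
    """Mean-pool features over time, then apply a random linear head.

    In practice the pooling would be replaced by a trained encoder
    (e.g. a transformer) and the head by learned weights.
    """
    feats = log_spectrogram(x)
    embedding = feats.mean(axis=0)  # utterance-level embedding
    w = rng.standard_normal((len(EMOTIONS), embedding.size)) * 0.01
    logits = w @ embedding
    return EMOTIONS[int(np.argmax(logits))]

# Usage: a 1-second synthetic "utterance" at 16 kHz.
wave = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))
print(classify_emotion(wave))
```

The same skeleton extends naturally to the directions mentioned above: multi-modal fusion concatenates or cross-attends text and visual embeddings before the classifier, and cross-lingual adaptation fine-tunes the encoder on target-language speech while keeping the head fixed.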
Papers
Deep functional multiple index models with an application to SER
Matthieu Saumard, Abir El Haj, Thibault Napoleon
Accuracy enhancement method for speech emotion recognition from spectrogram using temporal frequency correlation and positional information learning through knowledge transfer
Jeong-Yoon Kim, Seung-Ho Lee
emoDARTS: Joint Optimisation of CNN & Sequential Neural Network Architectures for Superior Speech Emotion Recognition
Thejan Rajapakshe, Rajib Rana, Sara Khalifa, Berrak Sisman, Bjorn W. Schuller, Carlos Busso
The NeurIPS 2023 Machine Learning for Audio Workshop: Affective Audio Benchmarks and Novel Data
Alice Baird, Rachel Manzelli, Panagiotis Tzirakis, Chris Gagne, Haoqi Li, Sadie Allen, Sander Dieleman, Brian Kulis, Shrikanth S. Narayanan, Alan Cowen
How Paralingual are Paralinguistic Representations? A Case Study in Speech Emotion Recognition
Orchid Chetia Phukan, Gautam Siddharth Kashyap, Arun Balaji Buduru, Rajesh Sharma
STAA-Net: A Sparse and Transferable Adversarial Attack for Speech Emotion Recognition
Yi Chang, Zhao Ren, Zixing Zhang, Xin Jing, Kun Qian, Xi Shao, Bin Hu, Tanja Schultz, Björn W. Schuller
Revealing Emotional Clusters in Speaker Embeddings: A Contrastive Learning Strategy for Speech Emotion Recognition
Ismail Rasim Ulgen, Zongyang Du, Carlos Busso, Berrak Sisman
Speech Swin-Transformer: Exploring a Hierarchical Transformer with Shifted Windows for Speech Emotion Recognition
Yong Wang, Cheng Lu, Hailun Lian, Yan Zhao, Björn Schuller, Yuan Zong, Wenming Zheng