Emotion Recognition
Emotion recognition research aims to automatically identify and interpret human emotions from sources such as facial expressions, speech, physiological signals (EEG, fNIRS), and body language. Current work focuses on improving accuracy and robustness across diverse modalities and datasets, employing techniques such as multimodal fusion, contrastive learning, and large language models (LLMs) for enhanced feature extraction and classification. The field is significant for its potential applications in healthcare (e.g., mental health diagnostics), human-computer interaction, and virtual reality, where it can enable more personalized experiences and support well-being.
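To make the multimodal fusion idea concrete, the sketch below shows a minimal late-fusion classifier: each modality is encoded separately, the embeddings are concatenated, and a shared head predicts emotion logits. This is an illustrative example only, not the method of any paper listed here; all module names, feature dimensions, and the seven-class emotion setup are assumptions, and a real system would replace the small MLP encoders with pretrained audio and visual backbones.

```python
# Minimal late-fusion sketch for audiovisual emotion recognition.
# All dimensions and names are illustrative; real pipelines would feed
# pre-extracted features (e.g., pooled spectrogram and face embeddings).
import torch
import torch.nn as nn


class LateFusionEmotionClassifier(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=256, hidden_dim=64, num_emotions=7):
        super().__init__()
        # Per-modality encoders project each input into a shared hidden space.
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.visual_encoder = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        # Fusion head operates on the concatenated modality embeddings.
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, audio_feats, visual_feats):
        a = self.audio_encoder(audio_feats)    # (batch, hidden_dim)
        v = self.visual_encoder(visual_feats)  # (batch, hidden_dim)
        fused = torch.cat([a, v], dim=-1)      # late fusion by concatenation
        return self.classifier(fused)          # (batch, num_emotions) logits


if __name__ == "__main__":
    model = LateFusionEmotionClassifier()
    audio = torch.randn(4, 128)   # stand-in for pooled audio features
    visual = torch.randn(4, 256)  # stand-in for pooled visual features
    logits = model(audio, visual)
    print(logits.shape)           # torch.Size([4, 7])
```

Concatenation-based late fusion is only one design point; papers in this area also explore attention-based cross-modal fusion and contrastive objectives that align modality embeddings before classification.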
Papers
A multimodal dynamical variational autoencoder for audiovisual speech representation learning
Samir Sadok, Simon Leglaive, Laurent Girin, Xavier Alameda-Pineda, Renaud Séguier
A vector quantized masked autoencoder for audiovisual speech emotion recognition
Samir Sadok, Simon Leglaive, Renaud Séguier
High-Level Context Representation for Emotion Recognition in Images
Willams de Lima Costa, Estefania Talavera Martinez, Lucas Silva Figueiredo, Veronica Teichrieb
Multi-scale Transformer-based Network for Emotion Recognition from Multi Physiological Signals
Tu Vu, Van Thong Huynh, Soo-Hyung Kim
Emotions Beyond Words: Non-Speech Audio Emotion Recognition With Edge Computing
Ibrahim Malik, Siddique Latif, Sanaullah Manzoor, Muhammad Usama, Junaid Qadir, Raja Jurdak