Emotion Recognition
Emotion recognition research aims to automatically identify and interpret human emotions from sources such as facial expressions, speech, physiological signals (EEG, fNIRS), and body language. Current work focuses on improving accuracy and robustness across diverse modalities and datasets, employing techniques such as multimodal fusion, contrastive learning, and large language models (LLMs) for richer feature extraction and classification. The field matters for its potential applications in healthcare (e.g., mental health diagnostics), human-computer interaction, and virtual reality, where it could enable more personalized experiences and improved well-being.
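To make the techniques named above concrete, the short PyTorch sketch below illustrates one common pattern: a late-fusion classifier that combines speech and face embeddings, trained with a cross-modal contrastive term alongside the emotion classification loss. All dimensions, modality choices, and names (LateFusionClassifier, cross_modal_contrastive_loss) are illustrative assumptions, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LateFusionClassifier(nn.Module):
    """Minimal late-fusion emotion classifier: each modality has its own
    projection, and the projected embeddings are concatenated before the
    classification head. Dimensions are placeholders for pretrained encoders."""

    def __init__(self, speech_dim=768, face_dim=512, hidden_dim=256, num_emotions=7):
        super().__init__()
        self.speech_proj = nn.Sequential(nn.Linear(speech_dim, hidden_dim), nn.ReLU())
        self.face_proj = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, speech_feats, face_feats):
        s = self.speech_proj(speech_feats)   # (batch, hidden_dim)
        f = self.face_proj(face_feats)       # (batch, hidden_dim)
        fused = torch.cat([s, f], dim=-1)    # simple concatenation fusion
        return self.classifier(fused), s, f


def cross_modal_contrastive_loss(s, f, temperature=0.1):
    """NT-Xent-style loss pulling paired speech/face embeddings together
    and pushing mismatched pairs within the batch apart."""
    s = F.normalize(s, dim=-1)
    f = F.normalize(f, dim=-1)
    logits = s @ f.t() / temperature         # (batch, batch) similarity matrix
    targets = torch.arange(s.size(0))        # diagonal entries are the true pairs
    return F.cross_entropy(logits, targets)


# Toy usage with random tensors standing in for pretrained encoder outputs.
model = LateFusionClassifier()
speech = torch.randn(8, 768)
face = torch.randn(8, 512)
labels = torch.randint(0, 7, (8,))
logits, s_emb, f_emb = model(speech, face)
loss = F.cross_entropy(logits, labels) + cross_modal_contrastive_loss(s_emb, f_emb)
loss.backward()
```

In practice the random tensors would be replaced by features from modality-specific pretrained encoders (e.g., a speech self-supervised model and a face network), and the contrastive weight would be tuned; this sketch only shows how fusion and a contrastive objective fit together.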
Papers
Bridging Modalities: Knowledge Distillation and Masked Training for Translating Multi-Modal Emotion Recognition to Uni-Modal, Speech-Only Emotion Recognition
Muhammad Muaz, Nathan Paull, Jahnavi Malagavalli
Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition
Shadi Sartipi, Mujdat Cetin
LineConGraphs: Line Conversation Graphs for Effective Emotion Recognition using Graph Neural Networks
Gokul S Krishnan, Sarala Padi, Craig S. Greenberg, Balaraman Ravindran, Dinesh Manocha, Ram D. Sriram
Multimodal Speech Emotion Recognition Using Modality-specific Self-Supervised Frameworks
Rutherford Agbeshi Patamia, Paulo E. Santos, Kingsley Nketia Acheampong, Favour Ekong, Kwabena Sarpong, She Kun