Affective Computing
Affective computing aims to enable computers to recognize, interpret, and respond to human emotions, with the primary goals of improving human-computer interaction and supporting applications in healthcare and other fields. Current research relies heavily on multimodal data (facial expressions, speech, and physiological signals) and advanced machine learning models, including transformers, large language models (LLMs), and recurrent neural networks (RNNs), and often incorporates techniques such as multimodal fusion, personalized clustering, and curriculum learning to increase accuracy and generalizability. The field matters for its potential to improve mental health diagnostics, personalized experiences, and human-robot interaction, driving advances in both the theoretical understanding of emotion and practical applications across domains.
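To make the multimodal-fusion idea mentioned above concrete, here is a minimal late-fusion sketch: each modality (face, speech, physiology) gets its own encoder, the encoded features are concatenated, and a shared head predicts an emotion class. This is an illustrative toy model, not the method of any listed paper; all feature dimensions, the hidden size, and the seven-class emotion set are assumptions for the example.

```python
import torch
import torch.nn as nn

class LateFusionEmotionNet(nn.Module):
    """Toy late-fusion emotion classifier.

    Each modality is encoded separately, the encodings are
    concatenated, and a linear head produces class logits.
    All dimensions below are illustrative assumptions.
    """

    def __init__(self, face_dim=512, speech_dim=128, physio_dim=32,
                 hidden=64, num_emotions=7):
        super().__init__()
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.speech_enc = nn.Sequential(nn.Linear(speech_dim, hidden), nn.ReLU())
        self.physio_enc = nn.Sequential(nn.Linear(physio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(3 * hidden, num_emotions)

    def forward(self, face, speech, physio):
        # Late fusion: encode each modality, then concatenate.
        fused = torch.cat([self.face_enc(face),
                           self.speech_enc(speech),
                           self.physio_enc(physio)], dim=-1)
        return self.head(fused)  # unnormalized emotion logits

# Usage with random stand-in features for a batch of 4 samples.
model = LateFusionEmotionNet()
logits = model(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 7])
```

Late (feature-level) fusion like this is only one design point; much of the work surveyed here instead fuses modalities earlier, with cross-modal attention in a transformer, which lets one modality condition how another is encoded.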
Papers
Affective Computing Has Changed: The Foundation Model Disruption
Björn Schuller, Adria Mallol-Ragolta, Alejandro Peña Almansa, Iosif Tsangko, Mostafa M. Amin, Anastasia Semertzidou, Lukas Christ, Shahin Amiriparian
Towards Unified Facial Action Unit Recognition Framework by Large Language Models
Guohong Hu, Xing Lan, Hanyu Jiang, Jiayi Lyu, Jian Xue