Affective Computing
Affective computing aims to enable computers to recognize, interpret, and respond to human emotions, with the primary goals of improving human-computer interaction and supporting applications in healthcare and related fields. Current research relies heavily on multimodal data (facial expressions, speech, physiological signals) and machine learning models such as transformers, large language models (LLMs), and recurrent neural networks (RNNs), often combined with techniques like multimodal fusion, personalized clustering, and curriculum learning to improve accuracy and generalizability. The field is significant for its potential to advance mental health diagnostics, personalized experiences, and human-robot interaction, driving progress in both the theoretical understanding of emotion and practical applications across domains.
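One of the techniques mentioned above, multimodal fusion, can be illustrated with a minimal late-fusion sketch: each modality produces its own emotion probability distribution, and the distributions are combined by a weighted average. The modality names, weights, and probability values below are illustrative assumptions, not drawn from any particular paper.

```python
import numpy as np

# Hypothetical per-modality emotion probabilities over the classes
# (angry, happy, neutral, sad) -- illustrative values only.
face   = np.array([0.10, 0.60, 0.20, 0.10])
speech = np.array([0.05, 0.50, 0.30, 0.15])
physio = np.array([0.20, 0.30, 0.40, 0.10])

def late_fusion(preds, weights):
    """Combine per-modality probability vectors by weighted average."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # normalize the weights
    fused = sum(w * p for w, p in zip(weights, preds))
    return fused / fused.sum()               # keep a valid distribution

labels = ["angry", "happy", "neutral", "sad"]
fused = late_fusion([face, speech, physio], weights=[0.4, 0.4, 0.2])
print(labels[int(np.argmax(fused))])         # fused prediction: "happy"
```

In practice, fusion can also happen earlier (concatenating features before classification) or via learned attention over modalities; the weighted average shown here is simply the most transparent variant.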
Papers
The Application of Affective Measures in Text-based Emotion Aware Recommender Systems
John Kalung Leung, Igor Griva, William G. Kennedy, Jason M. Kinser, Sohyun Park, Seo Young Lee
Integrating Psychometrics and Computing Perspectives on Bias and Fairness in Affective Computing: A Case Study of Automated Video Interviews
Brandon M. Booth, Louis Hickman, Shree Krishna Subburaj, Louis Tay, Sang Eun Woo, Sidney K. D'Mello