Target Emotion
Target emotion research focuses on accurately identifying and understanding emotions expressed across modalities such as speech, text, and video, with the aim of improving human-computer interaction and related applications. Current work relies heavily on large language models (LLMs) and deep learning architectures, including transformers and Siamese networks, often combined with multimodal fusion and contrastive learning to improve accuracy and robustness. The field is significant for advancing affective computing, enabling more empathetic and context-aware AI systems with applications in healthcare, education, and social robotics, as well as deepening our understanding of human emotion itself.
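To make the contrastive-learning idea concrete, below is a minimal, dependency-free sketch of the pairwise contrastive loss commonly used to train Siamese networks: pairs with the same emotion label are pulled together in embedding space, while pairs with different labels are pushed apart up to a margin. The function names and the toy embeddings are illustrative assumptions, not taken from any of the listed papers.

```python
import math

def euclidean(z1, z2):
    # Euclidean distance between two embedding vectors (plain lists of floats).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(z1, z2)))

def contrastive_loss(z1, z2, same_emotion, margin=1.0):
    """Pairwise contrastive loss for a Siamese pair.

    same_emotion=True:  penalize distance (pull similar pairs together).
    same_emotion=False: penalize closeness within `margin` (push apart).
    """
    d = euclidean(z1, z2)
    if same_emotion:
        return d ** 2
    return max(margin - d, 0.0) ** 2

# Toy usage with hypothetical 2-D emotion embeddings:
anchor = [0.9, 0.1]
positive = [0.8, 0.2]   # same emotion -> small loss
negative = [0.1, 0.9]   # different emotion -> zero loss once past margin
loss_pos = contrastive_loss(anchor, positive, same_emotion=True)
loss_neg = contrastive_loss(anchor, negative, same_emotion=False)
```

In practice the embeddings would come from a shared encoder (the two "twins" of the Siamese network) and the loss would be minimized over many pairs; this sketch only shows the objective itself.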
Papers
Insights on Modelling Physiological, Appraisal, and Affective Indicators of Stress using Audio Features
Andreas Triantafyllopoulos, Sandra Zänkert, Alice Baird, Julian Konzok, Brigitte M. Kudielka, Björn W. Schuller
Empathetic Conversational Systems: A Review of Current Advances, Gaps, and Opportunities
Aravind Sesagiri Raamkumar, Yinping Yang