Target Emotion
Target emotion research focuses on accurately identifying and understanding the emotions expressed across modalities such as speech, text, and video, with the aim of improving human-computer interaction and related applications. Current work relies heavily on large language models (LLMs) and deep learning architectures, including transformers and Siamese networks, often combining multimodal fusion with contrastive learning to improve accuracy and robustness. The field is significant for advancing affective computing: it enables more empathetic, context-aware AI systems with applications in healthcare, education, and social robotics, and it deepens the understanding of human emotion itself.
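Since the summary names Siamese networks and contrastive learning as recurring tools, the following minimal PyTorch sketch illustrates the basic pattern: a shared-weight encoder maps two inputs into a common embedding space, and a pairwise contrastive loss pulls same-emotion pairs together while pushing different-emotion pairs apart. All names, dimensions, and the toy data are illustrative assumptions, not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmotionEncoder(nn.Module):
    """Shared encoder: maps a pooled feature vector (e.g., from text or
    audio) to an L2-normalized embedding for contrastive comparison."""

    def __init__(self, input_dim: int = 768, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


def contrastive_loss(z1, z2, same_emotion, margin: float = 0.5):
    """Classic pairwise contrastive loss: same-emotion pairs are pulled
    together; different-emotion pairs are pushed at least `margin` apart."""
    dist = (z1 - z2).norm(dim=-1)
    pos = same_emotion * dist.pow(2)
    neg = (1 - same_emotion) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()


if __name__ == "__main__":
    encoder = EmotionEncoder()  # one shared tower = Siamese setup
    # Toy batch: stand-ins for pooled utterance features; the binary label
    # marks whether each pair expresses the same emotion.
    x1, x2 = torch.randn(8, 768), torch.randn(8, 768)
    same_emotion = torch.randint(0, 2, (8,)).float()
    loss = contrastive_loss(encoder(x1), encoder(x2), same_emotion)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

In a multimodal variant of this setup, the two inputs would typically come from different modalities (e.g., speech and transcript features), with one encoder per modality and the same loss aligning their embedding spaces.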
Papers
Exploiting Emotion-Semantic Correlations for Empathetic Response Generation
Zhou Yang, Zhaochun Ren, Yufeng Wang, Xiaofei Zhu, Zhihao Chen, Tiecheng Cai, Yunbing Wu, Yisong Su, Sibo Ju, Xiangwen Liao
Curriculum Learning Meets Directed Acyclic Graph for Multimodal Emotion Recognition
Cam-Van Thi Nguyen, Cao-Bach Nguyen, Quang-Thuy Ha, Duc-Trong Le