Emotion Intensity
Emotion intensity research focuses on modeling the nuanced strength of emotions expressed across modalities such as speech, text, and facial expressions. Current work applies deep learning models, notably transformers and recurrent neural networks, often combining audio-visual multimodal fusion with annotation techniques such as best-worst scaling to obtain more reliable labels and better intensity predictions. This line of work advances affective computing, enables more natural and expressive human-computer interaction, and supports applications such as mental health assessment and speech synthesis.
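To make the annotation step concrete, here is a minimal sketch of the standard best-worst scaling counting procedure, in which an item's intensity score is the fraction of annotations choosing it as most intense minus the fraction choosing it as least intense. The function name and input format are illustrative assumptions, not taken from any listed paper.

```python
# Minimal sketch of best-worst scaling (BWS) score aggregation.
# Input format is an assumption for illustration: each annotation is a
# (items, best, worst) triple, where `items` is the tuple shown to the
# annotator and `best`/`worst` are the items they picked.
from collections import Counter

def bws_scores(annotations):
    """Return a score in [-1, 1] per item:
    (#times picked best - #times picked worst) / #times shown."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in annotations:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}
```

Similarly, a common architectural pattern in work like the papers below is to project per-modality features into a shared space and let a transformer encoder attend jointly over audio and visual tokens before regressing an intensity score. The sketch below assumes precomputed frame-level features; all names, dimensions, and layer counts are illustrative, and the module is not the implementation of any specific paper.

```python
# Minimal sketch of late audio-visual fusion with a transformer encoder
# for emotion intensity regression (illustrative, not a paper's method).
import torch
import torch.nn as nn

class AVIntensityRegressor(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=512,
                 d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.visual_proj = nn.Linear(visual_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Regress a non-negative intensity value per clip.
        self.head = nn.Sequential(nn.Linear(d_model, 1), nn.Softplus())

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (batch, T_a, audio_dim); visual_feats: (batch, T_v, visual_dim)
        tokens = torch.cat(
            [self.audio_proj(audio_feats), self.visual_proj(visual_feats)], dim=1)
        fused = self.encoder(tokens)          # joint attention over both modalities
        pooled = fused.mean(dim=1)            # temporal mean pooling
        return self.head(pooled).squeeze(-1)  # (batch,) intensity estimates

# Example: intensities = AVIntensityRegressor()(torch.randn(2, 50, 128),
#                                               torch.randn(2, 30, 512))
```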
Papers
Emotional Reaction Intensity Estimation Based on Multimodal Data
Shangfei Wang, Jiaqiang Wu, Feiyi Zheng, Xin Li, Xuewei Li, Suwen Wang, Yi Wu, Yanan Chang, Xiangyu Miao
Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers
Jia Li, Yin Chen, Xuesong Zhang, Jiantao Nie, Ziqiang Li, Yangchen Yu, Yan Zhang, Richang Hong, Meng Wang
Facial Affect Recognition based on Transformer Encoder and Audiovisual Fusion for the ABAW5 Challenge
Ziyang Zhang, Liuwei An, Zishun Cui, Ao Xu, Tengteng Dong, Yueqi Jiang, Jingyi Shi, Xin Liu, Xiao Sun, Meng Wang