Emotion Intensity
Emotion intensity research focuses on modeling the nuanced strength of emotions expressed across modalities such as speech, text, and facial expressions. Current work draws on deep learning models such as transformers and recurrent neural networks, often combined with audio-visual multimodal fusion and annotation techniques such as best-worst scaling to improve intensity labeling and prediction. Advances here matter for affective computing, enabling more realistic and expressive human-computer interaction and supporting applications such as mental health assessment and expressive speech synthesis.
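As a rough illustration of the audio-visual late-fusion approach mentioned above, the sketch below shows a minimal PyTorch regressor that encodes precomputed audio and visual features separately, concatenates them, and predicts a single continuous intensity score. All feature dimensions, layer sizes, and the output range are illustrative placeholders, not values taken from any of the listed papers.

```python
import torch
import torch.nn as nn

class LateFusionIntensityRegressor(nn.Module):
    """Toy late-fusion model: separate audio and visual encoders, fused by
    concatenation before a regression head that outputs an emotion-intensity
    score in [0, 1]. Dimensions are arbitrary placeholders."""

    def __init__(self, audio_dim=128, visual_dim=512, hidden_dim=256):
        super().__init__()
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.visual_encoder = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # constrain the predicted intensity to [0, 1]
        )

    def forward(self, audio_feats, visual_feats):
        # Encode each modality independently, then fuse late by concatenation.
        fused = torch.cat(
            [self.audio_encoder(audio_feats), self.visual_encoder(visual_feats)],
            dim=-1,
        )
        return self.head(fused).squeeze(-1)

# Example: a batch of 4 clips with precomputed per-clip features.
model = LateFusionIntensityRegressor()
audio = torch.randn(4, 128)
visual = torch.randn(4, 512)
print(model(audio, visual))  # four intensity scores in [0, 1]
```

In practice the per-modality encoders would be pretrained feature extractors (e.g., audio and face embeddings), with only the fusion and regression layers trained on intensity labels; the concatenation step here stands in for whichever fusion strategy a given paper adopts.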
Papers
Unimodal Multi-Task Fusion for Emotional Mimicry Intensity Prediction
Tobias Hallmen, Fabian Deuser, Norbert Oswald, Elisabeth André
Efficient Feature Extraction and Late Fusion Strategy for Audiovisual Emotional Mimicry Intensity Estimation
Jun Yu, Wangyuan Zhu, Jichao Zhu
HSEmotion Team at the 6th ABAW Competition: Facial Expressions, Valence-Arousal and Emotion Intensity Prediction
Andrey V. Savchenko
Computer Vision Estimation of Emotion Reaction Intensity in the Wild
Yang Qian, Ali Kargarandehkordi, Onur Cezmi Mutlu, Saimourya Surabhi, Mohammadmahdi Honarmand, Dennis Paul Wall, Peter Washington
How People Respond to the COVID-19 Pandemic on Twitter: A Comparative Analysis of Emotional Expressions from US and India
Brandon Siyuan Loh, Raj Kumar Gupta, Ajay Vishwanath, Andrew Ortony, Yinping Yang