Multimodal Dual Emotion
Multimodal dual emotion research focuses on understanding and modeling human emotions by integrating information from multiple sources such as facial expressions, speech, and body language. Current work emphasizes robust models, typically built on deep learning architectures such as recurrent neural networks and transformers, that analyze these diverse modalities and extract meaningful emotional representations while addressing challenges such as cross-cultural understanding and noisy or incomplete data. The field is central to advancing affective computing, with applications ranging from improved human-computer interaction to more nuanced analysis of social dynamics and mental health.
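As a rough illustration of the transformer-based multimodal fusion described above, the sketch below projects per-modality features (text, audio, visual) into a shared space, fuses them with a small transformer encoder, and classifies the result into emotion categories. It is not drawn from any specific paper in this collection; all dimensions, names, and the seven-class output are illustrative assumptions.

```python
# Minimal sketch of late multimodal fusion for emotion recognition.
# Feature sizes, hidden width, and the number of emotion classes are hypothetical.
import torch
import torch.nn as nn

TEXT_DIM, AUDIO_DIM, VISUAL_DIM = 768, 128, 512   # assumed per-modality feature sizes
HIDDEN, NUM_EMOTIONS = 256, 7                     # assumed model width and class count


class MultimodalEmotionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.proj_text = nn.Linear(TEXT_DIM, HIDDEN)
        self.proj_audio = nn.Linear(AUDIO_DIM, HIDDEN)
        self.proj_visual = nn.Linear(VISUAL_DIM, HIDDEN)
        # A small transformer encoder attends across the three modality tokens.
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(HIDDEN, NUM_EMOTIONS)

    def forward(self, text_feat, audio_feat, visual_feat):
        # Stack one token per modality: (batch, 3, HIDDEN).
        tokens = torch.stack(
            [self.proj_text(text_feat),
             self.proj_audio(audio_feat),
             self.proj_visual(visual_feat)],
            dim=1,
        )
        fused = self.fusion(tokens)   # cross-modal attention over modality tokens
        pooled = fused.mean(dim=1)    # average-pool the fused tokens
        return self.head(pooled)      # emotion logits


if __name__ == "__main__":
    model = MultimodalEmotionClassifier()
    logits = model(torch.randn(2, TEXT_DIM),
                   torch.randn(2, AUDIO_DIM),
                   torch.randn(2, VISUAL_DIM))
    print(logits.shape)  # torch.Size([2, 7])
```

In practice, the per-modality features would come from pretrained encoders (e.g., a language model for text, a speech encoder for audio, a vision backbone for faces), and the fusion stage is where most published architectures differ.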