Multimodal Sentiment
Multimodal sentiment analysis aims to understand human emotions by integrating information from various sources like text, images, and audio, surpassing the limitations of text-only sentiment analysis. Current research focuses on improving the fusion of these modalities, exploring techniques like transformers, contrastive learning, and Bayesian methods to address challenges such as weak inter-modality correlations, missing data, and dataset biases. This field is significant for its potential to enhance applications ranging from social media monitoring and advertising effectiveness to mental health assessment and human-computer interaction, offering more nuanced and accurate emotion detection than unimodal approaches.
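The fusion idea above can be made concrete with a minimal sketch. The example below shows the simplest baseline, weighted late fusion, where each modality produces its own sentiment score and the scores are averaged with per-modality weights; the scores, weights, and function name are illustrative assumptions, not taken from any specific paper in this collection.

```python
# Hypothetical per-modality sentiment scores in [-1, 1]
# (negative = negative sentiment, positive = positive sentiment).
modality_scores = {
    "text": 0.8,    # e.g. from a text classifier
    "image": 0.2,   # e.g. from a facial-expression model
    "audio": -0.1,  # e.g. from a prosody model
}

def late_fusion(scores, weights):
    """Weighted average of unimodal scores: one simple fusion baseline."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Illustrative weights: trust text most, audio least (assumed values).
weights = {"text": 0.5, "image": 0.3, "audio": 0.2}
fused = late_fusion(modality_scores, weights)  # 0.8*0.5 + 0.2*0.3 - 0.1*0.2 = 0.44
```

Transformer- or contrastive-learning-based approaches mentioned above replace this fixed weighting with learned, context-dependent interactions between modality features, which is what lets them cope with weak inter-modality correlations and missing inputs.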