MuSe-Humor Sub-Challenge
The MuSe-Humor Sub-Challenge focuses on developing robust multimodal models that automatically detect humor in audio-visual recordings, primarily leveraging datasets of spontaneous human interaction. Current research emphasizes hybrid multimodal fusion strategies that combine transformer networks, recurrent neural networks (such as GRUs and LSTMs), and attention mechanisms to integrate information across modalities (audio, visual, and, where available, text). Progress in this area has significant implications for human-computer interaction, enabling a more nuanced understanding of social cues in applications such as virtual assistants and automated content analysis.
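To make this kind of architecture concrete, the sketch below combines per-modality GRU encoders with cross-modal multi-head attention and a binary humor classifier. It is a minimal PyTorch illustration under stated assumptions, not any team's submitted system: the feature dimensions (88 for an eGeMAPS-style audio stream, 512 for a face-embedding stream) and all class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class MultimodalHumorDetector(nn.Module):
    """Hypothetical fusion sketch: per-modality GRU encoders whose outputs
    are combined with cross-modal attention, then scored for humor."""

    def __init__(self, audio_dim=88, visual_dim=512, hidden_dim=128, num_heads=4):
        super().__init__()
        # One GRU encoder per modality; input dims are illustrative
        # (e.g., 88 for eGeMAPS audio features, 512 for face embeddings).
        self.audio_gru = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        self.visual_gru = nn.GRU(visual_dim, hidden_dim, batch_first=True)
        # Cross-modal attention: the audio summary attends over visual time steps.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # logit for the binary humor label
        )

    def forward(self, audio_seq, visual_seq):
        # audio_seq: (batch, T_a, audio_dim); visual_seq: (batch, T_v, visual_dim)
        _, audio_h = self.audio_gru(audio_seq)       # final state: (1, batch, hidden)
        visual_out, _ = self.visual_gru(visual_seq)  # all steps: (batch, T_v, hidden)
        query = audio_h.transpose(0, 1)              # (batch, 1, hidden)
        attended, _ = self.cross_attn(query, visual_out, visual_out)
        fused = torch.cat([query.squeeze(1), attended.squeeze(1)], dim=-1)
        return self.classifier(fused)                # (batch, 1) humor logit

# Usage with random stand-in features for a batch of two clips.
model = MultimodalHumorDetector()
logits = model(torch.randn(2, 100, 88), torch.randn(2, 100, 512))
```

A competitive entry would typically add a text branch, pretrained feature extractors, and transformer layers; the cross-attention step shown here is where information from one modality conditions on the time series of another, which is the core idea behind the hybrid fusion strategies described above.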
Papers
19 papers, published between June 2022 and October 2024.