Multimodal Meme
Multimodal meme research focuses on automatically understanding the meaning conveyed by memes, which combine images and text in complex, often implicit ways. Current work emphasizes building robust machine learning models, often using contrastive learning and multimodal neural networks, to classify memes by attributes such as sentiment, hatefulness, and other forms of social abuse. This effort is driven by the need to curb the spread of harmful content online and to improve the safety of large multimodal models. Large annotated datasets and improved model architectures are key to advancing the field and to informing the design of safer online environments.
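To make the typical modeling approach concrete, below is a minimal sketch of a multimodal meme classifier built on a contrastively pretrained vision-language backbone (CLIP via Hugging Face transformers). The concatenation fusion head, label set, and hyperparameters are illustrative assumptions, not taken from any specific paper in this area.

```python
# Hedged sketch: frozen CLIP encoders + a small trainable fusion head.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor


class MemeClassifier(nn.Module):
    """Fuses CLIP image and text embeddings and classifies the meme."""

    def __init__(self, num_labels: int = 2,
                 clip_name: str = "openai/clip-vit-base-patch32"):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(clip_name)
        for p in self.clip.parameters():
            p.requires_grad = False  # keep the pretrained encoders frozen
        dim = self.clip.config.projection_dim  # 512 for the base model
        self.head = nn.Sequential(
            nn.Linear(2 * dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_labels),  # e.g. benign vs. hateful (assumed labels)
        )

    def forward(self, pixel_values, input_ids, attention_mask):
        img = self.clip.get_image_features(pixel_values=pixel_values)
        txt = self.clip.get_text_features(input_ids=input_ids,
                                          attention_mask=attention_mask)
        fused = torch.cat([img, txt], dim=-1)  # simple concatenation fusion
        return self.head(fused)


# Usage: the processor preprocesses the meme image and tokenizes its caption.
# processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
# inputs = processor(text=["meme caption"], images=image,
#                    return_tensors="pt", padding=True)
# logits = MemeClassifier()(inputs["pixel_values"], inputs["input_ids"],
#                           inputs["attention_mask"])
```

Freezing the contrastively pretrained encoders and training only a small head is one common, compute-light baseline; many published systems instead fine-tune the full model or use richer cross-attention fusion.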