Visual Argument
Visual argumentation research studies how images persuade, which requires AI systems to selectively interpret visual information within a broader context. Current work applies large language models (LLMs) and vision-language models (VLMs) to images and their associated text, often using benchmark datasets designed to test whether these models can identify the relevant visual cues and draw the intended conclusions from complex visual arguments. This work is significant because it addresses the limitations of current AI in understanding nuanced visual communication, with implications ranging from accessibility tools for the visually impaired to improved analysis of persuasive media.
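As an illustration of the kind of pipeline this research evaluates, the sketch below queries a vision-language model with an image of a visual argument and asks it to name the persuasive cues and the conclusion they support. It is a minimal sketch, not a method from any specific paper: it assumes an OpenAI-compatible chat endpoint with a vision-capable model (`gpt-4o` here), and the image URL and prompt wording are illustrative placeholders.

```python
# Minimal sketch: probing a vision-language model on a visual argument.
# Assumes the `openai` Python client and an API key in the environment;
# the image URL, model name, and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

IMAGE_URL = "https://example.com/anti-smoking-ad.jpg"  # hypothetical image

PROMPT = (
    "This image makes a persuasive argument. "
    "1) List the visual cues that carry the persuasion. "
    "2) State the conclusion the image wants the viewer to draw."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url", "image_url": {"url": IMAGE_URL}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Benchmark-style evaluation would run this kind of query over a dataset of annotated visual arguments and score the model's cue lists and conclusions against human labels.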