Visual Argument

Visual argumentation research studies how images persuade, a task that requires AI systems to selectively interpret visual information and relate it to a broader argumentative context. Current work employs large language models (LLMs) and vision-language models (VLMs) to analyze images and their associated text, often on benchmark datasets designed to test whether these models can identify the relevant visual cues in an image and deduce the conclusion a complex visual argument is making. This line of work matters because it addresses the limitations of current AI in understanding nuanced visual communication, with implications ranging from accessibility tools for visually impaired users to improved analysis of persuasive media.
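
In practice, the benchmark setup described above often reduces to prompting a VLM with an image and a question about the cues it contains and the conclusion they support. Below is a minimal sketch of such a probe, assuming the Hugging Face transformers library and the open llava-hf/llava-1.5-7b-hf checkpoint; the image path ("ad.jpg") and prompt wording are illustrative placeholders, not taken from any particular benchmark.

```python
# Minimal sketch: probe a VLM for the persuasive cues and conclusion of an
# image argument. Assumes torch, Pillow, and transformers are installed and
# the llava-hf/llava-1.5-7b-hf checkpoint is available locally or via the Hub.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("ad.jpg")  # hypothetical persuasive image (e.g., a PSA)
prompt = (
    "USER: <image>\n"
    "List the visual cues in this image that carry persuasive weight, "
    "then state the conclusion the image argues for. ASSISTANT:"
)

# Encode the image-text pair, cast pixel values to the model's dtype,
# and decode the model's free-form answer.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Evaluating a benchmark of this kind then amounts to running such a probe over each image with its gold annotation and scoring the decoded answer against the annotated cues and conclusion.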

Papers