Multimodal Attack
Multimodal attacks target the vulnerabilities of artificial intelligence systems that process multiple data types (e.g., images and text), aiming to manipulate their outputs through carefully crafted adversarial perturbations in one or more modalities. Current research focuses on developing effective attack strategies, often employing gradient-based optimization or evolutionary algorithms, and exploring the robustness of various model architectures, including vision-language pre-trained models and diffusion models. Understanding and mitigating these attacks is crucial for ensuring the reliability and security of increasingly prevalent multimodal AI systems across diverse applications, from image classification to text generation.
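To make the gradient-based strategy mentioned above concrete, the sketch below shows a PGD-style perturbation applied to the image modality of a toy vision-language matching model. Everything here is an illustrative assumption rather than the method of any listed paper: the ToyVLM model, the pgd_image_attack function, and the eps/alpha/steps values are hypothetical placeholders standing in for a real pre-trained model and tuned attack budget.

```python
# Hedged sketch: gradient-based (PGD-style) adversarial perturbation of the image
# input of a toy vision-language model. Model, loss, and hyperparameters are
# illustrative assumptions, not drawn from any specific paper listed below.
import torch
import torch.nn as nn


class ToyVLM(nn.Module):
    """Minimal stand-in for a vision-language model: scores image-text similarity."""

    def __init__(self, embed_dim=64):
        super().__init__()
        self.image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, embed_dim))
        self.text_encoder = nn.Embedding(1000, embed_dim)  # token ids -> embeddings

    def forward(self, image, text_ids):
        img_emb = nn.functional.normalize(self.image_encoder(image), dim=-1)
        txt_emb = nn.functional.normalize(self.text_encoder(text_ids).mean(dim=1), dim=-1)
        return (img_emb * txt_emb).sum(dim=-1)  # cosine-similarity matching score


def pgd_image_attack(model, image, text_ids, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD on the image modality: push the image away from its caption within an eps-ball."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        score = model(adv, text_ids).sum()
        grad = torch.autograd.grad(score, adv)[0]
        # Descend on the matching score so the perturbed image no longer matches the text.
        adv = adv.detach() - alpha * grad.sign()
        adv = image + torch.clamp(adv - image, -eps, eps)  # project into the eps-ball
        adv = torch.clamp(adv, 0.0, 1.0)                   # keep a valid image
    return adv.detach()


if __name__ == "__main__":
    model = ToyVLM()
    image = torch.rand(1, 3, 32, 32)           # a single RGB image in [0, 1]
    text_ids = torch.randint(0, 1000, (1, 8))  # a tokenized caption (8 tokens)
    adv_image = pgd_image_attack(model, image, text_ids)
    print("clean score:", model(image, text_ids).item())
    print("adversarial score:", model(adv_image, text_ids).item())
```

The same loop structure carries over to real vision-language models: only the model, the loss (e.g., contrastive or generation loss), and the perturbation budget change, and evolutionary variants replace the gradient step with black-box candidate sampling.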
Papers
October 17, 2024
September 26, 2024
August 24, 2024
July 31, 2024
April 30, 2024
April 13, 2024
March 31, 2024
March 16, 2024
November 29, 2023
May 16, 2023
May 7, 2023