Adversarial Visual Instruction
Adversarial visual instruction research focuses on evaluating and improving the robustness of large vision-language models (LVLMs) against malicious or unintended inputs, encompassing both image and text manipulations. Current efforts concentrate on developing methods to detect and defend against these attacks, including techniques like "spotlighting" to improve input source identification and algorithms that generate adversarial prompts to assess model vulnerabilities. This work is crucial for ensuring the safety and reliability of LVLMs in real-world applications, highlighting the need for more robust and secure AI systems.
Papers
May 21, 2024
March 20, 2024
March 14, 2024
December 7, 2023