Modal Attack

Modal attacks exploit vulnerabilities in machine learning models that process multiple data modalities (e.g., image and text, or LiDAR and electromagnetic signals). Current research focuses on developing and evaluating these attacks across model architectures such as vision-language models and those used in autonomous driving, often leveraging adversarial perturbations and backdoor injection to manipulate model outputs. Understanding and mitigating these attacks is crucial for the safety and reliability of increasingly prevalent multimodal AI systems in applications ranging from robotics to person re-identification. The field is actively exploring both the effectiveness of different attack strategies and the development of robust defenses.
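
To make the adversarial-perturbation idea concrete, the sketch below applies a single-step, FGSM-style perturbation to only the image modality of a multimodal classifier. The `ToyMultimodalClassifier` and `fgsm_image_attack` names are illustrative stand-ins, not from any specific paper; in practice such attacks target pretrained vision-language or sensor-fusion models rather than a toy network.

```python
# Minimal sketch: FGSM-style adversarial perturbation on the image modality
# of a multimodal (image + text) classifier. The model is a toy stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMultimodalClassifier(nn.Module):
    """Minimal image+text fusion model used only to illustrate the attack."""

    def __init__(self, num_classes: int = 10, vocab_size: int = 1000):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.text_encoder = nn.EmbeddingBag(vocab_size, 16)
        self.head = nn.Linear(32, num_classes)

    def forward(self, image: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_encoder(image), self.text_encoder(text)], dim=-1)
        return self.head(fused)


def fgsm_image_attack(model, image, text, label, epsilon=8 / 255):
    """Perturb only the image modality to increase the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image, text), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to valid pixels.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = ToyMultimodalClassifier()
    image = torch.rand(1, 3, 32, 32)        # clean image in [0, 1]
    text = torch.randint(0, 1000, (1, 5))   # token ids for the text modality
    label = torch.tensor([3])
    adv = fgsm_image_attack(model, image, text, label)
    print("max pixel change:", (adv - image).abs().max().item())
```

Cross-modal variants follow the same pattern, differing mainly in which modality is perturbed and in the objective (e.g., maximizing misalignment between image and text embeddings instead of classification loss); backdoor injection instead modifies the training data or weights so a trigger in one modality controls the output.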

Papers