Diffusion Attack

Diffusion attacks exploit vulnerabilities in diffusion models, a class of generative AI models, to craft adversarial examples: inputs subtly perturbed to mislead a model's predictions or behavior. Current research focuses on developing effective attack strategies across modalities (images, videos, control policies) and on using diffusion models themselves to improve the transferability and naturalness of attacks, in both digital and physical settings. This work is crucial for understanding and mitigating the security risks posed by increasingly prevalent diffusion models in fields ranging from computer vision and robotics to virtual reality and watermarking.
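To make the notion of a "subtly altered input" concrete, the sketch below runs projected gradient descent (PGD), the generic adversarial-example recipe, against a toy logistic-regression classifier. This is an illustration of the attack idea only, not a diffusion-specific method from any paper; the model, weights, and hyperparameters are all invented for the example.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=20):
    """Maximize cross-entropy loss of a logistic-regression model
    by perturbing x within an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))               # sigmoid probability
        grad = (p - y) * w                         # d(loss)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)      # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the ball
    return x_adv

# Toy classifier and a correctly classified input (all values illustrative)
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = pgd_attack(x, y, w, b)
```

The perturbation stays bounded (`max|x_adv - x| <= eps`) while the model's confidence in the true class drops, which is the "subtle alteration, large behavioral effect" trade-off that diffusion-based attacks aim to push further by generating perturbations that also look natural.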

Papers