Diffusion Attack
Diffusion attacks exploit vulnerabilities in diffusion models, a class of generative AI models, by crafting adversarial examples—inputs subtly altered to corrupt a model's outputs or behavior. Current research focuses on developing effective attack strategies across modalities (images, videos, policies) and on using diffusion models themselves to improve the transferability and naturalness of attacks, targeting both digital and physical settings. This work is crucial for understanding and mitigating the security risks posed by increasingly prevalent diffusion models in fields ranging from computer vision and robotics to virtual reality and watermarking.
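To make the core idea of an adversarial example concrete, here is a minimal sketch of a gradient-sign attack (FGSM-style) against a toy logistic-regression model. This is an illustration of the general principle—perturbing an input along the gradient of the loss—not the method of any paper listed below; attacks on diffusion models apply the same gradient-based idea to far larger models. All names and values here are illustrative assumptions.

```python
import numpy as np

# Toy "model": logistic regression with fixed weights (illustrative only).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability the toy model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, eps):
    """Fast Gradient Sign Method: nudge x by eps in the direction
    that most increases the loss w.r.t. the model's own label."""
    p = predict(x)
    y = 1.0 if p >= 0.5 else 0.0      # label the model currently assigns
    grad = (p - y) * w                # d(cross-entropy loss)/dx for logistic regression
    return x + eps * np.sign(grad)    # small, uniformly bounded perturbation

x = np.array([0.5, 0.2, -0.1])
x_adv = fgsm(x, eps=0.6)
print(predict(x), predict(x_adv))     # the perturbation flips the predicted class
```

The perturbation is bounded per-coordinate by `eps`, so the adversarial input stays close to the original—the same "subtle alteration" property described above.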