Distortion-Efficient Adversarial Attacks
Distortion-efficient adversarial attacks aim to craft minimally perceptible adversarial examples that still fool machine learning models, keeping perturbations both small in magnitude and sparse (confined to few input elements). Current research quantifies distortion under various norms (e.g., L0, L2, and general Lp) and employs techniques such as adaptive diffusion models and novel optimization schemes to generate these subtle attacks. This work is crucial for improving the robustness of machine learning models in safety-critical applications, such as autonomous driving and medical image analysis, where imperceptible adversarial examples pose significant risks.
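To make the norm-budgeted idea concrete, below is a minimal sketch of one common pattern: gradient ascent on the loss followed by projection of the perturbation onto an L2 ball. The function name, hyperparameters (eps, alpha, steps), and the assumption of a PyTorch image classifier are illustrative choices for this sketch, not drawn from any particular paper listed here.

# Minimal sketch (PyTorch) of a distortion-budgeted attack: gradient ascent
# on the loss, then projection onto an L2 ball of radius eps. All names here
# (l2_pgd, eps, alpha, steps) are illustrative assumptions.
import torch
import torch.nn.functional as F

def l2_pgd(model, x, y, eps=1.0, alpha=0.2, steps=40):
    """Return adversarial images whose perturbation satisfies ||delta||_2 <= eps.

    Assumes x is a batch of images shaped (N, C, H, W) with pixels in [0, 1].
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)

        # Steepest-ascent step under the L2 norm: move along the unit gradient.
        grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
        step = alpha * grad / grad_norm.view(-1, 1, 1, 1)

        # Update, then project delta back onto the L2 ball of radius eps.
        new_delta = delta + step
        delta_norm = new_delta.flatten(1).norm(dim=1).clamp_min(1e-12)
        scale = (eps / delta_norm).clamp(max=1.0).view(-1, 1, 1, 1)

        # Clamping the perturbed image to [0, 1] only shrinks per-pixel
        # deviations, so the L2 budget is preserved.
        new_delta = (x + new_delta * scale).clamp(0.0, 1.0) - x
        delta = new_delta.detach().requires_grad_(True)

    return (x + delta).detach()

Shrinking eps trades attack success rate for imperceptibility, which is exactly the distortion-efficiency trade-off described above; sparsity-oriented (L0) variants instead restrict how many input elements may change at all.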
Papers
July 3, 2024
November 29, 2022
June 21, 2022
April 29, 2022