Adversarial Style Perturbation
Adversarial style perturbation focuses on crafting subtle alterations to data, such as images or videos, that mislead machine learning models while appearing natural to humans. Unlike classic per-pixel noise attacks, style perturbations shift global attributes such as color, texture, or lighting statistics, which makes them harder to detect and to filter out. Current research emphasizes generating these perturbations with diffusion models and adversarial training, often incorporating strategies such as environmental matching or feature mixup to improve the realism and effectiveness of the attacks. This line of work is important for evaluating the robustness of machine learning systems and for developing more resilient models, with implications for applications ranging from autonomous-vehicle safety to image recognition.
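To make the idea concrete, below is a minimal PyTorch sketch of one simple way a style-level attack can be mounted: instead of adding per-pixel noise, it optimizes a bounded per-channel color transform (a scale and a shift) so that the classifier's loss increases. The function name, hyperparameters, and the affine-color formulation are illustrative assumptions, not the method of any particular paper; published attacks typically operate on deeper feature statistics or use generative models.

```python
import torch
import torch.nn.functional as F

def adversarial_style_perturbation(model, x, y, steps=20, lr=0.05, eps=0.2):
    """Optimize a per-channel affine "style" transform of x that raises
    the model's loss while leaving spatial content untouched.

    Hypothetical sketch: gamma/beta act as a crude stand-in for style
    statistics (per-channel scale and shift of the image).
    """
    c = x.size(1)
    gamma = torch.ones(1, c, 1, 1, requires_grad=True)   # per-channel scale
    beta = torch.zeros(1, c, 1, 1, requires_grad=True)   # per-channel shift
    opt = torch.optim.Adam([gamma, beta], lr=lr)

    for _ in range(steps):
        x_adv = (gamma * x + beta).clamp(0, 1)
        # Negate the task loss so gradient descent performs gradient ascent.
        loss = -F.cross_entropy(model(x_adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep the style shift subtle
            gamma.clamp_(1 - eps, 1 + eps)
            beta.clamp_(-eps, eps)

    return (gamma * x + beta).clamp(0, 1).detach()

if __name__ == "__main__":
    # Toy classifier and random data, just to show the call signature.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(8, 10),
    ).eval()
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    x_adv = adversarial_style_perturbation(model, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())
```

Bounding gamma and beta plays the role that the epsilon budget plays in pixel-space attacks: it keeps the color shift small enough that the perturbed image still looks natural to a human observer.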