Unrestricted Adversarial Attacks
Unlike traditional attacks constrained to small perturbations under a fixed norm, unrestricted adversarial attacks aim to create maliciously perturbed data that fools machine learning models while still appearing natural to humans. Current research focuses on generating these attacks with diffusion models and generative adversarial networks (GANs), often incorporating techniques such as latent space manipulation, semantic guidance from large language models, and recursive token merging to improve realism and transferability across different models. This research is crucial for evaluating the robustness of machine learning systems in real-world scenarios and for informing the development of more resilient models and defenses against sophisticated attacks.
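To make the latent-space-manipulation idea concrete, below is a minimal sketch in PyTorch of one common recipe: optimize a generator's latent code so the synthesized image fools a classifier while staying close to the original latent, which serves as a rough proxy for remaining on the natural image manifold. The names `generator`, `classifier`, and `z_init` are placeholders for a pretrained generative model, a target model, and a starting latent code; this is an illustrative sketch under those assumptions, not the procedure of any specific paper.

```python
import torch
import torch.nn.functional as F

def unrestricted_attack(generator, classifier, z_init, true_label,
                        steps=200, lr=0.05, proximity_weight=0.1):
    """Optimize a latent code so the generated image is misclassified
    while staying near the starting latent (a proxy for looking natural).

    Assumptions (placeholders, not from the source text):
      generator:  maps latent codes z -> images
      classifier: maps images -> class logits
      z_init:     starting latent code, shape (N, latent_dim)
      true_label: LongTensor of ground-truth class indices, shape (N,)
    """
    z = z_init.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        x = generator(z)                       # synthesize a candidate image
        logits = classifier(x)
        # Push the prediction away from the true class ...
        adv_loss = -F.cross_entropy(logits, true_label)
        # ... while penalizing drift from the original latent code.
        proximity = proximity_weight * (z - z_init.detach()).pow(2).mean()
        (adv_loss + proximity).backward()
        optimizer.step()

    with torch.no_grad():
        return generator(z)                    # final unrestricted adversarial image
```

A targeted variant would minimize the cross-entropy against an attacker-chosen class instead of negating it, and published methods typically add perceptual, discriminator-based, or diffusion-guidance terms to keep the output realistic and to improve transferability.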
Papers
Thirteen papers, dated from January 4, 2022 through October 3, 2024.