Realistic Adversarial Attacks
Realistic adversarial attacks aim to create adversarial examples that remain plausible and effective in real-world scenarios, in contrast to much earlier work that relied on perturbations unlikely to arise in practice. Current research emphasizes methods for generating such realistic attacks across domains including image classification, natural language processing, and network security, often employing generative adversarial networks (GANs), diffusion models, and reinforcement learning to craft subtle yet impactful perturbations. This focus on realism is crucial for improving the robustness of machine learning models in practical applications and for developing defenses that hold up against real-world threats.
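To make the generator-based idea concrete, below is a minimal, illustrative PyTorch sketch in the spirit of GAN-style attacks such as AdvGAN: a small generator learns perturbations bounded by a pixel budget so that the modified image fools a frozen victim classifier while staying visually close to the original. It is not the method of any specific paper; the names `TargetClassifier`, `PerturbationGenerator`, `attack_step`, and the `c_realism` weight are hypothetical placeholders for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetClassifier(nn.Module):
    """Stand-in victim model; in practice this would be a pretrained network."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )
    def forward(self, x):
        return self.net(x)

class PerturbationGenerator(nn.Module):
    """Maps an image to a perturbation bounded by epsilon via tanh scaling."""
    def __init__(self, epsilon=8 / 255):
        super().__init__()
        self.epsilon = epsilon
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.epsilon * torch.tanh(self.net(x))

def attack_step(generator, classifier, images, labels, optimizer, c_realism=10.0):
    """One training step: encourage misclassification, penalize large perturbations."""
    delta = generator(images)
    adv = torch.clamp(images + delta, 0.0, 1.0)   # keep adversarial image in valid pixel range
    logits = classifier(adv)
    adv_loss = -F.cross_entropy(logits, labels)   # minimizing this pushes predictions off the true label
    realism_loss = delta.pow(2).mean()            # crude proxy for perceptual subtlety
    loss = adv_loss + c_realism * realism_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    classifier = TargetClassifier().eval()
    for p in classifier.parameters():
        p.requires_grad_(False)                   # attack a frozen victim; gradients still flow to the generator
    generator = PerturbationGenerator()
    optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
    images = torch.rand(4, 3, 32, 32)             # dummy batch; real image data in practice
    labels = torch.randint(0, 10, (4,))
    print(attack_step(generator, classifier, images, labels, optimizer))
```

The realism term here is only an L2 penalty; published methods typically replace it with a GAN discriminator, a diffusion prior, or domain-specific constraints (e.g., valid network packets or fluent text) to keep the adversarial example plausible.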