Imperceptible Adversarial Perturbation
Imperceptible adversarial perturbations are subtly altered inputs crafted to mislead deep learning models while remaining undetectable to human observers. Current research focuses on generating these perturbations for various modalities (images, audio, and video) using techniques such as generative adversarial networks, diffusion models, and evolutionary algorithms, often targeting specific model architectures or optimizing for transferability across models. This field is crucial for assessing and improving the robustness of AI systems in security-sensitive applications such as facial recognition, image classification, and speech recognition, where vulnerabilities to these attacks pose significant risks.
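As a concrete illustration of the core idea, below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch, which keeps a perturbation near-imperceptible by bounding its L-infinity norm with a small budget eps. The `model`, `images`, `labels`, and the eps value here are placeholder assumptions, and FGSM is only one of the many attack techniques studied in this area, not a specific method from the papers listed.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Craft an adversarial example with FGSM (illustrative sketch).

    The perturbation is bounded in L-infinity norm by `eps`, so no
    pixel changes by more than `eps` (8/255 is a common budget at
    which changes are hard for humans to notice).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss the attacker wants to increase
    loss.backward()                        # gradient of the loss w.r.t. the input
    x_adv = x + eps * x.grad.sign()        # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

# Usage sketch (model, images, labels are assumed to exist):
# x_adv = fgsm_perturb(model, images, labels)
# The model's prediction on x_adv often flips, even though x_adv
# differs from the original images by at most eps per pixel.
```

The single-step, norm-bounded structure shown here is the template that more sophisticated approaches (iterative attacks, generative and diffusion-based methods, evolutionary search) refine to improve attack strength, imperceptibility, and cross-model transferability.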