Imperceptible Adversarial Perturbation
Imperceptible adversarial perturbations are subtle modifications to inputs, crafted to mislead deep learning models while remaining undetectable to human observers. Current research focuses on generating such perturbations across modalities (images, audio, video) using techniques such as generative adversarial networks, diffusion models, and evolutionary algorithms, often targeting specific model architectures or aiming for improved transferability across models. This work is crucial for assessing and improving the robustness of AI systems in security-sensitive applications such as facial recognition, image classification, and speech recognition, where vulnerability to these attacks poses significant risks.
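To make the imperceptibility constraint concrete, the sketch below implements projected gradient descent (PGD), a classic gradient-based baseline for crafting such perturbations; it is not drawn from any specific paper listed here. It assumes a PyTorch image classifier with inputs in [0, 1], and the budget values (eps=8/255, alpha=2/255, steps=10) are illustrative: the small L-infinity bound eps is what keeps the change visually undetectable.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft an imperceptible perturbation of x under an L-infinity bound.

    A minimal PGD sketch: `model` is any PyTorch classifier, `x` a batch
    of images in [0, 1], `y` the true labels. All hyperparameters are
    illustrative defaults, not values from a particular paper.
    """
    x_adv = x.clone().detach()
    # Random start inside the eps-ball, a common PGD variant.
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascend the loss, then project back into the eps-ball
            # around x so the perturbation stays imperceptible.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.clamp(x_adv, x - eps, x + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

Generative approaches such as GANs and diffusion models replace this per-input iterative optimization with a learned generator, but they enforce an analogous norm or perceptual constraint to keep the output imperceptible.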