Imperceptible Adversarial Perturbation

Imperceptible adversarial perturbations are subtle modifications to inputs, designed to mislead deep learning models while remaining undetectable to humans. Current research focuses on generating these perturbations across modalities (images, audio, video) using techniques such as generative adversarial networks, diffusion models, and evolutionary algorithms, often targeting specific model architectures or aiming for improved transferability across models. This work is crucial for assessing and improving the robustness of AI systems in security-sensitive applications such as facial recognition, image classification, and speech recognition, where vulnerability to such attacks poses significant risk.
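
The core idea can be illustrated with a minimal sketch: a perturbation is kept "imperceptible" by bounding its size, typically in the L-infinity norm, while being chosen to increase the model's loss. The example below uses the classic Fast Gradient Sign Method (FGSM) as a simple baseline; the model, epsilon budget, and usage lines are hypothetical placeholders, and the methods surveyed here (GAN-, diffusion-, and evolution-based attacks) are considerably more sophisticated.

```python
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, label, epsilon=8 / 255):
    """Return x plus a small perturbation that increases the classification
    loss, bounded by epsilon in the L-infinity norm (the 'imperceptibility'
    budget)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that maximizes the loss; the sign() keeps every
    # pixel change within +/- epsilon.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()


# Hypothetical usage with a pretrained classifier:
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# x_adv = fgsm_perturb(model, x, label)        # visually identical to x
# assert (x_adv - x).abs().max() <= 8 / 255    # perturbation stays in budget
```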

Papers