Universal Adversarial Perturbation
Universal adversarial perturbations (UAPs) are small, fixed alterations to images or videos designed to consistently fool deep learning models, regardless of the specific input. Current research focuses on improving UAP effectiveness across diverse model architectures (including CNNs and Vision Transformers) and data types (images and videos), exploring techniques like texture manipulation and temporal inconsistency exploitation to enhance attack success rates. This research is crucial for evaluating the robustness of deep learning systems and informing the development of more resilient models, with implications for security in applications like facial recognition and video analysis.
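The defining property above — one fixed perturbation that fools a model on many different inputs — can be illustrated with a toy sketch. This is a minimal, hypothetical example (NumPy only, a linear stand-in for a classifier, made-up budget `eps` and step size), not any paper's actual method: it sweeps over a set of inputs, nudges a single shared `delta` along the margin gradient for each input it has not yet fooled, and re-projects `delta` onto an L-infinity ball.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a fixed random linear map
# (hypothetical; logits = x @ W, prediction = argmax of logits).
W = rng.normal(size=(16, 3))

def predict(x):
    return int(np.argmax(x @ W))

# A batch of inputs; the goal is ONE shared delta that changes the
# prediction for as many of them as possible (universality).
inputs = [rng.normal(size=16) for _ in range(20)]
clean_labels = [predict(x) for x in inputs]

eps = 0.5    # L-infinity budget for the universal perturbation (assumed)
step = 0.05  # per-update step size (assumed)
delta = np.zeros(16)

# Simplified universal-perturbation loop: for each input still
# classified correctly under the current delta, step along the
# gradient of (runner-up logit - true logit) and project back
# onto the L-infinity ball of radius eps.
for _ in range(50):
    for x, y in zip(inputs, clean_labels):
        if predict(x + delta) != y:
            continue  # this input is already fooled
        logits = (x + delta) @ W
        rival = int(np.argsort(logits)[-2])  # runner-up class
        # For a linear model, the input-gradient of the margin is exact:
        grad = W[:, rival] - W[:, y]
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)

# Fraction of inputs whose prediction the single delta flips.
fooling_rate = np.mean([predict(x + delta) != y
                        for x, y in zip(inputs, clean_labels)])
print(f"fooling rate: {fooling_rate:.0%}")
```

Note that `delta` is computed once and then applied unchanged to every input; this is what distinguishes a universal perturbation from a standard per-input adversarial example. Real attacks replace the linear margin gradient with backpropagation through a CNN or Vision Transformer.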