ImageNet Attack
ImageNet attacks exploit vulnerabilities in deep learning models, particularly convolutional neural networks (CNNs), by crafting subtly altered images that cause misclassification. Current research focuses on increasingly effective attack methods that manipulate illumination, latent-space representations, or pixel-level perturbations under different $\ell_p$-norm budgets, typically via gradient-based or reinforcement-learning approaches. These studies expose the fragility of such models and drive the development of more robust defenses, affecting the reliability and security of AI systems in applications ranging from image recognition and video processing to large language models.
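As a concrete illustration of the gradient-based, pixel-level attacks described above, the sketch below implements the classic Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), a one-step $\ell_\infty$-bounded attack, against a torchvision-pretrained ResNet-50. It is a minimal sketch, not the method of any particular paper indexed here; the model choice, the budget `epsilon = 4/255`, and the placeholder input are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Pretrained ImageNet classifier; ResNet-50 is an illustrative choice.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Standard ImageNet normalization, applied inside the attack so that the
# perturbation budget is expressed in raw [0, 1] pixel space.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def fgsm_attack(image, label, epsilon=4 / 255):
    """One-step gradient-based l_inf attack (FGSM).

    image:   tensor of shape (1, 3, 224, 224) with values in [0, 1]
    label:   true class index, tensor of shape (1,)
    epsilon: l_inf perturbation budget in pixel space (assumed value)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(normalize(image)), label)
    loss.backward()
    # Step in the sign of the input gradient to maximally increase the loss,
    # bounded element-wise by epsilon (the l_inf constraint).
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Example usage with a placeholder input (a real ImageNet sample in practice):
x = torch.rand(1, 3, 224, 224)   # pixel values in [0, 1]
y = torch.tensor([207])          # hypothetical ground-truth class index
x_adv = fgsm_attack(x, y)
print((x_adv - x).abs().max())   # <= epsilon, so the change is visually subtle
```

Because the perturbation is clipped element-wise to `epsilon`, the adversarial image differs from the original by at most 4/255 per pixel channel, which is imperceptible to humans yet often enough to flip a CNN's prediction; iterative variants such as PGD repeat this step for stronger attacks.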