ImageNet Attack

ImageNet attacks exploit vulnerabilities in deep learning models, particularly convolutional neural networks (CNNs), by crafting subtly perturbed images that cause misclassification. Current research focuses on developing increasingly effective attack methods that target different perturbation spaces, including illumination changes, latent-space manipulations, and pixel-level perturbations bounded under various $\ell_p$-norms, typically generated with gradient-based or reinforcement-learning approaches. These studies expose the fragility of such models and drive the development of more robust defenses, with direct consequences for the reliability and security of AI systems across applications such as image recognition, video processing, and large language models.

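As a concrete illustration of the gradient-based family of attacks mentioned above, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic one-step $\ell_\infty$-bounded attack. The pretrained ResNet-18, the epsilon budget, and the stand-in input are illustrative assumptions, not the method of any particular paper listed here; ImageNet mean/std normalization is also omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained ImageNet classifier (ResNet-18 chosen purely for illustration).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

def fgsm_attack(image, label, epsilon=8 / 255):
    """One-step l_inf-bounded perturbation (FGSM).

    image: (1, 3, H, W) tensor with pixel values in [0, 1]
    label: (1,) tensor holding the true ImageNet class index
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # then clip back to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Usage with a random stand-in image (replace with a real, preprocessed input).
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])  # class index is illustrative
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions often disagree
```

Iterative variants such as PGD repeat this step with a smaller step size and project back onto the $\epsilon$-ball after each update, which generally yields stronger attacks under the same norm budget.
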
Papers