Adversarial Patch Attack
Adversarial patch attacks place small, localized image perturbations (patches) into an image or physical scene to deceive deep learning models, primarily in computer vision tasks such as image classification and object detection; unlike imperceptible pixel-level perturbations, a patch is confined to a region and can often be printed and deployed in the real world. Current research pursues more effective attacks using techniques such as diffusion models and zeroth-order optimization, alongside robust defenses built on attention refinement, anomaly detection, and inpainting. Because these attacks threaten the reliability of AI systems in safety-critical applications (e.g., autonomous driving), ongoing work aims to improve both attack and defense strategies and to better understand the underlying vulnerabilities of different model architectures, including vision transformers and convolutional neural networks.
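The core attack mechanics are simple to sketch: the patch itself is the optimization variable, trained by gradient descent to push a classifier toward an attacker-chosen label wherever it is pasted. Below is a minimal sketch in PyTorch in the spirit of the classic patch attack of Brown et al.; the ResNet-18 model, random stand-in images, target class, and patch size are all illustrative assumptions, not details from the summary above.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
from torchvision.transforms.functional import normalize

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is trained

PATCH, TARGET = 48, 859  # 859 = "toaster" in ImageNet; an illustrative target
patch = torch.rand(3, PATCH, PATCH, device=device, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

def paste_randomly(images, patch):
    # Paste the patch at a random location per image: a crude stand-in for
    # full expectation-over-transformation (random scale, rotation, lighting).
    out = images.clone()
    _, _, h, w = images.shape
    for i in range(images.size(0)):
        y = torch.randint(0, h - PATCH + 1, (1,)).item()
        x = torch.randint(0, w - PATCH + 1, (1,)).item()
        out[i, :, y:y + PATCH, x:x + PATCH] = patch
    return out

for step in range(200):
    # Random stand-in batch; a real attack samples natural images from a loader.
    images = torch.rand(8, 3, 224, 224, device=device)
    patched = paste_randomly(images, patch)
    logits = model(normalize(patched, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]))
    loss = F.cross_entropy(logits, torch.full((8,), TARGET, device=device))
    opt.zero_grad()
    loss.backward()
    opt.step()
    patch.data.clamp_(0, 1)  # keep the patch a valid image
```

Published attacks strengthen this loop with richer transformation sampling and total-variation or printability penalties so the optimized patch survives the printing-and-photographing pipeline; black-box variants replace the gradient step with zeroth-order estimates.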
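On the defense side, the anomaly-detection and inpainting ideas mentioned above can be illustrated with an equally small detect-and-mask sketch: locate the window with the highest input-gradient energy (a crude anomaly score, exploiting the fact that dense patches tend to dominate the model's gradients) and blank it before re-classifying. The fixed window size and mean-fill below are illustrative simplifications; published defenses substitute refined attention maps or learned inpainting networks.

```python
import torch
import torch.nn.functional as F

def mask_suspicious_region(model, image, window=48):
    # image: (3, H, W) tensor already preprocessed for `model`; returns a copy
    # with the highest-saliency window blanked out.
    x = image.unsqueeze(0).clone().requires_grad_(True)
    model(x).max().backward()                          # gradient of the top logit
    saliency = x.grad.abs().sum(dim=1, keepdim=True)   # (1, 1, H, W)
    energy = F.avg_pool2d(saliency, window, stride=1)  # sliding-window energy
    _, _, _, ew = energy.shape
    y, xpos = divmod(energy.flatten().argmax().item(), ew)
    cleaned = image.clone()
    cleaned[:, y:y + window, xpos:xpos + window] = image.mean()  # crude "inpaint"
    return cleaned
```

A full pipeline would re-run the classifier on the cleaned image and treat a large prediction flip as evidence that a patch was present.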