Attack Patch
Attack patches are small, strategically designed image perturbations that can fool deep learning models, particularly in computer vision, into misclassifying inputs or otherwise producing incorrect predictions. Current research focuses on optimizing these patches for attack effectiveness while minimizing their visual conspicuousness, exploring various attack strategies (e.g., brightness-restricted patches, carpet-bombing patches) and analyzing their impact on different model architectures, including convolutional neural networks and vision transformers. This research is crucial for understanding and mitigating vulnerabilities in deployed AI systems, with implications for the safety and reliability of applications ranging from autonomous driving to medical image analysis.
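To make the optimization idea concrete, the sketch below illustrates the basic recipe behind many patch attacks: freeze a victim classifier, treat the patch pixels as the only trainable parameters, paste the patch into images at random locations, and ascend the classification loss. This is a minimal, illustrative sketch, not a specific published method; it assumes PyTorch/torchvision, a ResNet-18 victim with untrained weights, random stand-in data, and an untargeted loss. A realistic attack would use real training images, a pretrained model, expectation over transformations (scale, rotation, lighting), and often a targeted objective.

```python
# Illustrative adversarial-patch optimization sketch (assumed setup, not a
# specific method from the literature).
import torch
import torch.nn.functional as F
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

# Victim classifier; weights are omitted so the script runs without a download.
# Any image classifier could stand in here.
model = torchvision.models.resnet18(weights=None).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Learnable 32x32 RGB patch, kept in the valid pixel range [0, 1].
patch = torch.rand(1, 3, 32, 32, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image of the batch."""
    patched = images.clone()
    _, _, h, w = patch.shape
    for i in range(images.size(0)):
        y = torch.randint(0, images.size(2) - h + 1, (1,)).item()
        x = torch.randint(0, images.size(3) - w + 1, (1,)).item()
        patched[i, :, y:y + h, x:x + w] = patch[0]
    return patched

# Dummy batch standing in for real images and their true labels.
images = torch.rand(8, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (8,), device=device)

for step in range(100):
    optimizer.zero_grad()
    logits = model(apply_patch(images, patch.clamp(0, 1)))
    # Untargeted attack: maximize the loss on the true labels by minimizing
    # its negative with respect to the patch pixels only.
    loss = -F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch a valid image region
```

Constraints such as brightness restriction or reduced conspicuousness would typically enter this loop as additional penalty terms on the patch (for example, limiting its intensity range or penalizing its deviation from the background), while a targeted attack would replace the negated loss with a standard cross-entropy toward an attacker-chosen label.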