Patch-Based Attacks

Patch-based attacks exploit vulnerabilities in deep learning models by applying small, strategically designed adversarial patches to input images, causing misclassification or missed detections. Current research pursues both stronger attacks (e.g., decoupled patches that improve real-world effectiveness) and more effective defenses, which often leverage attention mechanisms to refine feature extraction and identify outlier regions indicative of a patch, or apply masking techniques to neutralize the adversarial effect. This work is crucial for the security and reliability of AI systems in safety-critical applications such as autonomous driving and security surveillance, where adversarial manipulation could have severe consequences.
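
As a concrete illustration, the sketch below shows the core optimization loop behind a typical targeted patch attack: the image and model weights stay fixed, and only the patch is updated by gradient descent so the classifier predicts an attacker-chosen label. The model (torchvision ResNet-18), patch size, fixed placement, and target class are illustrative assumptions, not taken from any specific paper; practical attacks additionally randomize patch location, scale, and rotation over many real training images so the patch survives physical deployment.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Victim classifier, frozen. ResNet-18 is an illustrative choice; any
# differentiable image classifier can be attacked the same way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# The patch is the only optimized variable: a small RGB square.
patch = torch.rand(1, 3, 48, 48, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch, x=16, y=16):
    """Paste the patch onto a batch of images at a fixed (x, y) offset.
    Robust attacks randomize placement and scale instead of fixing them."""
    patched = images.clone()
    patched[:, :, y:y + patch.shape[2], x:x + patch.shape[3]] = patch
    return patched

# Placeholder inputs and a hypothetical target label; in practice the
# patch is trained over real images, and model preprocessing (e.g.,
# ImageNet normalization) is applied — omitted here for brevity.
images = torch.rand(8, 3, 224, 224)
target = torch.full((images.size(0),), 859)  # e.g., ImageNet "toaster"

for step in range(200):
    optimizer.zero_grad()
    logits = model(apply_patch(images, patch))
    loss = F.cross_entropy(logits, target)  # targeted attack: pull all
    loss.backward()                         # predictions toward `target`
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # keep the patch a valid, printable image
```

Defenses described above invert the same signal: because an effective patch concentrates unusually salient features in a small region, attention- or saliency-based detectors can localize that outlier region and mask it out before classification.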

Papers