Adversarial Patch
Adversarial patches are small, carefully crafted images that, when overlaid on an input or placed in a scene, cause deep learning models to misclassify or otherwise mispredict in computer vision tasks. Current research focuses on making patches more realistic and effective, often leveraging diffusion models and optimization techniques to produce patches that are both potent and visually inconspicuous, with targets such as autonomous driving and traffic sign recognition. This research is crucial for understanding and mitigating vulnerabilities in AI systems, particularly in safety-critical applications where reliable performance is paramount. Developing robust defenses against these attacks is a parallel and equally important line of investigation.
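The core idea behind patch optimization can be illustrated with a minimal sketch: fix every pixel of the input except a small region (the patch), and use gradient steps on the model's score to push it toward a wrong prediction. The toy example below is an assumption for illustration only, not any paper's method: it uses a linear "classifier" standing in for a deep network, so the gradient with respect to the patch is simply the corresponding slice of the weight vector, and projected gradient descent keeps the patch pixels in the valid [0, 1] range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep classifier: a fixed linear model over a
# flattened 8x8 image. score > 0 means class 1, otherwise class 0.
w = rng.normal(size=64)

def score(img):
    return float(w @ img.ravel())

# A clean image; flip the weights if needed so it starts as class 1.
img = np.clip(rng.normal(0.6, 0.1, size=(8, 8)), 0.0, 1.0)
if score(img) <= 0:
    w = -w

# The patch occupies a 3x3 region in the top-left corner. Optimize it
# with projected gradient descent to lower the class-1 score
# (an untargeted attack on this toy model).
patch = img[:3, :3].copy()
for _ in range(200):
    # For a linear model, d(score)/d(patch) is just the matching
    # slice of the weights; a real attack would backpropagate instead.
    grad = w.reshape(8, 8)[:3, :3]
    patch = np.clip(patch - 0.05 * grad, 0.0, 1.0)  # project to [0, 1]

adv = img.copy()
adv[:3, :3] = patch  # paste the optimized patch onto the clean image
print(score(img) > 0, score(adv) < score(img))
```

Real attacks replace the analytic gradient with backpropagation through the target network and often add transformations (random placement, scaling, lighting) during optimization so the patch survives in the physical world.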
Papers
ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches
Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
Adversarial Texture for Fooling Person Detectors in the Physical World
Zhanhao Hu, Siyuan Huang, Xiaopei Zhu, Fuchun Sun, Bo Zhang, Xiaolin Hu