Adversarial Patch
Adversarial patches are small, deliberately crafted image regions that, when overlaid on an input or placed in a scene, cause deep learning models to misclassify or otherwise mispredict in computer vision tasks. Current research focuses on developing increasingly realistic and effective patches, often leveraging diffusion models and optimization techniques to create patches that are both potent and visually inconspicuous, targeting applications such as autonomous driving, traffic sign recognition, and face recognition. This research is crucial for understanding and mitigating vulnerabilities in AI systems, particularly in safety-critical settings where reliable performance is paramount. The development of robust defenses against these attacks is a parallel and equally important line of investigation.
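To make the optimization idea concrete, below is a minimal sketch of how a targeted adversarial patch can be trained by gradient descent. It is a simplified illustration, not any specific paper's method: it assumes a pretrained torchvision ResNet-18 as the victim model, uses random tensors as stand-ins for real training images, fixes the patch at the top-left corner, and targets ImageNet class 859 ("toaster", following the original adversarial patch paper). Realistic attacks additionally randomize patch placement, scale, and lighting (expectation over transformation) for physical-world robustness.

import torch
import torch.nn.functional as F
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen victim classifier; only the patch pixels are optimized.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch_size = 50
patch = torch.rand(1, 3, patch_size, patch_size, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)
target_class = 859  # illustrative target label ("toaster" in ImageNet-1k)

def apply_patch(images, patch):
    # Paste the patch at a fixed location; real attacks randomize placement.
    patched = images.clone()
    patched[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
    return patched

for step in range(100):
    # Random tensors stand in for a real image batch in this sketch.
    images = torch.rand(8, 3, 224, 224, device=device)
    logits = model(apply_patch(images, patch))
    # Targeted attack: push every patched image toward the target class.
    labels = torch.full((8,), target_class, device=device)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch in valid pixel range

Because the patch, rather than the per-image perturbation, is the optimization variable, the same trained patch transfers across inputs; this input-agnostic property is what makes patches practical as printed, physical-world attacks.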
Papers
RADAP: A Robust and Adaptive Defense Against Diverse Adversarial Patches on Face Recognition
Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie
NeRFTAP: Enhancing Transferability of Adversarial Patches on Face Recognition using Neural Radiance Fields
Xiaoliang Liu, Furao Shen, Feng Han, Jian Zhao, Changhai Nie