Physical Adversarial Patch
Physical adversarial patches are small, printed images designed to fool computer vision systems, such as those used in autonomous vehicles and facial recognition, by causing misclassification or missed detections. Current research focuses on developing increasingly stealthy and robust patches, employing techniques like fluorescent inks and acoustic triggers, as well as exploring defenses against these attacks through methods such as occlusion-aware detection and entropy-based patch localization. This research area is significant due to its implications for the security and reliability of AI systems in safety-critical applications, driving advancements in both attack and defense strategies.
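The core idea behind such patches can be sketched as gradient ascent on a classifier's target-class score with respect to the patch pixels. The toy example below uses a linear two-class "classifier" and a fixed patch placement purely for illustration; all names, shapes, and the step size are assumptions, not the method of any of the papers listed here. Real attacks instead backpropagate through a deep network and average gradients over random placements, scales, and lighting (Expectation over Transformation) to make the printed patch robust in the physical world.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy setup: a linear 2-class classifier over flattened
# 16x16 grayscale images (all shapes/values are assumptions).
H = W = 16          # image size
P = 4               # patch size
weights = rng.normal(size=(2, H * W))

def logits(img):
    return weights @ img.ravel()

def apply_patch(img, patch, top, left):
    # Paste the patch onto a copy of the image at (top, left).
    out = img.copy()
    out[top:top + P, left:left + P] = patch
    return out

img = rng.uniform(size=(H, W))
patch0 = rng.uniform(size=(P, P))   # initial random patch
target = 1                          # class the patch should force
top, left = 5, 5                    # fixed placement for this sketch

# For a linear model, the gradient of the target logit w.r.t. the
# patch pixels is simply the matching slice of the weight row, so
# gradient ascent is exact here. A deep-network attack would compute
# this gradient by backpropagation instead.
w_img = weights[target].reshape(H, W)
grad = w_img[top:top + P, left:left + P]

patch = patch0.copy()
for _ in range(100):
    # Ascend the target logit; clip to keep pixels printable in [0, 1].
    patch = np.clip(patch + 0.05 * grad, 0.0, 1.0)

before = logits(apply_patch(img, patch0, top, left))[target]
after = logits(apply_patch(img, patch, top, left))[target]
print(after > before)  # optimized patch raises the target-class score
```

Because the model here is linear, each patch pixel moves monotonically in the direction of its weight, so the optimized patch is guaranteed to score higher for the target class than the initial random patch; with a real detector the same loop needs many stochastic gradient steps and transformation averaging.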
Papers
Physical Passive Patch Adversarial Attacks on Visual Odometry Systems
Yaniv Nemcovsky, Matan Jacoby, Alex M. Bronstein, Chaim Baskin
Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches
Zhiyuan Cheng, James Liang, Hongjun Choi, Guanhong Tao, Zhiwen Cao, Dongfang Liu, Xiangyu Zhang