Physical-World Attacks
Physical-world attacks probe the robustness of machine learning models by introducing real-world perturbations, such as adversarial patches or manipulated lighting, to deceive perception systems used in autonomous driving and biometric authentication. Current research focuses on making these attacks stealthier and more effective, exploring methods that include generative adversarial networks (GANs) for anomaly detection as well as optimization algorithms that design effective yet inconspicuous perturbations, often by leveraging natural phenomena such as shadows or reflections. These attacks expose critical vulnerabilities in deployed AI systems, driving the need for more robust models and effective countermeasures to ensure safety and security in real-world applications.
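To make the optimization-based attacks mentioned above concrete, the sketch below shows a minimal, hypothetical patch attack: targeted gradient ascent on a printable patch, averaged over random brightness and translation as a crude stand-in for physical-world variation (in the spirit of Expectation over Transformation). The model, the helper functions `apply_patch` and `random_physical_transform`, and all hyperparameters are illustrative assumptions, not the method of either paper listed below.

```python
# Minimal, illustrative sketch of a physical adversarial patch attack.
# All helpers and hyperparameters are assumptions for demonstration only.
import torch
import torch.nn.functional as F
import torchvision


def random_physical_transform(img: torch.Tensor) -> torch.Tensor:
    """Crude proxy for physical-world variation: random brightness and shift."""
    brightness = 0.8 + 0.4 * torch.rand(1, device=img.device)
    shift = torch.randint(-8, 9, (2,))
    img = torch.roll(img * brightness, shifts=(int(shift[0]), int(shift[1])), dims=(-2, -1))
    return img.clamp(0.0, 1.0)


def apply_patch(img: torch.Tensor, patch: torch.Tensor, top: int, left: int) -> torch.Tensor:
    """Paste the (clamped) patch onto a fixed region of the image batch."""
    out = img.clone()
    h, w = patch.shape[-2:]
    out[..., top:top + h, left:left + w] = patch.clamp(0.0, 1.0)
    return out


def optimize_patch(model, images, target_class, steps=200, lr=0.05, patch_size=50):
    """Gradient ascent on a patch so that transformed, patched images are
    classified as `target_class` (a targeted, physical-style attack)."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    target = torch.full((images.shape[0],), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        patched = apply_patch(images, patch, top=20, left=20)
        transformed = random_physical_transform(patched)
        loss = F.cross_entropy(model(transformed), target)  # push toward target class
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep the patch a valid, printable image
    return patch.detach()


if __name__ == "__main__":
    model = torchvision.models.resnet18(weights=None).eval()  # stand-in victim model
    images = torch.rand(4, 3, 224, 224)                        # stand-in scene photos
    patch = optimize_patch(model, images, target_class=0)
    print("optimized patch shape:", tuple(patch.shape))
```

In a real physical attack the optimized patch would be printed and placed in the scene; the random transformations above merely approximate the lighting and viewpoint changes that occur between the digital optimization and the physical deployment.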
Papers
Self-supervised Adversarial Training of Monocular Depth Estimation against Physical-World Attacks
Zhiyuan Cheng, Cheng Han, James Liang, Qifan Wang, Xiangyu Zhang, Dongfang Liu
ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving
Chen Ma, Ningfei Wang, Zhengyu Zhao, Qian Wang, Qi Alfred Chen, Chao Shen