Physical-World Attacks

Physical-world attacks target the robustness of machine learning models by introducing real-world perturbations, such as adversarial patches or manipulated lighting, that deceive perception systems like those used in autonomous driving and biometric authentication. Current research focuses on making attacks increasingly stealthy and effective, employing generative adversarial networks (GANs) and optimization algorithms to design perturbations that fool models while remaining inconspicuous, often by leveraging natural phenomena such as shadows or reflections. These attacks expose critical vulnerabilities in deployed AI systems and drive the development of more robust models and effective countermeasures for safety- and security-critical real-world applications.
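As a concrete illustration of the optimization-based approach, the sketch below trains a square adversarial patch against an image classifier. This is a minimal sketch, not the method of any particular paper: it assumes a pretrained torchvision ResNet-18 as the victim model, random patch placement as a crude stand-in for full expectation-over-transformation, and an untargeted misclassification objective. All sizes, learning rates, and the dummy data are illustrative assumptions.

```python
# Minimal adversarial-patch optimization sketch (PyTorch).
# Assumptions (not from the source text): ResNet-18 victim model,
# random patch placement, untargeted loss, dummy images and labels.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is optimized

patch_size = 48
patch = torch.rand(1, 3, patch_size, patch_size, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image. Randomizing
    placement loosely simulates physical variability; real attacks also
    sample rotation, scale, and lighting (expectation over transformation)."""
    patched = images.clone()
    _, _, h, w = images.shape
    for i in range(images.size(0)):
        y = torch.randint(0, h - patch_size + 1, (1,)).item()
        x = torch.randint(0, w - patch_size + 1, (1,)).item()
        patched[i, :, y:y + patch_size, x:x + patch_size] = patch[0]
    return patched

# Dummy batch standing in for real scene photographs and their labels.
images = torch.rand(8, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (8,), device=device)

for step in range(100):
    optimizer.zero_grad()
    logits = model(apply_patch(images, patch))
    # Untargeted objective: maximize loss on the true labels,
    # implemented by minimizing its negation.
    loss = -F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)  # keep the patch in the valid (printable) RGB range
```

Physical deployments typically extend this loop with richer transformation sampling and printability constraints so the optimized colors survive the print-and-camera pipeline, which is what makes such patches effective off-screen.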

Papers