Physical Adversarial Attacks

Physical adversarial attacks exploit vulnerabilities in AI systems, particularly computer vision models, by applying carefully crafted physical perturbations (e.g., printed patches or stickers) to real-world objects, causing models to misclassify them or fail to detect them at all. Current research focuses on benchmarking attack effectiveness across object detection models (e.g., YOLO, Faster R-CNN) and on developing more robust attack methods, including ones that use 3D modeling and dynamic optical perturbations, to overcome challenges such as varying viewpoints and lighting conditions. This research is crucial for the safety and security of AI-powered systems, especially in critical applications like autonomous driving, where misidentifying an object can have severe consequences.
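
The viewpoint-and-lighting problem mentioned above is commonly addressed with Expectation over Transformation (EoT): a patch is optimized under randomly sampled scales, rotations, placements, and brightness changes, so that it remains adversarial under the variation a physical attack must survive. The PyTorch sketch below illustrates this idea in minimal form; the ResNet-18 classifier, patch size, target class, and transformation ranges are illustrative assumptions rather than the setup of any specific paper.

```python
# Minimal sketch of adversarial patch optimization with Expectation over
# Transformation (EoT). All hyperparameters below are illustrative.
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms.functional as TF

device = "cpu"

# Any differentiable classifier works; weights=None keeps the sketch
# self-contained (a real attack would load trained weights).
model = torchvision.models.resnet18(weights=None).to(device).eval()

IMG, TARGET = 224, 0  # image side length; attacker-chosen target class id
patch = torch.rand(3, 64, 64, device=device, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch):
    """Paste the patch at a random location with random scale, rotation,
    and brightness -- a crude stand-in for viewpoint/lighting changes."""
    out = images.clone()
    for i in range(images.size(0)):
        s = int(64 * (0.7 + 0.6 * torch.rand(1).item()))  # random scale
        p = F.interpolate(patch.unsqueeze(0), size=(s, s),
                          mode="bilinear", align_corners=False)
        p = TF.rotate(p, angle=float(torch.empty(1).uniform_(-20, 20)))
        p = (p * torch.empty(1, device=device).uniform_(0.6, 1.2)).clamp(0, 1)
        x = torch.randint(0, IMG - s + 1, (1,)).item()
        y = torch.randint(0, IMG - s + 1, (1,)).item()
        out[i, :, y:y + s, x:x + s] = p[0]
    return out

for step in range(200):
    # Random backgrounds stand in for a dataset of real scene images.
    scenes = torch.rand(8, 3, IMG, IMG, device=device)
    logits = model(apply_patch(scenes, patch))
    # Targeted attack: push every patched image toward class TARGET,
    # averaged over the sampled transformations (the "expectation" in EoT).
    loss = F.cross_entropy(logits, torch.full((8,), TARGET, device=device))
    opt.zero_grad()
    loss.backward()
    opt.step()
    patch.data.clamp_(0, 1)  # keep the patch a valid (printable) RGB image
```

A physical attack on a detector rather than a classifier would swap the cross-entropy objective for a detection loss (e.g., suppressing YOLO's objectness score) and typically adds a printability term so the optimized colors survive printing.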

Papers