Physical Adversarial Attacks
Physical adversarial attacks exploit vulnerabilities in AI systems, particularly computer vision models, by applying carefully crafted physical perturbations to real-world objects, causing misclassification or detection evasion. Current research focuses on benchmarking attack effectiveness across object detection models (e.g., YOLO, Faster R-CNN) and on developing more robust attack methods, including those that use 3D modeling and dynamic optical perturbations, so that the perturbation survives varying viewpoints and lighting conditions. This research is crucial for the safety and security of AI-powered systems, especially in critical applications such as autonomous driving, where misidentifying an object can have severe consequences.
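To make the idea concrete, the sketch below shows the common "expectation over transformations" recipe behind many physical attacks: a patch is optimized while random viewpoint and lighting changes are applied at every step, so the perturbation remains adversarial when printed and observed under real-world conditions. This is an illustrative PyTorch sketch, not any specific paper's method; the pretrained ResNet-18 classifier stands in for a detector such as YOLO or Faster R-CNN, and the scene image, target class, transform ranges, and hyperparameters are all assumptions made for the example.

```python
# Illustrative expectation-over-transformations (EOT) adversarial patch sketch.
# A classifier is used as a stand-in for a detector; all hyperparameters are assumed.
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision.models import resnet18, ResNet18_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).to(device).eval()
preprocess = weights.transforms()  # resize / crop / normalize expected by the model

# The patch is the only optimized variable; pixel values are kept in [0, 1].
patch = torch.rand(3, 64, 64, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

# Random transforms approximating physical variation (viewpoint, scale, lighting).
augment = T.Compose([
    T.RandomAffine(degrees=20, scale=(0.7, 1.3), translate=(0.3, 0.3)),
    T.ColorJitter(brightness=0.4, contrast=0.3),
])

def paste_patch(image, patch):
    """Overlay the (transformed) patch onto the top-left corner of the image."""
    out = image.clone()
    _, h, w = patch.shape
    out[:, :h, :w] = patch
    return out

scene = torch.rand(3, 224, 224, device=device)  # placeholder background image
target_class = 859                              # illustrative ImageNet target class

for step in range(200):
    optimizer.zero_grad()
    batch = []
    for _ in range(8):  # expectation over several random transforms per step
        transformed = augment(patch).clamp(0, 1)
        batch.append(paste_patch(scene, transformed))
    logits = model(preprocess(torch.stack(batch)))
    loss = F.cross_entropy(logits, torch.full((8,), target_class, device=device))
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch physically printable as an image
```

A physically realizable attack would typically add further constraints before fabrication, such as smoothness and printability penalties on the patch colors, and would be evaluated against the actual detection pipeline rather than a surrogate classifier.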