Adversarial Object
Adversarial objects are physical or rendered artifacts designed to deceive artificial intelligence systems, particularly those used in autonomous driving and object detection, by inducing misclassifications or other perception errors. Current research focuses on creating realistic-looking adversarial objects with techniques such as gradient-based texture optimization and differentiable rendering, often incorporating "judge" models to assess realism and heterogeneous graph neural networks for scene understanding and object manipulation. This work is crucial for evaluating the robustness of AI systems and for improving their safety and reliability in real-world, safety-critical domains such as autonomous vehicles and security systems.
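To illustrate the gradient-based optimization idea mentioned above, here is a minimal, self-contained sketch: an untargeted FGSM-style step against a toy linear softmax classifier standing in for a detector. The model, feature dimensions, and step size `eps` are all illustrative assumptions, not any specific paper's method; real attacks optimize rendered textures through a differentiable renderer rather than a raw feature vector.

```python
import numpy as np

# Toy linear "detector": logits = W @ x. We perturb the input feature
# vector x (a stand-in for an object texture) to raise the classifier's
# loss on the true class -- an untargeted, FGSM-style single step.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 texture features (toy assumption)
x = rng.normal(size=8)        # original object features
y = 0                         # true class index

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_grad(x):
    # Cross-entropy loss of the true class and its gradient w.r.t. x.
    p = softmax(W @ x)
    grad = W.T @ (p - np.eye(3)[y])   # d(CE)/dx for softmax + cross-entropy
    return -np.log(p[y]), grad

eps = 0.5                              # perturbation budget (illustrative)
loss0, g = loss_and_grad(x)
x_adv = x + eps * np.sign(g)           # ascend the loss: FGSM step
loss1, _ = loss_and_grad(x_adv)
print(loss0, loss1)                    # adversarial loss exceeds the original
```

Because the toy loss is convex in `x`, one signed-gradient step is guaranteed to increase it; against deep detectors the same idea is applied iteratively, with additional terms (e.g., a realism "judge") constraining the texture to remain plausible.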