Adversarial Object

Adversarial objects are physical or simulated objects designed to deceive artificial intelligence systems, particularly those used in autonomous driving and object detection, by inducing misclassifications or other perception errors. Current research focuses on generating realistic-looking adversarial objects with techniques such as gradient-based texture optimization and differentiable rendering, often incorporating "judges" to assess realism and heterogeneous graph neural networks for scene understanding and object manipulation; a minimal sketch of the optimization loop appears below. This research is crucial for evaluating the robustness of AI systems and for improving their safety and reliability in safety-critical domains such as autonomous vehicles and security systems.
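
The following is a minimal, hedged sketch of gradient-based adversarial texture optimization through a differentiable renderer, not the method of any particular paper. The functions `render_scene` (a differentiable renderer mapping a texture to an image) and `detector` (a pretrained model returning the true-class confidence) are assumed placeholders introduced for illustration.

```python
import torch

def optimize_adversarial_texture(render_scene, detector, texture_shape,
                                 steps=200, lr=0.01):
    """Optimize a texture so the rendered object lowers the detector's
    confidence. `render_scene` and `detector` are assumed to be
    differentiable; texture_shape is e.g. (3, H, W)."""
    # Unconstrained latent; sigmoid keeps the texture in [0, 1].
    latent = torch.zeros(texture_shape, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        texture = torch.sigmoid(latent)
        image = render_scene(texture)   # differentiable rendering step
        score = detector(image)         # confidence for the true class
        # Minimizing the confidence drives misclassification; a small
        # total-variation penalty keeps the texture smooth, a common
        # proxy for realism.
        tv = (texture[:, 1:, :] - texture[:, :-1, :]).abs().mean() + \
             (texture[:, :, 1:] - texture[:, :, :-1]).abs().mean()
        loss = score + 0.1 * tv
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(latent).detach()
```

In practice the smoothness term is often replaced or supplemented by a learned realism "judge," and the renderer varies textures over poses and lighting so the adversarial effect transfers to the physical world.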

Papers