Object Generation Attack

Object generation attacks target the object detection capabilities of deep learning models, particularly in autonomous driving systems, aiming either to fabricate objects that do not exist or to suppress the detection of ones that do. Current research focuses on understanding vulnerabilities across model architectures, including those using Large Language Models (LLMs) and self-supervised learning (SSL), and on developing both attack methods (e.g., LiDAR point manipulation or backdoor triggers) and defense mechanisms (e.g., 3D shadow analysis). These attacks matter because they can directly compromise the safety and reliability of AI-powered systems, underscoring the need for robust security measures wherever accurate object detection is safety-critical.
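To make the attack side concrete, below is a minimal sketch of a LiDAR object-fabrication (spoofing) attack. All names, box dimensions, and the assumed point-cloud layout (rows of [x, y, z, intensity]) are illustrative assumptions, not taken from any specific paper; real spoofing hardware can only inject a limited number of returns per frame and only along existing lines of sight, which is why the fake cluster is sparse and placed on the sensor-facing surface.

```python
import numpy as np

def inject_fake_object(points, center, size=(4.5, 1.8, 1.5), n_points=60, seed=0):
    """Append a sparse, car-sized cluster of spoofed returns at `center`.

    The cluster is confined to the front (sensor-facing) face of the fake
    object's bounding box, mimicking the physical constraints of a spoofer.
    """
    rng = np.random.default_rng(seed)
    cx, cy, cz = center
    length, width, height = size
    fake = np.empty((n_points, 4), dtype=points.dtype)
    fake[:, 0] = cx - length / 2                         # sensor-facing front face
    fake[:, 1] = rng.uniform(cy - width / 2, cy + width / 2, n_points)
    fake[:, 2] = rng.uniform(cz, cz + height, n_points)
    fake[:, 3] = rng.uniform(0.2, 0.8, n_points)         # plausible intensities
    return np.vstack([points, fake])

# Example: fabricate a "vehicle" 10 m ahead of the ego sensor in a stand-in scan.
cloud = (np.random.default_rng(1).random((1000, 4)) * 20).astype(np.float32)
attacked = inject_fake_object(cloud, center=(10.0, 0.0, 0.0))
```

A complementary sketch shows the intuition behind shadow-based defenses, under the same assumed layout. The heuristic and thresholds are assumptions: a genuine object occludes the laser beams behind it, so the frustum behind a real detection should be nearly empty of returns, while a spoofed cluster leaves the background populated.

```python
def casts_lidar_shadow(points, center, half_width=1.0, margin=1.0, max_returns=5):
    """Return True if the region behind `center` (viewed from the sensor at the
    origin) contains at most `max_returns` points, i.e. a plausible shadow."""
    c = np.asarray(center[:2], dtype=float)       # reason in the ground plane
    dist_c = np.linalg.norm(c)
    bearing = c / dist_c                          # unit ray from sensor to object
    xy = points[:, :2]
    along = xy @ bearing                          # distance along the ray
    lateral = np.abs(xy @ np.array([-bearing[1], bearing[0]]))
    behind = (along > dist_c + margin) & (lateral < half_width)
    return int(behind.sum()) <= max_returns

# The spoofed object fails the check: the background behind it is still populated.
print(casts_lidar_shadow(attacked, center=(10.0, 0.0, 0.0)))  # almost surely False
```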

Papers