Adversarial Trajectory

Adversarial trajectory research focuses on crafting subtly perturbed input trajectories that deceive trajectory prediction models, primarily those used in autonomous vehicles and other AI systems that rely on forecasting motion. Current work emphasizes sophisticated attack methods, often built on optimization-based algorithms and deep learning models, that generate realistic, stealthy adversarial trajectories designed to maximize prediction error or force specific, undesirable predictions. Such attacks are crucial for assessing the robustness and safety of these AI systems: they expose vulnerabilities that could lead to accidents or privacy breaches, and they inform the development of more resilient and secure prediction algorithms.
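
Below is a minimal sketch of the optimization-based attack pattern described above, written in PyTorch under assumed conditions: the `ToyPredictor` network, the L-infinity bound `eps`, and the PGD-style sign update are illustrative stand-ins, not the method of any particular paper. The loop perturbs an agent's observed trajectory history within a small bound so that the model's forecast drifts as far as possible from its clean prediction, which is the "maximize prediction error" objective in its simplest form.

```python
# Sketch of an optimization-based adversarial trajectory attack:
# perturb the observed history, within a small L-infinity budget, to
# maximize the displacement of the model's forecast from its clean output.
import torch
import torch.nn as nn

HIST_LEN, PRED_LEN = 8, 12  # observed / predicted timesteps, each an (x, y) point

class ToyPredictor(nn.Module):
    """Hypothetical stand-in for a real trajectory prediction network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HIST_LEN * 2, 64), nn.ReLU(),
            nn.Linear(64, PRED_LEN * 2),
        )

    def forward(self, hist):                      # hist: (B, HIST_LEN, 2)
        return self.net(hist.flatten(1)).view(-1, PRED_LEN, 2)

def adversarial_trajectory(model, hist, eps=0.1, steps=50, alpha=0.01):
    """PGD-style attack: maximize the gap between clean and perturbed forecasts."""
    model.eval()
    with torch.no_grad():
        clean_pred = model(hist)                  # forecast on the unmodified history
    delta = torch.zeros_like(hist, requires_grad=True)
    for _ in range(steps):
        adv_pred = model(hist + delta)
        # Average per-timestep displacement of the forecast caused by the perturbation
        loss = (adv_pred - clean_pred).norm(dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # ascend the prediction-error objective
            delta.clamp_(-eps, eps)               # keep the perturbation small ("stealthy")
        delta.grad.zero_()
    return (hist + delta).detach()

# Usage on random data: the returned history differs from the original by at most eps.
model = ToyPredictor()
hist = torch.randn(1, HIST_LEN, 2)                # one agent's observed (x, y) history
adv_hist = adversarial_trajectory(model, hist)
print((adv_hist - hist).abs().max())
```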

Papers