Adversarial Trajectory
Adversarial trajectory research focuses on crafting malicious inputs, in the form of slightly perturbed trajectories, that deceive trajectory prediction models, primarily those used in autonomous vehicles and other AI systems that rely on forecasting movement. Current work emphasizes sophisticated attack methods, often built on optimization-based algorithms and deep learning models, that generate realistic and stealthy adversarial trajectories which maximize prediction error or force specific, undesirable predictions. This research is crucial for assessing the robustness and safety of such AI systems: it highlights vulnerabilities that could lead to accidents or privacy breaches and informs the development of more resilient and secure algorithms.
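The core idea of an optimization-based attack can be sketched in a few lines: iteratively perturb the observed trajectory, within a small bound so the input stays realistic, in the direction that increases the predictor's error. The sketch below is a minimal, hypothetical illustration using a toy constant-velocity predictor and projected gradient ascent with finite-difference gradients; the function names, bound `eps`, and step sizes are assumptions for illustration, not from any specific paper listed here.

```python
import numpy as np

def predict(history):
    # Toy constant-velocity predictor: extrapolate the last observed step.
    # (Stand-in for a learned trajectory prediction model.)
    return history[-1] + (history[-1] - history[-2])

def adversarial_trajectory(history, true_future, eps=0.2, steps=50, lr=0.05):
    """Craft a small perturbation delta (||delta||_inf <= eps) of the observed
    trajectory that maximizes prediction error, via projected gradient ascent
    with finite-difference gradients (hypothetical sketch)."""
    delta = np.zeros_like(history)
    h = 1e-4
    for _ in range(steps):
        base = np.linalg.norm(predict(history + delta) - true_future)
        grad = np.zeros_like(delta)
        for idx in np.ndindex(delta.shape):
            d = delta.copy()
            d[idx] += h
            err = np.linalg.norm(predict(history + d) - true_future)
            grad[idx] = (err - base) / h
        # Ascend on the error, then project back into the eps-ball
        # so the perturbed trajectory stays close to the original.
        delta = np.clip(delta + lr * grad, -eps, eps)
    return history + delta

history = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
true_future = np.array([4.0, 0.0])
adv = adversarial_trajectory(history, true_future)
clean_err = np.linalg.norm(predict(history) - true_future)
adv_err = np.linalg.norm(predict(adv) - true_future)
```

Here the clean trajectory is predicted perfectly (`clean_err` is 0), while the bounded perturbation drives `adv_err` above zero: the attack stays stealthy (every point moves by at most `eps`) yet degrades the prediction, which is the trade-off real attacks optimize with more capable models and realism constraints.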