Paper ID: 2111.13350

Jointly Learning Agent and Lane Information for Multimodal Trajectory Prediction

Jie Wang, Caili Guo, Minan Guo, Jiujiu Chen

Predicting the plausible future trajectories of nearby agents is a core challenge for the safety of Autonomous Vehicles, and it mainly depends on two external cues: the dynamic neighboring agents and the static scene context. Recent approaches have made great progress in characterizing the two cues separately. However, they ignore the correlation between the two cues, and most of them struggle to achieve map-adaptive prediction. In this paper, we use lanes as scene data and propose a staged network that Jointly learns Agent and Lane information for Multimodal Trajectory Prediction (JAL-MTP). JAL-MTP uses a Social to Lane (S2L) module to jointly represent the static lanes and the dynamic motion of the neighboring agents as instance-level lanes, a Recurrent Lane Attention (RLA) mechanism that utilizes the instance-level lanes to predict map-adaptive future trajectories, and two selectors to identify typical and reasonable trajectories. Experiments conducted on the public Argoverse dataset demonstrate that JAL-MTP significantly outperforms existing models both quantitatively and qualitatively.

Submitted: Nov 26, 2021