Paper ID: 2209.05996
M$^2$-3DLaneNet: Exploring Multi-Modal 3D Lane Detection
Yueru Luo, Xu Yan, Chaoda Zheng, Chao Zheng, Shuqi Mei, Tang Kun, Shuguang Cui, Zhen Li
Estimating accurate lane lines in 3D space remains challenging due to their sparse and slim nature. Previous works mainly focused on using images for 3D lane detection, which leads to inherent projection errors and a loss of geometric information. To address these issues, we explore the potential of leveraging LiDAR for 3D lane detection, either as a standalone method or in combination with existing monocular approaches. In this paper, we propose M$^2$-3DLaneNet to integrate complementary information from multiple sensors. Specifically, M$^2$-3DLaneNet lifts 2D features into 3D space by incorporating geometric information from LiDAR data through depth completion. Subsequently, the lifted 2D features are further enhanced with LiDAR features through cross-modality BEV fusion. Extensive experiments on the large-scale OpenLane dataset demonstrate the effectiveness of M$^2$-3DLaneNet, regardless of the detection range (75m or 100m).
Submitted: Sep 13, 2022