Paper ID: 2212.06244
PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion
Lemeng Wu, Dilin Wang, Meng Li, Yunyang Xiong, Raghuraman Krishnamoorthi, Qiang Liu, Vikas Chandra
Fusing 3D LiDAR features with 2D camera features is a promising technique for improving 3D detection accuracy, thanks to their complementary physical properties. While most existing methods fuse camera features directly with raw LiDAR point clouds or shallow 3D features, we observe that directly combining 2D and 3D features in deeper layers actually decreases accuracy due to feature misalignment. This misalignment, which stems from aggregating features over large receptive fields, grows increasingly severe in deeper layers. In this paper, we propose PathFusion to enable semantically aligned LiDAR-camera deep feature fusion. PathFusion introduces a path-consistency loss at multiple stages of the network, encouraging the 2D backbone and its fusion path to transform 2D features in a way that is semantically aligned with the transformation of the 3D backbone. This preserves semantic consistency between 2D and 3D features even in deeper layers and makes fuller use of the network's learning capacity. We apply PathFusion to a prior-art fusion baseline, Focals Conv, and observe a consistent improvement of over 1.6% mAP on the nuScenes test split, both with and without test-time data augmentation; PathFusion also improves KITTI $\text{AP}_{\text{3D}}$ (R11) by about 0.6% at the moderate difficulty level.
Submitted: Dec 12, 2022
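The abstract only describes the path-consistency loss at a high level; the sketch below is one plausible reading, not the authors' implementation. It assumes per-stage LiDAR features and camera features already gathered at the same 3D points, and the per-stage alignment heads, the MSE form of the consistency term, and the stage weights are all hypothetical placeholders.

```python
# Minimal sketch of a multi-stage path-consistency loss (assumption, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathConsistencyLoss(nn.Module):
    def __init__(self, channels_per_stage):
        super().__init__()
        # One lightweight head per stage mapping 2D (camera) features into the
        # 3D feature space of that stage -- a hypothetical design choice.
        self.align_heads = nn.ModuleList(
            [nn.Linear(c2d, c3d) for c2d, c3d in channels_per_stage]
        )

    def forward(self, feats_2d, feats_3d, stage_weights=None):
        """feats_2d[i]: (N_i, C2d_i) camera features gathered at the 3D points of
        stage i; feats_3d[i]: (N_i, C3d_i) LiDAR backbone features at the same points."""
        if stage_weights is None:
            stage_weights = [1.0] * len(feats_2d)
        loss = feats_3d[0].new_zeros(())
        for head, f2d, f3d, w in zip(self.align_heads, feats_2d, feats_3d, stage_weights):
            # Encourage the transformed 2D features to track the (detached) 3D
            # features of the same stage, so deeper fusion stays semantically aligned.
            loss = loss + w * F.mse_loss(head(f2d), f3d.detach())
        return loss
```

In training, a term like this would simply be added to the detection loss with a small weight; the actual stage placement and loss form used by PathFusion are detailed in the paper itself.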