Paper ID: 2207.07933

Consistency of Implicit and Explicit Features Matters for Monocular 3D Object Detection

Qian Ye, Ling Jiang, Wang Zhen, Yuyang Du

Low-cost autonomous agents, including autonomous driving vehicles, chiefly adopt monocular 3D object detection to perceive the surrounding environment. This paper studies 3D intermediate representation methods, which generate intermediate 3D features for subsequent tasks. For example, the 3D features can be taken as input not only for detection, but also for end-to-end prediction and/or planning that requires a bird's-eye-view feature representation. In this study, we found that, in generating 3D representations, previous methods do not maintain consistency between objects' implicit poses in the latent space, especially orientations, and the explicitly observed poses in Euclidean space, which can substantially hurt model performance. To tackle this problem, we present a novel monocular detection method, the first that is aware of the poses and purposefully guarantees their consistency between the implicit and explicit features. Additionally, we introduce a local ray attention mechanism to efficiently transform image features to voxels at accurate 3D locations. Finally, we propose a handcrafted Gaussian positional encoding function, which outperforms the sinusoidal encoding function while retaining the benefit of being continuous. Results show that our method improves on the state-of-the-art 3D intermediate representation method by 3.15%. We rank 1st among all reported monocular methods on both the 3D and BEV detection benchmarks on the KITTI leaderboard as of the result's submission time.
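For intuition only, below is a minimal sketch of what a handcrafted Gaussian positional encoding might look like. The abstract gives no implementation details, so the function name, the evenly spaced centers, and the shared bandwidth `sigma` are all assumptions rather than the authors' actual design; the sketch only illustrates how such an encoding can stay continuous in the input, like the sinusoidal encoding, while each channel responds to a localized region.

```python
import torch

def gaussian_positional_encoding(x, num_channels=64, x_min=0.0, x_max=1.0, sigma=None):
    """Hypothetical Gaussian positional encoding (not the paper's code).

    Maps each scalar position in `x` to its responses under `num_channels`
    Gaussian basis functions with centers evenly spaced over [x_min, x_max].
    The output is continuous in x, and each channel is localized around its
    center, unlike the globally oscillating sinusoidal encoding.

    Args:
        x: tensor of shape (...,) holding positions, e.g. normalized depths.
    Returns:
        tensor of shape (..., num_channels).
    """
    centers = torch.linspace(x_min, x_max, num_channels, device=x.device)
    if sigma is None:
        # Assumed default: bandwidth set so neighboring Gaussians overlap.
        sigma = (x_max - x_min) / num_channels
    diff = x.unsqueeze(-1) - centers  # (..., num_channels)
    return torch.exp(-0.5 * (diff / sigma) ** 2)

# Example: encode normalized depths sampled along a camera ray.
depths = torch.linspace(0.0, 1.0, steps=10)
codes = gaussian_positional_encoding(depths)  # shape (10, 64)
```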

Submitted: Jul 16, 2022