3D Detection
3D object detection aims to accurately identify and localize objects in three-dimensional space from various sensor inputs, primarily for applications such as autonomous driving and robotics. Current research emphasizes improving robustness and generalization across diverse datasets and challenging conditions (e.g., varying weather, occlusions) using techniques such as multi-dataset training, temporal information fusion, and diffusion models. Prominent approaches involve transformer-based architectures, BEV (bird's-eye-view) transformations, and innovative data augmentation strategies that address data scarcity and annotation costs. Advances in this field are crucial for enhancing the safety and reliability of autonomous systems and other applications requiring precise 3D scene understanding.
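To make the BEV (bird's-eye-view) transformation mentioned above concrete, here is a minimal sketch of one common LiDAR-side variant: rasterizing a point cloud into a top-down height map that a 2D detection backbone can consume. The function name, grid ranges, and resolution are illustrative assumptions, not taken from any of the papers listed below; camera-based methods such as BEVFormer instead lift image features into the BEV plane with learned attention.

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 51.2), y_range=(-25.6, 25.6),
                  resolution=0.2):
    """Rasterize a LiDAR point cloud of shape (N, 3) into a BEV grid.

    Each cell stores the maximum point height (z) falling into it; empty
    cells stay at 0. Ranges and resolution are illustrative defaults.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Keep only points inside the chosen field of view.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z = x[mask], y[mask], z[mask]
    # Map metric coordinates to integer grid indices.
    xi = ((x - x_range[0]) / resolution).astype(np.int64)
    yi = ((y - y_range[0]) / resolution).astype(np.int64)
    h = int(round((x_range[1] - x_range[0]) / resolution))
    w = int(round((y_range[1] - y_range[0]) / resolution))
    bev = np.zeros((h, w), dtype=np.float32)
    # Max-pool point heights per cell (unbuffered scatter-max).
    np.maximum.at(bev, (xi, yi), z)
    return bev
```

In practice, detectors stack several such channels (height, intensity, point density, or learned pillar features) rather than a single height map, but the metric-to-grid indexing above is the core of the transformation.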
Papers
Point-DETR3D: Leveraging Imagery Data with Spatial Point Prior for Weakly Semi-supervised 3D Object Detection
Hongzhi Gao, Zheng Chen, Zehui Chen, Lin Chen, Jiaming Liu, Shanghang Zhang, Feng Zhao
CR3DT: Camera-RADAR Fusion for 3D Detection and Tracking
Nicolas Baumann, Michael Baumgartner, Edoardo Ghignone, Jonas Kühne, Tobias Fischer, Yung-Hsu Yang, Marc Pollefeys, Michele Magno
Sunshine to Rainstorm: Cross-Weather Knowledge Distillation for Robust 3D Object Detection
Xun Huang, Hai Wu, Xin Li, Xiaoliang Fan, Chenglu Wen, Cheng Wang
OccTransformer: Improving BEVFormer for 3D camera-only occupancy prediction
Jian Liu, Sipeng Zhang, Chuixin Kong, Wenyuan Zhang, Yuhang Wu, Yikang Ding, Borun Xu, Ruibo Ming, Donglai Wei, Xianming Liu