3D Detection
3D object detection aims to accurately identify and locate objects in three-dimensional space from various sensor inputs, primarily for applications like autonomous driving and robotics. Current research emphasizes improving robustness and generalization across diverse datasets and challenging conditions (e.g., varying weather, occlusions) using techniques like multi-dataset training, temporal information fusion, and diffusion models. Prominent approaches involve transformer-based architectures, BEV (bird's-eye-view) transformations, and innovative data augmentation strategies to address data scarcity and annotation costs. Advancements in this field are crucial for enhancing the safety and reliability of autonomous systems and other applications requiring precise 3D scene understanding.
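For context on the BEV (bird's-eye-view) transformations mentioned above, the short sketch below shows one common way such a representation is built: rasterizing a LiDAR point cloud onto a 2D top-down grid. The function name, range crop, resolution, and single height channel are illustrative assumptions for this sketch and are not taken from any of the listed papers; real detectors typically add intensity and density channels and feed the grid to a 2D backbone.

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                  resolution=0.1):
    """Rasterize an (N, 3) LiDAR point cloud into a simple BEV height map.

    Parameter choices (range, resolution) are hypothetical, roughly
    KITTI-style defaults chosen for illustration only.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Keep only points inside the chosen ground-plane crop.
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]

    # Convert metric coordinates to integer grid cells.
    cols = ((x - x_range[0]) / resolution).astype(np.int32)
    rows = ((y - y_range[0]) / resolution).astype(np.int32)

    height = int((x_range[1] - x_range[0]) / resolution)
    width = int((y_range[1] - y_range[0]) / resolution)
    bev = np.zeros((height, width), dtype=np.float32)

    # Keep the maximum point height per cell (a minimal single-channel encoding).
    np.maximum.at(bev, (cols, rows), z)
    return bev

# Toy usage: 1000 random points in front of the sensor.
cloud = np.random.uniform([0, -40, -2], [70, 40, 1], size=(1000, 3))
bev_map = points_to_bev(cloud)
print(bev_map.shape)  # (700, 800)
```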
Papers
MonoWAD: Weather-Adaptive Diffusion Model for Robust Monocular 3D Object Detection
Youngmin Oh, Hyung-Il Kim, Seong Tae Kim, Jung Uk Kim
LiCROcc: Teach Radar for Accurate Semantic Occupancy Prediction using LiDAR and Camera
Yukai Ma, Jianbiao Mei, Xuemeng Yang, Licheng Wen, Weihua Xu, Jiangning Zhang, Botian Shi, Yong Liu, Xingxing Zuo