Semantic Scene Completion
Semantic scene completion (SSC) aims to reconstruct complete 3D scenes, including both geometry and semantic labels, from partial or incomplete sensor data such as sparse LiDAR point clouds or single images. Current research relies heavily on deep learning, focusing on transformer-based architectures, diffusion models, and hybrid approaches that combine neural radiance fields (NeRFs) with transformers to improve accuracy and handle occlusions. The field is crucial for autonomous driving and robotics: by enabling robust perception in challenging conditions such as occlusion and sparse sensing, it provides the richer scene understanding needed for safer and more efficient navigation.
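To make the task concrete: SSC systems typically output a dense voxel grid in which each voxel carries a semantic class label (with a dedicated "empty" class), and are evaluated with scene-completion IoU (occupied vs. empty, ignoring class) and mean IoU over semantic classes. Below is a minimal, hedged sketch of these two standard metrics; the tiny 2x2x2 scene, the class IDs, and the function names are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

# Toy SSC representation: a dense voxel grid of integer class IDs,
# where 0 means "empty" and positive IDs are semantic classes.
NUM_CLASSES = 3  # assumed small label set for illustration


def completion_iou(pred, gt):
    """Scene-completion IoU: occupied vs. empty, ignoring class labels."""
    pred_occ = pred > 0
    gt_occ = gt > 0
    inter = np.logical_and(pred_occ, gt_occ).sum()
    union = np.logical_or(pred_occ, gt_occ).sum()
    return inter / union if union else 1.0


def semantic_miou(pred, gt, num_classes=NUM_CLASSES):
    """Mean IoU over semantic classes, excluding the empty class 0."""
    ious = []
    for c in range(1, num_classes + 1):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0


# Hypothetical 2x2x2 scene: the prediction misses one occupied voxel.
gt = np.zeros((2, 2, 2), dtype=np.int64)
gt[0, 0, 0] = 1  # e.g. "road"
gt[1, 1, 1] = 2  # e.g. "building"
pred = gt.copy()
pred[1, 1, 1] = 0  # predicted empty where a "building" voxel exists

print(completion_iou(pred, gt))  # 1 of 2 occupied voxels recovered -> 0.5
print(semantic_miou(pred, gt))   # class 1 IoU 1.0, class 2 IoU 0.0 -> 0.5
```

Real benchmarks additionally mask out voxels that are unobservable from the sensor viewpoint before computing these scores, but the core metric definitions are as above.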
Papers
SLCF-Net: Sequential LiDAR-Camera Fusion for Semantic Scene Completion using a 3D Recurrent U-Net
Helin Cao, Sven Behnke
MonoOcc: Digging into Monocular Semantic Occupancy Prediction
Yupeng Zheng, Xiang Li, Pengfei Li, Yuhang Zheng, Bu Jin, Chengliang Zhong, Xiaoxiao Long, Hao Zhao, Qichao Zhang
OccFiner: Offboard Occupancy Refinement with Hybrid Propagation
Hao Shi, Song Wang, Jiaming Zhang, Xiaoting Yin, Zhongdao Wang, Zhijian Zhao, Guangming Wang, Jianke Zhu, Kailun Yang, Kaiwei Wang