Semantic Scene Completion
Semantic scene completion (SSC) aims to reconstruct complete 3D scenes, including both geometry and semantic labels, from partial or incomplete sensor data such as sparse LiDAR point clouds or single images. Current research relies heavily on deep learning, focusing on transformer-based architectures, diffusion models, and hybrid approaches that combine neural radiance fields (NeRFs) with transformers to improve accuracy and handle occlusions. The field is important for autonomous driving and robotics: by inferring occluded and unobserved regions, SSC provides a richer scene representation that supports robust perception and safer, more efficient navigation in challenging conditions.
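To make the task definition concrete, here is a minimal sketch (not taken from any of the papers below) of the representation SSC methods commonly predict: a dense voxel grid where each cell holds a semantic class id, with class 0 meaning empty space. The class names, grid size, and helper function here are illustrative assumptions; the per-class intersection-over-union shown is the standard SSC evaluation metric.

```python
import numpy as np

# Hypothetical label set for illustration: 0 = empty, 1 = road, 2 = building.
NUM_CLASSES = 3

def semantic_iou(pred, gt, num_classes):
    """Per-class intersection-over-union over voxel grids.

    Skips the empty class (0), as SSC benchmarks typically score
    occupied semantic classes only.
    """
    ious = []
    for c in range(1, num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Toy 4x4x4 ground-truth scene: a "road" slab and a "building" block.
gt = np.zeros((4, 4, 4), dtype=int)
gt[0] = 1               # 16 road voxels
gt[1:3, 1:3, 1:3] = 2   # 8 building voxels

# A hypothetical prediction that misses one road voxel.
pred = gt.copy()
pred[0, 0, 0] = 0

ious = semantic_iou(pred, gt, NUM_CLASSES)
print(ious)  # road IoU = 15/16, building IoU = 1.0
```

A real SSC model would produce `pred` from partial input (e.g. a sparse LiDAR sweep voxelized into the same grid), and benchmarks report the mean of these per-class IoUs (mIoU) alongside a class-agnostic completion IoU.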
Papers
Symphonize 3D Semantic Scene Completion with Contextual Instance Queries
Haoyi Jiang, Tianheng Cheng, Naiyu Gao, Haoyang Zhang, Tianwei Lin, Wenyu Liu, Xinggang Wang
SSC-RS: Elevate LiDAR Semantic Scene Completion with Representation Separation and BEV Fusion
Jianbiao Mei, Yu Yang, Mengmeng Wang, Tianxin Huang, Xuemeng Yang, Yong Liu
LODE: Locally Conditioned Eikonal Implicit Scene Completion from Sparse LiDAR
Pengfei Li, Ruowen Zhao, Yongliang Shi, Hao Zhao, Jirui Yuan, Guyue Zhou, Ya-Qin Zhang
OccDepth: A Depth-Aware Method for 3D Semantic Scene Completion
Ruihang Miao, Weizhou Liu, Mingrui Chen, Zheng Gong, Weixin Xu, Chen Hu, Shuchang Zhou