3D Scene Reconstruction
3D scene reconstruction aims to create realistic three-dimensional models of environments from various input data, such as images, LiDAR scans, and other sensor readings. Current research heavily focuses on learned scene representations, including Neural Radiance Fields (NeRFs), an implicit approach known for high-fidelity rendering, and 3D Gaussian Splatting, an explicit representation known for fast, real-time rendering, often enhanced by techniques like octree structures and multimodal fusion. These advances are significantly impacting robotics, cultural heritage preservation, and autonomous driving by enabling accurate 3D mapping, object recognition, and improved navigation in complex environments.
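To make the NeRF side of this family concrete, the sketch below implements the standard volume-rendering quadrature from the original NeRF formulation (Mildenhall et al., 2020), which composites per-sample densities and colors along a ray into a single pixel color. This is a minimal, self-contained NumPy sketch, not code from any of the papers listed here; the function name volume_render is hypothetical, and the sample densities and colors, which a real NeRF would query from a trained MLP, are stubbed out with toy values.

    import numpy as np

    def volume_render(sigmas, colors, t_vals):
        """Composite per-sample densities and colors along one ray.

        sigmas : (N,)   non-negative volume densities at each sample
        colors : (N, 3) RGB colors at each sample
        t_vals : (N,)   sample depths along the ray, in increasing order
        Returns the rendered RGB color for this ray's pixel.
        """
        # Distances between adjacent samples; the last interval is
        # treated as effectively infinite, as in the NeRF paper.
        deltas = np.append(np.diff(t_vals), 1e10)
        # Per-interval opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
        alphas = 1.0 - np.exp(-sigmas * deltas)
        # Transmittance: probability the ray reaches sample i unoccluded.
        trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1]))
        # Final weights sum to at most 1 and composite the colors.
        weights = alphas * trans
        return (weights[:, None] * colors).sum(axis=0)

    # Toy usage: a synthetic "fog" that gets denser and redder with depth.
    t = np.linspace(0.1, 4.0, 64)
    sigma = 0.5 * t                                  # density grows with depth
    rgb = np.stack([t / 4.0,                         # red channel
                    0.2 * np.ones_like(t),           # green channel
                    1.0 - t / 4.0], axis=-1)         # blue channel
    print(volume_render(sigma, rgb, t))              # accumulated pixel color

Training a NeRF amounts to optimizing the network producing sigmas and colors so that rendered rays match observed pixels; Gaussian Splatting replaces this per-ray integration with rasterization of explicit 3D Gaussians, which is where its speed advantage comes from.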
Papers
Every Dataset Counts: Scaling up Monocular 3D Object Detection with Joint Datasets Training
Fulong Ma, Xiaoyang Yan, Guoyang Zhao, Xiaojie Xu, Yuxuan Liu, Jun Ma, Ming Liu
PC-NeRF: Parent-Child Neural Radiance Fields under Partial Sensor Data Loss in Autonomous Driving Environments
Xiuzhong Hu, Guangming Xiong, Zheng Zang, Peng Jia, Yuxuan Han, Junyi Ma
Improving Neural Indoor Surface Reconstruction with Mask-Guided Adaptive Consistency Constraints
Xinyi Yu, Liqin Lu, Jintao Rong, Guangkai Xu, Linlin Ou
Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering
Chi Zhang, Wei Yin, Gang Yu, Zhibin Wang, Tao Chen, Bin Fu, Joey Tianyi Zhou, Chunhua Shen
PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views with Learnt Shape Programs
Wentao Hu, Jia Zheng, Zixin Zhang, Xiaojun Yuan, Jian Yin, Zihan Zhou
FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models
Guangkai Xu, Wei Yin, Hao Chen, Chunhua Shen, Kai Cheng, Feng Zhao