Surface Reconstruction
Surface reconstruction aims to create accurate 3D models from various input data, such as images, point clouds, or sensor readings, with the primary objectives of high-fidelity geometric detail and efficient processing. Recent research heavily emphasizes learned scene representations, notably 3D Gaussian splatting (an explicit, point-based representation) and neural signed distance functions (SDFs, an implicit representation), often combined with techniques such as planar priors and multi-view consistency constraints to improve accuracy and scalability. These advances matter for applications ranging from autonomous driving and urban planning to scientific visualization and cultural heritage preservation, enabling more realistic simulations and detailed analysis of complex 3D scenes.
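To make the SDF idea concrete, here is a minimal illustrative sketch (not drawn from any of the listed papers): a signed distance function returns a negative value inside the surface, zero on it, and a positive value outside, and its normalized gradient gives the surface normal. The sphere SDF and finite-difference normal below are standard textbook constructions, assumed here purely for illustration.

```python
import math

def sdf_sphere(p, radius=1.0):
    """Signed distance from point p to a sphere at the origin:
    negative inside, zero on the surface, positive outside."""
    return math.sqrt(sum(c * c for c in p)) - radius

def normal(sdf, p, eps=1e-5):
    """Approximate the surface normal as the normalized
    finite-difference gradient of the SDF at p."""
    g = []
    for i in range(3):
        hi, lo = list(p), list(p)
        hi[i] += eps
        lo[i] -= eps
        g.append((sdf(hi) - sdf(lo)) / (2 * eps))
    length = math.sqrt(sum(c * c for c in g))
    return [c / length for c in g]

print(sdf_sphere((0.0, 0.0, 0.0)))          # -1.0 (inside)
print(sdf_sphere((2.0, 0.0, 0.0)))          # 1.0 (outside)
print(normal(sdf_sphere, (1.0, 0.0, 0.0)))  # approximately [1, 0, 0]
```

Neural SDF methods such as those surveyed here replace the analytic `sdf_sphere` with a trained network, but the zero level set and gradient-based normals are used the same way.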
Papers
Rethinking the Approximation Error in 3D Surface Fitting for Point Cloud Normal Estimation
Hang Du, Xuejun Yan, Jingjing Wang, Di Xie, Shiliang Pu
Depth-NeuS: Neural Implicit Surfaces Learning for Multi-view Reconstruction Based on Depth Information Optimization
Hanqi Jiang, Cheng Zeng, Runnan Chen, Shuai Liang, Yinhe Han, Yichao Gao, Conglin Wang
Dynamic Multi-View Scene Reconstruction Using Neural Implicit Surface
Decai Chen, Haofei Lu, Ingo Feldmann, Oliver Schreer, Peter Eisert
HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization
Zhihao Liang, Zhangjin Huang, Changxing Ding, Kui Jia