3D Reconstruction
3D reconstruction aims to create three-dimensional models from two-dimensional data sources such as images or videos, with applications spanning diverse fields. Current research emphasizes improving accuracy and efficiency, particularly in challenging scenarios such as sparse viewpoints, dynamic scenes, and occluded objects. Popular approaches build on neural radiance fields (NeRFs), Gaussian splatting, and other deep learning architectures, often incorporating techniques like active view selection and multi-view stereo to improve reconstruction quality. These advances are driving progress in areas such as robotics, medical imaging, and remote sensing, where more accurate and detailed 3D models are increasingly in demand.
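As a rough illustration of the volume-rendering idea shared by NeRF-style methods, the sketch below alpha-composites sampled densities and colors along a single camera ray using the standard quadrature rule. The function name and NumPy formulation are illustrative assumptions, not code from any of the papers listed here.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray.

    sigmas: (N,) volume densities at the sampled points
    colors: (N, 3) RGB colors at the sampled points
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)            # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)           # accumulated transparency
    trans = np.concatenate([[1.0], trans[:-1]])        # light surviving to each sample
    weights = trans * alphas                           # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)     # expected ray color
```

Gaussian splatting uses the same front-to-back alpha compositing, but with opacities coming from projected 3D Gaussians rather than densities sampled from a neural field.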
Papers
3D reconstruction from spherical images: A review of techniques, applications, and prospects
San Jiang, Yaxin Li, Duojie Weng, Kan You, Wu Chen
PredRecon: A Prediction-boosted Planning Framework for Fast and High-quality Autonomous Aerial Reconstruction
Chen Feng, Haojia Li, Fei Gao, Boyu Zhou, Shaojie Shen
Towards Live 3D Reconstruction from Wearable Video: An Evaluation of V-SLAM, NeRF, and Videogrammetry Techniques
David Ramirez, Suren Jayasuriya, Andreas Spanias
Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion
Dario Pavllo, David Joseph Tan, Marie-Julie Rakotosaona, Federico Tombari