3D Scene Reconstruction
3D scene reconstruction aims to create realistic three-dimensional models of environments from various input data, such as images, LiDAR scans, and other sensor readings. Current research focuses heavily on learned scene representations: Neural Radiance Fields (NeRFs), an implicit representation known for high-fidelity rendering, and 3D Gaussian Splatting, an explicit representation known for fast, efficient rendering. Both are often enhanced by techniques such as octree structures and multimodal sensor fusion. These advances are significantly impacting robotics, cultural heritage preservation, and autonomous driving by enabling accurate 3D mapping, object recognition, and improved navigation in complex environments.
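As a rough illustration of the rendering machinery behind NeRF-style methods, the sketch below implements the standard discrete volume-rendering quadrature: each sample along a camera ray contributes a weight w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the accumulated transmittance. This is a minimal, self-contained sketch, not code from any of the papers listed here; the function name and toy inputs are illustrative.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray into a
    pixel color, using the discrete quadrature common to NeRF-style
    volume rendering."""
    # Per-segment opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = trans * alphas
    # Weighted sum of sample colors gives the rendered pixel color
    return weights @ colors, weights

# Toy example: three samples along a ray with increasing density
sigmas = np.array([0.0, 2.0, 5.0])                  # densities
deltas = np.array([0.1, 0.1, 0.1])                  # inter-sample distances
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
rgb, w = volume_render(sigmas, colors, deltas)
```

A useful sanity check on this formulation: the weights sum to 1 minus the transmittance through the whole ray, so empty space (zero density) contributes nothing and the weights never exceed 1.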
Papers
T-3DGS: Removing Transient Objects for 3D Scene Reconstruction
Vadim Pryadilshchikov, Alexander Markin, Artem Komarichev, Ruslan Rakhimov, Peter Wonka, Evgeny Burnaev
Robust Bayesian Scene Reconstruction by Leveraging Retrieval-Augmented Priors
Herbert Wright, Weiming Zhi, Matthew Johnson-Roberson, Tucker Hermans