3D Scene Reconstruction
3D scene reconstruction aims to create realistic three-dimensional models of environments from various input data, such as images, LiDAR scans, and other sensor readings. Current research centers on learned scene representations: Neural Radiance Fields (NeRFs), an implicit neural representation prized for high-fidelity rendering, and 3D Gaussian Splatting, an explicit point-based representation prized for fast rasterization, often enhanced by techniques such as octree structures and multimodal sensor fusion. These advances are significantly impacting robotics, cultural heritage preservation, and autonomous driving by enabling accurate 3D mapping, object recognition, and improved navigation in complex environments.
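To make the NeRF side of this concrete, the core rendering operation is volume rendering along a camera ray: each sample's predicted density is converted to an opacity, and colors are alpha-composited front to back weighted by the accumulated transmittance. The sketch below is a minimal NumPy illustration of that compositing step only (the function name and inputs are illustrative; a real NeRF obtains `densities` and `colors` from a trained network, not as given arrays):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """NeRF-style volume rendering along a single ray.

    densities: (N,) non-negative volume densities sigma_i at N ray samples
    colors:    (N, 3) RGB color predicted at each sample
    deltas:    (N,) distances between adjacent samples along the ray
    Returns the accumulated RGB color for the ray.
    """
    # Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample contributes T_i * alpha_i of its color
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

As a sanity check, a ray whose first sample is nearly opaque returns approximately that sample's color, since later samples receive almost no transmittance.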