Large-Scale Scene Reconstruction
Large-scale scene reconstruction aims to build detailed 3D models of expansive environments from collections of images or video, overcoming challenges in scalability, memory consumption, and rendering speed. Current research centers on learned scene representations such as neural radiance fields (NeRFs) and 3D Gaussian splatting, often combined with scene partitioning, multi-resolution representations, and efficient data structures (e.g., hash grids, octrees) to handle the vast amounts of data involved. These advances enable high-fidelity novel view synthesis and real-time rendering of large scenes, with applications in virtual and augmented reality, robotics, and autonomous navigation.
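To make the data-structure idea concrete, here is a minimal sketch of a multi-resolution hash-grid feature lookup in the spirit of Instant-NGP-style encodings. All sizes, names, and the hashing primes are illustrative assumptions, not taken from any particular paper; real implementations trilinearly interpolate between the eight surrounding grid corners and learn the tables by backpropagation.

```python
import numpy as np

# Illustrative spatial-hash primes (a common choice; values are an assumption).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(coords, table_size):
    """Hash integer 3D grid coordinates into a fixed-size feature table."""
    c = coords.astype(np.uint64)
    h = c[..., 0] * PRIMES[0] ^ c[..., 1] * PRIMES[1] ^ c[..., 2] * PRIMES[2]
    return (h % np.uint64(table_size)).astype(np.int64)

def encode(x, tables, base_res=16, growth=1.5):
    """Concatenate per-level features for points x in [0, 1)^3.

    Each level uses a finer grid resolution; here we do a nearest-corner
    lookup for brevity instead of trilinear interpolation.
    """
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        idx = hash_coords(np.floor(x * res), len(table))
        feats.append(table[idx])
    return np.concatenate(feats, axis=-1)

# Toy usage: 4 levels, 2^14 table entries, 2 features per entry.
rng = np.random.default_rng(0)
tables = [rng.normal(size=(2**14, 2)).astype(np.float32) for _ in range(4)]
x = rng.random((5, 3))
print(encode(x, tables).shape)  # (5, 8): 4 levels x 2 features each
```

The key property is that memory is bounded by the table size rather than the grid resolution, so fine levels can alias (hash collisions) while coarse levels stay collision-free; this is what lets such encodings scale to large scenes.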