Large-Scale Scene Reconstruction
Large-scale scene reconstruction aims to build detailed 3D models of expansive environments from many images or videos while overcoming challenges in scalability, memory consumption, and rendering speed. Current research relies on learned scene representations such as neural radiance fields (NeRFs), which are implicit, and 3D Gaussian splatting, which is explicit, often combined with scene partitioning, multi-resolution representations, and efficient data structures (e.g., hash grids, octrees) to cope with the vast amount of data. These advances enable high-fidelity novel view synthesis and real-time rendering of large scenes, with applications in virtual and augmented reality, robotics, and autonomous navigation.
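To make one of these data structures concrete, the sketch below implements a toy multi-resolution hash-grid encoding in the style of Instant-NGP, using only NumPy. It is a minimal illustration, not any specific paper's implementation: the function names (`hash_coords`, `encode`) and all hyperparameters (number of levels, table size, base resolution, growth factor) are assumed here for demonstration.

```python
# Minimal sketch of a multi-resolution hash-grid encoding (Instant-NGP style).
# Each level covers the unit cube at a finer resolution; grid corners are mapped
# into a fixed-size feature table via a spatial hash, so memory stays bounded
# regardless of scene extent. Hyperparameters are illustrative, not tuned.
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # spatial-hash primes

def hash_coords(coords, table_size):
    """Hash integer 3D grid coordinates into [0, table_size)."""
    h = np.zeros(coords.shape[0], dtype=np.uint64)
    for d in range(3):
        h ^= coords[:, d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

def encode(points, tables, base_res=16, growth=1.5):
    """Encode points in [0,1]^3 by trilinear interpolation at every level."""
    features = []
    for level, table in enumerate(tables):
        res = int(base_res * growth**level)  # grid resolution at this level
        x = points * res
        lo = np.floor(x).astype(np.int64)    # lower corner of the enclosing cell
        frac = x - lo                        # fractional position inside the cell
        feat = np.zeros((points.shape[0], table.shape[1]))
        # Accumulate trilinearly weighted features from the 8 cell corners.
        for corner in range(8):
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            w = np.prod(np.where(offset, frac, 1.0 - frac), axis=1)
            idx = hash_coords(lo + offset, table.shape[0])
            feat += w[:, None] * table[idx]
        features.append(feat)
    return np.concatenate(features, axis=1)  # concatenated multi-level features

# Usage: four levels of 2^14-entry tables with 2 features per entry.
rng = np.random.default_rng(0)
n_levels, table_size, feat_dim = 4, 2**14, 2
tables = [rng.normal(0.0, 1e-4, (table_size, feat_dim)) for _ in range(n_levels)]
pts = rng.random((5, 3))                     # five query points in the unit cube
print(encode(pts, tables).shape)             # (5, n_levels * feat_dim) -> (5, 8)
```

In a full pipeline these per-point feature vectors would feed a small MLP trained end to end; the hash tables themselves are the learned parameters, which is what lets such grids scale to large scenes without an explicit dense voxel volume.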