Large-Scale Scene Reconstruction

Large-scale scene reconstruction aims to build detailed 3D models of expansive environments from many images or videos, overcoming challenges in scalability, memory consumption, and rendering speed. Current research builds on implicit neural representations such as neural radiance fields (NeRFs) and, increasingly, explicit representations such as 3D Gaussian splatting, often employing scene partitioning, multi-resolution representations, and efficient data structures (e.g., hash grids, octrees) to handle the vast amounts of data. These advances enable high-fidelity novel view synthesis and real-time rendering of large scenes, with applications in virtual and augmented reality, robotics, and autonomous navigation.
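Scene partitioning is typically done by tiling the ground plane into overlapping blocks and assigning each camera to the blocks whose footprint contains it, so that each block can be trained independently and the overlap keeps block boundaries consistent. A minimal sketch of this idea (the function name, block size, and overlap margin are illustrative assumptions, not taken from any specific paper):

```python
import numpy as np

def partition_cameras(positions, block_size=50.0, overlap=10.0):
    """Assign each camera to every ground-plane block whose expanded
    footprint contains it. Blocks tile the (x, y) plane; the overlap
    margin makes border cameras land in multiple blocks.

    positions: (N, 3) array of camera centers in world coordinates.
    Returns a dict mapping (ix, iy) block indices to camera indices.
    """
    positions = np.asarray(positions, dtype=float)
    mins = positions[:, :2].min(axis=0)
    maxs = positions[:, :2].max(axis=0)
    # Number of blocks needed along x and y (at least one each).
    n_blocks = np.maximum(np.ceil((maxs - mins) / block_size), 1).astype(int)

    assignment = {}
    for ix in range(n_blocks[0]):
        for iy in range(n_blocks[1]):
            # Block footprint, grown by the overlap margin on all sides.
            lo = mins + np.array([ix, iy]) * block_size - overlap
            hi = lo + block_size + 2 * overlap
            inside = np.all((positions[:, :2] >= lo) &
                            (positions[:, :2] < hi), axis=1)
            idx = np.nonzero(inside)[0].tolist()
            if idx:
                assignment[(ix, iy)] = idx
    return assignment
```

Real systems refine this with visibility- or coverage-based assignment rather than pure position, but the overlap-then-merge structure is the same.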

Papers