Large-Scale Scene Reconstruction
Large-scale scene reconstruction aims to build detailed 3D models of expansive environments from multiple images or videos, overcoming challenges in scalability, memory consumption, and rendering speed. Current research draws on implicit neural representations such as neural radiance fields (NeRFs), as well as explicit representations such as 3D Gaussian splatting, often combined with scene partitioning, multi-resolution representations, and efficient data structures (e.g., hash grids, octrees) to handle the vast amounts of data involved. These advances enable high-fidelity novel view synthesis and real-time rendering of large scenes, with applications in virtual and augmented reality, robotics, and autonomous navigation.
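To make the data-structure side concrete, the sketch below shows a minimal multi-resolution hash grid encoding in the spirit of Instant-NGP: each resolution level keeps a fixed-size feature table, so memory stays bounded regardless of scene extent, with hash collisions resolved implicitly during training. This is an illustrative sketch only, not code from the papers listed here; the class name HashGridEncoder and all parameters (level count, table size, feature width) are assumptions chosen for clarity.

```python
import numpy as np

# Standard spatial-hashing primes (Teschner et al.), as used in Instant-NGP.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(coords, table_size):
    """Hash integer 3D grid coordinates into a fixed-size table."""
    coords = coords.astype(np.uint64)
    h = coords[..., 0] * PRIMES[0]
    h ^= coords[..., 1] * PRIMES[1]
    h ^= coords[..., 2] * PRIMES[2]
    return (h % np.uint64(table_size)).astype(np.int64)

class HashGridEncoder:
    """Illustrative multi-resolution hash grid encoder (names are assumptions)."""

    def __init__(self, n_levels=8, base_res=16, growth=1.5,
                 table_size=2**14, feat_dim=2, seed=0):
        rng = np.random.default_rng(seed)
        self.resolutions = [int(base_res * growth**l) for l in range(n_levels)]
        # One small (here randomly initialized) feature table per level;
        # in a real system these entries are learned parameters.
        self.tables = [rng.normal(0.0, 1e-4, (table_size, feat_dim))
                       for _ in range(n_levels)]
        self.table_size = table_size

    def encode(self, x):
        """Map points x in [0, 1]^3 to concatenated per-level features."""
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            pos = x * res
            lo = np.floor(pos).astype(np.int64)   # lower voxel corner
            frac = pos - lo                       # trilinear weights
            level_feat = 0.0
            # Trilinearly interpolate the 8 corners of the enclosing voxel.
            for corner in range(8):
                offset = np.array([(corner >> i) & 1 for i in range(3)])
                w = np.prod(np.where(offset, frac, 1.0 - frac),
                            axis=-1, keepdims=True)
                idx = hash_coords(lo + offset, self.table_size)
                level_feat = level_feat + w * table[idx]
            feats.append(level_feat)
        return np.concatenate(feats, axis=-1)

enc = HashGridEncoder()
pts = np.random.rand(4, 3)       # 4 random points in the unit cube
print(enc.encode(pts).shape)     # (4, n_levels * feat_dim) = (4, 16)
```

The key scalability property is that the table size is decoupled from the grid resolution: fine levels alias many voxels to the same entry, which works in practice because only surface-adjacent regions receive meaningful gradients.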
Papers
CrossView-GS: Cross-view Gaussian Splatting For Large-scale Scene Reconstruction
Chenhao Zhang, Yuanping Cao, Lei Zhang
PG-SAG: Parallel Gaussian Splatting for Fine-Grained Large-Scale Urban Buildings Reconstruction via Semantic-Aware Grouping
Tengfei Wang, Xin Wang, Yongmao Hou, Yiwei Xu, Wendi Zhang, Zongqian Zhan