Large-Scale Scene Reconstruction
Large-scale scene reconstruction aims to create detailed 3D models of expansive environments from multiple images or videos, while addressing challenges in scalability, memory consumption, and rendering speed. Current research centers on learned scene representations such as neural radiance fields (NeRFs) and 3D Gaussian splatting, often combined with scene partitioning, multi-resolution representations, and efficient data structures (e.g., hash grids, octrees) to handle the vast amounts of data involved. These advances enable high-fidelity novel view synthesis and real-time rendering of large scenes, with applications in virtual and augmented reality, robotics, and autonomous navigation.
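To make the multi-resolution hash-grid idea concrete, here is a minimal NumPy sketch in the spirit of Instant-NGP-style encodings: each resolution level stores a small learnable feature table, integer grid corners are mapped into the table with a spatial hash, and trilinearly interpolated features from all levels are concatenated. All parameters here (NUM_LEVELS, COARSEST_RES, GROWTH, TABLE_SIZE, FEAT_DIM) are illustrative assumptions, not values from any of the papers below, and the MLP decoder that would consume the features is omitted.

import numpy as np

# Hypothetical hyperparameters; real systems tune these per scene.
NUM_LEVELS = 4          # number of resolution levels
COARSEST_RES = 16       # grid resolution at the coarsest level
GROWTH = 2.0            # per-level resolution growth factor
TABLE_SIZE = 2 ** 14    # hash-table entries per level
FEAT_DIM = 2            # feature channels stored per entry

rng = np.random.default_rng(0)
# One small learnable feature table per level (randomly initialized here).
tables = [rng.normal(0, 1e-4, (TABLE_SIZE, FEAT_DIM)) for _ in range(NUM_LEVELS)]

def hash_coords(ijk: np.ndarray) -> np.ndarray:
    """Spatial hash of integer grid coords (prime-multiply XOR scheme)."""
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    h = np.zeros(ijk.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= ijk[..., d].astype(np.uint64) * primes[d]
    return h % TABLE_SIZE

def encode(x: np.ndarray) -> np.ndarray:
    """Multi-resolution hash encoding of points x in [0, 1]^3.

    Returns concatenated trilinearly interpolated features from all levels;
    an MLP decoder (not shown) would map these to density and color.
    """
    feats = []
    for lvl in range(NUM_LEVELS):
        res = int(COARSEST_RES * GROWTH ** lvl)
        xg = x * res
        base = np.floor(xg).astype(np.int64)   # lower corner of the cell
        frac = xg - base                       # position within the cell
        acc = np.zeros((x.shape[0], FEAT_DIM))
        # Trilinear interpolation over the 8 cell corners.
        for corner in range(8):
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            w = np.prod(np.where(offset, frac, 1.0 - frac), axis=-1)
            idx = hash_coords(base + offset)
            acc += w[:, None] * tables[lvl][idx]
        feats.append(acc)
    return np.concatenate(feats, axis=-1)      # shape: (N, NUM_LEVELS * FEAT_DIM)

pts = rng.random((5, 3))
print(encode(pts).shape)  # (5, 8)

The fixed-size hash table is the scalability trick: fine levels alias many cells onto the same entry instead of allocating a dense grid, which keeps memory bounded even for large scenes.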
Papers
Grid-guided Neural Radiance Fields for Large Urban Scenes
Linning Xu, Yuanbo Xiangli, Sida Peng, Xingang Pan, Nanxuan Zhao, Christian Theobalt, Bo Dai, Dahua Lin
Progressively Optimized Local Radiance Fields for Robust View Synthesis
Andreas Meuleman, Yu-Lun Liu, Chen Gao, Jia-Bin Huang, Changil Kim, Min H. Kim, Johannes Kopf
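The scene-partitioning strategy mentioned in the summary, taken furthest by approaches that fit many local radiance fields, can also be sketched briefly. The toy code below is a hedged illustration of the general idea only, not either paper's method: the scene's ground plane is split into overlapping tiles, each tile gets its own small model (a stub LocalModel here), and queries near tile borders blend the overlapping models with linear falloff weights. TILE_SIZE, OVERLAP, and all function names are hypothetical.

import numpy as np

TILE_SIZE = 50.0    # tile extent in scene units (assumption)
OVERLAP = 10.0      # overlap band used to blend adjacent tiles (assumption)

class LocalModel:
    """Stand-in for a per-tile radiance field (NeRF, Gaussians, ...)."""
    def __init__(self, tile_id):
        self.tile_id = tile_id
    def query(self, pts):
        # A real model would predict density/color; we return a constant.
        return np.full(len(pts), hash(self.tile_id) % 7, dtype=float)

models = {}  # tile id -> lazily created local model

def tiles_for(p):
    """All tile ids whose padded extent contains 2D point p."""
    lo = np.floor((p - OVERLAP) / TILE_SIZE).astype(int)
    hi = np.floor((p + OVERLAP) / TILE_SIZE).astype(int)
    return [(i, j) for i in range(lo[0], hi[0] + 1)
                   for j in range(lo[1], hi[1] + 1)]

def blend_weight(p, tile):
    """Weight that falls off linearly near the padded tile border."""
    center = (np.array(tile) + 0.5) * TILE_SIZE
    half = TILE_SIZE / 2 + OVERLAP
    margin = half - np.abs(p - center)          # distance inside padded tile
    return float(np.clip(margin / OVERLAP, 0, 1).prod())

def query_scene(p):
    """Blend the predictions of all local models covering point p."""
    total_w, total_v = 0.0, 0.0
    for tile in tiles_for(p):
        model = models.setdefault(tile, LocalModel(tile))
        w = blend_weight(p, tile)
        total_w += w
        total_v += w * model.query(p[None])[0]
    return total_v / max(total_w, 1e-8)

print(query_scene(np.array([49.0, 12.0])))  # point near a tile border

Because each local model only covers a bounded region, memory and optimization cost grow with the number of tiles rather than with scene extent, and models can be created, trained, or dropped progressively as the captured trajectory grows.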