Urban Scene Reconstruction
Urban scene reconstruction aims to build detailed 3D models of cities from data sources such as images and LiDAR scans, enabling realistic simulation and novel view synthesis. Current research focuses on improving efficiency and scalability with neural radiance fields (NeRFs) and 3D Gaussian splatting, often incorporating techniques such as incremental view selection and federated learning to handle large datasets. These advances matter for autonomous driving, urban planning, and virtual and augmented reality, where accurate, photorealistic representations of complex urban environments are needed.
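At their core, both NeRF volume rendering and Gaussian splatting produce a pixel color by front-to-back alpha compositing of samples (ray samples for NeRFs, depth-sorted projected Gaussians for splatting). The sketch below is a toy illustration of that shared compositing rule only, not any specific paper's implementation; the function name and list-based types are invented for clarity.

```python
def composite(colors, alphas):
    """Front-to-back alpha compositing of RGB samples.

    colors: list of [r, g, b] samples, sorted front to back.
    alphas: per-sample opacity in [0, 1].
    Returns the composited color and the remaining transmittance.
    """
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed
    for color, alpha in zip(colors, alphas):
        # Each sample contributes its opacity weighted by how much
        # light still reaches it past the samples in front.
        weight = transmittance * alpha
        out = [o + weight * c for o, c in zip(out, color)]
        transmittance *= (1.0 - alpha)
    return out, transmittance


# A half-opaque red sample in front of a half-opaque green one:
color, t = composite([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], [0.5, 0.5])
# color == [0.5, 0.25, 0.0], t == 0.25
```

NeRFs evaluate this sum along rays with alphas derived from a learned density field, while Gaussian splatting rasterizes sorted 2D Gaussians, which is largely what makes it faster to render.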