Indoor Scene Reconstruction
Indoor scene reconstruction aims to create accurate 3D models of indoor environments from input data such as images or depth scans. Current research focuses on improving the accuracy and efficiency of these models using techniques such as neural radiance fields (NeRFs), signed distance functions (SDFs), and Gaussian splatting, often incorporating geometric priors and hybrid representations to handle challenges such as textureless regions and occlusions. These advances matter for augmented reality, robotics, and virtual reality, enabling more realistic and detailed virtual environments and improved scene understanding for autonomous systems.
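To make the SDF representation mentioned above concrete, here is a minimal sketch (the `box_sdf` helper is illustrative, not taken from any specific paper): an SDF maps each 3D point to its signed distance from the nearest surface, with negative values inside the geometry, positive outside, and the surface itself as the zero level set. A room-scale box is used as a toy scene.

```python
import numpy as np

def box_sdf(p, half_extents):
    """Signed distance from point p to an axis-aligned box centered at the
    origin. Negative inside, positive outside, zero on the surface."""
    q = np.abs(p) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0))  # distance when outside
    inside = min(np.max(q), 0.0)                  # depth when inside
    return outside + inside

# A 4 m x 3 m x 2.5 m "room" volume centered at the origin.
room = np.array([2.0, 1.5, 1.25])
print(box_sdf(np.array([0.0, 0.0, 0.0]), room))  # -1.25: inside the room
print(box_sdf(np.array([3.0, 0.0, 0.0]), room))  # 1.0: outside the room
print(box_sdf(np.array([2.0, 0.0, 0.0]), room))  # 0.0: on a wall
```

Reconstruction methods that use SDFs learn such a function (typically as a neural network) from observations; the final mesh is then extracted from the zero level set, e.g. with marching cubes.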