Scene Reconstruction
Scene reconstruction aims to build detailed 3D models of environments from input sources such as images, LiDAR scans, and radar data. Current research focuses on improving the robustness and efficiency of reconstruction, particularly for large-scale and dynamic scenes, using representations such as neural radiance fields (NeRFs) and 3D Gaussian splatting. These advances are central to applications ranging from autonomous driving and robotics to virtual and augmented reality, enabling more realistic simulation and improved human-computer interaction.
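Both families of methods ultimately render images by compositing color and opacity along camera rays. The sketch below is a minimal, illustrative NumPy implementation of the discrete volume-rendering quadrature used by NeRF-style methods (alpha compositing of per-sample densities and colors); the function and variable names are placeholders rather than any particular codebase's API, and a real pipeline would use densities and colors predicted by a learned model instead of the toy values shown here.

import numpy as np

def render_ray(densities, colors, t_vals):
    """Composite per-sample densities and colors into one pixel color
    using the discrete NeRF-style volume-rendering quadrature.

    densities: (N,) non-negative volume densities sigma_i at each sample
    colors:    (N, 3) RGB colors c_i predicted at each sample
    t_vals:    (N,) increasing sample positions along the ray
    """
    # Distances between adjacent samples; the final interval is padded.
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    # Opacity of each interval: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j): the probability that
    # the ray reaches sample i without being absorbed earlier.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = trans * alphas
    # Weighted sum of sample colors gives the rendered pixel color.
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: a ray crossing a dense red region in the middle of its extent.
t = np.linspace(0.0, 1.0, 64)
sigma = np.where((t > 0.4) & (t < 0.6), 10.0, 0.0)
rgb = np.tile(np.array([1.0, 0.0, 0.0]), (64, 1))
print(render_ray(sigma, rgb, t))  # approaches pure red as density grows

Gaussian splatting replaces the per-ray sampling with depth-sorted, rasterized 3D Gaussians, but it accumulates their contributions with the same front-to-back alpha-blending weights.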