Scene Reconstruction
Scene reconstruction aims to create detailed 3D models of environments from various input sources, such as images, LiDAR scans, and radar data. Current research focuses heavily on improving the robustness and efficiency of reconstruction methods, particularly for large-scale and dynamic scenes, using architectures such as neural radiance fields (NeRFs) and 3D Gaussian splatting. These advances are crucial for applications ranging from autonomous driving and robotics to virtual and augmented reality, enabling more realistic simulations and improved human-computer interaction.
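Both NeRFs and Gaussian splatting ultimately render a pixel by alpha-compositing samples along a viewing ray. The sketch below (a simplified illustration with assumed array shapes, not the code of any listed paper) shows that core compositing step with NumPy:

```python
import numpy as np

def composite_along_ray(colors, densities, deltas):
    """Alpha-composite samples along one ray, NeRF-style (illustrative sketch).

    colors:    (N, 3) RGB value of each sample
    densities: (N,)   volume density sigma at each sample
    deltas:    (N,)   distance between adjacent samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas  # contribution of each sample to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Example: the first sample is nearly opaque red, so it dominates the pixel
rgb = composite_along_ray(
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
    densities=np.array([100.0, 100.0]),
    deltas=np.array([1.0, 1.0]),
)
```

Gaussian splatting uses the same weighting scheme, but the per-sample opacities come from projected 3D Gaussians sorted by depth rather than from densities sampled along the ray.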
Papers
September 19, 2024
GaRField++: Reinforced Gaussian Radiance Fields for Large-Scale 3D Scene Reconstruction
Hanyue Zhang, Zhiliu Yang, Xinhe Zuo, Yuxin Tong, Ying Long, Chen Liu
DrivingForward: Feed-forward 3D Gaussian Splatting for Driving Scene Reconstruction from Flexible Surround-view Input
Qijian Tian, Xin Tan, Yuan Xie, Lizhuang Ma