Scene Reconstruction
Scene reconstruction aims to create detailed 3D models of environments from various input sources, such as images, LiDAR scans, and radar data. Current research focuses heavily on improving the robustness and efficiency of reconstruction methods, particularly for large-scale and dynamic scenes, employing representations such as neural radiance fields (NeRFs) and Gaussian splatting. These advances are crucial for applications ranging from autonomous driving and robotics to virtual and augmented reality, enabling more realistic simulations and improved human-computer interaction.
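Both representations mentioned above render images the same way at their core: samples along a viewing ray are blended front to back by alpha compositing. A minimal sketch of that shared rendering rule (illustrative only, not taken from any of the papers below) is:

```python
# Sketch of front-to-back alpha compositing, the rendering rule shared by
# NeRF-style volume rendering and Gaussian splatting. Each sample i along a
# ray contributes its color with weight w_i = T_i * alpha_i, where T_i is
# the transmittance (light not yet absorbed by closer samples).

def composite(alphas, colors):
    """Blend per-sample colors along one ray, ordered near to far.

    alphas: per-sample opacity values in [0, 1].
    colors: per-sample RGB triples (lists of 3 floats).
    Returns the rendered RGB value for the ray.
    """
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light still unabsorbed
    for alpha, color in zip(alphas, colors):
        weight = transmittance * alpha
        rgb = [c + weight * ci for c, ci in zip(rgb, color)]
        transmittance *= (1.0 - alpha)  # attenuate for samples behind
    return rgb

# A fully opaque red sample in front completely hides a green sample behind it.
print(composite([1.0, 0.5], [[1, 0, 0], [0, 1, 0]]))  # → [1.0, 0.0, 0.0]
```

The two families differ mainly in where the alphas come from: NeRFs derive them from a learned density field queried at sampled 3D points, while Gaussian splatting rasterizes explicit 3D Gaussians and composites their projected footprints.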
Papers
ActiveGS: Active Scene Reconstruction using Gaussian Splatting
Liren Jin, Xingguang Zhong, Yue Pan, Jens Behley, Cyrill Stachniss, Marija Popović
CoSurfGS: Collaborative 3D Surface Gaussian Splatting with Distributed Learning for Large Scene Reconstruction
Yuanyuan Gao, Yalun Dai, Hao Li, Weicai Ye, Junyi Chen, Danpeng Chen, Dingwen Zhang, Tong He, Guofeng Zhang, Junwei Han