3D Scene Reconstruction
3D scene reconstruction aims to build realistic three-dimensional models of environments from input data such as images, LiDAR scans, and other sensor readings. Current research focuses heavily on implicit neural representations, including Neural Radiance Fields (NeRFs) and Gaussian Splatting, which offer high-fidelity rendering and efficient processing, respectively, and are often enhanced by techniques such as octree structures and multimodal fusion. These advances are shaping robotics, cultural heritage preservation, and autonomous driving by enabling accurate 3D mapping, object recognition, and improved navigation in complex environments.
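Both representations mentioned above ultimately render a pixel by compositing samples front to back along a ray: NeRF integrates density and color predicted by an MLP, while Gaussian Splatting composites projected Gaussians with the same alpha-blending rule. The sketch below illustrates that shared quadrature in NumPy; the function name `composite_ray` and the random toy inputs are illustrative stand-ins for a trained model's per-sample outputs, not code from any of the listed papers.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Front-to-back compositing along one ray (NeRF-style quadrature).

    sigmas: (N,)  per-sample volume densities
    colors: (N,3) per-sample RGB values
    deltas: (N,)  distances between adjacent samples
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of each ray segment
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # T_i = prod_{j<i} (1 - alpha_j): transmittance reaching sample i (T_0 = 1)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = trans * alphas                        # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)   # accumulated pixel color
    return rgb, weights

# Toy ray: 8 samples with random densities/colors standing in for model output.
rng = np.random.default_rng(0)
sigmas = rng.uniform(0.0, 2.0, size=8)
colors = rng.uniform(0.0, 1.0, size=(8, 3))
deltas = np.full(8, 0.1)                            # uniform sample spacing
rgb, weights = composite_ray(sigmas, colors, deltas)
```

The `weights` vector is also what NeRF-style methods reuse for depth estimation and importance resampling; Gaussian Splatting applies the same blending but replaces ray samples with depth-sorted, rasterized Gaussians, which is where its speed advantage comes from.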
Papers
LI-GS: Gaussian Splatting with LiDAR Incorporated for Accurate Large-Scale Reconstruction
Changjian Jiang, Ruilan Gao, Kele Shao, Yue Wang, Rong Xiong, Yu Zhang
Enhancing Agricultural Environment Perception via Active Vision and Zero-Shot Learning
Michele Carlo La Greca, Mirko Usuelli, Matteo Matteucci