Scene Reconstruction
Scene reconstruction aims to create detailed 3D models of environments from various input sources, such as images, LiDAR scans, and radar data. Current research focuses on improving the robustness and efficiency of reconstruction methods, particularly for large-scale and dynamic scenes, using representations such as neural radiance fields (NeRFs) and 3D Gaussian splatting. These advances are crucial for applications ranging from autonomous driving and robotics to virtual and augmented reality, enabling more realistic simulations and improved human-computer interaction.
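To make the NeRF idea mentioned above concrete, the sketch below shows the core volume-rendering step: alpha-compositing density and color samples along a camera ray into a single pixel color. This is a minimal illustration of the standard rendering equation used by NeRF-style methods, not code from any paper listed here; the function name and sample values are illustrative.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style volume rendering).

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB color c_i at each sample
    deltas:    (N,) distance between adjacent samples
    Returns the rendered RGB value for the ray.
    """
    # Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded,
    # i.e. the product of (1 - alpha_j) over all earlier samples j < i.
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])
    # Per-sample contribution weights sum the final color.
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# A ray passing through empty space, then hitting a dense red sample:
rgb = composite_ray(
    densities=np.array([0.0, 50.0]),
    colors=np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]),
    deltas=np.array([0.5, 0.5]),
)
```

Gaussian splatting methods use the same compositing rule, but the samples come from rasterized anisotropic 3D Gaussians rather than points queried from a neural field, which is what enables their real-time rendering speeds.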
Papers
June 6, 2024
Flash3D: Feed-Forward Generalisable 3D Scene Reconstruction from a Single Image
Stanislaw Szymanowicz, Eldar Insafutdinov, Chuanxia Zheng, Dylan Campbell, João F. Henriques, Christian Rupprecht, Andrea Vedaldi

June 5, 2024
Superpoint Gaussian Splatting for Real-Time High-Fidelity Dynamic Scene Reconstruction
Diwen Wan, Ruijie Lu, Gang Zeng