Generalizable 3D Reconstruction
Generalizable 3D reconstruction aims to produce accurate and complete 3D models from limited input, such as a single image or a sparse set of views, while transferring well to unseen scenes and objects. Current research relies heavily on neural networks, particularly implicit representations such as neural radiance fields (NeRFs) and, more recently, explicit 3D Gaussian splatting, combined with techniques like self-supervision and divide-and-conquer strategies to improve generalization and efficiency. These advances enable more robust and versatile 3D scene understanding, with applications in augmented and virtual reality, robotics, and autonomous navigation.
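At the core of the NeRF-style methods mentioned above is volume rendering: per-sample densities and colors along a camera ray are alpha-composited into a pixel color. A minimal NumPy sketch of that compositing step is shown below; the function name and the toy inputs are illustrative, not from any particular paper.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """NeRF-style volume rendering along a single ray.

    densities: (N,) non-negative volume density at each sample
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distance between consecutive samples
    Returns the composited (3,) RGB pixel color.
    """
    # Opacity of each sample from its density and step size.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample's contribution weight, then weighted color sum.
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: 4 samples along one ray.
dens = np.array([0.0, 1.5, 3.0, 0.5])
cols = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
delt = np.full(4, 0.25)
rgb = composite_ray(dens, cols, delt)
```

Generalizable variants keep this same compositing but predict the per-sample densities and colors from image features of the sparse input views rather than from a per-scene optimized network.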
Papers
October 31, 2024
October 24, 2024
October 16, 2024
June 6, 2024
April 10, 2024
April 4, 2024
March 30, 2024
March 24, 2024
March 17, 2024
December 19, 2023
December 14, 2023