3D Consistency
3D consistency in computer vision and graphics focuses on developing methods that generate and maintain accurate and coherent three-dimensional representations from various input sources, such as images, point clouds, or sparse sensor data. Current research emphasizes techniques like neural radiance fields (NeRFs), diffusion models, and iterative closest point (ICP) algorithms, often incorporating contrastive learning and geometric constraints to improve the accuracy and efficiency of 3D reconstruction and manipulation. This pursuit of 3D consistency is crucial for advancing applications ranging from robotics and autonomous navigation to high-fidelity 3D modeling, virtual and augmented reality, and medical imaging.
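Of the techniques mentioned above, ICP is the most classical and the easiest to illustrate concretely. Below is a minimal, self-contained sketch of point-to-point ICP with brute-force nearest-neighbour matching and an SVD-based rigid alignment step; it is a toy illustration under simplifying assumptions, not the method of any particular paper, and all function names are hypothetical.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (both N x 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=50, tol=1e-6):
    """Iteratively align src to dst; returns a 4x4 transform and the mean residual."""
    T = np.eye(4)
    cur = src.copy()
    prev_err = np.inf
    err = prev_err
    for _ in range(iters):
        # Brute-force nearest neighbours (adequate for a small toy example).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        err = np.sqrt(d2[np.arange(len(cur)), idx]).mean()
        if abs(prev_err - err) < tol:            # stop once the error plateaus
            break
        prev_err = err
    return T, err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dst = rng.normal(size=(200, 3))
    angle = 0.3
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    src = (dst - np.array([0.1, -0.2, 0.05])) @ R_true   # rigidly perturbed copy
    T, err = icp(src, dst)
    print("mean residual after alignment:", err)
```

In practice, production ICP implementations replace the brute-force correspondence search with a k-d tree and often use point-to-plane or robust error metrics, but the alternate-and-refine structure shown here is the same.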