3D Consistency
3D consistency in computer vision and graphics concerns methods that generate and maintain accurate, coherent three-dimensional representations, across viewpoints and over time, from input sources such as images, point clouds, or sparse sensor data. Current research emphasizes techniques such as neural radiance fields (NeRFs), diffusion models, and iterative closest point (ICP) registration, often incorporating contrastive learning and geometric constraints to improve the accuracy and efficiency of 3D reconstruction and manipulation. 3D consistency is crucial for applications ranging from robotics and autonomous navigation to high-fidelity 3D modeling, virtual and augmented reality, and medical imaging.
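Of the techniques named above, ICP is the most self-contained, so here is a minimal sketch of classic point-to-point ICP for enforcing geometric consistency between two point clouds. It assumes NumPy and SciPy are available; the `icp` helper and its parameters are illustrative, not taken from any of the papers listed below.

```python
# Minimal point-to-point ICP sketch: align a source point cloud to a target
# by alternating nearest-neighbor matching with a closed-form rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-6):
    """Align `source` (N,3) to `target` (M,3); return a 4x4 rigid transform."""
    src = source.copy()
    tree = cKDTree(target)          # reused for fast nearest-neighbor queries
    T_total = np.eye(4)
    prev_err = np.inf
    for _ in range(iters):
        # 1. Correspondences: nearest target point for each source point.
        dists, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform via the Kabsch (orthogonal Procrustes) solution.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)       # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the step, accumulate it, and check residual convergence.
        src = src @ R.T + t
        T_step = np.eye(4)
        T_step[:3, :3], T_step[:3, 3] = R, t
        T_total = T_step @ T_total
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T_total
```

As in standard ICP, the result depends on a reasonable initial pose; in practice, learned or geometry-aware methods like those surveyed here are often used precisely where such local registration alone fails.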
Papers
Consistent-1-to-3: Consistent Image to 3D View Synthesis via Geometry-aware Diffusion Models
Jianglong Ye, Peng Wang, Kejie Li, Yichun Shi, Heng Wang
T$^3$Bench: Benchmarking Current Progress in Text-to-3D Generation
Yuze He, Yushi Bai, Matthieu Lin, Wang Zhao, Yubin Hu, Jenny Sheng, Ran Yi, Juanzi Li, Yong-Jin Liu