Image to 3D
Image-to-3D generation aims to create realistic three-dimensional models from one or more two-dimensional images, with an emphasis on improving both the speed and the quality of 3D asset creation. Current research centers on diffusion models, often combined with representations such as Gaussian splatting or neural radiance fields (NeRFs), to produce multi-view-consistent images and high-resolution meshes. Because these methods generate high-fidelity 3D models faster and more efficiently, they benefit applications such as virtual and augmented reality, computer-aided design, and digital content creation.
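To make the NeRF side of these pipelines concrete, below is a minimal, self-contained sketch of the volume-rendering step that radiance-field methods share: densities and colors sampled along a camera ray are alpha-composited into a pixel. The `toy_field` function is a hypothetical stand-in for a learned radiance field (in a real system, a trained MLP or a set of optimized 3D Gaussians); only the compositing formula itself, the standard NeRF quadrature, is taken as given.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray.

    Standard NeRF quadrature:
        alpha_i = 1 - exp(-sigma_i * delta_i)
        T_i     = prod_{j<i} (1 - alpha_j)
        C       = sum_i T_i * alpha_i * c_i
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                 # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # transmittance T_i
    weights = trans * alphas                                # compositing weights
    return (weights[:, None] * colors).sum(axis=0), weights

def toy_field(points):
    """Hypothetical radiance field: a solid red sphere of radius 0.5 at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    sigmas = 10.0 * (dist < 0.5).astype(np.float64)         # density inside the sphere
    colors = np.tile([1.0, 0.0, 0.0], (len(points), 1))     # constant red
    return sigmas, colors

# March one ray along +z, from z = -2 to z = +2, through the sphere.
ts = np.linspace(0.0, 4.0, 128)
points = np.stack([np.zeros_like(ts), np.zeros_like(ts), ts - 2.0], axis=-1)
deltas = np.full_like(ts, ts[1] - ts[0])                    # uniform step size
sigmas, colors = toy_field(points)
pixel, _ = volume_render(sigmas, colors, deltas)
print("rendered pixel RGB:", pixel)  # close to (1, 0, 0) once the ray saturates
```

Diffusion-based image-to-3D methods typically wrap a step like this in an optimization or feed-forward loop: multi-view images generated by the diffusion model supervise the field (or the 3D Gaussians), and the resulting representation is then converted to a mesh.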
Papers
DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation
Wang Zhao, Yan-Pei Cao, Jiale Xu, Yuejiang Dong, Ying Shan
LiftRefine: Progressively Refined View Synthesis from 3D Lifting with Volume-Triplane Representations
Tung Do, Thuan Hoang Nguyen, Anh Tuan Tran, Rang Nguyen, Binh-Son Hua