Image to 3D
Image-to-3D generation aims to create realistic three-dimensional models from one or more two-dimensional images, with an emphasis on improving both the speed and quality of 3D asset creation. Current research centers on diffusion models, often combined with representations such as Gaussian splatting or neural radiance fields (NeRFs), to produce multi-view consistent images and high-resolution meshes. By making high-fidelity 3D models faster to generate, these advances benefit applications such as virtual and augmented reality, computer-aided design, and digital content creation.
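The NeRF-style representations mentioned above recover appearance by volume rendering along camera rays: each sample's color is weighted by its opacity and by the transmittance accumulated in front of it. As a rough illustration of that core step only, the sketch below implements the standard discrete volume-rendering equation in NumPy; the densities, colors, and spacings are hypothetical placeholders, not outputs of any of the methods listed here.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Discrete volume rendering as used in NeRF-style methods.

    densities: (N,) non-negative density sigma_i at each sample along the ray
    colors:    (N, 3) RGB color c_i at each sample
    deltas:    (N,) distance delta_i between consecutive samples
    Returns the accumulated RGB color for the ray.
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of each ray segment
    alphas = 1.0 - np.exp(-densities * deltas)
    # T_i = prod_{j<i} (1 - alpha_j): transmittance up to sample i
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                 # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)

# Toy example with made-up samples along one ray
sigma = np.array([0.0, 0.5, 2.0, 4.0])       # density rises near a surface
rgb   = np.array([[0.0, 0.0, 0.0], [0.2, 0.2, 0.2],
                  [0.9, 0.1, 0.1], [1.0, 0.0, 0.0]])
delta = np.full(4, 0.25)                     # uniform sample spacing
print(render_ray(sigma, rgb, delta))
```

Gaussian splatting uses an analogous front-to-back alpha compositing of projected Gaussians rather than ray samples, which is part of why it renders so much faster.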
Papers
Generic 3D Diffusion Adapter Using Controlled Multi-View Editing
Hansheng Chen, Ruoxi Shi, Yulin Liu, Bokui Shen, Jiayuan Gu, Gordon Wetzstein, Hao Su, Leonidas Guibas
SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion
Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, Varun Jampani
Diffusion Models are Geometry Critics: Single Image 3D Editing Using Pre-Trained Diffusion Priors
Ruicheng Wang, Jianfeng Xiang, Jiaolong Yang, Xin Tong