3D Generation
3D generation research focuses on creating realistic three-dimensional models from inputs such as text, images, or existing 3D assets. Current efforts center on improving the quality, efficiency, and controllability of generation, using techniques such as diffusion models, score distillation, autoregressive transformers, neural radiance fields, and 3D Gaussian splatting, often within a multi-view framework. These advances matter for computer graphics, virtual reality, and product design, enabling faster and more intuitive creation of high-fidelity 3D assets. Developing efficient, robust methods that handle diverse data types and produce high-resolution, consistent outputs remains a key focus.
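Several of the papers below build on Score Distillation Sampling (SDS) from DreamFusion, which optimizes a differentiable 3D representation (a NeRF or 3D Gaussian splats) so that its renderings score well under a frozen, pretrained 2D diffusion prior. The sketch below is a minimal, generic illustration of that idea, not any one paper's method: `noise_pred_fn` stands in for a pretrained text-conditioned diffusion model's noise predictor and `alphas_cumprod` for its noise schedule, both assumptions here.

```python
import torch

def sds_loss(rendered, noise_pred_fn, alphas_cumprod, t_range=(20, 980)):
    """Score Distillation Sampling loss on a batch of rendered images.

    Minimal sketch: `noise_pred_fn(noisy, t)` stands in for a pretrained
    text-conditioned diffusion model's epsilon predictor, `alphas_cumprod`
    for its cumulative noise schedule (both hypothetical stand-ins).
    """
    b = rendered.shape[0]
    # Sample a random diffusion timestep per rendered image.
    t = torch.randint(t_range[0], t_range[1], (b,), device=rendered.device)
    noise = torch.randn_like(rendered)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    # Forward-diffuse the rendering to timestep t.
    noisy = a.sqrt() * rendered + (1.0 - a).sqrt() * noise
    # Query the frozen diffusion prior; no gradient flows through the U-Net.
    with torch.no_grad():
        eps_pred = noise_pred_fn(noisy, t)
    w = 1.0 - a  # one common choice of timestep weighting
    grad = w * (eps_pred - noise)
    # Detach the residual so d(loss)/d(rendered) == grad, i.e. backprop
    # yields the SDS gradient w(t) * (eps_pred - eps) * d(rendered)/d(theta).
    return (grad.detach() * rendered).sum() / b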
Papers
Learn to Optimize Denoising Scores for 3D Generation: A Unified and Improved Diffusion Prior on NeRF and 3D Gaussian Splatting
Xiaofeng Yang, Yiwen Chen, Cheng Chen, Chi Zhang, Yi Xu, Xulei Yang, Fayao Liu, Guosheng Lin
RL Dreams: Policy Gradient Optimization for Score Distillation based 3D Generation
Aradhya N. Mathur, Phu Pham, Aniket Bera, Ojaswa Sharma
HumanRef: Single Image to 3D Human Generation via Reference-Guided Diffusion
Jingbo Zhang, Xiaoyu Li, Qi Zhang, Yanpei Cao, Ying Shan, Jing Liao
RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D
Lingteng Qiu, Guanying Chen, Xiaodong Gu, Qi Zuo, Mutian Xu, Yushuang Wu, Weihao Yuan, Zilong Dong, Liefeng Bo, Xiaoguang Han