3D Generation
3D generation research focuses on creating realistic three-dimensional models from inputs such as text, images, or existing 3D assets. Current efforts center on improving the quality, efficiency, and controllability of generation, employing techniques such as diffusion models, autoregressive transformers, and neural radiance fields, often within a multi-view framework. These advances matter for computer graphics, virtual reality, and product design, enabling faster and more intuitive creation of high-fidelity 3D assets. Developing efficient, robust methods that handle diverse input types and produce high-resolution, view-consistent outputs remains a key focus.
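To make the "diffusion models" mentioned above concrete, here is a minimal sketch of DDPM-style ancestral sampling over a toy point set. The tiny `Denoiser` network, the linear noise schedule, and the point-cloud shapes are illustrative assumptions for this sketch, not the architecture or representation of any paper listed below (which typically denoise latents, tri-planes, or multi-view images instead).

```python
# Minimal sketch of the DDPM reverse (sampling) loop that underlies many
# diffusion-based 3D generators. Everything here is a toy stand-in.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative product \bar{alpha}_t

class Denoiser(nn.Module):
    """Toy noise predictor eps_theta(x_t, t); real systems use U-Nets or transformers."""
    def __init__(self, dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x, t):
        t_feat = t.float().unsqueeze(-1) / T  # crude scalar timestep embedding
        return self.net(torch.cat([x, t_feat], dim=-1))

@torch.no_grad()
def sample(model, n_points=1024, dim=3):
    """Run the reverse chain from pure noise toward a sample of shape (n_points, dim)."""
    x = torch.randn(n_points, dim)
    for t in reversed(range(T)):
        t_batch = torch.full((n_points,), t)
        eps = model(x, t_batch)
        # Standard DDPM posterior-mean update (Ho et al., 2020):
        # x_{t-1} = (x_t - beta_t / sqrt(1 - abar_t) * eps) / sqrt(alpha_t) + sigma_t * z
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

points = sample(Denoiser())  # untrained net yields noise; a trained one yields samples
```

Multi-view frameworks wrap a loop like this so that several camera views are denoised jointly, with cross-view conditioning enforcing 3D consistency before a mesh or radiance field is extracted.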
Papers
DiffTF++: 3D-aware Diffusion Transformer for Large-Vocabulary 3D Generation
Ziang Cao, Fangzhou Hong, Tong Wu, Liang Pan, Ziwei Liu
Coin3D: Controllable and Interactive 3D Assets Generation with Proxy-Guided Conditioning
Wenqi Dong, Bangbang Yang, Lin Ma, Xiao Liu, Liyuan Cui, Hujun Bao, Yuewen Ma, Zhaopeng Cui
Magic-Boost: Boost 3D Generation with Multi-View Conditioned Diffusion
Fan Yang, Jianfeng Zhang, Yichun Shi, Bowen Chen, Chenxu Zhang, Huichao Zhang, Xiaofeng Yang, Jiashi Feng, Guosheng Lin
DreamView: Injecting View-specific Text Guidance into Text-to-3D Generation
Junkai Yan, Yipeng Gao, Qize Yang, Xihan Wei, Xuansong Xie, Ancong Wu, Wei-Shi Zheng
Hash3D: Training-free Acceleration for 3D Generation
Xingyi Yang, Xinchao Wang
LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation
Yushi Lan, Fangzhou Hong, Shuai Yang, Shangchen Zhou, Xuyi Meng, Bo Dai, Xingang Pan, Chen Change Loy
SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion
Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, Varun Jampani