3D Generative Modeling
3D generative modeling aims to create realistic three-dimensional objects and scenes from various inputs like text, images, or point clouds. Current research heavily utilizes diffusion models, often coupled with efficient 3D representations such as Gaussian splatting or signed distance functions, to generate high-fidelity outputs with improved speed and control. This field is significant for its potential to automate 3D content creation across diverse applications, from virtual reality and gaming to robotics and drug discovery, while also advancing our understanding of generative AI itself.
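To make the representations mentioned above concrete, the sketch below shows one of them, a signed distance function (SDF): an implicit field that is negative inside a shape, positive outside, and zero on its surface, from which a mesh can be extracted. This is a generic, minimal illustration (a hand-written unit-sphere SDF meshed with scikit-image's marching cubes), not the method of any paper listed here; in the generative setting, a network would predict the SDF values instead.

```python
import numpy as np
from skimage import measure  # marching cubes for surface extraction

# Illustrative sphere SDF: distance to the origin minus the radius.
# Negative inside, positive outside, zero exactly on the surface.
def sphere_sdf(points, radius=1.0):
    return np.linalg.norm(points, axis=-1) - radius

# Sample the SDF on a regular 64^3 grid covering the shape.
n = 64
coords = np.linspace(-1.5, 1.5, n)
grid = np.stack(np.meshgrid(coords, coords, coords, indexing="ij"), axis=-1)
sdf_values = sphere_sdf(grid)

# Extract the zero level set as a triangle mesh (vertices and faces).
verts, faces, normals, _ = measure.marching_cubes(sdf_values, level=0.0)
print(verts.shape, faces.shape)
```

In a 3D diffusion pipeline, the same idea applies: the generator outputs an implicit field (or a latent decoded into one), and a surface-extraction step like the one above turns it into explicit geometry for downstream use.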
Papers
CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner
Weiyu Li, Jiarui Liu, Rui Chen, Yixun Liang, Xuelin Chen, Ping Tan, Xiaoxiao Long
Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer
Shuang Wu, Youtian Lin, Feihu Zhang, Yifei Zeng, Jingxi Xu, Philip Torr, Xun Cao, Yao Yao
GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling
Bowen Zhang, Yiji Cheng, Jiaolong Yang, Chunyu Wang, Feng Zhao, Yansong Tang, Dong Chen, Baining Guo
Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation
Yujin Chen, Yinyu Nie, Benjamin Ummenhofer, Reiner Birkl, Michael Paulitsch, Matthias Müller, Matthias Nießner
ID-NeRF: Indirect Diffusion-guided Neural Radiance Fields for Generalizable View Synthesis
Yaokun Li, Chao Gou, Guang Tan
A Comprehensive Survey on 3D Content Generation
Jian Liu, Xiaoshui Huang, Tianyu Huang, Lu Chen, Yuenan Hou, Shixiang Tang, Ziwei Liu, Wanli Ouyang, Wangmeng Zuo, Junjun Jiang, Xianming Liu