3D Generation
3D generation research focuses on creating realistic three-dimensional models from inputs such as text, images, or existing 3D models. Current efforts center on improving the quality, efficiency, and controllability of generation, employing techniques such as diffusion models, autoregressive transformers, and neural radiance fields, often within a multi-view framework. These advances matter for computer graphics, virtual reality, and product design, enabling faster and more intuitive creation of high-fidelity 3D assets. Developing efficient, robust methods that handle diverse data types and produce high-resolution, view-consistent outputs remains a key focus.
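To make the "diffusion prior within a multi-view framework" idea concrete, below is a minimal conceptual sketch of the common optimization pattern (score-distillation style): render several views of a 3D representation, query a 2D prior for a guidance signal, and update the 3D parameters. Every component here (`render_views`, `prior_gradient`, the array-valued "3D representation") is a toy stand-in, not any specific paper's method.

```python
import numpy as np

# Toy sketch of diffusion-guided 3D optimization: render views,
# get a gradient from a 2D prior, update the 3D parameters.
# All components are hypothetical stand-ins for illustration only.

rng = np.random.default_rng(0)

def render_views(params, n_views=4):
    """Toy 'renderer': produce n_views noisy 2D projections of params."""
    return np.stack([params + 0.01 * rng.standard_normal(params.shape)
                     for _ in range(n_views)])

def prior_gradient(views, target):
    """Toy stand-in for a 2D diffusion prior's guidance signal:
    a gradient pulling the rendered views toward what the prior prefers."""
    return (views - target).mean(axis=0)

params = rng.standard_normal((8, 8))   # toy '3D representation'
target = np.zeros((8, 8))              # image the toy 'prior' prefers
lr = 0.5
for step in range(100):
    views = render_views(params)
    params -= lr * prior_gradient(views, target)

print(float(np.abs(params).mean()))    # small: params were pulled toward the prior
```

Real systems replace each stand-in with heavy machinery (a differentiable renderer over NeRF or Gaussian-splat parameters, and a pretrained text-conditioned diffusion model as the prior), but the outer render-score-update loop has this shape.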
Papers
DreamCraft3D++: Efficient Hierarchical 3D Generation with Multi-Plane Reconstruction Model
Jingxiang Sun, Cheng Peng, Ruizhi Shao, Yuan-Chen Guo, Xiaochen Zhao, Yangguang Li, Yanpei Cao, Bo Zhang, Yebin Liu
TV-3DG: Mastering Text-to-3D Customized Generation with Visual Prompt
Jiahui Yang, Donglin Di, Baorui Ma, Xun Yang, Yongjia Ma, Wenzhang Sun, Wei Chen, Jianxun Cui, Zhou Xue, Meng Wang, Yebin Liu
ControLRM: Fast and Controllable 3D Generation via Large Reconstruction Model
Hongbin Xu, Weitao Chen, Zhipeng Zhou, Feng Xiao, Baigui Sun, Mike Zheng Shou, Wenxiong Kang
Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors
Hritam Basak, Hadi Tabatabaee, Shreekant Gayaka, Ming-Feng Li, Xin Yang, Cheng-Hao Kuo, Arnie Sen, Min Sun, Zhaozheng Yin
Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text
Xinyang Li, Zhangyu Lai, Linning Xu, Yansong Qu, Liujuan Cao, Shengchuan Zhang, Bo Dai, Rongrong Ji
Masked Generative Extractor for Synergistic Representation and 3D Generation of Point Clouds
Hongliang Zeng, Ping Zhang, Fang Li, Jiahua Wang, Tingyu Ye, Pengteng Guo