3D Generative
3D generative modeling aims to create realistic three-dimensional objects and scenes from inputs such as text, images, or point clouds. Current research relies heavily on diffusion models, often coupled with efficient 3D representations such as Gaussian splatting or signed distance functions, to generate high-fidelity outputs with improved speed and controllability. The field is significant for its potential to automate 3D content creation across diverse applications, from virtual reality and gaming to robotics and drug discovery, while also advancing generative AI more broadly.
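As a minimal illustration of two ingredients named above, the Python sketch below defines a signed distance function (an implicit 3D representation) and applies a DDPM-style forward noising step to sampled SDF values; a generative model would be trained to reverse that corruption. This is a toy sketch under assumed conventions: the function names, the sphere example, and the noise schedule are illustrative and not drawn from any of the listed papers.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    # Illustrative SDF: signed distance from each 3D point to a sphere
    # surface; negative inside, zero on the surface, positive outside.
    return np.linalg.norm(points - center, axis=-1) - radius

def forward_diffuse(x0, t, alpha_bar, rng=np.random.default_rng(0)):
    # One draw from q(x_t | x_0) in a standard DDPM forward process:
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise.
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

# Sample the SDF on a coarse 8x8x8 grid, then corrupt it at timestep t;
# the grid resolution and linear beta schedule are assumptions for the demo.
grid = np.stack(
    np.meshgrid(*[np.linspace(-1.5, 1.5, 8)] * 3, indexing="ij"), axis=-1
)
x0 = sphere_sdf(grid.reshape(-1, 3))
alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, 1000))
xt = forward_diffuse(x0, t=500, alpha_bar=alpha_bar)
```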
Papers
Mosaic-SDF for 3D Generative Models
Lior Yariv, Omri Puny, Natalia Neverova, Oran Gafni, Yaron Lipman
PI3D: Efficient Text-to-3D Generation with Pseudo-Image Diffusion
Ying-Tian Liu, Yuan-Chen Guo, Guan Luo, Heyi Sun, Wei Yin, Song-Hai Zhang
GOEnFusion: Gradient Origin Encodings for 3D Forward Diffusion Models
Animesh Karnewar, Andrea Vedaldi, Niloy J. Mitra, David Novotny