3D Diffusion Model
3D diffusion models are generative deep learning models that synthesize realistic three-dimensional data, addressing the twin challenges of scarce 3D training datasets and the high computational cost of 3D generation. Current research focuses on improving generation quality, efficiency, and controllability through various architectures, including those built on representations such as signed distance functions and Gaussian splatting, hybrid approaches that combine 2D and 3D diffusion, and techniques such as score distillation and masking that improve both speed and fidelity. These advances have significant implications across diverse fields, enabling applications such as medical image analysis, 3D object classification, novel view synthesis, and efficient creation of high-quality 3D assets for gaming, virtual reality, and digital heritage preservation.
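To make the core mechanism concrete, the sketch below shows DDPM-style reverse diffusion over a 3D voxel grid: starting from Gaussian noise, each step removes a predicted noise component under a fixed variance schedule. This is a minimal illustration, not any paper's method; the `dummy_denoiser` is a placeholder (a real 3D diffusion model would use a trained 3D network here), and the schedule values and grid size are illustrative assumptions.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule beta_t with cumulative alpha products."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def dummy_denoiser(x_t, t):
    """Placeholder noise predictor; a trained 3D network goes here."""
    return np.zeros_like(x_t)

def sample_voxels(shape=(8, 8, 8), T=50, seed=0):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(shape)  # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps_hat = dummy_denoiser(x, t)
        # Posterior mean of x_{t-1} given the predicted noise.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        # Add sampling noise on every step except the final one.
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

voxels = sample_voxels()
print(voxels.shape)  # (8, 8, 8)
```

With a zero-noise predictor the loop simply reshapes Gaussian noise, but swapping in a trained denoiser turns the same loop into a generator of structured 3D volumes; the papers below differ mainly in what the denoised representation is (voxels, SDFs, Gaussians) and how the network is conditioned.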
Papers
DiffESM: Conditional Emulation of Temperature and Precipitation in Earth System Models with 3D Diffusion Models
Seth Bassetti, Brian Hutchinson, Claudia Tebaldi, Ben Kravitz
Enhanced segmentation of femoral bone metastasis in CT scans of patients using synthetic data generation with 3D diffusion models
Emile Saillard, Aurélie Levillain, David Mitton, Jean-Baptiste Pialat, Cyrille Confavreux, Hélène Follet, Thomas Grenier