Denoising Diffusion Probabilistic Model
Denoising Diffusion Probabilistic Models (DDPMs) are generative models that learn a complex data distribution by progressively adding noise to training data and then learning to reverse that diffusion process, allowing high-fidelity samples to be generated from pure noise. Current research focuses on improving efficiency and sample quality, exploring variants such as conditional DDPMs and hybrids with other architectures (e.g., transformers and VAEs) for tasks like image inpainting, medical image synthesis, and graph generation. DDPMs are proving impactful across diverse fields, enabling advances in medical imaging, autonomous driving, and financial forecasting through improved data generation, anomaly detection, and prediction.
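The core mechanism can be illustrated with a short sketch. The NumPy code below shows the standard DDPM forward noising step and one reverse denoising step under a linear noise schedule; `predict_noise` is a hypothetical placeholder for the trained noise-prediction network, and the schedule constants are illustrative assumptions rather than values from any specific paper above.

```python
# Minimal sketch of DDPM forward (noising) and reverse (denoising) steps.
# NumPy only; `predict_noise` stands in for a trained neural network.
import numpy as np

T = 1000                                    # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)             # cumulative product of alphas

def q_sample(x0, t, rng):
    """Forward process: sample x_t from q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps                         # eps is the training target

def predict_noise(x_t, t):
    """Hypothetical placeholder for the learned noise predictor eps_theta(x_t, t)."""
    return np.zeros_like(x_t)               # untrained dummy model (assumption)

def p_sample(x_t, t, rng):
    """One reverse step: sample x_{t-1} from p_theta(x_{t-1} | x_t)."""
    eps_hat = predict_noise(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean                         # no noise added at the final step
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

# Generation: start from pure Gaussian noise and denoise step by step.
rng = np.random.default_rng(0)
x = rng.standard_normal((8,))               # toy 8-dimensional sample
for t in reversed(range(T)):
    x = p_sample(x, t, rng)
```

In practice the dummy `predict_noise` is replaced by a neural network (typically a U-Net or transformer) trained to predict the `eps` returned by `q_sample`, which is what the conditional and accelerated variants in the papers below build on.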
Papers
Structured Generations: Using Hierarchical Clusters to guide Diffusion Models
Jorge da Silva Goncalves, Laura Manduchi, Moritz Vandenhirtz, Julia E. Vogt
Minutes to Seconds: Speeded-up DDPM-based Image Inpainting with Coarse-to-Fine Sampling
Lintao Zhang, Xiangcheng Du, LeoWu TomyEnrique, Yiqun Wang, Yingbin Zheng, Cheng Jin