Generative Diffusion Model
Generative diffusion models are a class of deep learning models that generate data by reversing a diffusion process: starting from random noise, they gradually remove noise until a realistic sample emerges. Current research focuses on improving sampling efficiency, handling conditional distributions, mitigating vulnerabilities such as backdoor attacks, and developing new architectures, including diffusion transformers and variants that incorporate contrastive learning or edge-preserving noise. These models are proving impactful across fields such as image generation, time series forecasting, medical image analysis, and scientific simulation (e.g., weather prediction and particle physics), offering significant advances in data generation and analysis.
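The reverse-diffusion sampling loop described above can be sketched in a few lines. The sketch below is a minimal illustration, assuming a standard DDPM-style linear noise schedule; `predict_noise` is a hypothetical placeholder standing in for a trained noise-prediction network, not any specific paper's model.

```python
# Minimal sketch of reverse-diffusion (denoising) sampling, assuming a DDPM-style
# linear noise schedule. `predict_noise` is a placeholder for a trained network.
import numpy as np

T = 1000                                   # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)         # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    """Placeholder for a trained noise predictor eps_theta(x_t, t)."""
    return np.zeros_like(x)

def sample(shape=(16,)):
    """Generate a sample by iteratively removing noise from pure Gaussian noise."""
    x = np.random.randn(*shape)            # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = predict_noise(x, t)
        # Mean of the reverse transition p(x_{t-1} | x_t) given the predicted noise
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = np.random.randn(*shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise   # add scaled noise except at the final step
    return x

if __name__ == "__main__":
    print(sample()[:4])
```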
Papers
PureDiffusion: Using Backdoor to Counter Backdoor in Generative Diffusion Models
Vu Tuan Truong, Long Bao Le
Deep Learning based Optical Image Super-Resolution via Generative Diffusion Models for Layerwise in-situ LPBF Monitoring
Francis Ogoke, Sumesh Kalambettu Suresh, Jesse Adamczyk, Dan Bolintineanu, Anthony Garland, Michael Heiden, Amir Barati Farimani