Generative Diffusion Model
Generative diffusion models are a class of deep learning models that generate data by reversing a diffusion process: starting from random noise, they gradually denoise it until a realistic sample emerges. Current research focuses on improving sampling efficiency, addressing limitations such as handling conditional distributions, and mitigating vulnerabilities to backdoor attacks, while exploring new architectures including diffusion transformers and variants that incorporate contrastive learning or edge-preserving noise. These models are proving impactful across fields including image generation, time series forecasting, medical image analysis, and scientific simulations such as weather prediction and particle physics, offering significant advances in data generation and analysis.
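The reverse-diffusion idea above can be illustrated with a minimal sketch. Assumptions (not from the source): 1-D Gaussian data, a linear beta schedule, and the analytic score of a Gaussian marginal standing in for the neural network that a real diffusion model would train to predict the noise.

```python
import numpy as np

# Minimal DDPM-style sketch (assumptions: 1-D Gaussian data, linear beta
# schedule; the analytic score replaces a trained noise-prediction network).
rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # noise schedule beta_t
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)          # cumulative product, "alpha-bar"

mu0, sigma0 = 3.0, 0.5                  # toy data distribution N(mu0, sigma0^2)

def score(x, t):
    """Score of the noised marginal at step t. It is available in closed form
    only because the data is Gaussian; a diffusion model learns it instead."""
    mean_t = np.sqrt(alpha_bar[t]) * mu0
    var_t = alpha_bar[t] * sigma0**2 + (1.0 - alpha_bar[t])
    return -(x - mean_t) / var_t

def reverse_sample(n):
    """Ancestral sampling: start from pure noise and denoise step by step."""
    x = rng.standard_normal(n)
    for t in range(T - 1, -1, -1):
        # Standard identity: predicted noise = -sqrt(1 - alpha_bar_t) * score
        eps_hat = -np.sqrt(1.0 - alpha_bar[t]) * score(x, t)
        # DDPM reverse-process mean
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(n) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

samples = reverse_sample(5000)
print(samples.mean(), samples.std())  # should land near mu0 and sigma0
```

Starting from pure noise, the loop recovers samples whose mean and spread match the original data distribution, which is the mechanism the papers below build on for images, graphs, and MRI volumes.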
Papers
DDM$^2$: Self-Supervised Diffusion MRI Denoising with Generative Diffusion Models
Tiange Xiang, Mahmut Yurt, Ali B Syed, Kawin Setsompop, Akshay Chaudhari
Structure and Content-Guided Video Synthesis with Diffusion Models
Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, Anastasis Germanidis
Generative Diffusion Models on Graphs: Methods and Applications
Chengyi Liu, Wenqi Fan, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, Qing Li