Generative Diffusion Model
Generative diffusion models are a class of deep learning models that generate data by reversing a diffusion process: starting from pure random noise, they gradually remove noise until a realistic sample emerges. Current research focuses on improving sampling efficiency, addressing limitations such as handling conditional distributions, mitigating vulnerabilities to backdoor attacks, and exploring new model architectures, including diffusion transformers and variants that incorporate contrastive learning or edge-preserving noise. These models are proving impactful across many fields, including image generation, time series forecasting, medical image analysis, and scientific simulations such as weather prediction and particle physics, offering significant advances in data generation and analysis.
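The reverse (denoising) process described above can be sketched in a few lines. The following is a minimal illustration under standard DDPM-style assumptions (linear noise schedule, Gaussian transitions); `predict_noise` is a hypothetical stand-in for a trained noise-prediction network, not part of any specific paper listed here.

```python
import numpy as np

# Standard linear noise schedule over T diffusion steps (an assumption
# for illustration; real schedules vary by model).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t):
    """Hypothetical stand-in for a trained noise-prediction network.
    In practice this would be a U-Net or diffusion transformer."""
    return np.zeros_like(x_t)  # placeholder: always predicts zero noise

def reverse_diffusion(shape, rng):
    """Generate a sample by iteratively denoising pure Gaussian noise."""
    x = rng.standard_normal(shape)  # start from random noise x_T
    for t in reversed(range(T)):
        eps = predict_noise(x, t)
        # Subtract the predicted noise component and rescale (posterior mean).
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # add stochastic noise at every step except the last
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

sample = reverse_diffusion((8,), np.random.default_rng(0))
print(sample.shape)
```

With a trained network in place of the placeholder, the same loop turns random noise into a realistic sample; conditional variants additionally feed text or other context into the noise predictor.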
Papers
FreeStyle: Free Lunch for Text-guided Style Transfer using Diffusion Models
Feihong He, Gang Li, Fuhui Sun, Mengyuan Zhang, Lingyu Si, Xiaoyan Wang, Li Shen
BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry
Xiang Xu, Joseph G. Lambourne, Pradeep Kumar Jayaraman, Zhengqing Wang, Karl D.D. Willis, Yasutaka Furukawa