Generative Diffusion Model
Generative diffusion models are a class of deep learning models that generate data by reversing a diffusion process: starting from random noise, they iteratively remove noise until a realistic sample emerges. Current research focuses on improving sampling efficiency, addressing limitations such as modeling conditional distributions and mitigating vulnerability to backdoor attacks, and exploring new model architectures, including diffusion transformers and variants that incorporate contrastive learning or edge-preserving noise. These models are proving impactful across many fields, including image generation, time series forecasting, medical image analysis, and scientific simulations such as weather prediction and particle physics, offering significant advances in data generation and analysis.
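The forward/reverse process described above can be sketched in a few lines of NumPy. This is a minimal, illustrative DDPM-style example, not any specific paper's method: the linear variance schedule values are common defaults, and the zero-noise predictor passed to the sampler is a hypothetical stand-in for a trained neural network.

```python
import numpy as np

def make_schedule(T=100, beta_start=1e-4, beta_end=0.02):
    # Linear variance schedule beta_t and cumulative products alpha_bar_t.
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def forward_diffuse(x0, t, alpha_bars, rng):
    # Forward process q(x_t | x_0): add Gaussian noise in closed form.
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
    return xt, noise

def reverse_sample(noise_predictor, shape, betas, alphas, alpha_bars, rng):
    # Reverse process: start from pure noise and iteratively denoise.
    x = rng.standard_normal(shape)
    for t in reversed(range(len(betas))):
        eps_hat = noise_predictor(x, t)  # trained network in practice
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:  # no noise is added at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

rng = np.random.default_rng(0)
betas, alphas, alpha_bars = make_schedule()
x0 = np.ones(4)
xt, eps = forward_diffuse(x0, 99, alpha_bars, rng)          # noising
sample = reverse_sample(lambda x, t: np.zeros_like(x),       # denoising
                        (4,), betas, alphas, alpha_bars, rng)
```

In a real model the `noise_predictor` would be a network trained to predict `eps` from `xt` and `t`; sampling quality then depends entirely on how well it matches the true noise.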
Papers
GDM4MMIMO: Generative Diffusion Models for Massive MIMO Communications
Zhenzhou Jin, Li You, Huibin Zhou, Yuanshuo Wang, Xiaofeng Liu, Xinrui Gong, Xiqi Gao, Derrick Wing Kwan Ng, Xiang-Gen Xia
Schrödinger Bridge Type Diffusion Models as an Extension of Variational Autoencoders
Kentaro Kaba, Reo Shimizu, Masayuki Ohzeki, Yuki Sughiyama
OSMamba: Omnidirectional Spectral Mamba with Dual-Domain Prior Generator for Exposure Correction
Gehui Li, Bin Chen, Chen Zhao, Lei Zhang, Jian Zhang
TEXGen: a Generative Diffusion Model for Mesh Textures
Xin Yu, Ze Yuan, Yuan-Chen Guo, Ying-Tian Liu, JianHui Liu, Yangguang Li, Yan-Pei Cao, Ding Liang, Xiaojuan Qi