Generative Diffusion
Generative diffusion models are a powerful class of probabilistic models that generate data by reversing a gradual noising process, transforming random noise into structured samples step by step. Current research focuses on extending these models to conditional generation, improving sampling efficiency through techniques such as single-step diffusion and minimax optimization, and applying them to diverse domains including image restoration, 3D scene generation, and sequential recommendation. This rapidly evolving field is influencing many scientific disciplines and practical applications by enabling high-fidelity data generation, improved data analysis, and novel algorithms for tasks such as anomaly detection and medical image analysis.
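To make the "reversing a diffusion process" idea concrete, here is a minimal sketch of DDPM-style reverse sampling. It is a toy illustration, not the method of any paper listed below: `make_schedule`, `reverse_diffusion`, and the point-mass "oracle" denoiser are all hypothetical names introduced here, and a real model would replace the denoiser with a trained neural network that predicts the noise in a sample.

```python
import numpy as np

def make_schedule(T=50, beta_min=1e-4, beta_max=0.02):
    """Standard linear variance schedule: betas, alphas, and cumulative alpha products."""
    betas = np.linspace(beta_min, beta_max, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def reverse_diffusion(predict_noise, shape, T=50, seed=0):
    """Run the reverse process: start from pure noise x_T and denoise down to x_0."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(shape)  # x_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        eps = predict_noise(x, t)   # model's estimate of the noise present in x_t
        # Posterior mean of x_{t-1} given x_t (the standard DDPM update rule)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)
        else:
            x = mean                # the final step adds no noise
    return x

# Toy check: for data concentrated at the origin, the optimal noise prediction
# is x_t / sqrt(1 - alpha_bar_t), and reverse diffusion collapses noise to zero.
_, _, abar = make_schedule(T=50)
point_mass_denoiser = lambda x, t: x / np.sqrt(1.0 - abar[t])
sample = reverse_diffusion(point_mass_denoiser, shape=(4,), T=50)
# sample → array of zeros: the reverse process maps noise back onto the data.
```

With a learned denoiser in place of the oracle, the same loop turns Gaussian noise into samples from the training distribution, which is the core mechanism the papers below build on.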
Papers
Generative Diffusion Model-based Downscaling of Observed Sea Surface Height over Kuroshio Extension since 2000
Qiuchang Han, Xingliang Jiang, Yang Zhao, Xudong Wang, Zhijin Li, Renhe Zhang
DimeRec: A Unified Framework for Enhanced Sequential Recommendation via Generative Diffusion Models
Wuchao Li, Rui Huang, Haijun Zhao, Chi Liu, Kai Zheng, Qi Liu, Na Mou, Guorui Zhou, Defu Lian, Yang Song, Wentian Bao, Enyun Yu, Wenwu Ou