Conditional Diffusion Model
Conditional diffusion models are generative AI models designed to produce outputs conditioned on specific inputs, aiming for high-fidelity and controllable generation across diverse data types. Current research emphasizes improving control and reducing artifacts through techniques like classifier-free guidance and its variants, exploring training-free approaches for specific applications (e.g., stochastic dynamical systems), and developing methods to aggregate multiple diffusion models for enhanced fine-grained control. These advancements have significant implications for various fields, including medical imaging (e.g., CT reconstruction, MRI editing), time series analysis (e.g., imputation, forecasting), and scientific simulation (e.g., weather prediction, nuclear fusion), by enabling more accurate, efficient, and interpretable data generation and analysis.
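The classifier-free guidance mentioned above can be sketched in a few lines: at each sampling step the model's conditional and unconditional noise predictions are combined, extrapolating toward the condition by a guidance scale w. The sketch below uses a toy stand-in denoiser (`toy_denoiser`, `cfg_noise_prediction`, and the conditioning scheme are all hypothetical placeholders, not any specific paper's model), assuming a standard epsilon-prediction parameterization.

```python
import numpy as np

def toy_denoiser(x, t, cond=None):
    # Stand-in for a trained noise-prediction network eps_theta(x, t, c).
    # Here, conditioning simply shifts the prediction toward the condition.
    base = 0.1 * x * (t / 1000.0)
    if cond is not None:
        base = base + 0.05 * cond
    return base

def cfg_noise_prediction(x, t, cond, guidance_scale=7.5):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one,
        eps = eps_uncond + w * (eps_cond - eps_uncond),
    so w = 1 recovers the conditional model and w > 1 strengthens
    adherence to the condition (often at some cost in diversity)."""
    eps_uncond = toy_denoiser(x, t, cond=None)
    eps_cond = toy_denoiser(x, t, cond=cond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)   # noisy sample at step t
c = np.ones(4)               # toy condition vector
eps = cfg_noise_prediction(x, t=500, cond=c, guidance_scale=7.5)
```

In practice the same network is trained with the condition randomly dropped, so one model supplies both predictions; the guided `eps` is then plugged into the usual diffusion update.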
Papers
AID: Attention Interpolation of Text-to-Image Diffusion
Qiyuan He, Jinghao Wang, Ziwei Liu, Angela Yao
Boosting Diffusion Models with Moving Average Sampling in Frequency Domain
Yurui Qian, Qi Cai, Yingwei Pan, Yehao Li, Ting Yao, Qibin Sun, Tao Mei
CT Synthesis with Conditional Diffusion Models for Abdominal Lymph Node Segmentation
Yongrui Yu, Hanyu Chen, Zitian Zhang, Qiong Xiao, Wenhui Lei, Linrui Dai, Yu Fu, Hui Tan, Guan Wang, Peng Gao, Xiaofan Zhang
Building Bridges across Spatial and Temporal Resolutions: Reference-Based Super-Resolution via Change Priors and Conditional Diffusion Model
Runmin Dong, Shuai Yuan, Bin Luo, Mengxuan Chen, Jinxiao Zhang, Lixian Zhang, Weijia Li, Juepeng Zheng, Haohuan Fu