Conditional Diffusion Model
Conditional diffusion models are generative AI models that produce outputs conditioned on specific inputs, aiming for high-fidelity, controllable generation across diverse data types. Current research emphasizes improving control and reducing artifacts through techniques such as classifier-free guidance and its variants, exploring training-free approaches for specific applications (e.g., stochastic dynamical systems), and developing methods to aggregate multiple diffusion models for finer-grained control. These advances have significant implications for fields including medical imaging (e.g., CT reconstruction, MRI editing), time series analysis (e.g., imputation, forecasting), and scientific simulation (e.g., weather prediction, nuclear fusion), by enabling more accurate, efficient, and interpretable data generation and analysis.
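The classifier-free guidance mentioned above can be sketched in a few lines: at each denoising step the model's noise prediction is evaluated both with and without the conditioning input, and the two are combined with a guidance scale w, eps = eps_uncond + w * (eps_cond - eps_uncond). The snippet below is a minimal illustration of that combination rule only; the function name and the stand-in noise arrays are hypothetical, and in a real sampler both estimates would come from a trained denoising network.

```python
import numpy as np

def cfg_noise_estimate(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    noise estimate toward the conditional one.

    eps_cfg = eps_uncond + w * (eps_cond - eps_uncond)

    With w = 0 this ignores the condition; w = 1 recovers the plain
    conditional estimate; w > 1 amplifies the conditioning signal.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-in predictions (a real model would produce these by running
# the denoiser with and without the conditioning input).
eps_uncond = np.array([0.1, -0.2])
eps_cond = np.array([0.3, 0.0])
print(cfg_noise_estimate(eps_uncond, eps_cond, 2.0))  # [0.5 0.2]
```

Both network evaluations happen at every sampling step, which is why guidance roughly doubles the sampling cost unless the two passes are batched together.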
Papers
Semantic Image Synthesis for Abdominal CT
Yan Zhuang, Benjamin Hou, Tejas Sudharshan Mathai, Pritam Mukherjee, Boah Kim, Ronald M. Summers
SP-DiffDose: A Conditional Diffusion Model for Radiation Dose Prediction Based on Multi-Scale Fusion of Anatomical Structures, Guided by SwinTransformer and Projector
Linjie Fu, Xia Li, Xiuding Cai, Yingkai Wang, Xueyao Wang, Yu Yao, Yali Shen