Conditional Diffusion Model
Conditional diffusion models are generative models that produce outputs conditioned on specific inputs, aiming for high-fidelity, controllable generation across diverse data types. Current research focuses on improving control and reducing artifacts through classifier-free guidance and its variants, on training-free conditioning approaches for specific applications (e.g., stochastic dynamical systems), and on methods that aggregate multiple diffusion models for finer-grained control. These advances have significant implications for medical imaging (e.g., CT reconstruction, MRI editing), time series analysis (e.g., imputation, forecasting), and scientific simulation (e.g., weather prediction, nuclear fusion), enabling more accurate, efficient, and interpretable data generation and analysis.
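The classifier-free guidance mentioned above combines the conditional and unconditional noise predictions of a single network at each sampling step. A minimal sketch of that combination step (the function name and the stand-in arrays are illustrative, not from any specific paper listed here):

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, guidance_scale):
    """Blend conditional and unconditional noise predictions.

    The conditional prediction is extrapolated away from the
    unconditional one by `guidance_scale` (often written w):
    w = 1 recovers the plain conditional model, while w > 1
    strengthens adherence to the condition at some cost in
    sample diversity.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy example with stand-in predictions; in real sampling the
# denoising network is evaluated twice per step, once with the
# condition and once with it dropped (e.g., a null embedding).
eps_c = np.array([0.5, -0.2])
eps_u = np.array([0.1, 0.1])
guided = classifier_free_guidance(eps_c, eps_u, guidance_scale=2.0)
print(guided)  # [0.9, -0.5]
```

The guided prediction then replaces the plain conditional output in the usual denoising update; tuning `guidance_scale` trades controllability against diversity and artifact levels.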
Papers
Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling
Jiatao Gu, Ying Shen, Shuangfei Zhai, Yizhe Zhang, Navdeep Jaitly, Joshua M. Susskind
MegActor: Harness the Power of Raw Video for Vivid Portrait Animation
Shurong Yang, Huadong Li, Juhao Wu, Minhao Jing, Linze Li, Renhe Ji, Jiajun Liang, Haoqiang Fan