Conditional Diffusion
Conditional diffusion models are generative models that produce samples conditioned on specific inputs, such as text prompts, class labels, or other images, with the goal of generating high-quality, realistic outputs that adhere to the given constraints. Current research focuses on improving sampling efficiency, making conditional guidance more robust (especially under noisy or unreliable conditioning inputs), and applying these models to diverse tasks including image generation, time series forecasting, medical image analysis, and inverse problems. This rapidly developing field offers powerful tools for data generation, analysis, and manipulation across data modalities, with implications for many scientific domains and practical applications.
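To make the conditioning mechanism concrete, the sketch below shows DDPM-style ancestral sampling with classifier-free guidance, the most common way to steer a diffusion model toward a condition. Everything here is illustrative: `eps_theta` stands in for a trained noise-prediction network (replaced by a fixed linear map so the loop runs end to end), and the schedule, guidance scale, and dimensions are arbitrary toy choices, not taken from any of the papers listed.

```python
import numpy as np

# Toy stand-in for a trained noise-prediction network eps_theta(x, t, c).
# In practice this is a neural network; a fixed linear map is used here
# only so the sampler below executes end to end.
def eps_theta(x, t, cond=None):
    drift = 0.1 * x
    if cond is not None:
        drift = drift - 0.05 * cond  # conditioning nudges the prediction
    return drift

def guided_sample(cond, steps=50, guidance_scale=3.0, dim=4, seed=0):
    """DDPM-style ancestral sampling with classifier-free guidance."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)            # start from pure noise x_T
    betas = np.linspace(1e-4, 0.02, steps)  # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    for t in reversed(range(steps)):
        # Classifier-free guidance: extrapolate from the unconditional
        # prediction toward the conditional one.
        e_uncond = eps_theta(x, t, cond=None)
        e_cond = eps_theta(x, t, cond=cond)
        eps = e_uncond + guidance_scale * (e_cond - e_uncond)
        # Posterior mean for x_{t-1} given the guided noise estimate.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add noise at every step except the last
            x = x + np.sqrt(betas[t]) * rng.standard_normal(dim)
    return x

sample = guided_sample(cond=np.ones(4))
print(sample.shape)  # (4,)
```

Setting `guidance_scale` to 0 recovers unconditional sampling, and scales above 1 trade sample diversity for stronger adherence to the condition.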
Papers
Improved Patch Denoising Diffusion Probabilistic Models for Magnetic Resonance Fingerprinting
Perla Mayo, Carolin M. Pirkl, Alin Achim, Bjoern H. Menze, Mohammad Golbabaee
CT to PET Translation: A Large-scale Dataset and Domain-Knowledge-Guided Diffusion Approach
Dac Thai Nguyen, Trung Thanh Nguyen, Huu Tien Nguyen, Thanh Trung Nguyen, Huy Hieu Pham, Thanh Hung Nguyen, Thao Nguyen Truong, Phi Le Nguyen
Discrete Modeling via Boundary Conditional Diffusion Processes
Yuxuan Gu, Xiaocheng Feng, Lei Huang, Yingsheng Wu, Zekun Zhou, Weihong Zhong, Kun Zhu, Bing Qin
Hierarchical Clustering for Conditional Diffusion in Image Generation
Jorge da Silva Goncalves, Laura Manduchi, Moritz Vandenhirtz, Julia E. Vogt
Traj-Explainer: An Explainable and Robust Multi-modal Trajectory Prediction Approach
Pei Liu (1), Haipeng Liu (2), Yiqun Li (3), Tianyu Shi (4), Meixin Zhu (1), Ziyuan Pu (3) ((1) Intelligent Transportation Thrust, Systems Hub, The Hong Kong University of Science and Technology (Guangzhou), (2) Li Auto Inc, (3) School of Transportation, Southeast University, (4) Department of Computer Science, University of Toronto)