Diffusion Model
Diffusion models are generative models that synthesize high-quality samples from complex data distributions by learning to reverse a gradual noising process. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta methods and dynamic model architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis. A minimal sketch of the core idea appears below.
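To make the "reversing a noising process" idea concrete, below is a minimal NumPy sketch of the DDPM-style forward and reverse steps that most of the papers listed here build on. It is an illustration under standard assumptions (a linear beta schedule, epsilon-prediction parameterization), not any specific paper's method; the noise predictor `predict_noise` is a hypothetical stand-in for a trained network.

    import numpy as np

    # Linear beta schedule over T forward noising steps (standard DDPM choice).
    T = 1000
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    def forward_noise(x0, t, rng):
        """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
        eps = rng.standard_normal(x0.shape)
        x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
        return x_t, eps  # eps is the training target for an epsilon-predictor

    def reverse_step(x_t, t, eps_hat, rng):
        """One ancestral sampling step of the learned reverse process,
        where eps_hat is the model's estimate of the noise in x_t."""
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
        if t == 0:
            return mean  # no noise is added at the final step
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

    # Sampling: start from pure Gaussian noise and denoise step by step.
    # predict_noise(x, t) is a placeholder for a trained network.
    # rng = np.random.default_rng(0)
    # x = rng.standard_normal((2,))
    # for t in reversed(range(T)):
    #     x = reverse_step(x, t, predict_noise(x, t), rng)

Many of the efficiency-oriented papers above (e.g., consistency models, improved samplers) can be read as replacing this slow step-by-step reverse loop with faster or single-step alternatives.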
Papers
Adaptive Self-Supervised Consistency-Guided Diffusion Model for Accelerated MRI Reconstruction
Mojtaba Safari, Zach Eidex, Shaoyan Pan, Richard L. J. Qiu, Xiaofeng Yang
Fair Text to Medical Image Diffusion Model with Subgroup Distribution Aligned Tuning
Xu Han, Fangfang Fan, Jingzhao Rong, Xiaofeng Liu
Consistency Models Made Easy
Zhengyang Geng, Ashwini Pokle, William Luo, Justin Lin, J. Zico Kolter
CollaFuse: Collaborative Diffusion Models
Simeon Allmendinger, Domenique Zipperling, Lukas Struppek, Niklas Kühl
HeartBeat: Towards Controllable Echocardiography Video Synthesis with Multimodal Conditions-Guided Diffusion Models
Xinrui Zhou, Yuhao Huang, Wufeng Xue, Haoran Dou, Jun Cheng, Han Zhou, Dong Ni
A Practical Diffusion Path for Sampling
Omar Chehab, Anna Korba
Synthesizing Multimodal Electronic Health Records via Predictive Diffusion Models
Yuan Zhong, Xiaochen Wang, Jiaqi Wang, Xiaokun Zhang, Yaqing Wang, Mengdi Huai, Cao Xiao, Fenglong Ma
Stability and Generalizability in SDE Diffusion Models with Measure-Preserving Dynamics
Weitong Zhang, Chengqi Zang, Liu Li, Sarah Cechnicka, Cheng Ouyang, Bernhard Kainz
AniFaceDiff: High-Fidelity Face Reenactment via Facial Parametric Conditioned Diffusion Models
Ken Chen, Sachith Seneviratne, Wei Wang, Dongting Hu, Sanjay Saha, Md. Tarek Hasan, Sanka Rasnayaka, Tamasha Malepathirana, Mingming Gong, Saman Halgamuge
Surgical Triplet Recognition via Diffusion Model
Daochang Liu, Axel Hu, Mubarak Shah, Chang Xu
Diffusion Model-based FOD Restoration from High Distortion in dMRI
Shuo Huang, Lujia Zhong, Yonggang Shi
Evaluating the design space of diffusion-based generative models
Yuqing Wang, Ye He, Molei Tao
Neural Approximate Mirror Maps for Constrained Diffusion Models
Berthy T. Feng, Ricardo Baptista, Katherine L. Bouman
Training Diffusion Models with Federated Learning
Matthijs de Goede, Bart Cox, Jérémie Decouchant
Planning Using Schrödinger Bridge Diffusion Models
Adarsh Srivastava
Adding Conditional Control to Diffusion Models with Reinforcement Learning
Yulai Zhao, Masatoshi Uehara, Gabriele Scalia, Tommaso Biancalani, Sergey Levine, Ehsan Hajiramezanali
ARTIST: Improving the Generation of Text-rich Images with Disentangled Diffusion Models
Jianyi Zhang, Yufan Zhou, Jiuxiang Gu, Curtis Wigington, Tong Yu, Yiran Chen, Tong Sun, Ruiyi Zhang
Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models
Bingqi Ma, Zhuofan Zong, Guanglu Song, Hongsheng Li, Yu Liu
Diffusion Models in Low-Level Vision: A Survey
Chunming He, Yuqi Shen, Chengyu Fang, Fengyang Xiao, Longxiang Tang, Yulun Zhang, Wangmeng Zuo, Zhenhua Guo, Xiu Li