Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
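To make the sampling loop and the classifier-free guidance mentioned above concrete, here is a minimal sketch of DDPM-style ancestral sampling with a guidance term. It is not taken from any of the papers listed below; the noise schedule, the placeholder toy_noise_predictor, and parameters such as guidance_scale are illustrative assumptions standing in for a trained denoising network.

import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_noise_predictor(x, t, cond=None):
    """Stand-in for a trained epsilon-network; returns a fake noise estimate."""
    rng = np.random.default_rng(t)
    bias = 0.0 if cond is None else 0.1 * cond
    return 0.1 * x + bias + 0.01 * rng.standard_normal(x.shape)

def sample(shape=(4,), cond=1.0, guidance_scale=3.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)        # start from pure Gaussian noise
    for t in reversed(range(T)):
        # Classifier-free guidance: blend conditional and unconditional predictions.
        eps_uncond = toy_noise_predictor(x, t, cond=None)
        eps_cond = toy_noise_predictor(x, t, cond=cond)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

        # DDPM reverse (ancestral) step: estimate the posterior mean, then add noise.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

print(sample())

In a real system the toy predictor would be replaced by a learned network, and the fixed-variance update could be swapped for the learned-covariance or higher-order solvers (e.g., stochastic Runge-Kutta methods) that several of the papers below study.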
Papers
Stability and Generalizability in SDE Diffusion Models with Measure-Preserving Dynamics
Weitong Zhang, Chengqi Zang, Liu Li, Sarah Cechnicka, Cheng Ouyang, Bernhard Kainz
AniFaceDiff: Animating Stylized Avatars via Parametric Conditioned Diffusion Models
Ken Chen, Sachith Seneviratne, Wei Wang, Dongting Hu, Sanjay Saha, Md. Tarek Hasan, Sanka Rasnayaka, Tamasha Malepathirana, Mingming Gong, Saman Halgamuge
Surgical Triplet Recognition via Diffusion Model
Daochang Liu, Axel Hu, Mubarak Shah, Chang Xu
Diffusion Model-based FOD Restoration from High Distortion in dMRI
Shuo Huang, Lujia Zhong, Yonggang Shi
Evaluating the design space of diffusion-based generative models
Yuqing Wang, Ye He, Molei Tao
Neural Approximate Mirror Maps for Constrained Diffusion Models
Berthy T. Feng, Ricardo Baptista, Katherine L. Bouman
Training Diffusion Models with Federated Learning
Matthijs de Goede, Bart Cox, Jérémie Decouchant
Planning Using Schrödinger Bridge Diffusion Models
Adarsh Srivastava
Adding Conditional Control to Diffusion Models with Reinforcement Learning
Yulai Zhao, Masatoshi Uehara, Gabriele Scalia, Tommaso Biancalani, Sergey Levine, Ehsan Hajiramezanali
ARTIST: Improving the Generation of Text-rich Images with Disentangled Diffusion Models and Large Language Models
Jianyi Zhang, Yufan Zhou, Jiuxiang Gu, Curtis Wigington, Tong Yu, Yiran Chen, Tong Sun, Ruiyi Zhang
Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models
Bingqi Ma, Zhuofan Zong, Guanglu Song, Hongsheng Li, Yu Liu
Diffusion Models in Low-Level Vision: A Survey
Chunming He, Yuqi Shen, Chengyu Fang, Fengyang Xiao, Longxiang Tang, Yulun Zhang, Wangmeng Zuo, Zhenhua Guo, Xiu Li
An Analysis on Quantizing Diffusion Transformers
Yuewei Yang, Jialiang Wang, Xiaoliang Dai, Peizhao Zhang, Hongbo Zhang
Improving Probabilistic Diffusion Models With Optimal Covariance Matching
Zijing Ou, Mingtian Zhang, Andi Zhang, Tim Z. Xiao, Yingzhen Li, David Barber
Ab Initio Structure Solutions from Nanocrystalline Powder Diffraction Data
Gabe Guo, Tristan Saidi, Maxwell Terban, Michele Valsecchi, Simon JL Billinge, Hod Lipson