Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
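Among the techniques named above, classifier-free guidance is easy to illustrate: at each reverse-diffusion step the sampler blends an unconditional and a conditional noise estimate, extrapolating past the conditional one to strengthen adherence to the prompt. The following is a minimal NumPy sketch, not any specific paper's implementation; the noise estimates `eps_uncond` and `eps_cond` stand in for the outputs of a trained noise-prediction network, and the schedule constants (`alpha_t`, `alpha_bar_t`, `beta_t`) are assumed given.

```python
import numpy as np

def cfg_noise_estimate(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: move from the unconditional noise
    estimate toward (and beyond, for scale > 1) the conditional one."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def ddpm_reverse_step(x_t, eps_hat, alpha_t, alpha_bar_t, beta_t, rng):
    """One ancestral sampling step of the DDPM reverse process,
    using the standard posterior mean parameterization."""
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)
    noise = rng.standard_normal(x_t.shape)  # fresh Gaussian noise each step
    return mean + np.sqrt(beta_t) * noise
```

With `guidance_scale = 0` the step ignores the condition entirely, and with `guidance_scale = 1` it reduces to plain conditional sampling; values above 1 trade sample diversity for prompt fidelity.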
Papers
Fast-DDPM: Fast Denoising Diffusion Probabilistic Models for Medical Image-to-Image Generation
Hongxu Jiang, Muhammad Imran, Linhai Ma, Teng Zhang, Yuyin Zhou, Muxuan Liang, Kuang Gong, Wei Shao
Membership Inference on Text-to-Image Diffusion Models via Conditional Likelihood Discrepancy
Shengfang Zhai, Huanran Chen, Yinpeng Dong, Jiajun Li, Qingni Shen, Yansong Gao, Hang Su, Yang Liu
RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance
Zhicheng Sun, Zhenhao Yang, Yang Jin, Haozhe Chi, Kun Xu, Kun Xu, Liwei Chen, Hao Jiang, Yang Song, Kun Gai, Yadong Mu
Multistable Shape from Shading Emerges from Patch Diffusion
Xinran Nicole Han, Todd Zickler, Ko Nishino
Adversarial Schrödinger Bridge Matching
Nikita Gushchin, Daniil Selikhanovych, Sergei Kholkin, Evgeny Burnaev, Alexander Korotin
Reliable Trajectory Prediction and Uncertainty Quantification with Conditioned Diffusion Models
Marion Neumeier, Sebastian Dorn, Michael Botsch, Wolfgang Utschick
Diffusion models for Gaussian distributions: Exact solutions and Wasserstein errors
Emile Pierret, Bruno Galerne
Text-to-Model: Text-Conditioned Neural Network Diffusion for Train-Once-for-All Personalization
Zexi Li, Lingzhi Gao, Chao Wu
Enhancing Image Layout Control with Loss-Guided Diffusion Models
Zakaria Patel, Kirill Serkh
A Study of Posterior Stability for Time-Series Latent Diffusion
Yangming Li, Yixin Cheng, Mihaela van der Schaar
Conditioning diffusion models by explicit forward-backward bridging
Adrien Corenflos, Zheng Zhao, Simo Särkkä, Jens Sjölund, Thomas B. Schön
A Versatile Diffusion Transformer with Mixture of Noise Levels for Audiovisual Generation
Gwanghyun Kim, Alonso Martinez, Yu-Chuan Su, Brendan Jou, José Lezama, Agrim Gupta, Lijun Yu, Lu Jiang, Aren Jansen, Jacob Walker, Krishna Somandepalli
Learning Diffusion Priors from Observations by Expectation Maximization
François Rozet, Gérôme Andry, François Lanusse, Gilles Louppe
Prompt Mixing in Diffusion Models using the Black Scholes Algorithm
Divya Kothandaraman, Ming Lin, Dinesh Manocha
TauAD: MRI-free Tau Anomaly Detection in PET Imaging via Conditioned Diffusion Models
Lujia Zhong, Shuo Huang, Jiaxin Yue, Jianwei Zhang, Zhiwei Deng, Wenhao Chi, Yonggang Shi
Personalized Residuals for Concept-Driven Text-to-Image Generation
Cusuh Ham, Matthew Fisher, James Hays, Nicholas Kolkin, Yuchen Liu, Richard Zhang, Tobias Hinz
CustomText: Customized Textual Image Generation using Diffusion Models
Shubham Paliwal, Arushi Jain, Monika Sharma, Vikram Jamwal, Lovekesh Vig
Nonequilibrium physics of generative diffusion models
Zhendong Yu, Haiping Huang
Diff-BGM: A Diffusion Model for Video Background Music Generation
Sizhe Li, Yiming Qin, Minghang Zheng, Xin Jin, Yang Liu
Evolving Storytelling: Benchmarks and Methods for New Character Customization with Diffusion Models
Xiyu Wang, Yufei Wang, Satoshi Tsutsui, Weisi Lin, Bihan Wen, Alex C. Kot