Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
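To make "reversing a noising process" concrete, here is a minimal NumPy sketch of a toy 1-D DDPM. Because the data distribution is chosen to be Gaussian, the optimal noise predictor has a closed form (cf. the Pierret and Galerne entry below), so the reverse chain can be simulated without training a network. The schedule, constants, and names such as `eps_opt` are illustrative choices made here, not taken from any listed paper.

```python
import numpy as np

# Toy 1-D DDPM: data x0 ~ N(MU, SIGMA^2). For Gaussian data the optimal
# noise predictor E[eps | x_t] is known in closed form, so it can stand in
# for a trained network eps_theta. All constants are illustrative.
MU, SIGMA = 2.0, 0.5
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)            # cumulative product, \bar{alpha}_t

def q_sample(x0, t, rng):
    """Forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

def eps_opt(x_t, t):
    """Exact E[eps | x_t] for Gaussian data; plays the role of eps_theta."""
    var_t = abar[t] * SIGMA**2 + (1.0 - abar[t])  # marginal variance of x_t
    return np.sqrt(1.0 - abar[t]) * (x_t - np.sqrt(abar[t]) * MU) / var_t

def p_sample_loop(n, rng):
    """Ancestral DDPM sampling: start from pure noise, denoise t = T-1 .. 0."""
    x = rng.standard_normal(n)
    for t in range(T - 1, -1, -1):
        mean = (x - betas[t] / np.sqrt(1.0 - abar[t]) * eps_opt(x, t)) \
               / np.sqrt(alphas[t])
        noise = rng.standard_normal(n) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise  # standard choice sigma_t^2 = beta_t
    return x

rng = np.random.default_rng(0)
samples = p_sample_loop(50_000, rng)
print(f"sample mean {samples.mean():.3f} (target {MU}), "
      f"std {samples.std():.3f} (target {SIGMA})")
```

The printed statistics should approximately recover the target Gaussian; in real systems the closed-form `eps_opt` is replaced by a learned network, and guidance methods such as classifier-free guidance modify the predicted noise at each reverse step.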
Papers
Reliable Trajectory Prediction and Uncertainty Quantification with Conditioned Diffusion Models
Marion Neumeier, Sebastian Dorn, Michael Botsch, Wolfgang Utschick
Diffusion models for Gaussian distributions: Exact solutions and Wasserstein errors
Emile Pierret, Bruno Galerne
Text-to-Model: Text-Conditioned Neural Network Diffusion for Train-Once-for-All Personalization
Zexi Li, Lingzhi Gao, Chao Wu
Enhancing Image Layout Control with Loss-Guided Diffusion Models
Zakaria Patel, Kirill Serkh
A Study of Posterior Stability for Time-Series Latent Diffusion
Yangming Li, Yixin Cheng, Mihaela van der Schaar
Conditioning diffusion models by explicit forward-backward bridging
Adrien Corenflos, Zheng Zhao, Simo Särkkä, Jens Sjölund, Thomas B. Schön
A Versatile Diffusion Transformer with Mixture of Noise Levels for Audiovisual Generation
Gwanghyun Kim, Alonso Martinez, Yu-Chuan Su, Brendan Jou, José Lezama, Agrim Gupta, Lijun Yu, Lu Jiang, Aren Jansen, Jacob Walker, Krishna Somandepalli
Learning Diffusion Priors from Observations by Expectation Maximization
François Rozet, Gérôme Andry, François Lanusse, Gilles Louppe
Prompt Mixing in Diffusion Models using the Black Scholes Algorithm
Divya Kothandaraman, Ming Lin, Dinesh Manocha
TauAD: MRI-free Tau Anomaly Detection in PET Imaging via Conditioned Diffusion Models
Lujia Zhong, Shuo Huang, Jiaxin Yue, Jianwei Zhang, Zhiwei Deng, Wenhao Chi, Yonggang Shi
Personalized Residuals for Concept-Driven Text-to-Image Generation
Cusuh Ham, Matthew Fisher, James Hays, Nicholas Kolkin, Yuchen Liu, Richard Zhang, Tobias Hinz
CustomText: Customized Textual Image Generation using Diffusion Models
Shubham Paliwal, Arushi Jain, Monika Sharma, Vikram Jamwal, Lovekesh Vig
Nonequilibrium physics of generative diffusion models
Zhendong Yu, Haiping Huang
Diff-BGM: A Diffusion Model for Video Background Music Generation
Sizhe Li, Yiming Qin, Minghang Zheng, Xin Jin, Yang Liu
Evolving Storytelling: Benchmarks and Methods for New Character Customization with Diffusion Models
Xiyu Wang, Yufei Wang, Satoshi Tsutsui, Weisi Lin, Bihan Wen, Alex C. Kot
ViViD: Video Virtual Try-on using Diffusion Models
Zixun Fang, Wei Zhai, Aimin Su, Hongliang Song, Kai Zhu, Mao Wang, Yu Chen, Zhiheng Liu, Yang Cao, Zheng-Jun Zha
Diffusion Models for Generating Ballistic Spacecraft Trajectories
Tyler Presser, Agnimitra Dasgupta, Daniel Erwin, Assad Oberai
Uncertainty-Aware PPG-2-ECG for Enhanced Cardiovascular Diagnosis using Diffusion Models
Omer Belhasin, Idan Kligvasser, George Leifman, Regev Cohen, Erin Rainaldi, Li-Fang Cheng, Nishant Verma, Paul Varghese, Ehud Rivlin, Michael Elad
Diffusion-Based Hierarchical Image Steganography
Youmin Xu, Xuanyu Zhang, Jiwen Yu, Chong Mou, Xiandong Meng, Jian Zhang