Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, yielding high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling novel applications in image generation, inverse-problem solving, and multi-modal data synthesis.
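To make the guidance idea concrete: classifier-free guidance steers each denoising step by extrapolating from an unconditional noise prediction toward a conditional one. The sketch below is illustrative only; the `model` callable, its `cond=None` convention for the unconditional branch, and the default `guidance_scale` are assumptions for this sketch, not any particular paper's API.

def cfg_denoise(model, x_t, t, cond, guidance_scale=7.5):
    """One classifier-free-guidance denoising step (illustrative sketch).

    `model` is a hypothetical denoiser predicting the noise in x_t at
    timestep t; by convention here, cond=None yields the unconditional
    prediction. Both the interface and the default scale are assumptions.
    """
    eps_uncond = model(x_t, t, cond=None)   # unconditional noise estimate
    eps_cond = model(x_t, t, cond=cond)     # condition-aware noise estimate
    # Extrapolate past the unconditional estimate toward the conditional one:
    # eps = eps_uncond + s * (eps_cond - eps_uncond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy usage with a stand-in denoiser (a real one is a neural network):
dummy = lambda x, t, cond: 0.5 * x if cond is None else 0.8 * x
print(cfg_denoise(dummy, 1.0, t=10, cond="a photo of a cat"))  # 2.75

At guidance_scale = 1.0 this reduces to plain conditional prediction; larger values trade sample diversity for stronger adherence to the condition.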
Papers
Variational Diffusion Posterior Sampling with Midpoint Guidance
Badr Moufad, Yazid Janati, Lisa Bedin, Alain Durmus, Randal Douc, Eric Moulines, Jimmy Olsson
Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy
Hancheng Ye, Jiakang Yuan, Renqiu Xia, Xiangchao Yan, Tao Chen, Junchi Yan, Botian Shi, Bo Zhang
Intermediate Representations for Enhanced Text-To-Image Generation Using Diffusion Models
Ran Galun, Sagie Benaim
DuoDiff: Accelerating Diffusion Models with a Dual-Backbone Approach
Daniel Gallo Fernández, Rǎzvan-Andrei Matişan, Alejandro Monroy Muñoz, Ana-Maria Vasilcoiu, Janusz Partyka, Tin Hadži Veljković, Metod Jazbec
Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors
Hritam Basak, Hadi Tabatabaee, Shreekant Gayaka, Ming-Feng Li, Xin Yang, Cheng-Hao Kuo, Arnie Sen, Min Sun, Zhaozheng Yin
TD-Paint: Faster Diffusion Inpainting Through Time Aware Pixel Conditioning
Tsiry Mayet, Pourya Shamsolmoali, Simon Bernard, Eric Granger, Romain Hérault, Clement Chatelain
Linear Convergence of Diffusion Models Under the Manifold Hypothesis
Peter Potaptchik, Iskander Azangulov, George Deligiannidis
Gait Sequence Upsampling using Diffusion Models for Single LiDAR Sensors
Jeongho Ahn, Kazuto Nakashima, Koki Yoshino, Yumi Iwashita, Ryo Kurazume
Diffusion Models Need Visual Priors for Image Generation
Xiaoyu Yue, Zidong Wang, Zeyu Lu, Shuyang Sun, Meng Wei, Wanli Ouyang, Lei Bai, Luping Zhou
Avoiding mode collapse in diffusion models fine-tuned with reinforcement learning
Roberto Barceló, Cristóbal Alcázar, Felipe Tobar
Dynamics of Concept Learning and Compositional Generalization
Yongyi Yang, Core Francisco Park, Ekdeep Singh Lubana, Maya Okawa, Wei Hu, Hidenori Tanaka
DICE: Discrete Inversion Enabling Controllable Editing for Multinomial Diffusion and Masked Generative Models
Xiaoxiao He, Ligong Han, Quan Dao, Song Wen, Minhao Bai, Di Liu, Han Zhang, Martin Renqiang Min, Felix Juefei-Xu, Chaowei Tan, Bo Liu, Kang Li, Hongdong Li, Junzhou Huang, Faez Ahmed, Akash Srivastava, Dimitris Metaxas
Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis
Jinbin Bai, Tian Ye, Wei Chow, Enxin Song, Qing-Guo Chen, Xiangtai Li, Zhen Dong, Lei Zhu, Shuicheng Yan
DART: Denoising Autoregressive Transformer for Scalable Text-to-Image Generation
Jiatao Gu, Yuyang Wang, Yizhe Zhang, Qihang Zhang, Dinghuai Zhang, Navdeep Jaitly, Josh Susskind, Shuangfei Zhai
Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models
Vinith M. Suriyakumar, Rohan Alur, Ayush Sekhari, Manish Raghavan, Ashia C. Wilson
Generated Bias: Auditing Internal Bias Dynamics of Text-To-Image Generative Models
Abhishek Mandal, Susan Leavy, Suzanne Little
Jump Your Steps: Optimizing Sampling Schedule of Discrete Diffusion Models
Yong-Hyun Park, Chieh-Hsin Lai, Satoshi Hayakawa, Yuhta Takida, Yuki Mitsufuji
Synthesizing Multi-Class Surgical Datasets with Anatomy-Aware Diffusion Models
Danush Kumar Venkatesh, Dominik Rivoir, Micha Pfeiffer, Fiona Kolbinger, Stefanie Speidel