Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex distributions. Current research focuses on improving sampling and training efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on improving controllability and safety via classifier-free guidance and reinforcement learning from human feedback. These advances are shaping fields including medical imaging, robotics, and artistic creation by enabling new applications in image generation, inverse-problem solving, and multi-modal data synthesis.
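To make the reverse-process and classifier-free-guidance ideas above concrete, here is a minimal sketch of a standard DDPM ancestral sampler with classifier-free guidance. The `model(x, t, cond)` noise-prediction interface, the linear beta schedule, and the `guidance_scale` default are illustrative assumptions, not taken from any paper listed below.

```python
import torch

@torch.no_grad()
def ddpm_sample_cfg(model, shape, cond, T=1000, guidance_scale=7.5, device="cpu"):
    """Minimal DDPM ancestral sampler with classifier-free guidance (CFG).

    `model(x, t, cond)` is a hypothetical epsilon-prediction network; passing
    cond=None selects the unconditional branch. Schedule choices (linear betas,
    sigma_t = sqrt(beta_t)) follow the standard DDPM setup.
    """
    betas = torch.linspace(1e-4, 0.02, T, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)  # start from pure Gaussian noise
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)

        # Classifier-free guidance: blend conditional and unconditional predictions.
        eps_uncond = model(x, t_batch, None)
        eps_cond = model(x, t_batch, cond)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

        # DDPM posterior mean for x_{t-1} given the predicted noise.
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise  # no noise added at the final step
    return x
```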
Papers
LogoSticker: Inserting Logos into Diffusion Models for Customized Generation
Mingkang Zhu, Xi Chen, Zhongdao Wang, Hengshuang Zhao, Jiaya Jia
Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review
Masatoshi Uehara, Yulai Zhao, Tommaso Biancalani, Sergey Levine
EnergyDiff: Universal Time-Series Energy Data Generation using Diffusion Models
Nan Lin, Peter Palensky, Pedro P. Vergara
Denoising Diffusions in Latent Space for Medical Image Segmentation
Fahim Ahmed Zaman, Mathews Jacob, Amanda Chang, Kan Liu, Milan Sonka, Xiaodong Wu
NL2Contact: Natural Language Guided 3D Hand-Object Contact Modeling with Diffusion Model
Zhongqun Zhang, Hengfei Wang, Ziwei Yu, Yihua Cheng, Angela Yao, Hyung Jin Chang
SlimFlow: Training Smaller One-Step Diffusion Models with Rectified Flow
Yuanzhi Zhu, Xingchao Liu, Qiang Liu
CoSIGN: Few-Step Guidance of ConSIstency Model to Solve General INverse Problems
Jiankun Zhao, Bowen Song, Liyue Shen
GeoGuide: Geometric guidance of diffusion models
Mateusz Poleski, Jacek Tabor, Przemysław Spurek
I2AM: Interpreting Image-to-Image Latent Diffusion Models via Attribution Maps
Junseo Park, Hyeryung Jang
Beta Sampling is All You Need: Efficient Image Generation Strategy for Diffusion Models using Stepwise Spectral Analysis
Haeil Lee, Hansang Lee, Seoyeon Gye, Junmo Kim
Bellman Diffusion Models
Liam Schramm, Abdeslam Boularias
Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design
Leo Klarner, Tim G. J. Rudner, Garrett M. Morris, Charlotte M. Deane, Yee Whye Teh
Mask-guided cross-image attention for zero-shot in-silico histopathologic image generation with a diffusion model
Dominik Winter, Nicolas Triltsch, Marco Rosati, Anatoliy Shumilov, Ziya Kokaragac, Yuri Popov, Thomas Padel, Laura Sebastian Monasor, Ross Hill, Markus Schick, Nicolas Brieu
Scaling Diffusion Transformers to 16 Billion Parameters
Zhengcong Fei, Mingyuan Fan, Changqian Yu, Debang Li, Junshi Huang
DiNO-Diffusion. Scaling Medical Diffusion via Self-Supervised Pre-Training
Guillermo Jimenez-Perez, Pedro Osorio, Josef Cersovsky, Javier Montalt-Tordera, Jens Hooge, Steffen Vogler, Sadegh Mohammadi
Self-Guided Generation of Minority Samples Using Diffusion Models
Soobin Um, Jong Chul Ye
Isometric Representation Learning for Disentangled Latent Space of Diffusion Models
Jaehoon Hahm, Junho Lee, Sunghyun Kim, Joonseok Lee
Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems
Yaşar Utku Alçalar, Mehmet Akçakaya