Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
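Classifier-free guidance, mentioned above as a controllability technique, combines two noise predictions at each denoising step: one conditioned on the input (e.g., a text prompt) and one unconditional. A minimal sketch of that combination rule, using stand-in NumPy arrays for the model's noise predictions (the function name and toy values here are illustrative, not from any specific paper):

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, guidance_scale):
    """Combine conditional and unconditional noise predictions.

    The guided estimate extrapolates from the unconditional prediction
    toward the conditional one; guidance_scale > 1 pushes samples to
    adhere more strongly to the conditioning signal.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Stand-in noise predictions (hypothetical values, normally produced by
# two forward passes of the denoising network).
eps_uncond = np.array([0.1, -0.2, 0.3])
eps_cond = np.array([0.2, -0.1, 0.1])

guided = classifier_free_guidance(eps_cond, eps_uncond, guidance_scale=3.0)
# guidance_scale = 1.0 recovers the plain conditional prediction.
assert np.allclose(classifier_free_guidance(eps_cond, eps_uncond, 1.0), eps_cond)
```

In practice the guided prediction replaces the raw model output inside the sampler's reverse-diffusion update; the guidance scale trades sample diversity for fidelity to the condition.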
Papers
DiNO-Diffusion. Scaling Medical Diffusion via Self-Supervised Pre-Training
Guillermo Jimenez-Perez, Pedro Osorio, Josef Cersovsky, Javier Montalt-Tordera, Jens Hooge, Steffen Vogler, Sadegh Mohammadi
Self-Guided Generation of Minority Samples Using Diffusion Models
Soobin Um, Jong Chul Ye
Isometric Representation Learning for Disentangled Latent Space of Diffusion Models
Jaehoon Hahm, Junho Lee, Sunghyun Kim, Joonseok Lee
Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems
Yaşar Utku Alçalar, Mehmet Akçakaya
Integrating Amortized Inference with Diffusion Models for Learning Clean Distribution from Corrupted Images
Yifei Wang, Weimin Bai, Weijian Luo, Wenzheng Chen, He Sun
Discrete generative diffusion models without stochastic differential equations: a tensor network approach
Luke Causer, Grant M. Rotskoff, Juan P. Garrahan
Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion
Yongyuan Liang, Tingqiang Xu, Kaizhe Hu, Guangqi Jiang, Furong Huang, Huazhe Xu
Optical Diffusion Models for Image Generation
Ilker Oguz, Niyazi Ulas Dinc, Mustafa Yildirim, Junjie Ke, Innfarn Yoo, Qifei Wang, Feng Yang, Christophe Moser, Demetri Psaltis
LiteFocus: Accelerated Diffusion Inference for Long Audio Synthesis
Zhenxiong Tan, Xinyin Ma, Gongfan Fang, Xinchao Wang
DiffStega: Towards Universal Training-Free Coverless Image Steganography with Diffusion Models
Yiwei Yang, Zheyuan Liu, Jun Jia, Zhongpai Gao, Yunhao Li, Wei Sun, Xiaohong Liu, Guangtao Zhai
Tree-D Fusion: Simulation-Ready Tree Dataset from Single Images with Diffusion Priors
Jae Joong Lee, Bosheng Li, Sara Beery, Jonathan Huang, Songlin Fei, Raymond A. Yeh, Bedrich Benes
Transferable 3D Adversarial Shape Completion using Diffusion Models
Xuelong Dai, Bin Xiao
What Appears Appealing May Not be Significant! -- A Clinical Perspective of Diffusion Models
Vanshali Sharma
Salt & Pepper Heatmaps: Diffusion-informed Landmark Detection Strategy
Julian Wyatt, Irina Voiculescu
TCAN: Animating Human Images with Temporally Consistent Pose Guidance using Diffusion Models
Jeongho Kim, Min-Jung Kim, Junsoo Lee, Jaegul Choo
Your Diffusion Model is Secretly a Noise Classifier and Benefits from Contrastive Training
Yunshu Wu, Yingtao Luo, Xianghao Kong, Evangelos E. Papalexakis, Greg Ver Steeg
Unsupervised Anomaly Detection Using Diffusion Trend Analysis
Eunwoo Kim, Un Yang, Cheol Lae Roh, Stefano Ermon