Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
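The summary above mentions the noise-diffusion process and classifier-free guidance. As a minimal illustrative sketch (not taken from any of the listed papers), the snippet below shows the closed-form DDPM forward-noising step and the standard classifier-free guidance mixing rule; the noise predictions stand in for the outputs of a hypothetical trained denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # standard linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative product, \bar{alpha}_t

def q_sample(x0, t, eps):
    """Forward process: noise a clean sample x0 to timestep t in closed form,
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def cfg(eps_cond, eps_uncond, w):
    """Classifier-free guidance: extrapolate the conditional noise prediction
    away from the unconditional one by guidance scale w (w=0 -> unconditional,
    w=1 -> purely conditional)."""
    return eps_uncond + w * (eps_cond - eps_uncond)

x0 = rng.standard_normal(8)
eps = rng.standard_normal(8)
xt = q_sample(x0, t=500, eps=eps)

# Sampling reverses this process step by step; with the exact noise known,
# the forward step inverts in closed form:
x0_rec = (xt - np.sqrt(1.0 - alphas_bar[500]) * eps) / np.sqrt(alphas_bar[500])
```

In practice the denoiser's predicted noise replaces `eps`, and `cfg` is applied at every reverse step to trade sample diversity for prompt fidelity.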
Papers
Prompt-guided Precise Audio Editing with Diffusion Models
Manjie Xu, Chenxing Li, Duzhen Zhang, Dan Su, Wei Liang, Dong Yu
Non-confusing Generation of Customized Concepts in Diffusion Models
Wang Lin, Jingyuan Chen, Jiaxin Shi, Yichen Zhu, Chen Liang, Junzhong Miao, Tao Jin, Zhou Zhao, Fei Wu, Shuicheng Yan, Hanwang Zhang
Self-Consistent Recursive Diffusion Bridge for Medical Image Translation
Fuat Arslan, Bilal Kabas, Onat Dalmaz, Muzaffer Ozbey, Tolga Çukur
Shape Conditioned Human Motion Generation with Diffusion Model
Kebing Xue, Hyewon Seo
Prior-guided Diffusion Model for Cell Segmentation in Quantitative Phase Imaging
Zhuchen Shao, Mark A. Anastasio, Hua Li
Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask
Zineb Senane, Lele Cao, Valentin Leonhard Buchner, Yusuke Tashiro, Lei You, Pawel Herman, Mats Nordahl, Ruibo Tu, Vilhelm von Ehrenheim
DDPM-MoCo: Advancing Industrial Surface Defect Generation and Detection with Generative and Contrastive Learning
Yangfan He, Xinyan Wang, Tianyu Shi
DP-MDM: Detail-Preserving MR Reconstruction via Multiple Diffusion Models
Mengxiao Geng, Jiahao Zhu, Xiaolin Zhu, Qiqing Liu, Dong Liang, Qiegen Liu
StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework
Yiheng Huang, Hui Yang, Chuanchen Luo, Yuxi Wang, Shibiao Xu, Zhaoxiang Zhang, Man Zhang, Junran Peng
A Survey on Personalized Content Synthesis with Diffusion Models
Xulu Zhang, Xiao-Yong Wei, Wengyu Zhang, Jinlin Wu, Zhaoxiang Zhang, Zhen Lei, Qing Li
Diffusion-HMC: Parameter Inference with Diffusion Model driven Hamiltonian Monte Carlo
Nayantara Mudur, Carolina Cuesta-Lazaro, Douglas P. Finkbeiner
Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models
Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, Yuchen Liu
Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation
Jonas Kohler, Albert Pumarola, Edgar Schönfeld, Artsiom Sanakoyeu, Roshan Sumbaly, Peter Vajda, Ali Thabet
FinePOSE: Fine-Grained Prompt-Driven 3D Human Pose Estimation via Diffusion Models
Jinglin Xu, Yijie Guo, Yuxin Peng
Discrepancy-based Diffusion Models for Lesion Detection in Brain MRI
Keqiang Fan, Xiaohao Cai, Mahesan Niranjan
Remote Diffusion
Kunal Sunil Kasodekar
TexControl: Sketch-Based Two-Stage Fashion Image Generation Using Diffusion Model
Yongming Zhang, Tianyu Zhang, Haoran Xie
BUDDy: Single-Channel Blind Unsupervised Dereverberation with Diffusion Models
Eloi Moliner, Jean-Marie Lemercier, Simon Welker, Timo Gerkmann, Vesa Välimäki
Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models
Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, Jun Zhu
Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model
Joo Young Choi, Jaesung R. Park, Inkyu Park, Jaewoong Cho, Albert No, Ernest K. Ryu