Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
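The classifier-free guidance mentioned above steers sampling by mixing a model's conditional and unconditional noise predictions at each denoising step. A minimal sketch of that combination step, using toy NumPy arrays in place of a real denoiser's outputs (the array values and `guidance_scale` below are illustrative, not from any paper listed here):

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one by guidance_scale."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy noise predictions standing in for a denoiser's two forward passes.
eps_u = np.zeros(4)       # prediction with the condition dropped
eps_c = np.ones(4)        # prediction with the condition supplied

combined = cfg_combine(eps_u, eps_c, 7.5)  # a commonly used scale
print(combined)
```

A scale of 1.0 recovers the plain conditional prediction, while larger scales amplify the conditioning signal, which is the trade-off between fidelity and diversity that work such as "Applying Guidance in a Limited Interval..." (listed below) studies.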
Papers
Four-hour thunderstorm nowcasting using deep diffusion models of satellite
Kuai Dai, Xutao Li, Junying Fang, Yunming Ye, Demin Yu, Di Xian, Danyu Qin, Jingsong Wang
SparseDM: Toward Sparse Efficient Diffusion Models
Kafeng Wang, Jianfei Chen, He Li, Zhenpeng Mi, Jun Zhu
Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models
Qi Guo, Shanmin Pang, Xiaojun Jia, Yang Liu, Qing Guo
Diffscaler: Enhancing the Generative Prowess of Diffusion Transformers
Nithin Gopalakrishnan Nair, Jeya Maria Jose Valanarasu, Vishal M. Patel
Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
Han Lin, Jaemin Cho, Abhay Zala, Mohit Bansal
Digging into contrastive learning for robust depth estimation with diffusion models
Jiyuan Wang, Chunyu Lin, Lang Nie, Kang Liao, Shuwei Shao, Yao Zhao
Equipping Diffusion Models with Differentiable Spatial Entropy for Low-Light Image Enhancement
Wenyi Lian, Wenjing Lian, Ziwei Luo
TMPQ-DM: Joint Timestep Reduction and Quantization Precision Selection for Efficient Diffusion Models
Haojun Sun, Chen Tang, Zhi Wang, Yuan Meng, Jingyan Jiang, Xinzhu Ma, Wenwu Zhu
Watermark-embedded Adversarial Examples for Copyright Protection against Diffusion Models
Peifei Zhu, Tsubasa Takahashi, Hirokatsu Kataoka
An Overview of Diffusion Models: Applications, Guided Generation, Statistical Rates and Optimization
Minshuo Chen, Song Mei, Jianqing Fan, Mengdi Wang
Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models
Tuomas Kynkäänniemi, Miika Aittala, Tero Karras, Samuli Laine, Timo Aila, Jaakko Lehtinen
GoodDrag: Towards Good Practices for Drag Editing with Diffusion Models
Zewei Zhang, Huan Liu, Jun Chen, Xiangyu Xu
Generative inpainting of incomplete Euclidean distance matrices of trajectories generated by a fractional Brownian motion
Alexander Lobashev, Dmitry Guskov, Kirill Polovnikov
Fine color guidance in diffusion models and its application to image compression at extremely low bitrates
Tom Bordin, Thomas Maugey