Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, enabling high-quality sampling from complex distributions. Current research focuses on improving efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic model architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having significant impact across fields including medical imaging, robotics, and artistic creation, enabling new applications in image generation, inverse problem solving, and multi-modal data synthesis.
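To make two of the recurring ideas above concrete, the following is a minimal NumPy sketch of (a) classifier-free guidance, which blends conditional and unconditional noise predictions, and (b) a single DDPM-style reverse (denoising) step. The function names, the toy inputs, and the use of a stand-in noise estimate `eps_hat` are illustrative assumptions, not code from any of the listed papers.

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, w):
    # Blend the unconditional and conditional noise predictions.
    # w = 0 -> purely unconditional, w = 1 -> purely conditional,
    # w > 1 -> extrapolates beyond the conditional prediction
    # (stronger prompt adherence, at the risk of artifacts).
    return eps_uncond + w * (eps_cond - eps_uncond)

def ddpm_reverse_step(x_t, eps_hat, alpha_t, alpha_bar_t, sigma_t, rng):
    # One ancestral sampling step of the reverse diffusion process:
    # estimate the posterior mean from the predicted noise eps_hat,
    # then perturb it with fresh Gaussian noise of scale sigma_t.
    mean = (x_t - (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t) * eps_hat) \
           / np.sqrt(alpha_t)
    return mean + sigma_t * rng.standard_normal(x_t.shape)

# Toy usage: guide a noise prediction, then take one denoising step.
rng = np.random.default_rng(0)
eps_u = np.array([0.1, -0.2])   # hypothetical unconditional prediction
eps_c = np.array([0.3,  0.0])   # hypothetical conditional prediction
eps_g = classifier_free_guidance(eps_u, eps_c, w=2.0)
x_prev = ddpm_reverse_step(np.array([1.0, 1.0]), eps_g,
                           alpha_t=0.99, alpha_bar_t=0.5,
                           sigma_t=0.1, rng=rng)
```

In a real sampler, `eps_hat` comes from a trained noise-prediction network and the step is iterated from pure noise down to a clean sample; the schedule values `alpha_t`, `alpha_bar_t`, and `sigma_t` here are placeholders.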
Papers
AutoLoRA: AutoGuidance Meets Low-Rank Adaptation for Diffusion Models
Artur Kasymov, Marcin Sendera, Michał Stypułkowski, Maciej Zięba, Przemysław Spurek
ShieldDiff: Suppressing Sexual Content Generation from Diffusion Models through Reinforcement Learning
Dong Han, Salaheldin Mohamed, Yong Li
Real-World Benchmarks Make Membership Inference Attacks Fail on Diffusion Models
Chumeng Liang, Jiaxuan You
Not All Diffusion Model Activations Have Been Evaluated as Discriminative Features
Benyuan Meng, Qianqian Xu, Zitai Wang, Xiaochun Cao, Qingming Huang
Diffusion State-Guided Projected Gradient for Inverse Problems
Rayhan Zirvi, Bahareh Tolooshams, Anima Anandkumar
Dynamic Diffusion Transformer
Wangbo Zhao, Yizeng Han, Jiasheng Tang, Kai Wang, Yibing Song, Gao Huang, Fan Wang, Yang You
Latent Abstractions in Generative Diffusion Models
Giulio Franzese, Mattia Martini, Giulio Corallo, Paolo Papotti, Pietro Michiardi
Multi-Robot Motion Planning with Diffusion Models
Yorai Shaoul, Itamar Mishani, Shivam Vats, Jiaoyang Li, Maxim Likhachev
Revealing the Unseen: Guiding Personalized Diffusion Models to Expose Training Data
Xiaoyu Wu, Jiaru Zhang, Steven Wu
Learning Optimal Control and Dynamical Structure of Global Trajectory Search Problems with Diffusion Models
Jannik Graebner, Anjian Li, Amlan Sinha, Ryne Beeson
SteerDiff: Steering towards Safe Text-to-Image Diffusion Models
Hongxiang Zhang, Yifeng He, Hao Chen
GUD: Generation with Unified Diffusion
Mathis Gerdes, Max Welling, Miranda C. N. Cheng
Diffusion Models are Evolutionary Algorithms
Yanbo Zhang, Benedikt Hartl, Hananel Hazan, Michael Levin
Extracting Training Data from Unconditional Diffusion Models
Yunhao Chen, Shujie Wang, Difan Zou, Xingjun Ma
Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models
Seyedmorteza Sadat, Otmar Hilliges, Romann M. Weber
Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation
Muzhi Zhu, Yang Liu, Zekai Luo, Chenchen Jing, Hao Chen, Guangkai Xu, Xinlong Wang, Chunhua Shen
Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis
Zikun Zhang, Zixiang Chen, Quanquan Gu
SoundMorpher: Perceptually-Uniform Sound Morphing with Diffusion Model
Xinlei Niu, Jing Zhang, Charles Patrick Martin