Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, yielding high-quality samples from complex distributions. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are reshaping fields including medical imaging, robotics, and artistic creation by enabling new applications in image generation, inverse problem solving, and multi-modal data synthesis.
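To make the reverse-process mechanics concrete, here is a minimal sketch (not drawn from any of the papers listed below) of a DDPM-style reverse sampling loop with classifier-free guidance. The toy `predict_noise` function, the linear beta schedule, and the array shapes are illustrative assumptions standing in for a trained noise-prediction network and its real hyperparameters.

```python
import numpy as np

# Hypothetical stand-in: a real system would use a trained network eps_theta(x, t, cond).
def predict_noise(x, t, cond=None):
    """Toy noise predictor used only to make the loop runnable."""
    rng = np.random.default_rng(t)
    bias = 0.0 if cond is None else 0.1 * cond
    return 0.1 * x + bias + 0.01 * rng.standard_normal(x.shape)

# Standard DDPM linear beta schedule (Ho et al., 2020).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_step(x_t, t, cond, guidance_scale=3.0, rng=None):
    """One DDPM reverse step with classifier-free guidance.

    The guided noise estimate extrapolates from the unconditional toward
    the conditional prediction:
        eps = eps_uncond + w * (eps_cond - eps_uncond)
    """
    rng = rng or np.random.default_rng()
    eps_uncond = predict_noise(x_t, t, cond=None)
    eps_cond = predict_noise(x_t, t, cond=cond)
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    # Posterior mean of x_{t-1} given x_t and the predicted noise.
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    # Add fresh Gaussian noise with sigma_t = sqrt(beta_t).
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

# Sampling: start from pure Gaussian noise and run the reverse chain.
x = np.random.default_rng(0).standard_normal((2, 2))
for t in reversed(range(T)):
    x = reverse_step(x, t, cond=1.0)
print(x)
```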
Papers
Dynamic Negative Guidance of Diffusion Models
Felix Koulischer, Johannes Deleu, Gabriel Raya, Thomas Demeester, Luca Ambrogioni
Mitigating Embedding Collapse in Diffusion Models for Categorical Data
Bac Nguyen, Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Stefano Ermon, Yuki Mitsufuji
ERDDCI: Exact Reversible Diffusion via Dual-Chain Inversion for High-Quality Image Editing
Jimin Dai, Yingzhen Zhang, Shuo Chen, Jian Yang, Lei Luo
Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers
Runjia Li, Qiwei Di, Quanquan Gu
Assessing Open-world Forgetting in Generative Image Model Customization
Héctor Laria, Alex Gomez-Villa, Imad Eddine Marouf, Kai Wang, Bogdan Raducanu, Joost van de Weijer
MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks
Xinxin Liu, Zhongliang Guo, Siyuan Huang, Chun Pong Lau
On Diffusion Models for Multi-Agent Partial Observability: Shared Attractors, Error Bounds, and Composite Flow
Tonghan Wang, Heng Dong, Yanchen Jiang, David C. Parkes, Milind Tambe
Influence Functions for Scalable Data Attribution in Diffusion Models
Bruno Mlodozeniec, Runa Eschenhagen, Juhan Bae, Alexander Immer, David Krueger, Richard Turner
Arbitrarily-Conditioned Multi-Functional Diffusion for Multi-Physics Emulation
Da Long, Zhitong Xu, Guang Yang, Akil Narayan, Shandian Zhe
Probing the Latent Hierarchical Structure of Data via Diffusion Models
Antonio Sclocchi, Alessandro Favero, Noam Itzhak Levi, Matthieu Wyart
Improved Convergence Rate for Diffusion Probabilistic Models
Gen Li, Yuchen Jiao
Diffusion Curriculum: Synthetic-to-Real Generative Curriculum Learning via Image-Guided Diffusion
Yijun Liang, Shweta Bhardwaj, Tianyi Zhou
Fine-Tuning Discrete Diffusion Models via Reward Optimization with Applications to DNA and Protein Design
Chenyu Wang, Masatoshi Uehara, Yichun He, Amy Wang, Tommaso Biancalani, Avantika Lal, Tommi Jaakkola, Sergey Levine, Hanchen Wang, Aviv Regev
L3DG: Latent 3D Gaussian Diffusion
Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, Matthias Nießner
Solving Prior Distribution Mismatch in Diffusion Models via Optimal Transport
Zhanpeng Wang, Shenghao Li, Chen Wang, Shuting Cao, Na Lei, Zhongxuan Luo
MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models
Donghao Zhou, Jiancheng Huang, Jinbin Bai, Jiaze Wang, Hao Chen, Guangyong Chen, Xiaowei Hu, Pheng-Ann Heng
DiffImp: Efficient Diffusion Model for Probabilistic Time Series Imputation with Bidirectional Mamba Backbone
Hongfan Gao, Wangmeng Shen, Xiangfei Qiu, Ronghui Xu, Jilin Hu, Bin Yang
FDF: Flexible Decoupled Framework for Time Series Forecasting with Conditional Denoising and Polynomial Modeling
Jintao Zhang, Mingyue Cheng, Xiaoyu Tao, Zhiding Liu, Daoyu Wang
Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance
Jiwan Hur, Dong-Jae Lee, Gyojin Han, Jaehyun Choi, Yunho Jeon, Junmo Kim