Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex distributions. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on improving controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are impacting fields including medical imaging, robotics, and artistic creation, enabling applications in image generation, inverse-problem solving, and multi-modal data synthesis.
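To make the reverse-process and guidance ideas above concrete, here is a minimal sketch of DDPM-style ancestral sampling with classifier-free guidance. It assumes a standard linear beta schedule and uses a placeholder noise-prediction network; the function names, schedule values, and guidance scale are illustrative assumptions, not taken from any of the papers listed below.

import numpy as np

T = 1000                                    # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t, cond=None):
    """Placeholder for a trained noise-prediction network eps_theta(x_t, t, c)."""
    return np.zeros_like(x)                 # stand-in: a real model goes here

def cfg_noise(x, t, cond, guidance_scale=3.0):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one."""
    eps_uncond = predict_noise(x, t, cond=None)
    eps_cond = predict_noise(x, t, cond=cond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def sample(shape, cond=None, guidance_scale=3.0, rng=np.random.default_rng(0)):
    """Reverse the diffusion: start from Gaussian noise and denoise step by step."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps = cfg_noise(x, t, cond, guidance_scale)
        # Posterior mean of x_{t-1} given x_t under the DDPM parameterization.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise  # sigma_t^2 = beta_t variance choice
    return x

x0 = sample((4, 4), cond="a class label or text embedding")

With a real trained network in place of predict_noise, the same loop performs conditional generation; setting guidance_scale to 1.0 recovers plain conditional sampling, and larger values trade sample diversity for stronger adherence to the condition.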
Papers
On conditional diffusion models for PDE simulations
Aliaksandra Shysheya, Cristiana Diaconu, Federico Bergamin, Paris Perdikaris, José Miguel Hernández-Lobato, Richard E. Turner, Emile Mathieu
Exploring how deep learning decodes anomalous diffusion via Grad-CAM
Jaeyong Bae, Yongjoo Baek, Hawoong Jeong
CamI2V: Camera-Controlled Image-to-Video Diffusion Model
Guangcong Zheng, Teng Li, Rui Jiang, Yehao Lu, Tao Wu, Xi Li
Evaluating the Posterior Sampling Ability of Plug&Play Diffusion Methods in Sparse-View CT
Liam Moroy, Guillaume Bourmaud, Frédéric Champagnat, Jean-François Giovannelli
Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation
Anh Bui, Long Vuong, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung
Truncated Consistency Models
Sangyun Lee, Yilun Xu, Tomas Geffner, Giulia Fanti, Karsten Kreis, Arash Vahdat, Weili Nie
ANT: Adaptive Noise Schedule for Time Series Diffusion Models
Seunghan Lee, Kibok Lee, Taeyoung Park
DRL Optimization Trajectory Generation via Wireless Network Intent-Guided Diffusion Models for Optimizing Resource Allocation
Junjie Wu, Xuming Fang, Dusit Niyato, Jiacheng Wang, Jingyu Wang
FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models
Rui Hu, Qian He, Gaofeng He, Jiedong Zhuang, Huang Chen, Huafeng Liu, Huamin Wang
Dynamic Negative Guidance of Diffusion Models
Felix Koulischer, Johannes Deleu, Gabriel Raya, Thomas Demeester, Luca Ambrogioni
Mitigating Embedding Collapse in Diffusion Models for Categorical Data
Bac Nguyen, Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Stefano Ermon, Yuki Mitsufuji
ERDDCI: Exact Reversible Diffusion via Dual-Chain Inversion for High-Quality Image Editing
Jimin Dai, Yingzhen Zhang, Shuo Chen, Jian Yang, Lei Luo
Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers
Runjia Li, Qiwei Di, Quanquan Gu
Assessing Open-world Forgetting in Generative Image Model Customization
Héctor Laria, Alex Gomez-Villa, Imad Eddine Marouf, Kai Wang, Bogdan Raducanu, Joost van de Weijer