Diffusion Model
Diffusion models are generative models that produce data by learning to reverse a gradual noising process, enabling high-quality sampling from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
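As a minimal sketch of the two ideas named above, the snippet below shows (a) the classifier-free guidance combination of conditional and unconditional noise predictions, and (b) an ancestral DDPM-style reverse sampling loop. The function names and the toy `predict_noise` callback are illustrative assumptions, not any specific paper's implementation; a real model would supply a trained network in place of the callback.

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: blend unconditional and conditional noise
    predictions. guidance_scale = 1 recovers the conditional prediction;
    larger values push samples harder toward the conditioning signal."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def ddpm_reverse(x_T, predict_noise, betas, rng):
    """Ancestral DDPM sampling: start from Gaussian noise x_T and
    iteratively denoise back toward a data sample x_0."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = x_T
    for t in range(len(betas) - 1, -1, -1):
        eps = predict_noise(x, t)  # model's estimate of the noise in x_t
        # Posterior mean of x_{t-1} given x_t and the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add fresh noise except at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

# Toy usage: a "predictor" that always returns zeros stands in for the network.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 10)
sample = ddpm_reverse(rng.standard_normal(4),
                      lambda x, t: np.zeros_like(x), betas, rng)
```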
Papers
DiP-GO: A Diffusion Pruner via Few-step Gradient Optimization
Haowei Zhu, Dehua Tang, Ji Liu, Mingjie Lu, Jintu Zheng, Jinzhang Peng, Dong Li, Yu Wang, Fan Jiang, Lu Tian, Spandan Tiwari, Ashish Sirasao, Jun-Hai Yong, Bin Wang, Emad Barsoum
VistaDream: Sampling multiview consistent images for single-view scene reconstruction
Haiping Wang, Yuan Liu, Ziwei Liu, Wenping Wang, Zhen Dong, Bisheng Yang
MPDS: A Movie Posters Dataset for Image Generation with Diffusion Model
Meng Xu, Tong Zhang, Fuyun Wang, Yi Lei (Nanjing University of Science and Technology), Xin Liu (SeetaCloud), Zhen Cui (Nanjing University of Science and Technology)
LLM-Assisted Red Teaming of Diffusion Models through "Failures Are Fated, But Can Be Faded"
Som Sagar, Aditya Taparia, Ransalu Senanayake
Dual-Model Defense: Safeguarding Diffusion Models from Membership Inference Attacks through Disjoint Data Splitting
Bao Q. Tran, Viet Nguyen, Anh Tran, Toan Tran
TopoDiffusionNet: A Topology-aware Diffusion Model
Saumya Gupta, Dimitris Samaras, Chao Chen
On conditional diffusion models for PDE simulations
Aliaksandra Shysheya, Cristiana Diaconu, Federico Bergamin, Paris Perdikaris, José Miguel Hernández-Lobato, Richard E. Turner, Emile Mathieu
Exploring how deep learning decodes anomalous diffusion via Grad-CAM
Jaeyong Bae, Yongjoo Baek, Hawoong Jeong
CamI2V: Camera-Controlled Image-to-Video Diffusion Model
Guangcong Zheng, Teng Li, Rui Jiang, Yehao Lu, Tao Wu, Xi Li
Evaluating the Posterior Sampling Ability of Plug&Play Diffusion Methods in Sparse-View CT
Liam Moroy, Guillaume Bourmaud, Frédéric Champagnat, Jean-François Giovannelli
Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation
Anh Bui, Long Vuong, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung
Truncated Consistency Models
Sangyun Lee, Yilun Xu, Tomas Geffner, Giulia Fanti, Karsten Kreis, Arash Vahdat, Weili Nie
ANT: Adaptive Noise Schedule for Time Series Diffusion Models
Seunghan Lee, Kibok Lee, Taeyoung Park
DRL Optimization Trajectory Generation via Wireless Network Intent-Guided Diffusion Models for Optimizing Resource Allocation
Junjie Wu, Xuming Fang, Dusit Niyato, Jiacheng Wang, Jingyu Wang
FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models
Rui Hu, Qian He, Gaofeng He, Jiedong Zhuang, Huang Chen, Huafeng Liu, Huamin Wang