Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, enabling high-quality sampling from complex distributions. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta methods and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via classifier-free guidance and reinforcement learning from human feedback. These advances are influencing fields including medical imaging, robotics, and artistic creation by enabling new applications in image generation, inverse-problem solving, and multi-modal data synthesis.
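The classifier-free guidance mentioned above steers sampling by running the denoising network twice per step, with and without the conditioning, and extrapolating between the two predictions. A minimal sketch of that combination step, using toy NumPy arrays in place of real model outputs (the function name, array shapes, and guidance value are illustrative, not from any specific paper above):

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, guidance_scale):
    """Combine unconditional and conditional noise predictions.

    At each denoising step the model is evaluated once with the
    conditioning (e.g. a text prompt) and once without it. The guided
    prediction extrapolates from the unconditional output toward the
    conditional one: guidance_scale = 1.0 recovers the plain
    conditional prediction, and larger values push samples to follow
    the conditioning more strongly.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for the two network outputs at one denoising step.
eps_uncond = np.zeros(4)
eps_cond = np.ones(4)

guided = classifier_free_guidance(eps_uncond, eps_cond, guidance_scale=7.5)
print(guided)  # each component is 0 + 7.5 * (1 - 0) = 7.5
```

In practice this combined prediction replaces the raw network output inside whatever sampler (DDPM, DDIM, or a Runge-Kutta scheme) drives the reverse process.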
Papers
Underwater Image Enhancement by Diffusion Model with Customized CLIP-Classifier
Shuaixin Liu, Kunqian Li, Yilin Ding, Qi Qi
Towards Black-Box Membership Inference Attack for Diffusion Models
Jingwei Li, Jing Dong, Tianxing He, Jingzhao Zhang
Diffusion-Reward Adversarial Imitation Learning
Chun-Mao Lai, Hsiang-Chun Wang, Ping-Chun Hsieh, Yu-Chiang Frank Wang, Min-Hung Chen, Shao-Hua Sun
AIGB: Generative Auto-bidding via Conditional Diffusion Modeling
Jiayan Guo, Yusen Huo, Zhilin Zhang, Tianyu Wang, Chuan Yu, Jian Xu, Yan Zhang, Bo Zheng
Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity
Haoxuan Chen, Yinuo Ren, Lexing Ying, Grant M. Rotskoff
CausalConceptTS: Causal Attributions for Time Series Classification using High Fidelity Diffusion Models
Juan Miguel Lopez Alcaraz, Nils Strodthoff
Reducing the cost of posterior sampling in linear inverse problems via task-dependent score learning
Fabian Schneider, Duc-Lam Duong, Matti Lassas, Maarten V. de Hoop, Tapio Helin
Out of Many, One: Designing and Scaffolding Proteins at the Scale of the Structural Universe with Genie 2
Yeqing Lin, Minji Lee, Zhao Zhang, Mohammed AlQuraishi
Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model
Mingyang Yi, Aoxue Li, Yi Xin, Zhenguo Li
Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient
Yongliang Wu, Shiji Zhou, Mingzhuo Yang, Lianzhe Wang, Wenbo Zhu, Heng Chang, Xiao Zhou, Xu Yang
StyleMaster: Towards Flexible Stylized Image Generation with Diffusion Models
Chengming Xu, Kai Hu, Donghao Luo, Jiangning Zhang, Wei Li, Yanhao Ge, Chengjie Wang
Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
Yimeng Zhang, Xin Chen, Jinghan Jia, Yihua Zhang, Chongyu Fan, Jiancheng Liu, Mingyi Hong, Ke Ding, Sijia Liu
DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception
Run Luo, Yunshui Li, Longze Chen, Wanwei He, Ting-En Lin, Ziqiang Liu, Lei Zhang, Zikai Song, Xiaobo Xia, Tongliang Liu, Min Yang, Binyuan Hui
NIVeL: Neural Implicit Vector Layers for Text-to-Vector Generation
Vikas Thamizharasan, Difan Liu, Matthew Fisher, Nanxuan Zhao, Evangelos Kalogerakis, Michal Lukac
ODGEN: Domain-specific Object Detection Data Generation with Diffusion Models
Jingyuan Zhu, Shiyu Li, Yuxuan Liu, Ping Huang, Jiulong Shan, Huimin Ma, Jian Yuan
FreezeAsGuard: Mitigating Illegal Adaptation of Diffusion Models via Selective Tensor Freezing
Kai Huang, Wei Gao
AdjointDEIS: Efficient Gradients for Diffusion Models
Zander W. Blasingame, Chen Liu
SFDDM: Single-fold Distillation for Diffusion models
Chi Hong, Jiyue Huang, Robert Birke, Dick Epema, Stefanie Roos, Lydia Y. Chen
Adapting to Unknown Low-Dimensional Structures in Score-Based Diffusion Models
Gen Li, Yuling Yan
PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher
Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon