Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, with the goal of producing high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic model architectures (e.g., the Dynamic Diffusion Transformer), as well as on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are influencing fields including medical imaging, robotics, and artistic creation by enabling new applications in image generation, inverse problem solving, and multi-modal data synthesis.
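To make the "reversing a noising process" and classifier-free guidance ideas above concrete, the following is a minimal, illustrative sketch of a DDPM-style reverse sampling loop. It is not taken from any of the papers listed below: the noise_predictor function is a stand-in for a trained network, and the linear beta schedule, guidance_scale value, and tensor shapes are assumptions chosen only for illustration.

# Minimal sketch of DDPM-style ancestral sampling with classifier-free guidance.
# noise_predictor, the linear beta schedule, and guidance_scale are hypothetical
# placeholders; a real application would substitute a trained model.
import numpy as np

T = 1000                                    # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def noise_predictor(x, t, cond):
    """Stand-in for a trained network predicting the noise added at step t.
    cond=None denotes the unconditional branch used by classifier-free guidance."""
    return np.zeros_like(x)                 # placeholder: a real model goes here

def sample(shape, cond, guidance_scale=3.0, rng=np.random.default_rng(0)):
    x = rng.standard_normal(shape)          # start from pure Gaussian noise x_T
    for t in reversed(range(T)):
        # Classifier-free guidance: blend conditional and unconditional predictions.
        eps_cond = noise_predictor(x, t, cond)
        eps_uncond = noise_predictor(x, t, None)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

        # DDPM posterior mean for x_{t-1} given x_t and the predicted noise.
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)
        else:
            x = mean
    return x

With a trained predictor plugged in, a call such as sample((1, 3, 64, 64), cond=some_embedding) would draw a guided sample; increasing guidance_scale trades sample diversity for stronger adherence to the conditioning signal.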
Papers
Diffusion Posterior Sampling is Computationally Intractable
Shivam Gupta, Ajil Jalal, Aditya Parulekar, Eric Price, Zhiyang Xun
SingVisio: Visual Analytics of Diffusion Model for Singing Voice Conversion
Liumeng Xue, Chaoren Wang, Mingxuan Wang, Xueyao Zhang, Jun Han, Zhizheng Wu
DiffusionNOCS: Managing Symmetry and Uncertainty in Sim2Real Multi-Modal Category-level Pose Estimation
Takuya Ikeda, Sergey Zakharov, Tianyi Ko, Muhammad Zubair Irshad, Robert Lee, Katherine Liu, Rares Ambrus, Koichi Nishiwaki
Hierarchical Bayes Approach to Personalized Federated Unsupervised Learning
Kaan Ozkara, Bruce Huang, Ruida Zhou, Suhas Diggavi
FiT: Flexible Vision Transformer for Diffusion Model
Zeyu Lu, Zidong Wang, Di Huang, Chengyue Wu, Xihui Liu, Wanli Ouyang, Lei Bai
Text Diffusion with Reinforced Conditioning
Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation
Yuan Yuan, Chenyang Shao, Jingtao Ding, Depeng Jin, Yong Li
UnlearnCanvas: Stylized Image Dataset for Enhanced Machine Unlearning Evaluation in Diffusion Models
Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Gaoyuan Zhang, Gaowen Liu, Ramana Rao Kompella, Xiaoming Liu, Sijia Liu
Statistical Test for Generated Hypotheses by Diffusion Models
Teruyuki Katsuoka, Tomohiro Shiraishi, Daiki Miwa, Vo Nguyen Le Duy, Ichiro Takeuchi
Speaking in Wavelet Domain: A Simple and Efficient Approach to Speed up Speech Diffusion Model
Xiangyu Zhang, Daijiao Liu, Hexin Liu, Qiquan Zhang, Hanyu Meng, Leibny Paola Garcia, Eng Siong Chng, Lina Yao
Explaining generative diffusion models via visual analysis for interpretable decision-making process
Ji-Hoon Park, Yeong-Joon Ju, Seong-Whan Lee
Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation
Huizhuo Yuan, Zixiang Chen, Kaixuan Ji, Quanquan Gu
Classification Diffusion Models: Revitalizing Density Ratio Estimation
Shahar Yadin, Noam Elata, Tomer Michaeli
Diffusion Models Meet Contextual Bandits with Large Action Spaces
Imad Aouali
Accelerating Parallel Sampling of Diffusion Models
Zhiwei Tang, Jiasheng Tang, Hao Luo, Fan Wang, Tsung-Hui Chang
Diffusion Models for Audio Restoration
Jean-Marie Lemercier, Julius Richter, Simon Welker, Eloi Moliner, Vesa Välimäki, Timo Gerkmann
Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement
Tao Yang, Cuiling Lan, Yan Lu, Nanning Zheng