Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques such as stochastic Runge-Kutta methods and dynamic model architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are significantly impacting fields including medical imaging, robotics, and artistic creation by enabling novel applications in image generation, inverse-problem solving, and multi-modal data synthesis.
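To make the reverse-diffusion and classifier-free-guidance ideas above concrete, here is a minimal, illustrative Python sketch of a DDPM-style sampler. It assumes a hypothetical noise-prediction network `model(x, t, cond)` that returns the unconditional prediction when `cond` is None; the linear beta schedule and guidance scale are common defaults chosen for illustration, not taken from any of the papers listed below.

```python
import torch

def sample_cfg(model, cond, steps=1000, guidance_scale=7.5,
               shape=(1, 3, 64, 64), device="cpu"):
    """Illustrative DDPM-style reverse sampler with classifier-free guidance.

    `model(x, t, cond)` is an assumed noise-prediction network; passing
    cond=None is assumed to give the unconditional prediction.
    """
    # Linear beta (noise) schedule; improved schedules are an active research topic.
    betas = torch.linspace(1e-4, 0.02, steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)  # start from pure Gaussian noise
    for t in reversed(range(steps)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)

        # Classifier-free guidance: blend conditional and unconditional predictions.
        eps_cond = model(x, t_batch, cond)
        eps_uncond = model(x, t_batch, None)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

        # DDPM posterior mean for the reverse step x_t -> x_{t-1}.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x
```

Setting guidance_scale to 0 recovers unconditional sampling, while larger values trade sample diversity for stronger adherence to the conditioning signal.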
Papers
BiRoDiff: Diffusion policies for bipedal robot locomotion on unseen terrains
GVS Mothish, Manan Tayal, Shishir Kolathaya
Enhancing Label-efficient Medical Image Segmentation with Text-guided Diffusion Models
Chun-Mei Feng
An Improved Method for Personalizing Diffusion Models
Yan Zeng, Masanori Suganuma, Takayuki Okatani
Replication in Visual Diffusion Models: A Survey and Outlook
Wenhao Wang, Yifan Sun, Zongxin Yang, Zhengdong Hu, Zhentao Tan, Yi Yang
Advances in Diffusion Models for Image Data Augmentation: A Review of Methods, Models, Evaluation Metrics and Future Research Directions
Panagiotis Alimisis, Ioannis Mademlis, Panagiotis Radoglou-Grammatikis, Panagiotis Sarigiannidis, Georgios Th. Papadopoulos
Model Collapse in the Self-Consuming Chain of Diffusion Finetuning: A Novel Perspective from Quantitative Trait Modeling
Youngseok Yoon, Dainong Hu, Iain Weissburg, Yao Qin, Haewon Jeong
Timestep-Aware Correction for Quantized Diffusion Models
Yuzhe Yao, Feng Tian, Jun Chen, Haonan Lin, Guang Dai, Yong Liu, Jingdong Wang
DisCo-Diff: Enhancing Continuous Diffusion Models with Discrete Latents
Yilun Xu, Gabriele Corso, Tommi Jaakkola, Arash Vahdat, Karsten Kreis
Improved Noise Schedule for Diffusion Training
Tiankai Hang, Shuyang Gu, Xin Geng, Baining Guo
Frequency-Controlled Diffusion Model for Versatile Text-Guided Image-to-Image Translation
Xiang Gao, Zhengbo Xu, Junhan Zhao, Jiaying Liu
Single Image Rolling Shutter Removal with Diffusion Models
Zhanglei Yang, Haipeng Li, Mingbo Hong, Bing Zeng, Shuaicheng Liu
Robot Shape and Location Retention in Video Generation Using Diffusion Models
Peng Wang, Zhihao Guo, Abdul Latheef Sait, Minh Huy Pham
Highly Accelerated MRI via Implicit Neural Representation Guided Posterior Sampling of Diffusion Models
Jiayue Chu, Chenhe Du, Xiyue Lin, Yuyao Zhang, Hongjiang Wei
No Training, No Problem: Rethinking Classifier-Free Guidance for Diffusion Models
Seyedmorteza Sadat, Manuel Kansy, Otmar Hilliges, Romann M. Weber
Boosting Consistency in Story Visualization with Rich-Contextual Conditional Diffusion Models
Fei Shen, Hu Ye, Sibo Liu, Jun Zhang, Cong Wang, Xiao Han, Wei Yang
Diffusion Models for Tabular Data Imputation and Synthetic Data Generation
Mario Villaizán-Vallelado, Matteo Salvatori, Carlos Segura, Ioannis Arapakis
GlyphDraw2: Automatic Generation of Complex Glyph Posters with Diffusion Models and Large Language Models
Jian Ma, Yonglin Deng, Chen Chen, Haonan Lu, Zhenyu Yang
SwiftDiffusion: Efficient Diffusion Model Serving with Add-on Modules
Suyi Li, Lingyun Yang, Xiaoxiao Jiang, Hanfeng Lu, Dakai An, Zhipeng Di, Weiyi Lu, Jiawei Chen, Kan Liu, Yinghao Yu, Tao Lan, Guodong Yang, Lin Qu, Liping Zhang, Wei Wang