Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, yielding high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic model architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling new applications in image generation, inverse problem solving, and multi-modal data synthesis.
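As a rough illustration of the reverse-denoising idea described above, the sketch below shows a toy DDPM-style forward noising function and ancestral reverse step in NumPy. The step count, noise schedule, and the placeholder `toy_denoiser` (standing in for a trained noise-prediction network) are illustrative assumptions, not the method of any paper listed here.

```python
# Minimal DDPM-style sketch: forward noising q(x_t | x_0) and one reverse
# (denoising) step, with a placeholder in place of a trained network.
import numpy as np

T = 1000                                    # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)             # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, rng):
    """Forward process: sample x_t ~ q(x_t | x_0) in closed form."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

def toy_denoiser(x_t, t):
    """Placeholder for a trained network predicting the added noise eps."""
    return np.zeros_like(x_t)               # a real model would be learned

def p_sample(x_t, t, rng):
    """One reverse step x_t -> x_{t-1} using the predicted noise."""
    eps_hat = toy_denoiser(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean                          # no extra noise at the final step
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((8,))                # start from pure Gaussian noise
for t in reversed(range(T)):                 # run the full reverse chain
    x = p_sample(x, t, rng)
```

Running the full reverse chain from Gaussian noise is what produces a sample; the efficiency work cited above (distillation, quantization, single-step variants) largely aims to shrink or bypass this long loop.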
Papers
Thunder: Unified Regression-Diffusion Speech Enhancement with a Single Reverse Step using Brownian Bridge
Thanapat Trachu, Chawan Piansaddhayanon, Ekapol Chuangsuwanich
ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models
Meng-Li Shih, Wei-Chiu Ma, Lorenzo Boyice, Aleksander Holynski, Forrester Cole, Brian L. Curless, Janne Kontkanen
Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training
Ke Niu, Haiyang Yu, Xuelin Qian, Teng Fu, Bin Li, Xiangyang Xue
Efficient Shapley Values for Attributing Global Properties of Diffusion Models to Data Groups
Chris Lin, Mingyu Lu, Chanwoo Kim, Su-In Lee
Improving Antibody Design with Force-Guided Sampling in Diffusion Models
Paulina Kulytė, Francisco Vargas, Simon Valentin Mathis, Yu Guang Wang, José Miguel Hernández-Lobato, Pietro Liò
Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis
Zanlin Ni, Yulin Wang, Renping Zhou, Jiayi Guo, Jinyi Hu, Zhiyuan Liu, Shiji Song, Yuan Yao, Gao Huang
3D MRI Synthesis with Slice-Based Latent Diffusion Models: Improving Tumor Segmentation Tasks in Data-Scarce Regimes
Aghiles Kebaili, Jérôme Lapuyade-Lahorgue, Pierre Vera, Su Ruan
Efficient Differentially Private Fine-Tuning of Diffusion Models
Jing Liu, Andrew Lowy, Toshiaki Koike-Akino, Kieran Parsons, Ye Wang
Learning Divergence Fields for Shift-Robust Graph Representations
Qitian Wu, Fan Nie, Chenxiao Yang, Junchi Yan
Lifelong Learning of Video Diffusion Models From a Single Video Stream
Jason Yoo, Yingchen He, Saeid Naderiparizi, Dylan Green, Gido M. van de Ven, Geoff Pleiss, Frank Wood
MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models
Sanjoy Chowdhury, Sayan Nag, K J Joseph, Balaji Vasan Srinivasan, Dinesh Manocha
Diffusion Models in De Novo Drug Design
Amira Alakhdar, Barnabas Poczos, Newell Washburn
BitsFusion: 1.99 bits Weight Quantization of Diffusion Model
Yang Sui, Yanyu Li, Anil Kag, Yerlan Idelbayev, Junli Cao, Ju Hu, Dhritiman Sagar, Bo Yuan, Sergey Tulyakov, Jian Ren
Simplified and Generalized Masked Diffusion for Discrete Data
Jiaxin Shi, Kehang Han, Zhe Wang, Arnaud Doucet, Michalis K. Titsias
Diffusion-based image inpainting with internal learning
Nicolas Cherel, Andrés Almansa, Yann Gousseau, Alasdair Newson
Single Exposure Quantitative Phase Imaging with a Conventional Microscope using Diffusion Models
Gabriel della Maggiora, Luis Alberto Croquevielle, Harry Horsley, Thomas Heinis, Artur Yakimovich
Multistep Distillation of Diffusion Models via Moment Matching
Tim Salimans, Thomas Mensink, Jonathan Heek, Emiel Hoogeboom
Enhancing Weather Predictions: Super-Resolution via Deep Diffusion Models
Jan Martinů, Petr Šimánek
Bayesian Power Steering: An Effective Approach for Domain Adaptation of Diffusion Models
Ding Huang, Ting Li, Jian Huang