Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
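To make the reverse-process idea concrete, here is a minimal NumPy sketch of DDPM ancestral sampling combined with classifier-free guidance. This is a generic illustration, not the method of any paper listed below; the conditional and unconditional noise predictions are hypothetical placeholders standing in for a trained denoising network.

```python
import numpy as np

def cfg_noise_estimate(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    toward the conditional noise prediction."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def ddpm_reverse_step(x_t, eps_hat, t, betas, rng):
    """One ancestral sampling step x_t -> x_{t-1} (standard DDPM update)."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    a_t, ab_t = alphas[t], alpha_bars[t]
    # Posterior mean given the predicted noise eps_hat.
    x_prev = (x_t - (1.0 - a_t) / np.sqrt(1.0 - ab_t) * eps_hat) / np.sqrt(a_t)
    if t > 0:
        # Add fresh Gaussian noise except at the final step.
        x_prev = x_prev + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return x_prev

# Toy demo on a 4-dimensional "sample"; the eps_* functions below are
# placeholders, not a real model.
rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)
x = rng.standard_normal(4)            # stand-in for pure noise x_T
for t in reversed(range(T)):
    eps_u = 0.10 * x                  # hypothetical unconditional prediction
    eps_c = 0.12 * x                  # hypothetical conditional prediction
    eps = cfg_noise_estimate(eps_u, eps_c, guidance_scale=7.5)
    x = ddpm_reverse_step(x, eps, t, betas, rng)
```

In practice the two noise predictions come from the same network evaluated with and without the conditioning signal, and `guidance_scale` trades sample diversity for fidelity to the condition.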
Papers
Cometh: A continuous-time discrete-state graph diffusion model
Antoine Siraudin, Fragkiskos D. Malliaros, Christopher Morris
Margin-aware Preference Optimization for Aligning Diffusion Models without Reference
Jiwoo Hong, Sayak Paul, Noah Lee, Kashif Rasul, James Thorne, Jongheon Jeong
Diffusion-RPO: Aligning Diffusion Models through Relative Preference Optimization
Yi Gu, Zhendong Wang, Yueqin Yin, Yujia Xie, Mingyuan Zhou
Tuning-Free Visual Customization via View Iterative Self-Attention Control
Xiaojie Li, Chenghao Gu, Shuzhao Xie, Yunpeng Bai, Weixiang Zhang, Zhi Wang
Thunder: Unified Regression-Diffusion Speech Enhancement with a Single Reverse Step using Brownian Bridge
Thanapat Trachu, Chawan Piansaddhayanon, Ekapol Chuangsuwanich
ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models
Meng-Li Shih, Wei-Chiu Ma, Lorenzo Boyice, Aleksander Holynski, Forrester Cole, Brian L. Curless, Janne Kontkanen
Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training
Ke Niu, Haiyang Yu, Xuelin Qian, Teng Fu, Bin Li, Xiangyang Xue
Efficient Shapley Values for Attributing Global Properties of Diffusion Models to Data Groups
Chris Lin, Mingyu Lu, Chanwoo Kim, Su-In Lee
Improving Antibody Design with Force-Guided Sampling in Diffusion Models
Paulina Kulytė, Francisco Vargas, Simon Valentin Mathis, Yu Guang Wang, José Miguel Hernández-Lobato, Pietro Liò
Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis
Zanlin Ni, Yulin Wang, Renping Zhou, Jiayi Guo, Jinyi Hu, Zhiyuan Liu, Shiji Song, Yuan Yao, Gao Huang
3D MRI Synthesis with Slice-Based Latent Diffusion Models: Improving Tumor Segmentation Tasks in Data-Scarce Regimes
Aghiles Kebaili, Jérôme Lapuyade-Lahorgue, Pierre Vera, Su Ruan
Efficient Differentially Private Fine-Tuning of Diffusion Models
Jing Liu, Andrew Lowy, Toshiaki Koike-Akino, Kieran Parsons, Ye Wang
Learning Divergence Fields for Shift-Robust Graph Representations
Qitian Wu, Fan Nie, Chenxiao Yang, Junchi Yan
Online Continual Learning of Video Diffusion Models From a Single Video Stream
Jason Yoo, Dylan Green, Geoff Pleiss, Frank Wood
MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models
Sanjoy Chowdhury, Sayan Nag, K J Joseph, Balaji Vasan Srinivasan, Dinesh Manocha
Diffusion Models in $\textit{De Novo}$ Drug Design
Amira Alakhdar, Barnabas Poczos, Newell Washburn
BitsFusion: 1.99 bits Weight Quantization of Diffusion Model
Yang Sui, Yanyu Li, Anil Kag, Yerlan Idelbayev, Junli Cao, Ju Hu, Dhritiman Sagar, Bo Yuan, Sergey Tulyakov, Jian Ren
Simplified and Generalized Masked Diffusion for Discrete Data
Jiaxin Shi, Kehang Han, Zhe Wang, Arnaud Doucet, Michalis K. Titsias
Diffusion-based image inpainting with internal learning
Nicolas Cherel, Andrés Almansa, Yann Gousseau, Alasdair Newson