Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex data distributions. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic model architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling novel applications in image generation, inverse-problem solving, and multi-modal data synthesis.
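To make the "reversing a noise-diffusion process" idea concrete, here is a minimal sketch of DDPM-style ancestral sampling on a toy 1D problem. It assumes the data distribution is a standard Gaussian, for which the optimal noise predictor has a closed form (so no neural network is needed); the schedule values and function names are illustrative, not from any of the papers listed below.

```python
import numpy as np

def make_schedule(T=100, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule, as in the original DDPM setup."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def eps_hat(x_t, t, alpha_bars):
    # Toy stand-in for the learned noise-prediction network.
    # When x_0 ~ N(0, 1), the marginal of x_t is also N(0, 1), so the
    # score is -x_t and the optimal noise prediction is
    # sqrt(1 - alpha_bar_t) * x_t.
    return np.sqrt(1.0 - alpha_bars[t]) * x_t

def sample(n, T=100, seed=0):
    """Run the reverse (denoising) chain from pure noise x_T down to x_0."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(n)  # x_T: pure Gaussian noise
    for t in range(T - 1, -1, -1):
        z = rng.standard_normal(n) if t > 0 else 0.0  # no noise at the last step
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_hat(x, t, alpha_bars)) / np.sqrt(alphas[t])
        x = x + np.sqrt(betas[t]) * z
    return x

samples = sample(50_000)
print(samples.mean(), samples.std())
```

Because the noise predictor is exact here, the reverse chain recovers the data distribution: the sample mean is near 0 and the standard deviation near 1. In practice `eps_hat` is a trained network, and extensions such as classifier-free guidance combine a conditional and an unconditional prediction at each step.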
Papers
GRIN: Zero-Shot Metric Depth with Pixel-Level Diffusion
Vitor Guizilini, Pavel Tokmakov, Achal Dave, Rares Ambrus
E-Commerce Inpainting with Mask Guidance in Controlnet for Reducing Overcompletion
Guandong Li
HJ-sampler: A Bayesian sampler for inverse problems of a stochastic process by leveraging Hamilton-Jacobi PDEs and score-based generative models
Tingwei Meng, Zongren Zou, Jérôme Darbon, George Em Karniadakis
DreamMover: Leveraging the Prior of Diffusion Models for Image Interpolation with Large Motion
Liao Shen, Tianqi Liu, Huiqiang Sun, Xinyi Ye, Baopu Li, Jianming Zhang, Zhiguo Cao
Bias Begets Bias: The Impact of Biased Embeddings on Diffusion Models
Sahil Kuchlous, Marvin Li, Jeffrey G. Wang
Real-world Adversarial Defense against Patch Attacks based on Diffusion Model
Xingxing Wei, Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su
Towards Diverse and Efficient Audio Captioning via Diffusion Models
Manjie Xu, Chenxing Li, Xinyi Tu, Yong Ren, Ruibo Fu, Wei Liang, Dong Yu
Enhancing EEG Signal Generation through a Hybrid Approach Integrating Reinforcement Learning and Diffusion Models
Yang An, Yuhao Tong, Weikai Wang, Steven W. Su
Adaptive Multi-Modal Control of Digital Human Hand Synthesis Using a Region-Aware Cycle Loss
Qifan Fu, Xiaohang Yang, Muhammad Asad, Changjae Oh, Shanxin Yuan, Gregory Slabaugh
Neural Message Passing Induced by Energy-Constrained Diffusion
Qitian Wu, David Wipf, Junchi Yan
Gaussian is All You Need: A Unified Framework for Solving Inverse Problems via Diffusion Posterior Sampling
Nebiyou Yismaw, Ulugbek S. Kamilov, M. Salman Asif
DX2CT: Diffusion Model for 3D CT Reconstruction from Bi or Mono-planar 2D X-ray(s)
Yun Su Jeong, Hye Bin Yoo, Il Yong Chun
Think Twice Before You Act: Improving Inverse Problem Solving With MCMC
Yaxuan Zhu, Zehao Dou, Haoxin Zheng, Yasi Zhang, Ying Nian Wu, Ruiqi Gao
Cross-conditioned Diffusion Model for Medical Image to Image Translation
Zhaohu Xing, Sicheng Yang, Sixiang Chen, Tian Ye, Yijun Yang, Jing Qin, Lei Zhu
Risks When Sharing LoRA Fine-Tuned Diffusion Model Weights
Dixi Yao
Integrating Neural Operators with Diffusion Models Improves Spectral Representation in Turbulence Modeling
Vivek Oommen, Aniruddha Bora, Zhen Zhang, George Em Karniadakis
Efficient and Unbiased Sampling of Boltzmann Distributions via Consistency Models
Fengzhe Zhang, Jiajun He, Laurence I. Midgley, Javier Antorán, José Miguel Hernández-Lobato
Exploring User-level Gradient Inversion with a Diffusion Prior
Zhuohang Li, Andrew Lowy, Jing Liu, Toshiaki Koike-Akino, Bradley Malin, Kieran Parsons, Ye Wang