Diffusion Explainer
Diffusion models are generative models that create new data samples, primarily images and other high-dimensional data, by learning to reverse a gradual noise-addition process. Current research focuses on improving efficiency (e.g., one-step diffusion), enhancing controllability (e.g., through classifier-free guidance and conditioning on modalities such as text and 3D priors), and addressing challenges such as data replication and mode collapse. These advances provide powerful tools for data generation, manipulation, and analysis across diverse fields, from image super-resolution and medical imaging to robotics, recommendation systems, and scientific simulation.
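To make the summary above concrete, here is a minimal, illustrative sketch of the reverse (denoising) process and the classifier-free guidance combination it mentions. It uses a linear beta schedule and a placeholder noise predictor; the function names (q_sample, predict_noise, p_step), the schedule, and the guidance scale are all assumptions for illustration, not taken from any of the papers listed below.

```python
import numpy as np

# Minimal DDPM-style sketch (illustrative only): linear beta schedule,
# a placeholder noise predictor, forward noising, one reverse step, and
# the classifier-free guidance combination of conditional and
# unconditional noise estimates.

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # noise schedule (assumed linear)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative product \bar{alpha}_t

def q_sample(x0, t, eps):
    """Forward process: noise x0 to obtain x_t."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def predict_noise(x_t, t, cond=None):
    """Placeholder for a learned noise predictor eps_theta(x_t, t, cond)."""
    return np.zeros_like(x_t)           # stand-in; a real model is trained

def p_step(x_t, t, guidance_scale=5.0, cond=None):
    """One reverse (denoising) step with classifier-free guidance."""
    eps_uncond = predict_noise(x_t, t, cond=None)
    eps_cond = predict_noise(x_t, t, cond=cond)
    # Classifier-free guidance: extrapolate toward the conditional estimate.
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
    # DDPM posterior mean; add fresh noise except at the final step.
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        mean += np.sqrt(betas[t]) * np.random.randn(*x_t.shape)
    return mean

# Toy usage: noise a sample, then take one denoising step.
x0 = np.random.randn(8, 8)
t = 500
x_t = q_sample(x0, t, np.random.randn(*x0.shape))
x_prev = p_step(x_t, t, guidance_scale=5.0, cond="a photo of a cat")
```

In practice the placeholder predictor is replaced by a trained network, and the loop over t from T-1 down to 0 repeats p_step to produce a sample; the guidance scale trades off sample diversity against adherence to the condition.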
Papers
Frequency-Aware Guidance for Blind Image Restoration via Diffusion Models
Jun Xiao, Zihang Lyu, Hao Xie, Cong Zhang, Yakun Ju, Changjian Shui, Kin-Man Lam
Constant Rate Schedule: Constant-Rate Distributional Change for Efficient Training and Sampling in Diffusion Models
Shuntaro Okada, Kenji Doi, Ryota Yoshihashi, Hirokatsu Kataoka, Tomohiro Tanaka
SmoothCache: A Universal Inference Acceleration Technique for Diffusion Transformers
Joseph Liu, Joshua Geddes, Ziyu Guo, Haomiao Jiang, Mahesh Kumar Nandwana
Towards Multi-View Consistent Style Transfer with One-Step Diffusion via Vision Conditioning
Yushen Zuo, Jun Xiao, Kin-Chung Chan, Rongkang Dong, Cuixin Yang, Zongqi He, Hao Xie, Kin-Man Lam
A Polarization Image Dehazing Method Based on the Principle of Physical Diffusion
Zhenjun Zhang, Lijun Tang, Hongjin Wang, Lilian Zhang, Yunze He, Yaonan Wang
Parameter Inference via Differentiable Diffusion Bridge Importance Sampling
Nicklas Boserup, Gefan Yang, Michael Lind Severinsen, Christy Anna Hipsley, Stefan Sommer
V2X-R: Cooperative LiDAR-4D Radar Fusion for 3D Object Detection with Denoising Diffusion
Xun Huang, Jinlong Wang, Qiming Xia, Siheng Chen, Bisheng Yang, Cheng Wang, Chenglu Wen
Diverse capability and scaling of diffusion and auto-regressive models when learning abstract rules
Binxu Wang, Jiaqi Shang, Haim Sompolinsky
Leveraging Previous Steps: A Training-free Fast Solver for Flow Diffusion
Kaiyu Song, Hanjiang Lai
Unraveling the Connections between Flow Matching and Diffusion Probabilistic Models in Training-free Conditional Generation
Kaiyu Song, Hanjiang Lai
Tracing the Roots: Leveraging Temporal Dynamics in Diffusion Trajectories for Origin Attribution
Andreas Floros, Seyed-Mohsen Moosavi-Dezfooli, Pier Luigi Dragotti
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
Muyang Li, Yujun Lin, Zhekai Zhang, Tianle Cai, Xiuyu Li, Junxian Guo, Enze Xie, Chenlin Meng, Jun-Yan Zhu, Song Han
Diff-2-in-1: Bridging Generation and Dense Perception with Diffusion Models
Shuhong Zheng, Zhipeng Bao, Ruoyu Zhao, Martial Hebert, Yu-Xiong Wang
Multivariate Data Augmentation for Predictive Maintenance using Diffusion
Andrew Thompson, Alexander Sommers, Alicia Russell-Gilbert, Logan Cummins, Sudip Mittal, Shahram Rahimi, Maria Seale, Joseph Jaboure, Thomas Arnold, Joshua Church
Sub-DM: Subspace Diffusion Model with Orthogonal Decomposition for MRI Reconstruction
Yu Guan, Qinrong Cai, Wei Li, Qiuyun Fan, Dong Liang, Qiegen Liu