Diffusion Models
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
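The two ideas named above, the forward noising process that the model learns to reverse and classifier-free guidance, can be sketched in a few lines. This is a minimal illustration assuming a standard DDPM-style linear beta schedule; the names `q_sample` and `cfg_combine` are hypothetical, not from any particular library, and the noise predictions are random stand-ins for a trained network's outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal retained at step t

def q_sample(x0, t, noise):
    """Closed-form sample x_t ~ q(x_t | x_0): scaled data plus scaled noise."""
    a = np.sqrt(alphas_bar[t])
    s = np.sqrt(1.0 - alphas_bar[t])
    return a * x0 + s * noise

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional noise
    prediction toward the conditional one by `guidance_scale`."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

x0 = rng.standard_normal(4)              # toy "data" vector
noise = rng.standard_normal(4)
x_t = q_sample(x0, t=500, noise=noise)   # partially noised sample

# At guidance_scale = 1 the combination reduces to the conditional prediction.
eps_u = rng.standard_normal(4)
eps_c = rng.standard_normal(4)
assert np.allclose(cfg_combine(eps_u, eps_c, 1.0), eps_c)
```

At sampling time, a trained network would supply `eps_uncond` and `eps_cond` at every reverse step; scales above 1 trade sample diversity for stronger adherence to the conditioning signal.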
Papers
ASD-Diffusion: Anomalous Sound Detection with Diffusion Models
Fengrun Zhang, Xiang Xie, Kai Guo
Aided design of bridge aesthetics based on Stable Diffusion fine-tuning
Leye Zhang, Xiangxiang Tian, Chengli Zhang, Hongjun Zhang
TFG: Unified Training-Free Guidance for Diffusion Models
Haotian Ye, Haowei Lin, Jiaqi Han, Minkai Xu, Sheng Liu, Yitao Liang, Jianzhu Ma, James Zou, Stefano Ermon
ImPoster: Text and Frequency Guidance for Subject Driven Action Personalization using Diffusion Models
Divya Kothandaraman, Kuldeep Kulkarni, Sumit Shekhar, Balaji Vasan Srinivasan, Dinesh Manocha
Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection
Alireza Ganjdanesh, Yan Kang, Yuchen Liu, Richard Zhang, Zhe Lin, Heng Huang
Learning Diverse Robot Striking Motions with Diffusion Models and Kinematically Constrained Gradient Guidance
Kin Man Lee, Sean Ye, Qingyu Xiao, Zixuan Wu, Zulfiqar Zaidi, David B. D'Ambrosio, Pannag R. Sanketi, Matthew Gombolay
Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections
Ankit Dhiman, Manan Shah, Rishubh Parihar, Yash Bhalgat, Lokesh R Boregowda, R Venkatesh Babu
What does guidance do? A fine-grained analysis in a simple setting
Muthu Chidambaram, Khashayar Gatmiry, Sitan Chen, Holden Lee, Jianfeng Lu
LVCD: Reference-based Lineart Video Colorization with Diffusion Models
Zhitong Huang, Mohan Zhang, Jing Liao
Bayesian-Optimized One-Step Diffusion Model with Knowledge Distillation for Real-Time 3D Human Motion Prediction
Sibo Tian, Minghui Zheng, Xiao Liang
Denoising diffusion models for high-resolution microscopy image restoration
Pamela Osuna-Vargas, Maren H. Wehrheim, Lucas Zinz, Johanna Rahm, Ashwin Balakrishnan, Alexandra Kaminer, Mike Heilemann, Matthias Kaschube
Generation of Complex 3D Human Motion by Temporal and Spatial Composition of Diffusion Models
Lorenzo Mandelli, Stefano Berretti
DPI-TTS: Directional Patch Interaction for Fast-Converging and Style Temporal Modeling in Text-to-Speech
Xin Qi, Ruibo Fu, Zhengqi Wen, Tao Wang, Chunyu Qiang, Jianhua Tao, Chenxing Li, Yi Lu, Shuchen Shi, Zhiyong Wang, Xiaopeng Wang, Yuankun Xie, Yukun Liu, Xuefei Liu, Guanjun Li
InverseMeetInsert: Robust Real Image Editing via Geometric Accumulation Inversion in Guided Diffusion Models
Yan Zheng, Lemeng Wu
DiffESM: Conditional Emulation of Temperature and Precipitation in Earth System Models with 3D Diffusion Models
Seth Bassetti, Brian Hutchinson, Claudia Tebaldi, Ben Kravitz
Ultrasound Image Enhancement with the Variance of Diffusion Models
Yuxin Zhang, Clément Huneau, Jérôme Idier, Diana Mateus
DroneDiffusion: Robust Quadrotor Dynamics Learning with Diffusion Models
Avirup Das, Rishabh Dev Yadav, Sihao Sun, Mingfei Sun, Samuel Kaski, Wei Pan