Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting fields including medical imaging, robotics, and artistic creation, enabling novel applications in image generation, inverse-problem solving, and multi-modal data synthesis.
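The core idea — corrupting data with noise in closed form, then recovering it with a noise predictor — can be sketched in a few lines. This is an illustrative toy, not any paper's implementation: the schedule values are arbitrary, and an "oracle" noise predictor stands in for the learned network eps_theta that real diffusion models train.

```python
import numpy as np

# Minimal DDPM-style sketch. In a real model, a neural network eps_theta
# is trained to predict the noise; here an oracle stands in for it.
T = 100
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (illustrative)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # cumulative signal-retention factors

rng = np.random.default_rng(0)
x0 = rng.normal(size=(1000,))            # toy "data" samples

def q_sample(x0, t, eps):
    """Forward process: noise x0 to step t in closed form,
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1 - alpha_bars[t]) * eps

eps = rng.normal(size=x0.shape)          # the true noise (oracle knows it)
t = T - 1
xt = q_sample(x0, t, eps)                # heavily noised samples

# Denoising step using the (oracle) noise prediction — the quantity a
# trained eps_theta approximates. Inverting the forward equation recovers x0:
x0_hat = (xt - np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alpha_bars[t])
print(np.allclose(x0_hat, x0))           # → True
```

With a learned, imperfect eps_theta, sampling instead proceeds through many small reverse steps from pure noise; the one-shot inversion above just makes the algebra of the forward/reverse pair explicit.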
Papers
Generating Realistic X-ray Scattering Images Using Stable Diffusion and Human-in-the-loop Annotations
Zhuowen Zhao, Xiaoya Chong, Tanny Chavez, Alexander Hexemer
Variance reduction of diffusion model's gradients with Taylor approximation-based control variate
Paul Jeha, Will Grathwohl, Michael Riis Andersen, Carl Henrik Ek, Jes Frellsen
ZipGait: Bridging Skeleton and Silhouette with Diffusion Model for Advancing Gait Recognition
Fanxu Min, Qing Cai, Shaoxiang Guo, Yang Yu, Hao Fan, Junyu Dong
Pixel Is Not A Barrier: An Effective Evasion Attack for Pixel-Domain Diffusion Models
Chun-Yen Shih, Li-Xuan Peng, Jia-Wei Liao, Ernie Chu, Cheng-Fu Chou, Jun-Cheng Chen
HumanCoser: Layered 3D Human Generation via Semantic-Aware Diffusion Model
Yi Wang, Jian Ma, Ruizhi Shao, Qiao Feng, Yu-kun Lai, Kun Li
MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning
Haoning Wu, Shaocheng Shen, Qiang Hu, Xiaoyun Zhang, Ya Zhang, Yanfeng Wang
Novel Change Detection Framework in Remote Sensing Imagery Using Diffusion Models and Structural Similarity Index (SSIM)
Andrew Kiruluta, Eric Lundy, Andreas Lemos
Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models
Cong Wan, Yuhang He, Xiang Song, Yihong Gong
Beyond Local Views: Global State Inference with Diffusion Models for Cooperative Multi-Agent Reinforcement Learning
Zhiwei Xu, Hangyu Mao, Nianmin Zhang, Xin Xin, Pengjie Ren, Dapeng Li, Bin Zhang, Guoliang Fan, Zhumin Chen, Changwei Wang, Jiangjin Yin
FD2Talk: Towards Generalized Talking Head Generation with Facial Decoupled Diffusion Model
Ziyu Yao, Xuxin Cheng, Zhiqi Huang
PFDiff: Training-free Acceleration of Diffusion Models through the Gradient Guidance of Past and Future
Guangyi Wang, Yuren Cai, Lijiang Li, Wei Peng, Songzhi Su
Generative Dataset Distillation Based on Diffusion Model
Duo Su, Junjie Hou, Guang Li, Ren Togo, Rui Song, Takahiro Ogawa, Miki Haseyama
Diffusion Model for Planning: A Systematic Literature Review
Toshihide Ubukata, Jialong Li, Kenji Tei
Linear combinations of Gaussian latents in generative models: interpolation and beyond
Erik Bodin, Carl Henrik Ek, Henry Moss
Inverse design with conditional cascaded diffusion models
Milad Habibi, Mark Fuge