Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting fields including medical imaging, robotics, and artistic creation, enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
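To make the two core ideas above concrete, here is a minimal sketch of a single reverse (denoising) step with classifier-free guidance. It is illustrative only, not the method of any paper listed below: the noise predictor `eps_model`, the schedule tensors `alpha`/`alpha_bar`, and the guidance scale `w` are assumed names for this example.

```python
import torch

def cfg_reverse_step(eps_model, x_t, t, cond, alpha, alpha_bar, w=3.0):
    """One DDPM ancestral step x_t -> x_{t-1} with classifier-free guidance.

    Assumes eps_model(x, t, cond) predicts the noise added at step t, and
    that passing cond=None yields the unconditional prediction.
    """
    # Classifier-free guidance: extrapolate from the unconditional toward
    # the conditional prediction, eps_hat = (1 + w) * eps_c - w * eps_u.
    eps_cond = eps_model(x_t, t, cond)
    eps_uncond = eps_model(x_t, t, None)
    eps_hat = (1 + w) * eps_cond - w * eps_uncond

    # Standard DDPM posterior mean (using the sigma_t^2 = beta_t variant).
    beta = 1.0 - alpha[t]
    mean = (x_t - beta / torch.sqrt(1.0 - alpha_bar[t]) * eps_hat) / torch.sqrt(alpha[t])

    # No noise is added at the final step (t == 0).
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(beta) * noise


# Toy usage with a stand-in noise predictor (a real model is a trained network):
T = 1000
beta_schedule = torch.linspace(1e-4, 0.02, T)
alpha = 1.0 - beta_schedule
alpha_bar = torch.cumprod(alpha, dim=0)
eps_model = lambda x, t, c: torch.zeros_like(x)  # stand-in for a trained eps-predictor
x = torch.randn(1, 3, 32, 32)  # start from pure Gaussian noise
for t in reversed(range(T)):
    x = cfg_reverse_step(eps_model, x, t, cond=None, alpha=alpha, alpha_bar=alpha_bar)
```

Larger `w` pushes samples toward the conditioning signal at some cost in diversity; `w = 0` recovers plain conditional sampling.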
Papers
Discovery of 2D Materials via Symmetry-Constrained Diffusion Model
Shihang Xu, Shibing Chu, Rami Mrad, Zhejun Zhang, Zhelin Li, Runxian Jiao, Yuanping Chen
Schrödinger Bridge Type Diffusion Models as an Extension of Variational Autoencoders
Kentaro Kaba, Reo Shimizu, Masayuki Ohzeki, Yuki Sughiyama
Stochastic Control for Fine-tuning Diffusion Models: Optimality, Regularity, and Convergence
Yinbin Han, Meisam Razaviyayn, Renyuan Xu
Dense-Face: Personalized Face Generation Model via Dense Annotation Prediction
Xiao Guo, Manh Tran, Jiaxin Cheng, Xiaoming Liu
The Superposition of Diffusion Models Using the Itô Density Estimator
Marta Skreta, Lazar Atanackovic, Avishek Joey Bose, Alexander Tong, Kirill Neklyudov
DreamFit: Garment-Centric Human Generation via a Lightweight Anything-Dressing Encoder
Ente Lin, Xujie Zhang, Fuwei Zhao, Yuxuan Luo, Xin Dong, Long Zeng, Xiaodan Liang
Broadband Ground Motion Synthesis by Diffusion Model with Minimal Condition
Jaeheun Jung, Jaehyuk Lee, Chang-Hae Jung, Hanyoung Kim, Bosung Jung, Donghun Lee
Discriminative Image Generation with Diffusion Models for Zero-Shot Learning
Dingjie Fu, Wenjin Hou, Shiming Chen, Shuhuang Chen, Xinge You, Salman Khan, Fahad Shahbaz Khan
Differentially Private Federated Learning of Diffusion Models for Synthetic Tabular Data Generation
Timur Sattarov, Marco Schreyer, Damian Borth
Semi-Supervised Adaptation of Diffusion Models for Handwritten Text Generation
Kai Brandenbusch
PromptLA: Towards Integrity Verification of Black-box Text-to-Image Diffusion Models
Zhuomeng Zhang, Fangqi Li, Chong Di, Shilin Wang
ChangeDiff: A Multi-Temporal Change Detection Data Generator with Flexible Text Prompts via Diffusion Model
Qi Zang, Jiayi Yang, Shuang Wang, Dong Zhao, Wenjun Yi, Zhun Zhong
Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models
Reza Shirkavand, Peiran Yu, Shangqian Gao, Gowthami Somepalli, Tom Goldstein, Heng Huang
AV-Link: Temporally-Aligned Diffusion Features for Cross-Modal Audio-Video Generation
Moayed Haji-Ali, Willi Menapace, Aliaksandr Siarohin, Ivan Skorokhodov, Alper Canberk, Kwot Sin Lee, Vicente Ordonez, Sergey Tulyakov
DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space
Mang Ning, Mingxiao Li, Jianlin Su, Haozhe Jia, Lanmiao Liu, Martin Beneš, Albert Ali Salah, Itir Onal Ertugrul
Diffusion priors for Bayesian 3D reconstruction from incomplete measurements
Julian L. Möbius, Michael Habeck