Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are reshaping fields including medical imaging, robotics, and artistic creation by enabling new applications in image generation, inverse problem solving, and multi-modal data synthesis.
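To make the "reversing a noise-diffusion process" idea concrete, here is a minimal sketch of DDPM-style ancestral sampling, plus the classifier-free guidance combination mentioned above. The noise schedule, step count, and `toy_denoiser` are illustrative stand-ins (a real sampler would call a trained neural network, not a placeholder), so treat this as a shape of the algorithm rather than a working generator.

```python
import numpy as np

# Illustrative linear noise schedule (betas), as in DDPM; values are not tuned.
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

rng = np.random.default_rng(0)

def toy_denoiser(x_t, t):
    """Placeholder for a trained network that predicts the noise eps.
    In practice this is a neural net conditioned on t (and e.g. text)."""
    return np.zeros_like(x_t)  # dummy prediction so the loop is runnable

def cfg_combine(eps_uncond, eps_cond, w=3.0):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one with guidance weight w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

def reverse_step(x_t, t):
    """One ancestral sampling step x_t -> x_{t-1}:
    mean = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps_hat) / sqrt(alpha_t),
    plus sqrt(beta_t)-scaled Gaussian noise for all but the final step."""
    eps_hat = toy_denoiser(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    x_prev = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        x_prev += np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return x_prev

# Start from pure Gaussian noise and iterate the learned reverse chain.
x = rng.standard_normal((4,))
for t in reversed(range(T)):
    x = reverse_step(x, t)
```

With a real denoiser, `cfg_combine` would be applied to two forward passes (with and without the conditioning signal) before `reverse_step`, trading sample diversity for fidelity to the condition as `w` grows.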
Papers
Osmosis: RGBD Diffusion Prior for Underwater Image Restoration
Opher Bar Nathan, Deborah Levy, Tali Treibitz, Dan Rosenbaum
ReNoise: Real Image Inversion Through Iterative Noising
Daniel Garibi, Or Patashnik, Andrey Voynov, Hadar Averbuch-Elor, Daniel Cohen-Or
Style-Extracting Diffusion Models for Semi-Supervised Histopathology Segmentation
Mathias Öttl, Frauke Wilm, Jana Steenpass, Jingna Qiu, Matthias Rübner, Arndt Hartmann, Matthias Beckmann, Peter Fasching, Andreas Maier, Ramona Erber, Bernhard Kainz, Katharina Breininger
DP-RDM: Adapting Diffusion Models to Private Domains Without Fine-Tuning
Jonathan Lebensold, Maziar Sanjabi, Pietro Astolfi, Adriana Romero-Soriano, Kamalika Chaudhuri, Mike Rabbat, Chuan Guo
Physics-Informed Diffusion Models
Jan-Hendrik Bastek, WaiChing Sun, Dennis M. Kochmann
Open-Vocabulary Attention Maps with Token Optimization for Semantic Segmentation in Diffusion Models
Pablo Marcos-Manchón, Roberto Alcover-Couso, Juan C. SanMiguel, Jose M. Martínez
Diffusion Models with Ensembled Structure-Based Anomaly Scoring for Unsupervised Anomaly Detection
Finn Behrendt, Debayan Bhattacharya, Lennart Maack, Julia Krüger, Roland Opfer, Robin Mieling, Alexander Schlaefer
Protein Conformation Generation via Force-Guided SE(3) Diffusion Models
Yan Wang, Lihao Wang, Yuning Shen, Yiqun Wang, Huizhuo Yuan, Yue Wu, Quanquan Gu
QSMDiff: Unsupervised 3D Diffusion Models for Quantitative Susceptibility Mapping
Zhuang Xiong, Wei Jiang, Yang Gao, Feng Liu, Hongfu Sun
LeFusion: Controllable Pathology Synthesis via Lesion-Focused Diffusion Models
Hantao Zhang, Yuhe Liu, Jiancheng Yang, Shouhong Wan, Xinyuan Wang, Wei Peng, Pascal Fua
DiffSTOCK: Probabilistic relational Stock Market Predictions using Diffusion Models
Divyanshu Daiya, Monika Yadav, Harshit Singh Rao
Enhancing Fingerprint Image Synthesis with GANs, Diffusion Models, and Style Transfer Techniques
W. Tang, D. Figueroa, D. Liu, K. Johnsson, A. Sopasakis
Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data
Giannis Daras, Alexandros G. Dimakis, Constantinos Daskalakis
S2DM: Sector-Shaped Diffusion Models for Video Generation
Haoran Lang, Yuxuan Ge, Zheng Tian
On-the-fly Learning to Transfer Motion Style with Diffusion Models: A Semantic Guidance Approach
Lei Hu, Zihao Zhang, Yongjing Ye, Yiwen Xu, Shihong Xia
Diffusion Model for Data-Driven Black-Box Optimization
Zihao Li, Hui Yuan, Kaixuan Huang, Chengzhuo Ni, Yinyu Ye, Minshuo Chen, Mengdi Wang
DreamDA: Generative Data Augmentation with Diffusion Models
Yunxiang Fu, Chaoqi Chen, Yu Qiao, Yizhou Yu
WaveFace: Authentic Face Restoration with Efficient Frequency Recovery
Yunqi Miao, Jiankang Deng, Jungong Han
Towards Controllable Face Generation with Semantic Latent Diffusion Models
Alex Ergasti, Claudio Ferrari, Tomaso Fontanini, Massimo Bertozzi, Andrea Prati