Pre-Trained Diffusion Models
Pre-trained diffusion models are generative models that serve as powerful priors for solving a wide range of inverse problems, particularly in image processing and generation. Current research focuses on improving efficiency (e.g., one-step methods and faster sampling algorithms), enhancing control over generation (e.g., through guidance mechanisms and fine-tuning strategies such as LoRA), and addressing security concerns (e.g., mitigating membership inference attacks). This line of work is significant because it leverages the strong generative capabilities of these models to achieve state-of-the-art results across diverse applications, ranging from image restoration and super-resolution to more complex tasks such as image composition and 3D reconstruction.
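To make the idea of using a pre-trained diffusion prior with measurement guidance concrete, the sketch below shows a minimal, self-contained posterior-sampling loop in the spirit of guidance-based inverse-problem solvers. It is an illustrative sketch only, not the method of any paper listed here: the `denoiser` is a hypothetical stand-in for a real pre-trained noise-prediction network, and `forward_operator`, `posterior_sample`, and `guidance_scale` are assumed names introduced for this example (here the forward operator is 4x downsampling, i.e., a toy super-resolution setup).

```python
# Minimal sketch: guidance-based posterior sampling with a diffusion prior.
# The denoiser below is a placeholder; in practice it would be a pre-trained
# epsilon-prediction network.

import torch
import torch.nn.functional as F

T = 1000                                   # number of diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)      # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative product \bar{alpha}_t


def denoiser(x_t: torch.Tensor, t: int) -> torch.Tensor:
    """Hypothetical stand-in for a pre-trained epsilon-prediction network."""
    return torch.zeros_like(x_t)           # replace with a real model's output


def forward_operator(x: torch.Tensor) -> torch.Tensor:
    """Example measurement operator A: 4x average pooling (super-resolution)."""
    return F.avg_pool2d(x, kernel_size=4)


def posterior_sample(y: torch.Tensor, shape, guidance_scale: float = 1.0) -> torch.Tensor:
    """Ancestral DDPM steps plus a data-consistency gradient (DPS-style guidance)."""
    x_t = torch.randn(shape)
    for t in reversed(range(T)):
        x_t = x_t.detach().requires_grad_(True)

        # 1) Predict noise and the denoised estimate x0_hat (Tweedie's formula).
        eps = denoiser(x_t, t)
        a_bar = alpha_bars[t]
        x0_hat = (x_t - torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(a_bar)

        # 2) Measurement-consistency term: ||y - A(x0_hat)|| and its gradient w.r.t. x_t.
        residual = torch.linalg.vector_norm(y - forward_operator(x0_hat))
        grad = torch.autograd.grad(residual, x_t)[0]

        # 3) Standard ancestral update under the unconditional prior ...
        coef = betas[t] / torch.sqrt(1.0 - a_bar)
        mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise

        # 4) ... then nudge the sample toward consistency with the measurement y.
        x_t = x_t - guidance_scale * grad

    return x_t.detach()


if __name__ == "__main__":
    x_true = torch.rand(1, 3, 64, 64)             # unknown clean image
    y = forward_operator(x_true)                  # observed low-resolution measurement
    restored = posterior_sample(y, x_true.shape)  # approximate posterior sample
    print(restored.shape)                         # torch.Size([1, 3, 64, 64])
```

The same loop structure covers other inverse problems by swapping the forward operator (e.g., a blur kernel for deblurring or a masking operator for inpainting), while the pre-trained prior itself stays fixed.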
Papers
A Survey on Diffusion Models for Inverse Problems
Giannis Daras, Hyungjin Chung, Chieh-Hsin Lai, Yuki Mitsufuji, Jong Chul Ye, Peyman Milanfar, Alexandros G. Dimakis, Mauricio Delbracio
Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse Problems
Hongkai Zheng, Wenda Chu, Austin Wang, Nikola Kovachki, Ricardo Baptista, Yisong Yue
DIAGen: Diverse Image Augmentation with Generative Models
Tobias Lingenberg, Markus Reuter, Gopika Sudhakaran, Dominik Gojny, Stefan Roth, Simone Schaub-Meyer
Foodfusion: A Novel Approach for Food Image Composition via Diffusion Models
Chaohua Shi, Xuan Wang, Si Shi, Xule Wang, Mingrui Zhu, Nannan Wang, Xinbo Gao