Pre-Trained Diffusion Models
Pre-trained diffusion models serve as powerful generative priors for solving a wide range of inverse problems, particularly in image processing and generation. Current research focuses on improving efficiency (e.g., one-step methods and faster sampling algorithms), enhancing control over generation (e.g., through guidance mechanisms and fine-tuning strategies such as LoRA), and addressing security concerns (e.g., mitigating membership inference attacks and illegal model adaptation). This line of work is significant because it leverages the strong generative capabilities of these models to achieve state-of-the-art results across diverse applications, from image restoration and super-resolution to more complex tasks such as image composition and 3D reconstruction.
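To make the "diffusion prior plus guidance" idea concrete, here is a minimal toy sketch of measurement-guided reverse diffusion for a linear inverse problem y = A x. Everything here is a stand-in assumption for illustration: the denoiser below encodes only a standard-normal prior, whereas a real pipeline would call a trained noise-prediction network, and the guidance step size (0.1) is an arbitrary hypothetical choice.

```python
import numpy as np

# Toy sketch: guided reverse diffusion for a linear inverse problem
# y = A x (hypothetical setup; not any specific paper's method).
rng = np.random.default_rng(0)
d, m, T = 8, 4, 50
A = rng.normal(size=(m, d))           # known forward operator
x_true = rng.normal(size=d)
y = A @ x_true                        # noiseless measurements for simplicity

betas = np.linspace(1e-4, 0.05, T)    # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def denoiser(x_t, t):
    # Stand-in for a pre-trained noise predictor; under a standard-normal
    # prior the optimal noise estimate is proportional to x_t itself.
    return np.sqrt(1.0 - alpha_bar[t]) * x_t

x = rng.normal(size=d)                # start from pure noise
for t in reversed(range(T)):
    eps = denoiser(x, t)
    # Standard DDPM posterior-mean update using the predicted noise
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    # Guidance: gradient step on the data-consistency term ||y - A x||^2
    x = x - 0.1 * (A.T @ (A @ x - y)) / m
    if t > 0:                         # inject noise except at the last step
        x = x + np.sqrt(betas[t]) * rng.normal(size=d)

print(np.linalg.norm(A @ x - y))      # measurement residual after sampling
```

The key design point is that the generative prior and the measurement model stay decoupled: the denoiser never sees y, so the same pre-trained model can be reused across different forward operators A by changing only the guidance term.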
Papers
Zero-Shot Video Semantic Segmentation based on Pre-Trained Diffusion Models
Qian Wang, Abdelrahman Eldesokey, Mohit Mendiratta, Fangneng Zhan, Adam Kortylewski, Christian Theobalt, Peter Wonka
Transfer Learning for Diffusion Models
Yidong Ouyang, Liyan Xie, Hongyuan Zha, Guang Cheng
DMPlug: A Plug-in Method for Solving Inverse Problems with Diffusion Models
Hengkang Wang, Xu Zhang, Taihui Li, Yuxiang Wan, Tiancong Chen, Ju Sun
Learning to Discretize Denoising Diffusion ODEs
Vinh Tong, Anji Liu, Trung-Dung Hoang, Guy Van den Broeck, Mathias Niepert
ODGEN: Domain-specific Object Detection Data Generation with Diffusion Models
Jingyuan Zhu, Shiyu Li, Yuxuan Liu, Ping Huang, Jiulong Shan, Huimin Ma, Jian Yuan
FreezeAsGuard: Mitigating Illegal Adaptation of Diffusion Models via Selective Tensor Freezing
Kai Huang, Wei Gao