Pre-Trained Diffusion Models
Pre-trained diffusion models are generative models that serve as powerful priors for solving a wide range of inverse problems, particularly in image processing and generation. Current research focuses on improving efficiency (e.g., one-step distillation and faster sampling algorithms), enhancing control over generation (e.g., through guidance mechanisms and fine-tuning strategies such as LoRA), and addressing security concerns (e.g., mitigating membership inference attacks). This line of work is significant because it leverages the strong generative capabilities of these models to achieve state-of-the-art results across diverse applications, from image restoration and super-resolution to more complex tasks such as image composition and 3D reconstruction.
Papers
CAD: Photorealistic 3D Generation via Adversarial Distillation
Ziyu Wan, Despoina Paschalidou, Ian Huang, Hongyu Liu, Bokui Shen, Xiaoyu Xiang, Jing Liao, Leonidas Guibas
Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer
Jiwoo Chung, Sangeek Hyun, Jae-Pil Heo
ArtBank: Artistic Style Transfer with Pre-trained Diffusion Model and Implicit Style Prompt Bank
Zhanjie Zhang, Quanwei Zhang, Guangyuan Li, Wei Xing, Lei Zhao, Jiakai Sun, Zehua Lan, Junsheng Luan, Yiling Huang, Huaizhong Lin
Turn Down the Noise: Leveraging Diffusion Models for Test-time Adaptation via Pseudo-label Ensembling
Mrigank Raman, Rohan Shah, Akash Kannan, Pranit Chawla
Unsupervised Keypoints from Pretrained Diffusion Models
Eric Hedlin, Gopal Sharma, Shweta Mahajan, Xingzhe He, Hossam Isack, Abhishek Kar, Helge Rhodin, Andrea Tagliasacchi, Kwang Moo Yi
HiDiffusion: Unlocking Higher-Resolution Creativity and Efficiency in Pretrained Diffusion Models
Shen Zhang, Zhaowei Chen, Zhenyu Zhao, Yuhao Chen, Yao Tang, Jiajun Liang