Pre-Trained Diffusion Models
Pre-trained diffusion models are generative models increasingly used as powerful priors for solving inverse problems, particularly in image processing and generation. Current research focuses on improving efficiency (e.g., one-step methods and faster sampling algorithms), enhancing control over generation (e.g., through guidance mechanisms and fine-tuning strategies such as LoRA), and addressing security concerns (e.g., mitigating membership inference attacks). This line of work is significant because it leverages the strong generative capabilities of these models to achieve state-of-the-art results across diverse applications, ranging from image restoration and super-resolution to more complex tasks such as image composition and 3D reconstruction.
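To make the "diffusion model as prior for inverse problems" idea concrete, here is a minimal toy sketch in the spirit of guidance-based posterior sampling. It is not the method of any paper listed below: the diffusion prior is replaced by a Gaussian whose noised score is analytic, the observation model `y = A x + noise` is invented for illustration, and all names (`A`, `sigma_y`, `n_steps`) are assumptions. Real systems would substitute a learned score network for `score_prior`.

```python
import numpy as np

# Toy linear inverse problem: observe y = A @ x_true + noise, recover x.
# The "diffusion prior" here is x ~ N(0, I), whose noised score is analytic;
# a real pipeline would call a pretrained score network instead.
rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((2, d))          # hypothetical forward operator
x_true = rng.standard_normal(d)
sigma_y = 0.1                            # assumed observation noise level
y = A @ x_true + sigma_y * rng.standard_normal(2)

def score_prior(x, sigma):
    # Score of the noised prior p_sigma(x) = N(0, (1 + sigma^2) I).
    return -x / (1.0 + sigma**2)

def guided_langevin(y, n_steps=4000, step=5e-4):
    # Annealed Langevin sampling: at each noise level, follow the sum of the
    # prior score and the data-consistency gradient grad log p(y | x).
    x = rng.standard_normal(d)
    sigmas = np.geomspace(1.0, 0.01, n_steps)
    for sigma in sigmas:
        grad_lik = A.T @ (y - A @ x) / sigma_y**2
        g = score_prior(x, sigma) + grad_lik
        x = x + step * g + np.sqrt(2 * step) * sigma * rng.standard_normal(d)
    return x

x_hat = guided_langevin(y)
residual = np.linalg.norm(y - A @ x_hat)
```

Guidance mechanisms of this kind leave the pretrained prior untouched and steer sampling only through the likelihood gradient, which is why a single pretrained model can serve many different restoration tasks.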
Papers
Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation
Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu
Text-to-Image Diffusion Models are Great Sketch-Photo Matchmakers
Subhadeep Koley, Ayan Kumar Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
ViewFusion: Towards Multi-View Consistency via Interpolated Denoising
Xianghui Yang, Yan Zuo, Sameera Ramasinghe, Loris Bazzani, Gil Avraham, Anton van den Hengel
A Quantitative Evaluation of Score Distillation Sampling Based Text-to-3D
Xiaohan Fei, Chethan Parameshwara, Jiawei Mo, Xiaolong Li, Ashwin Swaminathan, CJ Taylor, Paolo Favaro, Stefano Soatto
FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes
Ziying Pan, Kun Wang, Gang Li, Feihong He, Yongxuan Lai
Exploring Privacy and Fairness Risks in Sharing Diffusion Models: An Adversarial Perspective
Xinjian Luo, Yangfan Jiang, Fei Wei, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi
U$^2$MRPD: Unsupervised undersampled MRI reconstruction by prompting a large latent diffusion model
Ziqi Gao, S. Kevin Zhou
Make a Cheap Scaling: A Self-Cascade Diffusion Model for Higher-Resolution Adaptation
Lanqing Guo, Yingqing He, Haoxin Chen, Menghan Xia, Xiaodong Cun, Yufei Wang, Siyu Huang, Yong Zhang, Xintao Wang, Qifeng Chen, Ying Shan, Bihan Wen