Stable Diffusion Models
Stable Diffusion models are a class of generative AI models used primarily for high-quality image synthesis from text prompts or other image inputs. Current research focuses on improving efficiency (through model compression and faster sampling techniques), enhancing control and fidelity (via fine-tuning methods and prompt engineering), and mitigating risks associated with data privacy and copyright infringement (through watermarking and data attribution techniques). By enabling efficient data augmentation and the generation of novel, realistic images, these models are significantly impacting fields such as scientific visualization, medical imaging, and creative design.
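As one concrete illustration of the "faster sampling techniques" mentioned above, the deterministic DDIM update rule lets a diffusion model denoise over a short subsequence of its training timesteps instead of all of them. The sketch below shows only the update arithmetic in NumPy; the noise predictor is a hypothetical stand-in function, not a real Stable Diffusion network, and the schedule parameters are illustrative assumptions.

```python
import numpy as np

def make_alpha_bars(num_train_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative products of (1 - beta_t) for a linear beta schedule."""
    betas = np.linspace(beta_start, beta_end, num_train_steps)
    return np.cumprod(1.0 - betas)

def ddim_sample(eps_model, shape, num_inference_steps=10,
                num_train_steps=1000, seed=0):
    """Deterministic DDIM (eta = 0) over an evenly spaced subset of timesteps."""
    alpha_bars = make_alpha_bars(num_train_steps)
    # Visit e.g. only 10 of the 1000 training timesteps -- this is the speedup.
    timesteps = np.linspace(num_train_steps - 1, 0, num_inference_steps).astype(int)
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)  # start from pure Gaussian noise
    for i, t in enumerate(timesteps):
        a_t = alpha_bars[t]
        a_prev = alpha_bars[timesteps[i + 1]] if i + 1 < len(timesteps) else 1.0
        eps = eps_model(x, t)  # predicted noise at step t
        # Predict the clean sample, then jump directly to the previous timestep.
        x0 = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0 + np.sqrt(1.0 - a_prev) * eps
    return x

# Stand-in "model" for illustration: treats the current sample as mostly noise.
dummy_eps_model = lambda x, t: 0.9 * x

sample = ddim_sample(dummy_eps_model, shape=(4, 4), num_inference_steps=10)
print(sample.shape)  # (4, 4)
```

In a real pipeline, `eps_model` would be the trained U-Net, and dropping from 1000 to 10-50 steps is what makes DDIM-style samplers markedly faster than ancestral DDPM sampling.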
Papers
Compositional Inversion for Stable Diffusion Models
Xulu Zhang, Xiao-Yong Wei, Jinlin Wu, Tianyi Zhang, Zhaoxiang Zhang, Zhen Lei, Qing Li
SpeedUpNet: A Plug-and-Play Adapter Network for Accelerating Text-to-Image Diffusion Models
Weilong Chai, DanDan Zheng, Jiajiong Cao, Zhiquan Chen, Changbao Wang, Chenguang Ma
Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?
Zhengyue Zhao, Jinhao Duan, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Xing Hu
HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models
Zhonghao Wang, Wei Wei, Yang Zhao, Zhisheng Xiao, Mark Hasegawa-Johnson, Humphrey Shi, Tingbo Hou