Text-to-Image Diffusion Models
Text-to-image diffusion models generate images from textual descriptions, aiming for high fidelity and precise alignment with the input text. Current research focuses on improving controllability, addressing safety concerns (e.g., preventing the generation of inappropriate content), and enhancing personalization through techniques such as continual learning and latent-space manipulation. These advances matter for applications including medical imaging, artistic creation, and data augmentation, while also raising important ethical considerations around model safety and bias.
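To make the generation process concrete, the following is a minimal, self-contained sketch of the reverse diffusion loop with classifier-free guidance, the mechanism most text-to-image systems use to trade off fidelity against text alignment. The denoiser `predict_noise` is a hypothetical placeholder (a real system uses a trained U-Net or transformer conditioned on text embeddings), and all names and hyperparameters here are illustrative assumptions, not any specific paper's method.

```python
import numpy as np

def predict_noise(x, t, text_emb):
    """Hypothetical stand-in for a trained, text-conditioned denoiser.

    A real model would predict the noise added at timestep t from the noisy
    latent x and a text embedding; this toy version just mixes the two.
    """
    return 0.1 * x + 0.01 * text_emb.mean()

def ddpm_sample(text_emb, steps=50, guidance=7.5, shape=(8, 8), seed=0):
    """DDPM-style reverse process with classifier-free guidance (sketch)."""
    betas = np.linspace(1e-4, 0.02, steps)      # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)              # start from pure Gaussian noise
    null_emb = np.zeros_like(text_emb)          # "empty prompt" branch

    for t in reversed(range(steps)):
        eps_cond = predict_noise(x, t, text_emb)    # text-conditioned estimate
        eps_uncond = predict_noise(x, t, null_emb)  # unconditional estimate
        # Classifier-free guidance: push the estimate toward the conditioned one.
        eps = eps_uncond + guidance * (eps_cond - eps_uncond)
        # DDPM posterior mean update.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)  # add noise except at t=0
    return x

img = ddpm_sample(np.ones(16))
print(img.shape)  # (8, 8)
```

The `guidance` scale (7.5 here) is the knob that controllability and safety work often targets: larger values follow the prompt more closely at the cost of sample diversity.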
Papers
Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction
Jing He, Haodong Li, Wei Yin, Yixun Liang, Leheng Li, Kaiqiang Zhou, Hongbo Zhang, Bingbing Liu, Ying-Cong Chen
Dark Miner: Defend against undesired generation for text-to-image diffusion models
Zheling Meng, Bo Peng, Xiaochuan Jin, Yue Jiang, Jing Dong, Wei Wang
Consistent estimation of generative model representations in the data kernel perspective space
Aranyak Acharyya, Michael W. Trosset, Carey E. Priebe, Hayden S. Helm
Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors
Aiping Zhang, Zongsheng Yue, Renjing Pei, Wenqi Ren, Xiaochun Cao
DreamBeast: Distilling 3D Fantastical Animals with Part-Aware Knowledge Transfer
Runjia Li, Junlin Han, Luke Melas-Kyriazi, Chunyi Sun, Zhaochong An, Zhongrui Gui, Shuyang Sun, Philip Torr, Tomas Jakab
Improving Text-guided Object Inpainting with Semantic Pre-inpainting
Yifu Chen, Jingwen Chen, Yingwei Pan, Yehao Li, Ting Yao, Zhineng Chen, Tao Mei
Dynamic Prompting of Frozen Text-to-Image Diffusion Models for Panoptic Narrative Grounding
Hongyu Li, Tianrui Hui, Zihan Ding, Jing Zhang, Bin Ma, Xiaoming Wei, Jizhong Han, Si Liu
Scribble-Guided Diffusion for Training-free Text-to-Image Generation
Seonho Lee, Jiho Choi, Seohyun Lim, Jiwook Kim, Hyunjung Shim