Text-to-Image Diffusion Models
Text-to-image diffusion models generate images from textual descriptions, aiming for high fidelity and precise alignment between the generated image and the prompt. Current research focuses on improving controllability, addressing safety concerns (e.g., preventing the generation of inappropriate content), and enhancing personalization through techniques such as continual learning and latent space manipulation. These advances matter for applications including medical imaging, artistic creation, and data augmentation, while also raising important ethical considerations regarding model safety and bias.
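As a rough illustration of what "generating images from textual descriptions" looks like in practice, here is a minimal sketch using the Hugging Face diffusers library and the publicly available runwayml/stable-diffusion-v1-5 checkpoint; this generic pipeline is an assumption for illustration only and is not the method of any paper listed below.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a pretrained text-to-image diffusion pipeline (checkpoint name is illustrative).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # Generate an image from a textual description; guidance_scale trades
    # prompt alignment against sample diversity.
    prompt = "a watercolor sketch of a lighthouse at sunset"
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save("lighthouse.png")

The papers below study refinements on top of this basic setup, such as steering the denoising process, exploring the latent space, or adapting the model without retraining.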
Papers
DiffSketcher: Text Guided Vector Sketch Synthesis through Latent Diffusion Models
Ximing Xing, Chuang Wang, Haitao Zhou, Jing Zhang, Qian Yu, Dong Xu
A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis
Aishwarya Agarwal, Srikrishna Karanam, K J Joseph, Apoorv Saxena, Koustava Goswami, Balaji Vasan Srinivasan
Decompose and Realign: Tackling Condition Misalignment in Text-to-Image Diffusion Models
Luozhou Wang, Guibao Shen, Wenhang Ge, Guangyong Chen, Yijun Li, Ying-cong Chen
Norm-guided latent space exploration for text-to-image generation
Dvir Samuel, Rami Ben-Ari, Nir Darshan, Haggai Maron, Gal Chechik
Training-free Diffusion Model Adaptation for Variable-Sized Text-to-Image Synthesis
Zhiyu Jin, Xuli Shen, Bin Li, Xiangyang Xue
Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation
Ruoyu Wang, Yongqi Yang, Zhihao Qian, Ye Zhu, Yu Wu