Text-to-Image Generation
Text-to-image generation aims to create realistic and diverse images from textual descriptions. Current research focuses on improving controllability, efficiency, and factual accuracy: enhancing model architectures such as diffusion models, leveraging large language models for prompt understanding and control, and developing methods for fine-grained manipulation of image components and styles. The field is significant for applications ranging from creative content generation to scientific visualization and medical imaging, while also raising important questions about bias mitigation and factual accuracy in AI-generated content.
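As a minimal sketch of the diffusion-based workflow this summary refers to, the example below generates an image from a prompt using the Hugging Face diffusers library. It assumes a publicly hosted Stable Diffusion checkpoint; the model ID, prompt, and sampling parameters are illustrative and are not drawn from any paper listed here.

import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained latent diffusion pipeline
# (text encoder + denoising U-Net + VAE decoder).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a watercolor painting of a lighthouse at dawn"

# guidance_scale sets the strength of classifier-free guidance:
# higher values follow the prompt more closely at the cost of diversity.
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("lighthouse.png")

Much of the controllability research summarized above amounts to replacing or augmenting pieces of this pipeline, for example swapping the text encoder, conditioning the denoiser on extra signals, or editing the prompt embedding directly.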
Papers
X-IQE: eXplainable Image Quality Evaluation for Text-to-Image Generation with Visual Large Language Models
Yixiong Chen, Li Liu, Chris Ding
AIwriting: Relations Between Image Generation and Digital Writing
Scott Rettberg, Talan Memmott, Jill Walker Rettberg, Jason Nelson, Patrick Lichty
Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners
Xuehai He, Weixi Feng, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, William Yang Wang, Xin Eric Wang
Toward Verifiable and Reproducible Human Evaluation for Text-to-Image Generation
Mayu Otani, Riku Togashi, Yu Sawai, Ryosuke Ishigami, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Shin'ichi Satoh
Text-Conditioned Sampling Framework for Text-to-Image Generation with Masked Generative Models
Jaewoong Lee, Sangwon Jang, Jaehyeong Jo, Jaehong Yoon, Yunji Kim, Jin-Hwa Kim, Jung-Woo Ha, Sung Ju Hwang
Medical diffusion on a budget: Textual Inversion for medical image generation
Bram de Wilde, Anindo Saha, Maarten de Rooij, Henkjan Huisman, Geert Litjens
MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models
Jing Zhao, Heliang Zheng, Chaoyue Wang, Long Lan, Wenjing Yang