Text-to-Image Generation Models
Text-to-image generation models synthesize realistic images from textual descriptions, with ongoing work aimed at improving image quality, prompt accuracy, and user control. Current research emphasizes faithfulness to the input text, mitigation of issues such as image hallucination and bias, and finer controllability through techniques such as sketch guidance and layout conditioning, often building on diffusion models and large language models. These advances have significant implications for accessible communication, creative content generation, and other applications that require image synthesis from text, while also raising concerns about potential misuse and the need for robust evaluation metrics.
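To make the workflow concrete, the sketch below generates an image from a text prompt with a pretrained latent diffusion model. It is a minimal illustration, not taken from any of the papers listed here; it assumes the Hugging Face diffusers library, a CUDA-capable GPU, and availability of the "runwayml/stable-diffusion-v1-5" checkpoint.

```python
# Minimal text-to-image sketch, assuming the Hugging Face `diffusers`
# library and the "runwayml/stable-diffusion-v1-5" checkpoint.
import torch
from diffusers import StableDiffusionPipeline

# Load the pretrained pipeline in half precision and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a watercolor painting of a lighthouse at sunset"

# guidance_scale trades prompt faithfulness against diversity;
# num_inference_steps sets the number of denoising steps.
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=50).images[0]
image.save("lighthouse.png")
```

Controllability techniques mentioned above, such as sketch guidance or layout conditioning, typically extend this basic loop with additional conditioning inputs alongside the text prompt.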
Papers
May 23, 2023
May 22, 2023
May 9, 2023
April 27, 2023
April 11, 2023
April 1, 2023
March 25, 2023
March 21, 2023
February 20, 2023
November 14, 2022
October 14, 2022
October 13, 2022
August 18, 2022
April 17, 2022
February 8, 2022
November 27, 2021