Text-to-Image Generation Models
Text-to-image generation models aim to create realistic images from textual descriptions, focusing on improving image quality, accuracy, and user control. Current research emphasizes enhancing model faithfulness to input text, addressing issues like image hallucination and bias, and improving controllability through techniques like sketch guidance and layout conditioning, often leveraging diffusion models and large language models. These advancements have significant implications for accessible communication, creative content generation, and various applications requiring image synthesis from textual information, while also raising concerns about potential misuse and the need for robust evaluation metrics.
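To make the basic pipeline concrete, the sketch below shows how a prompt is turned into an image with a pretrained diffusion model. It uses the Hugging Face diffusers library as an illustration; the checkpoint name and sampling settings are assumptions for the example, not taken from any of the papers listed here.

```python
# Minimal text-to-image sketch using the Hugging Face diffusers library.
# The checkpoint and parameter values below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any Stable Diffusion-style model works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a watercolor painting of a lighthouse at sunset"
image = pipe(
    prompt,
    num_inference_steps=30,  # fewer denoising steps trade image quality for speed
    guidance_scale=7.5,      # classifier-free guidance: higher values follow the text more closely
).images[0]
image.save("lighthouse.png")
```

The guidance_scale parameter is the main lever for faithfulness to the input text: raising it pushes the sampler toward the prompt at the cost of diversity, which is one reason much of the research above focuses on better ways to condition and control generation.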
Papers
Paper entries in this collection are dated from May 29, 2023 to May 18, 2024.