Text-to-Image Generation Models
Text-to-image generation models aim to create realistic images from textual descriptions, with research focusing on improving image quality, accuracy, and user control. Current work emphasizes making models more faithful to the input text, addressing issues such as image hallucination and bias, and improving controllability through techniques like sketch guidance and layout conditioning, often by leveraging diffusion models and large language models. These advances have significant implications for accessible communication, creative content generation, and other applications that require synthesizing images from text, while also raising concerns about potential misuse and the need for robust evaluation metrics.