Diffusion-Based Text-to-Image
Diffusion-based text-to-image models aim to generate high-quality, realistic images from textual descriptions, with a focus on improving image fidelity, controllability, and safety. Current research emphasizes rendering text accurately within generated images, mitigating biases and safety risks (such as unsafe content elicited through prompt manipulation), and improving compositional generation of complex scenes with multiple objects. These advances matter both for the scientific community, where they push the boundaries of multimodal generation and AI safety, and for practical applications in creative content generation, design, and many other fields.
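At the core of all of these models is the reverse diffusion process: sampling starts from pure Gaussian noise and is iteratively denoised under text conditioning. The following is a minimal, self-contained sketch of a DDPM-style sampling loop; the `denoiser` function here is a hypothetical stand-in for the trained, text-conditioned noise-prediction network, and the shapes and schedule values are illustrative assumptions, not any particular model's configuration.

```python
import numpy as np

def denoiser(x, t, text_embedding):
    # Stand-in for a trained noise-prediction network (hypothetical).
    # A real model would predict the noise present in x at step t,
    # conditioned on an embedding of the text prompt.
    return 0.1 * x + 0.01 * text_embedding

def sample(shape, steps, text_embedding, betas):
    rng = np.random.default_rng(0)
    x = rng.standard_normal(shape)            # start from pure Gaussian noise
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    for t in reversed(range(steps)):
        eps = denoiser(x, t, text_embedding)  # predicted noise at step t
        # DDPM mean update: subtract the predicted noise component
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                             # inject fresh noise except at the last step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

betas = np.linspace(1e-4, 0.02, 50)           # illustrative noise schedule
text_embedding = np.ones((8, 8))              # placeholder "prompt" embedding
img = sample((8, 8), 50, text_embedding, betas)
print(img.shape)
```

In practice the denoiser is a large U-Net or transformer, the text embedding comes from a pretrained text encoder, and the loop runs in a learned latent space rather than pixel space, but the structure of the sampling loop is the same.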