Subject-Driven Text-to-Image Generation
Subject-driven text-to-image generation aims to create customized images of a specific subject guided by textual descriptions, overcoming a limitation of generic text-to-image models: they cannot faithfully render a particular person, pet, or object from only a handful of reference photos. Current research focuses on improving subject fidelity and controllability, typically by fine-tuning pre-trained diffusion models (as in DreamBooth and BLIP-Diffusion), applying contrastive learning objectives, and exploring low-rank adaptation techniques (such as LoRA and its variants) for parameter-efficient personalization. These advances improve both the quality and the efficiency of generating personalized images, with implications for creative applications and content creation, as well as for ethical concerns surrounding unauthorized image synthesis.
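The low-rank adaptation idea mentioned above can be illustrated with a minimal sketch: a frozen pre-trained linear layer is augmented with a trainable low-rank update `B A`, so personalization only trains `r * (in + out)` parameters instead of the full weight matrix. The class name `LoRALinear` and all hyperparameters below are illustrative assumptions, not taken from any specific paper's code.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: y = W x + (alpha / r) * B A x.

    The pre-trained weight W is frozen; only the low-rank factors
    A (r x in_features) and B (out_features x r) are trained.
    """

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A gets a small random init; B starts at zero so the adapted
        # layer initially behaves exactly like the frozen base layer.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


layer = LoRALinear(nn.Linear(16, 16), r=4)
x = torch.randn(2, 16)
# Because B is zero-initialized, the output matches the base layer at first.
assert torch.allclose(layer(x), layer.base(x))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * r * 16 = 128 trainable parameters vs. 272 in the base layer
```

In a real personalization run, wrappers like this replace the attention projection layers of a diffusion model's U-Net, and only the low-rank factors are optimized on the subject's reference images, which is what makes LoRA-style fine-tuning cheap enough to run per subject.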