T2I Models
Text-to-image (T2I) models generate images from textual descriptions, and research in this rapidly advancing field focuses on improving controllability, efficiency, and personalization. Current work emphasizes incorporating diverse multimodal inputs (e.g., edge maps as spatial conditioning) and efficiently personalizing models to specific styles or subjects using techniques such as low-rank parameter updates or contrastive learning, most often within the framework of diffusion models. These advances matter both for the scientific understanding of image generation and for practical applications, enabling more intuitive and powerful image creation and editing tools.
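The "parameter rank reduction" idea behind efficient personalization can be sketched with low-rank adaptation (LoRA): instead of fine-tuning a full weight matrix, one trains a small rank-r update. The sketch below uses NumPy with illustrative dimensions; the names (`lora_update`, `d_out`, `d_in`, `r`) are hypothetical and not taken from any specific T2I model or library.

```python
import numpy as np

def lora_update(W, A, B, scale=1.0):
    """Return the adapted weight W + scale * (B @ A).

    W is the frozen pretrained weight; only A and B are trained.
    """
    return W + scale * (B @ A)

# Illustrative dimensions for one attention projection in a diffusion model.
d_out, d_in, r = 768, 768, 8
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init,
                                            # so the adapter starts as a no-op)

W_adapted = lora_update(W, A, B)

full_params = W.size           # parameters touched by full fine-tuning
lora_params = A.size + B.size  # parameters trained by the adapter
print(lora_params, full_params)  # 12288 vs 589824: about 2% of the full count
```

Because `B` is initialized to zero, the adapted weight starts identical to the pretrained one; training then moves only the 12,288 adapter parameters, which is why per-style or per-subject personalization of large T2I models becomes cheap to store and swap.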