Conditional Image Generation
Conditional image generation aims to synthesize images from various conditioning signals, such as text descriptions, sketches, or semantic maps, while striving for high fidelity and realism. Current research focuses on improving control over the generation process through techniques like latent space manipulation, diffusion models (including diffusion transformers and ODE-based formulations), and the integration of diverse conditioning modalities (e.g., combining text with sketches or depth maps). These advances matter for applications ranging from creative content generation to image editing and enhancement, driving progress in both computer vision and generative modeling.
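To make the conditioning mechanism concrete, below is a minimal sketch of classifier-free guidance, a widely used technique for steering diffusion models toward a condition (such as a text prompt) at sampling time. The function name `cfg_step` and the toy arrays are illustrative, not taken from any specific paper listed here; a real sampler would obtain the two noise predictions from a denoising network run with and without the conditioning input.

```python
import numpy as np

def cfg_step(eps_uncond: np.ndarray, eps_cond: np.ndarray,
             guidance_scale: float) -> np.ndarray:
    """Combine unconditional and conditional noise predictions.

    Classifier-free guidance extrapolates from the unconditional
    prediction toward the conditional one; larger guidance_scale
    values enforce the condition more strongly at the cost of
    sample diversity.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy example with stand-in noise predictions (illustrative only).
eps_uncond = np.array([0.0, 0.0, 0.0])
eps_cond = np.array([1.0, -1.0, 0.5])

# scale 0.0 ignores the condition; scale 1.0 uses it exactly;
# scale > 1.0 amplifies its influence.
guided = cfg_step(eps_uncond, eps_cond, guidance_scale=7.5)
```

With `guidance_scale=0.0` the result equals the unconditional prediction, and with `guidance_scale=1.0` it equals the conditional one, which makes the scale an interpretable knob for how strictly the output follows the condition.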
Papers