Conditional Image Synthesis
Conditional image synthesis aims to generate images from user-specified conditions such as text descriptions, sketches, or segmentation maps. Current research relies heavily on diffusion models, often incorporating techniques such as time-decoupled training for efficiency and mixture-of-experts architectures for handling diverse instructions. By enabling more precise control over generation and improving the quality and diversity of synthetic images, the field underpins applications including image editing, 3D modeling, and data augmentation. Efforts are also underway to develop more explainable evaluation metrics for these models. A minimal usage sketch follows.
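The sketch below illustrates the two kinds of conditioning mentioned above (a text prompt alone, and a spatial condition such as a sketch or edge map layered on top of the prompt), assuming the Hugging Face diffusers library and a CUDA-capable GPU. The checkpoint names, the input path sketch_edges.png, and the sampling parameters are illustrative assumptions, not details taken from the papers surveyed here.

```python
# Minimal sketch of conditional image synthesis with diffusion models.
# Assumptions: `diffusers` and `torch` are installed, a CUDA GPU is available,
# and the named checkpoints / file paths are placeholders chosen for illustration.
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionControlNetPipeline,
    ControlNetModel,
)
from diffusers.utils import load_image

prompt = "a watercolor painting of a lighthouse at dusk"

# 1) Text-conditional synthesis: the prompt is the only condition.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("text_conditioned.png")

# 2) Spatially conditioned synthesis: a ControlNet injects a structural condition
#    (here an edge map standing in for a user sketch) alongside the text prompt.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
control_pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

edge_map = load_image("sketch_edges.png")  # hypothetical pre-computed edge/sketch image
image = control_pipe(prompt, image=edge_map, num_inference_steps=30).images[0]
image.save("sketch_conditioned.png")
```

The same pattern extends to other condition types (segmentation maps, depth, pose) by swapping in the corresponding ControlNet checkpoint; the text prompt and the spatial condition are combined at every denoising step rather than applied sequentially.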
Papers
Sixteen papers, dated from December 9, 2021 to September 28, 2024.