Prompt Diffusion
Prompt diffusion uses diffusion models to generate images and other data modalities (audio, text) conditioned on a variety of inputs, primarily text prompts but increasingly also visual context and other modalities. Current research focuses on improving in-context learning within these models, enhancing controllability through prompt engineering and embedding manipulation, and exploring prompt-free approaches that rely on visual input alone. This rapidly evolving field is reshaping image generation, semantic segmentation, and related areas by enabling more flexible, controllable, and efficient generation of high-quality data, particularly when labeled data is scarce. A minimal sketch of the core mechanism appears below.
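To make the conditioning mechanism concrete, here is a minimal, self-contained sketch of prompt-conditioned diffusion sampling with classifier-free guidance, the standard way text prompts steer generation. Everything here is illustrative: `PromptConditionedDenoiser` is a toy stand-in for a real denoising network, the prompt embedding is random rather than coming from a text encoder, and the schedule values are conventional defaults, not any surveyed paper's method.

```python
import torch
import torch.nn as nn

# Toy prompt-conditioned denoiser: predicts the noise in a flattened
# sample given the diffusion timestep and a prompt embedding.
# The architecture is a placeholder, not any published model.
class PromptConditionedDenoiser(nn.Module):
    def __init__(self, data_dim=64, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim + embed_dim + 1, 128),
            nn.SiLU(),
            nn.Linear(128, data_dim),
        )

    def forward(self, x, t, prompt_emb):
        # Concatenate noisy sample, normalized timestep, and prompt embedding.
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x, t_feat, prompt_emb], dim=-1))

@torch.no_grad()
def sample(model, prompt_emb, steps=50, guidance_scale=7.5, data_dim=64):
    """DDPM-style ancestral sampling with classifier-free guidance:
    the noise estimate extrapolates from an unconditional pass
    (null prompt) toward the prompt-conditioned pass."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, data_dim)
    uncond = torch.zeros_like(prompt_emb)  # "null" prompt for guidance
    for i in reversed(range(steps)):
        t = torch.full((1,), i)
        eps_c = model(x, t, prompt_emb)  # conditioned on the prompt
        eps_u = model(x, t, uncond)      # unconditional
        eps = eps_u + guidance_scale * (eps_c - eps_u)

        # Standard DDPM posterior mean; add noise except at the last step.
        coef = betas[i] / torch.sqrt(1.0 - alpha_bars[i])
        mean = (x - coef * eps) / torch.sqrt(alphas[i])
        noise = torch.randn_like(x) if i > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[i]) * noise
    return x

model = PromptConditionedDenoiser()
prompt_emb = torch.randn(1, 32)  # stand-in for a text-encoder embedding
out = sample(model, prompt_emb)
print(out.shape)  # torch.Size([1, 64])
```

In a production system the prompt embedding would come from a frozen text encoder (e.g., CLIP), and the guidance scale trades prompt fidelity against sample diversity; the embedding-manipulation and prompt-free approaches mentioned above operate on exactly this conditioning pathway.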