Conditional Latent Diffusion
Conditional latent diffusion models are a rapidly advancing class of generative models that synthesize diverse data types by running a diffusion process in a learned latent space, guided by conditioning inputs. Current research applies these models to tasks such as image harmonization, neural-network parameter generation, and multi-modal data synthesis (e.g., MRI, human motion, and layout design), typically relying on autoencoders to obtain compact latent representations. The approach produces high-quality, controllable outputs across domains ranging from medical imaging and robotics to computer vision and design.
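Because the summary above rests on a concrete recipe (a pretrained autoencoder supplying the latent space, plus a conditioning-aware denoising network trained with a diffusion objective), a minimal sketch of one training step is given below. It assumes a frozen stand-in encoder and a toy MLP denoiser; the names CondDenoiser and training_step, the tensor shapes, and the noise schedule are illustrative rather than drawn from any specific paper or library.

```python
# Minimal sketch of one conditional latent diffusion training step.
# All module names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Predicts the noise added to a latent, given a timestep and a condition embedding."""
    def __init__(self, latent_dim=64, cond_dim=32, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, t, cond):
        # Concatenate noisy latent, normalized timestep, and condition embedding.
        return self.net(torch.cat([z_t, t, cond], dim=-1))

def training_step(encoder, denoiser, x, cond, alphas_cumprod):
    """One DDPM-style epsilon-prediction step carried out in the autoencoder's latent space."""
    with torch.no_grad():
        z0 = encoder(x)                          # clean latent; the autoencoder stays frozen
    B, T = z0.shape[0], alphas_cumprod.shape[0]
    t = torch.randint(0, T, (B,), device=z0.device)
    noise = torch.randn_like(z0)
    a_bar = alphas_cumprod[t].unsqueeze(-1)      # cumulative alpha_bar_t per sample
    z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * noise   # forward (noising) process
    pred = denoiser(z_t, t.float().unsqueeze(-1) / T, cond)
    return nn.functional.mse_loss(pred, noise)   # predict the injected noise

# Toy usage with random tensors standing in for data and conditioning inputs.
if __name__ == "__main__":
    encoder = nn.Linear(128, 64)                 # stand-in for a pretrained VAE encoder
    denoiser = CondDenoiser()
    betas = torch.linspace(1e-4, 0.02, 1000)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(8, 128)                      # batch of "data"
    cond = torch.randn(8, 32)                    # batch of condition embeddings
    loss = training_step(encoder, denoiser, x, cond, alphas_cumprod)
    loss.backward()
    print(float(loss))
```

At sampling time the same denoiser would be applied iteratively to a random latent with the condition embedding held fixed, and the autoencoder's decoder would map the final latent back to data space.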