Unconditional Diffusion Model
Unconditional diffusion models are generative AI models that synthesize new data samples from a learned probability distribution without any conditioning signal, such as class labels or text prompts, during training or sampling. Current research focuses on improving their efficiency and control: developing training-free guidance methods, imposing constraints during generation, extracting training data from trained models, and adapting unconditional models for conditional generation by exploiting their learned internal representations or smaller, less-trained copies of themselves. These advances are significant because they improve the quality, controllability, and efficiency of such models, with applications spanning image generation, inverse problems, and time series forecasting.
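To make the sampling process concrete, below is a minimal sketch of unconditional ancestral sampling in the style of DDPM (Ho et al., 2020): starting from pure Gaussian noise, the learned model iteratively denoises toward a sample from the data distribution, with no conditioning input at any step. The function name ddpm_sample, the noise-prediction network eps_model, and the linear noise schedule are illustrative assumptions, not the API of any specific library.

```python
import torch

def ddpm_sample(eps_model, shape, num_steps=1000, device="cpu"):
    """Draw unconditional samples via the DDPM reverse process.

    eps_model: assumed pre-trained network predicting the noise
               eps_theta(x_t, t); its signature is illustrative.
    shape:     shape of the batch to sample, e.g. (16, 3, 32, 32).
    """
    # Linear noise schedule beta_1..beta_T, as in the original DDPM paper.
    betas = torch.linspace(1e-4, 0.02, num_steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Start from pure Gaussian noise x_T ~ N(0, I). No labels, prompts,
    # or other conditioning are supplied at any point: the model samples
    # from its learned unconditional data distribution.
    x = torch.randn(shape, device=device)

    for t in reversed(range(num_steps)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch)  # predicted noise eps_theta(x_t, t)

        # Posterior mean of p_theta(x_{t-1} | x_t).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])

        # Inject fresh noise at every step except the final one (t = 0).
        if t > 0:
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
        else:
            x = mean
    return x
```

For example, ddpm_sample(model, (16, 3, 32, 32)) would draw 16 unconditional 32x32 RGB images from a model trained at that resolution. The training-free guidance and constraint-imposition methods mentioned above typically intervene in exactly this loop at sampling time, for instance by nudging the posterior mean with the gradient of a task-specific loss, without ever retraining eps_model.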