Diffusion-Based Baseline
Diffusion-based baselines serve as foundational models for a range of generative tasks, aiming to improve efficiency and output quality over traditional methods. Current research focuses on adapting these baselines to specific applications, such as image-to-image translation, test-time adaptation, and inverse problems, often by employing latent diffusion models and incorporating techniques like distribution matching and severity encoding to improve speed and accuracy. These advances show significant potential for accelerating inference and raising the quality of generated outputs across diverse domains, including image processing, speech enhancement, and text generation.
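To make the shared mechanics of these baselines concrete, here is a minimal sketch of the closed-form forward (noising) process that diffusion models are built on. The linear beta schedule, array shapes, and function names below are illustrative assumptions for this sketch, not the settings of any specific paper summarized here.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and cumulative products of alphas (alpha_bar)."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alpha_bars

def q_sample(x0, t, alpha_bars, rng):
    """Draw x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
_, alpha_bars = make_schedule()
x0 = rng.standard_normal((8, 8))            # toy "image" stand-in
xt, eps = q_sample(x0, 500, alpha_bars, rng)  # noised sample at t = 500
print(xt.shape)
```

A denoising network would then be trained to predict `eps` from `xt` and `t`; latent-diffusion variants run this same process in an autoencoder's latent space to cut inference cost.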
Papers
June 20, 2024
March 16, 2024
September 12, 2023
May 23, 2023
April 6, 2023
December 19, 2022
October 31, 2022