Classifier Guidance
Classifier guidance uses the gradients of a separately trained classifier to steer the generation process of probabilistic models, primarily diffusion models, improving the quality and controllability of generated outputs such as images and speech. Current research focuses on refining guidance techniques within these models, addressing issues such as information loss during denoising, classifier overfitting under limited data, and the reliability of classifier gradients, often by employing adversarially robust classifiers. This approach extends the capabilities of generative models, offering finer control over conditional generation, with potential impact on applications including image synthesis, image manipulation, and text-to-speech.
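To make the mechanism concrete, the core step can be sketched as follows: the diffusion model's noise prediction is shifted by the gradient of the classifier's log-probability for the target class, so sampling drifts toward that class. This is a minimal PyTorch sketch under assumed interfaces; `guided_noise_pred`, `ToyEps`, and `ToyClassifier` are hypothetical stand-ins, not part of any specific paper's released code, and real setups would use trained networks and timestep-dependent scaling.

```python
import torch
import torch.nn as nn

def guided_noise_pred(x, t, y, eps_model, classifier, scale=1.0):
    """Adjust a diffusion model's noise prediction with classifier gradients.

    The prediction is shifted against the gradient of log p(y | x_t),
    steering denoising toward samples the classifier assigns to class y.
    """
    eps = eps_model(x, t)
    # Compute the classifier gradient w.r.t. the (noisy) input x_t.
    x_in = x.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_in, t), dim=-1)
    selected = log_probs[torch.arange(len(y)), y].sum()
    grad = torch.autograd.grad(selected, x_in)[0]
    # Larger scale sharpens conditioning at the cost of sample diversity.
    return eps - scale * grad

# Toy stand-ins so the sketch runs end to end (hypothetical, untrained).
class ToyEps(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.net = nn.Linear(d, d)
    def forward(self, x, t):
        return self.net(x)

class ToyClassifier(nn.Module):
    def __init__(self, d, k):
        super().__init__()
        self.net = nn.Linear(d, k)
    def forward(self, x, t):
        return self.net(x)

x = torch.randn(4, 8)
y = torch.tensor([0, 1, 2, 0])
eps_hat = guided_noise_pred(x, torch.zeros(4), y, ToyEps(8), ToyClassifier(8, 3), scale=2.0)
```

In practice the guidance scale trades fidelity to the condition against diversity, which is one reason gradient reliability and classifier robustness are active research topics.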