Classifier Guidance

Classifier guidance uses the gradients of a trained classifier to steer the sampling process of probabilistic generative models, primarily diffusion models, improving both the quality and the controllability of generated outputs such as images and speech. During sampling, the gradient of the classifier's log-probability for a target class with respect to the noisy sample is added to the model's score (or noise) prediction, biasing generation toward that class. Current research focuses on refining guidance techniques within these models, addressing issues such as information loss during denoising, classifier overfitting with limited data, and the reliability of classifier gradients, often by employing adversarial robustness strategies. This approach offers finer control over conditional generation, with applications including image synthesis, image manipulation, and text-to-speech.
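
The sketch below illustrates the core idea under simplifying assumptions: a hypothetical `score_model(x_t, t)` returning the unconditional score and a hypothetical `classifier(x_t, t)` returning class logits for the noisy input; the `guidance_scale` parameter name is likewise an assumption, not tied to any particular library.

```python
import torch
import torch.nn.functional as F

def classifier_guided_score(x_t, t, y, score_model, classifier, guidance_scale=1.0):
    """Minimal classifier-guidance sketch:
    guided score = grad log p(x_t) + guidance_scale * grad log p(y | x_t).

    score_model and classifier are placeholder callables (assumed interfaces):
    score_model(x_t, t) -> unconditional score, same shape as x_t
    classifier(x_t, t)  -> class logits of shape (batch, num_classes)
    """
    # Unconditional score from the diffusion model.
    uncond_score = score_model(x_t, t)

    # Gradient of the classifier's log-probability of the target class y
    # with respect to the noisy sample x_t.
    x_in = x_t.detach().requires_grad_(True)
    logits = classifier(x_in, t)
    log_probs = F.log_softmax(logits, dim=-1)
    batch_idx = torch.arange(x_in.size(0), device=x_in.device)
    selected = log_probs[batch_idx, y].sum()
    class_grad = torch.autograd.grad(selected, x_in)[0]

    # Larger guidance_scale pushes samples harder toward class y,
    # typically at some cost to sample diversity.
    return uncond_score + guidance_scale * class_grad
```

In a full sampler, this guided score would replace the unconditional score at each denoising step; the classifier must be trained on noisy inputs for its gradients to be informative throughout the diffusion trajectory.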

Papers