Diffusion Distillation

Diffusion distillation aims to reduce the sampling cost of diffusion models, which are powerful but computationally expensive generative models. Current research focuses on distilling these models into single-step or few-step generators using techniques such as distribution matching, consistency distillation, and adversarial training, often leveraging architectures like GANs and deep equilibrium (DEQ) models. This work is significant because it addresses the main computational bottleneck of diffusion models, their long iterative sampling process, enabling wider application in areas such as image-to-image translation, text-to-image generation, and video editing, while also improving the quality and robustness of generated outputs.
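As a concrete illustration of one of these techniques, the sketch below shows the core training loop of consistency distillation in PyTorch, under toy assumptions: the `teacher`, `student`, and `target` networks are placeholder MLPs standing in for real U-Nets, the noise schedule is the simple variance-exploding choice sigma(t) = t, and the boundary-condition parameterization used by full consistency models is omitted for brevity. This is a minimal sketch of the general idea, not any specific paper's implementation.

```python
# Minimal consistency-distillation sketch (toy networks, illustrative names).
import copy
import torch
import torch.nn as nn

def make_net(dim=2, hidden=128):
    # Toy stand-in for a diffusion U-Net: maps (x_t, t) -> prediction.
    return nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(),
                         nn.Linear(hidden, hidden), nn.SiLU(),
                         nn.Linear(hidden, dim))

def forward_net(net, x, t):
    # Append the scalar timestep as an extra input feature.
    return net(torch.cat([x, t[:, None]], dim=1))

teacher = make_net()              # frozen pretrained epsilon-predictor (assumed given)
student = make_net()              # few-step generator being distilled
target = copy.deepcopy(student)   # EMA copy providing the regression target
for p in list(teacher.parameters()) + list(target.parameters()):
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
N, ema = 50, 0.99                 # timestep discretization, EMA rate

for step in range(1000):
    x0 = torch.randn(256, 2)                 # stand-in for real training data
    n = torch.randint(1, N, (256,))
    t, t_prev = n.float() / N, (n - 1).float() / N
    eps = torch.randn_like(x0)
    x_t = x0 + t[:, None] * eps               # VE perturbation with sigma(t) = t

    with torch.no_grad():
        # One Euler step of the probability-flow ODE using the frozen teacher;
        # for sigma(t) = t this reduces to dx/dt = eps_hat(x_t, t).
        eps_hat = forward_net(teacher, x_t, t)
        x_prev = x_t + (t_prev - t)[:, None] * eps_hat
        target_out = forward_net(target, x_prev, t_prev)

    # Enforce self-consistency: the student's output at t must match the
    # target network's output one ODE step earlier.
    loss = (forward_net(student, x_t, t) - target_out).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

    with torch.no_grad():                      # EMA update of the target network
        for p_t, p_s in zip(target.parameters(), student.parameters()):
            p_t.mul_(ema).add_(p_s, alpha=1 - ema)
```

After training, the student maps a noisy input to a clean sample in a single forward pass, which is the source of the speedup; distribution-matching and adversarial variants keep this structure but replace the squared-error regression loss with a divergence or discriminator objective.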

Papers