Paper ID: 2401.04585

EDA-DM: Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models

Xuewen Liu, Zhikai Li, Junrui Xiao, Qingyi Gu

Diffusion models have achieved great success in image generation tasks through iterative noise estimation. However, the lengthy denoising process and the complexity of the underlying neural networks hinder their use in low-latency, real-world applications. Quantization can effectively reduce model complexity, and post-training quantization (PTQ), which does not require fine-tuning, is highly promising for compressing and accelerating diffusion models. Unfortunately, we find that, due to the highly dynamic distribution of activations across denoising steps, existing PTQ methods for diffusion models suffer from distribution mismatch at both the calibration-sample level and the reconstruction-output level, which leaves their performance far from satisfactory, especially in low-bit cases. In this paper, we propose Enhanced Distribution Alignment for Post-Training Quantization of Diffusion Models (EDA-DM) to address these issues. Specifically, at the calibration-sample level, we select calibration samples based on their density and variety in the latent space, thereby aligning their distribution with that of the overall samples; at the reconstruction-output level, we augment the block-reconstruction loss with layer-wise losses, aligning the outputs of the quantized and full-precision models at different network granularities. Extensive experiments demonstrate that EDA-DM significantly outperforms existing PTQ methods across various models (DDIM, LDM-4, LDM-8, Stable-Diffusion) and datasets (CIFAR-10, LSUN-Bedroom, LSUN-Church, ImageNet, MS-COCO).
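As a rough illustration of the reconstruction-output alignment described in the abstract, the sketch below combines a block-level reconstruction loss with layer-wise losses in a PyTorch-style setting. The weighting factor `lambda_layer`, the use of `.children()` to enumerate sub-layers, and the exact loss form are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(fp_block, q_block, x, lambda_layer=0.1):
    """Illustrative loss aligning a quantized block with its full-precision
    counterpart at both block and layer granularity (a sketch, not EDA-DM's
    exact objective).

    fp_block / q_block: full-precision and quantized versions of one network
    block, assumed to expose the same ordered sub-layers via `.children()`.
    lambda_layer: hypothetical weight balancing the layer-wise terms.
    """
    # Block-level term: match the final outputs of the two blocks.
    fp_out = fp_block(x)
    q_out = q_block(x)
    loss = F.mse_loss(q_out, fp_out)

    # Layer-level terms: match intermediate outputs layer by layer,
    # feeding each layer the full-precision activation so quantization
    # errors do not compound across layers.
    h = x
    for fp_layer, q_layer in zip(fp_block.children(), q_block.children()):
        fp_h = fp_layer(h)
        q_h = q_layer(h)
        loss = loss + lambda_layer * F.mse_loss(q_h, fp_h)
        h = fp_h  # propagate the full-precision activation
    return loss
```

How the layer terms are weighted against the block term, and which layers contribute, is determined by the method itself in the paper; `lambda_layer` here is only a placeholder.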

Submitted: Jan 9, 2024