Paper ID: 2411.09510 • Published Nov 14, 2024
Communication Compression for Tensor Parallel LLM Inference
Jan Hansen-Palmus, Michael Truong-Le, Oliver Hausdörfer, Alok Verma
Large Language Models (LLMs) have pushed the frontier of artificial
intelligence but comprise hundreds of billions of parameters and
operations. For faster inference latency, LLMs are deployed on multiple
hardware accelerators through various Model Parallelism strategies. Our paper
looks into the details of one such strategy, Tensor Parallelism, and proposes
to reduce latency by compressing inter-accelerator communication. We leverage
fine-grained quantization techniques to compress selected activations by
3.5-4.5x. Our proposed method leads to up to a 2x reduction in
time-to-first-token (TTFT) with negligible model performance degradation.
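The abstract does not spell out the quantization scheme, but the "fine-grained" wording and the 3.5-4.5x ratio are consistent with group-wise low-bit quantization, where each small group of activation values shares one scale. Below is a minimal illustrative sketch (not the paper's actual method): 4-bit symmetric quantization with an fp16 scale per group of 64 values, which yields roughly 1024/272 ≈ 3.8x compression versus fp16 activations. The group size, bit width, and function names are assumptions for illustration.

```python
import numpy as np

def quantize_groupwise(x, group_size=64, bits=4):
    """Fine-grained (group-wise) symmetric quantization.

    Each contiguous group of `group_size` values shares one fp16 scale;
    values are rounded to signed integers in [-(2**(bits-1)), 2**(bits-1)-1].
    Assumed parameters -- the paper's exact scheme may differ.
    """
    flat = x.astype(np.float32).reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1
    scales = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero groups
    q = np.clip(np.round(flat / scales), -qmax - 1, qmax).astype(np.int8)
    # Stored as int8 here for clarity; a real kernel would pack two
    # 4-bit values per byte to realize the full compression.
    return q, scales.astype(np.float16)

def dequantize_groupwise(q, scales, shape):
    """Reconstruct an approximate fp32 tensor from codes and scales."""
    return (q.astype(np.float32) * scales).reshape(shape)

# Toy activation tensor standing in for a hidden-state slice.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4096)).astype(np.float16)

q, scales = quantize_groupwise(x)
x_hat = dequantize_groupwise(q, scales, x.shape)

# Compression vs. fp16: 4-bit payload + one fp16 scale per 64 values.
orig_bits = x.size * 16
comp_bits = q.size * 4 + scales.size * 16
print(f"compression: {orig_bits / comp_bits:.2f}x")
```

In a tensor-parallel setting, such quantization would be applied to the activations just before the inter-accelerator all-reduce/all-gather and dequantized on arrival, so the wire traffic shrinks by the compression ratio while compute stays in full precision.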