Paper ID: 2311.04951

Leveraging Speculative Sampling and KV-Cache Optimizations Together for Generative AI using OpenVINO

Haim Barad, Ekaterina Aidova, Yury Gorbachev

Inference optimizations are critical for improving user experience and reducing infrastructure costs and power consumption. In this article, we illustrate a form of dynamic execution known as speculative sampling, which reduces the overall latency of text generation, and we compare it with standard autoregressive sampling. This can be combined with model-based optimizations (e.g., quantization) to provide an optimized solution. Both sampling methods make use of KV caching. A Jupyter notebook and some sample executions are provided.
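
For intuition, below is a minimal sketch of the speculative sampling accept/reject loop the abstract refers to (as described by Leviathan et al. and Chen et al., 2023). The `draft_probs` and `target_probs` functions are hypothetical stand-ins for a small draft model and a large target model; this is not the paper's OpenVINO implementation, which lives in the accompanying Jupyter notebook.

```python
# Minimal sketch of speculative sampling: a small draft model proposes K
# tokens, the large target model verifies them, and each draft token is
# accepted with probability min(1, p_target / p_draft). The toy models
# below are illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8  # toy vocabulary size
K = 4      # number of tokens the draft model proposes per step

def draft_probs(ctx):
    # Stand-in for the small draft model: context -> next-token distribution.
    logits = rng.normal(size=VOCAB)
    return np.exp(logits) / np.exp(logits).sum()

def target_probs(ctx):
    # Stand-in for the large target model.
    logits = rng.normal(size=VOCAB)
    return np.exp(logits) / np.exp(logits).sum()

def speculative_step(ctx):
    # 1. Draft model autoregressively proposes K candidate tokens.
    drafts, q = [], []
    for _ in range(K):
        qi = draft_probs(ctx + drafts)
        drafts.append(int(rng.choice(VOCAB, p=qi)))
        q.append(qi)
    # 2. Target model scores all K+1 positions (in practice, one batched pass).
    p = [target_probs(ctx + drafts[:i]) for i in range(K + 1)]
    # 3. Accept each draft token with probability min(1, p/q); on the first
    #    rejection, resample from the residual distribution and stop.
    out = []
    for i, tok in enumerate(drafts):
        if rng.random() < min(1.0, p[i][tok] / q[i][tok]):
            out.append(tok)                      # accepted: keep draft token
        else:
            residual = np.maximum(p[i] - q[i], 0.0)
            residual /= residual.sum()
            out.append(int(rng.choice(VOCAB, p=residual)))
            return out                           # rejected: end this step
    # 4. All K drafts accepted: sample one bonus token from the target model.
    out.append(int(rng.choice(VOCAB, p=p[K])))
    return out

tokens = [0]  # prompt
while len(tokens) < 32:
    tokens += speculative_step(tokens)
print(tokens)
```

In a real implementation, both the draft and target models would reuse their KV caches across steps, so each verification pass only computes attention for the newly drafted tokens rather than re-encoding the full sequence.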

Submitted: Nov 8, 2023