Paper ID: 2406.10816

Optimization of Armv9 architecture general large language model inference performance based on Llama.cpp

Longhao Chen, Yina Zhao, Qiangjun Xie, Qinghua Sheng

This article optimizes the inference performance of the Qwen-1.8B model by applying Int8 quantization, vectorizing several operators in llama.cpp, and modifying the compilation script to raise the compiler optimization level. On the Yitian 710 experimental platform, prefill performance is increased by 1.6 times, decoding performance is increased by 24 times, memory usage is reduced to 1/5 of the original, and the accuracy loss is nearly negligible.

Submitted: Jun 16, 2024