Paper ID: 2503.16893 • Published Mar 21, 2025
Improving the End-to-End Efficiency of Offline Inference for Multi-LLM Applications Based on Sampling and Simulation
Jingzhi Fang, Yanyan Shen, Yue Wang, Lei Chen
HKUST•Shanghai Jiao Tong University•Shenzhen Institute of Computing Sciences•HKUST(GZ)
As large language models (LLMs) have shown great success in many tasks, they
are used in various applications. While many works have focused on the
efficiency of single-LLM applications (e.g., offloading, request scheduling,
parallelism strategy selection), multi-LLM applications have received less attention,
particularly in offline inference scenarios. In this work, we aim to improve
the offline end-to-end inference efficiency of multi-LLM applications in the
single-node multi-GPU environment. The problem involves two key decisions: (1)
determining which LLMs to run concurrently each time (we may not run all the
models at the same time), and (2) selecting a parallelism strategy to use for
each LLM. This problem is NP-hard. Naive solutions may not work well because
the running time for a model to complete a set of requests depends on the
request workload and the selected parallelism strategy, and naive solutions lack an
accurate model of this running time. As the LLM output lengths are unknown
before running, to estimate the model running time, we propose a
sampling-then-simulation method which first estimates the output lengths by
sampling from an empirical cumulative distribution function obtained from a large dataset
in advance, and then simulates the LLM inference process accordingly. Based on
the simulation, we estimate the per-iteration latencies to get the total
latency. A greedy method is proposed to optimize the scheduling of the LLMs in
the application across the GPUs. We then propose a framework SamuLLM which
contains two phases: planning, which calls the greedy method for an application,
and running, which runs the application and dynamically adjusts the model
scheduling based on runtime information. Experiments on three applications and
a mixed application show that SamuLLM can achieve 1.0-2.4× end-to-end
speedups compared to the competitors.
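The sampling-then-simulation idea can be illustrated with a minimal sketch: sample each request's output length by inverse-transform sampling from an empirical CDF of observed lengths, then simulate a continuous-batching decode loop and sum per-iteration latencies. The function names and the `per_iter_latency` cost model below are hypothetical illustrations, not the paper's actual implementation.

```python
import random

def build_empirical_cdf(observed_lengths):
    """Sorted observed output lengths; quantiles of this list form an
    empirical CDF (assumed built offline from a large reference dataset)."""
    return sorted(observed_lengths)

def sample_output_length(sorted_lengths, rng):
    """Inverse-transform sampling: draw u ~ Uniform(0, 1) and return the
    observed length at that empirical quantile."""
    u = rng.random()
    idx = min(int(u * len(sorted_lengths)), len(sorted_lengths) - 1)
    return sorted_lengths[idx]

def simulate_total_latency(num_requests, sorted_lengths, per_iter_latency, rng):
    """Toy simulation of iteration-level (continuous) batching: every active
    request emits one token per iteration, and the per-iteration cost is a
    function of the current batch size (a hypothetical cost model)."""
    remaining = [sample_output_length(sorted_lengths, rng)
                 for _ in range(num_requests)]
    total = 0.0
    while any(r > 0 for r in remaining):
        batch = sum(1 for r in remaining if r > 0)
        total += per_iter_latency(batch)
        remaining = [r - 1 if r > 0 else 0 for r in remaining]
    return total
```

In a scheduler, an estimate like this could be computed for each candidate (model, parallelism strategy) pair, giving the greedy method the running-time predictions it needs to compare placements.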