Paper ID: 2502.17282 • Published Feb 24, 2025
Capability Instruction Tuning: A New Paradigm for Dynamic LLM Routing
Yi-Kai Zhang, De-Chuan Zhan, Han-Jia Ye
Nanjing University
Large Language Models (LLMs) have demonstrated human-like instruction-following abilities, particularly those exceeding 100 billion parameters. The combined capabilities of several smaller, resource-friendly LLMs can cover most of the instructions that larger LLMs excel at. In this work, we explore how to route each instruction to the best-performing LLM to achieve better overall performance. We develop a new paradigm that constructs capability instructions from a model capability representation, the user instruction, and a performance inquiry prompt, and uses them to assess whether a candidate model can handle the instruction. To learn from capability instructions, we introduce a new end-to-end framework called Model Selection with Aptitude Test (Model-SAT), which generates positive and negative samples according to which instructions different models handle well or struggle with. Model-SAT uses a model capability encoder that extends its model representation to a lightweight LLM. Our experiments show that Model-SAT understands the performance dimensions of candidate models and provides the probability that each can handle a given instruction. Additionally, during deployment, a new model can quickly infer its aptitude test results across 50 tasks, each with 20 shots. Model-SAT achieves state-of-the-art model routing without running candidate inference, including in real-world scenarios where new models are released. The code is available at
this https URL
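
To make the capability-instruction idea concrete, below is a minimal sketch (not the authors' implementation) of how a router might assemble a capability instruction for each candidate model and select the one with the highest predicted success probability. The prompt template, the MODEL_PROFILES entries, and the score_capability_instruction scorer are illustrative assumptions; in the paper, the scoring is done by the trained Model-SAT router rather than a hand-written function.

```python
# Minimal sketch of capability-instruction routing (illustrative; not the
# authors' implementation). The prompt template, model profiles, and the
# scorer passed to route() are assumptions for exposition.

from typing import Callable

# Hypothetical capability representations: a short profile per candidate,
# e.g. distilled from its aptitude-test results on 50 tasks x 20 shots.
MODEL_PROFILES = {
    "small-math-llm": "strong on arithmetic and symbolic reasoning, weak on code",
    "small-code-llm": "strong on Python and debugging, weak on long-form writing",
}

def build_capability_instruction(profile: str, user_instruction: str) -> str:
    """Combine a model capability representation, the user instruction, and a
    performance inquiry prompt into one capability instruction."""
    return (
        f"Model capability: {profile}\n"
        f"User instruction: {user_instruction}\n"
        "Performance inquiry: Can this model complete the instruction well?"
    )

def route(
    user_instruction: str,
    score_capability_instruction: Callable[[str], float],
) -> str:
    """Return the candidate predicted most likely to succeed.
    `score_capability_instruction` stands in for the trained router
    (e.g. a lightweight LLM with a model capability encoder)."""
    scores = {
        name: score_capability_instruction(
            build_capability_instruction(profile, user_instruction)
        )
        for name, profile in MODEL_PROFILES.items()
    }
    return max(scores, key=scores.get)

# Example usage with a dummy scorer that favors the code-oriented candidate.
if __name__ == "__main__":
    dummy_scorer = lambda ci: 0.9 if "strong on Python" in ci else 0.4
    print(route("Write a Python function that reverses a list.", dummy_scorer))
```

The key property this sketch illustrates is that routing requires no inference from the candidate models themselves: the decision is made from their capability representations and the instruction alone, which is why a newly released model only needs a quick aptitude test before it can be routed to.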