Paper ID: 2501.18824 • Published Jan 31, 2025
Memory-Efficient Fine-Tuning of Transformers via Token Selection
Antoine Simoulin, Namyong Park, Xiaoyi Liu, Grey Yang
Fine-tuning provides an effective means to specialize pre-trained models for
various downstream tasks. However, fine-tuning often incurs high memory
overhead, especially for large transformer-based models, such as LLMs. While
existing methods may reduce certain parts of the memory required for
fine-tuning, they still require caching all intermediate activations computed
in the forward pass to update weights during the backward pass. In this work,
we develop TokenTune, a method to reduce memory usage, specifically the memory
to store intermediate activations, in the fine-tuning of transformer-based
models. During the backward pass, TokenTune approximates the gradient
computation by backpropagating through just a subset of input tokens. Thus,
with TokenTune, only a subset of intermediate activations are cached during the
forward pass. Also, TokenTune can be easily combined with existing methods like
LoRA, further reducing the memory cost. We evaluate our approach on pre-trained
transformer models with up to billions of parameters, considering the
performance on multiple downstream tasks such as text classification and
question answering in a few-shot learning setup. Overall, TokenTune achieves
performance on par with full fine-tuning or representative memory-efficient
fine-tuning methods, while greatly reducing the memory footprint, especially
when combined with other methods with complementary memory reduction
mechanisms. We hope that our approach will facilitate the fine-tuning of large
transformers, whether to specialize them for specific domains or to co-train
them with other neural components of a larger system. Our code is available at
this https URL.
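
As a rough, assumption-laden sketch of the core idea (not the authors' implementation), the snippet below shows how backpropagation can be restricted to a subset of token positions for a position-wise layer in PyTorch: unselected tokens are processed under `torch.no_grad()`, so their intermediate activations are never cached, while the k selected tokens follow the normal differentiable path. The function name, the random selection strategy, and the restriction to a position-wise layer are illustrative assumptions.

```python
import torch
import torch.nn as nn


def tokenwise_selective_forward(layer: nn.Module, hidden: torch.Tensor, k: int) -> torch.Tensor:
    """Illustrative sketch (not the authors' code): apply a position-wise
    `layer` so that only `k` randomly selected token positions participate
    in the backward pass, and only their activations are cached.

    hidden: (batch, seq_len, dim) input hidden states.
    """
    _, seq_len, _ = hidden.shape
    # Pick k token positions that will contribute to gradient computation.
    mask = torch.zeros(seq_len, dtype=torch.bool, device=hidden.device)
    mask[torch.randperm(seq_len, device=hidden.device)[:k]] = True

    out = torch.zeros_like(hidden)

    # Unselected tokens: forward only; no intermediate activations are stored.
    with torch.no_grad():
        out[:, ~mask] = layer(hidden[:, ~mask])

    # Selected tokens: normal differentiable forward; their inputs are cached
    # and drive the (approximate) weight gradients during backward.
    out[:, mask] = layer(hidden[:, mask])
    return out


if __name__ == "__main__":
    torch.manual_seed(0)
    layer = nn.Linear(16, 16)
    hidden = torch.randn(2, 8, 16, requires_grad=True)

    out = tokenwise_selective_forward(layer, hidden, k=2)
    out.sum().backward()

    # Gradients w.r.t. the input are nonzero only at the 2 selected positions.
    print((hidden.grad.abs().sum(dim=-1) > 0).sum(dim=-1))  # tensor([2, 2])
```

Note that in a full transformer block, attention mixes all positions in the forward pass, so handling an entire model is more involved than this per-layer illustration; the snippet only conveys how restricting the backward pass to selected tokens reduces the activations that must be cached.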