Parameter-Efficient Adaptation

Parameter-efficient adaptation (PEA) adapts large pre-trained models to new tasks by modifying only a small subset of their parameters, reducing both computational cost and storage requirements. Current research explores a range of PEA techniques, including low-rank adaptation (LoRA), adapter modules, and prompt tuning, typically applied to transformer-based architectures in domains such as vision, language, and speech. The field matters because it makes massive pre-trained models practical in resource-constrained settings and across diverse downstream tasks, improving both efficiency and accessibility.
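As a concrete illustration, below is a minimal PyTorch sketch of the LoRA idea: a frozen linear layer augmented with a trainable low-rank update, so only the two small factor matrices are learned. The class name, initialization scheme, and hyperparameter defaults here are illustrative assumptions, not the API of any particular library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    The adapted forward pass computes W x + (alpha / r) * B A x, where
    W is the frozen pre-trained weight and only A (r x in_features)
    and B (out_features x r) are trained. Illustrative sketch only.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / r
        # A starts small and random, B starts at zero, so the low-rank
        # update is initially zero and training begins from the base model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: adapt a single 768x768 projection; only the low-rank
# factors (2 * r * 768 parameters) are trainable.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # ~12k of ~600k parameters
```

At rank 8, the trainable parameter count is roughly 2% of the full layer, which is the source of the storage and compute savings the paragraph above describes; adapter modules and prompt tuning achieve similar ratios by inserting small bottleneck layers or learnable input tokens instead of low-rank weight updates.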

Papers