Parameter Efficient Adaptation
Parameter-efficient adaptation (PEA) adapts large pre-trained models to new tasks by modifying only a small subset of their parameters, sharply reducing the compute and storage required for fine-tuning. Current research explores techniques such as low-rank adaptation (LoRA), adapter modules, and prompt tuning, typically applied to transformer-based architectures across domains like vision, language, and speech. The field matters because it makes massive pre-trained models practical in resource-constrained settings and across many downstream tasks, improving both efficiency and accessibility.
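To make the idea concrete, here is a minimal sketch of the LoRA technique mentioned above, written in plain NumPy (the class and parameter names are illustrative, not from any specific library): the frozen weight W is augmented by a trainable low-rank update scaled by alpha/r, so only the two small factors are trained.

```python
import numpy as np

class LoRALinear:
    """Sketch of a linear layer with a LoRA update: W_eff = W + (alpha/r) * B @ A."""

    def __init__(self, d_in, d_out, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pre-trained weight (stands in for a loaded checkpoint).
        self.W = rng.standard_normal((d_out, d_in))
        # Trainable low-rank factors: A is small-random, B is zero-initialized
        # so that W_eff == W before any adaptation steps.
        self.A = rng.standard_normal((r, d_in)) * 0.01
        self.B = np.zeros((d_out, r))
        self.scale = alpha / r

    def forward(self, x):
        # x: (batch, d_in) -> (batch, d_out)
        return x @ (self.W + self.scale * (self.B @ self.A)).T

    def num_trainable(self):
        # Only A and B are trained; W stays frozen.
        return self.A.size + self.B.size
```

The efficiency gain comes from the parameter count: training A and B costs r * (d_in + d_out) parameters instead of the d_in * d_out needed for full fine-tuning, e.g. 192 versus 512 trainable parameters for a 16-by-32 layer at rank 4.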