Parameter-Efficient Transfer
Parameter-efficient transfer learning (PETL) aims to adapt large pre-trained models to new tasks using only a small number of additional trainable parameters, avoiding the high computational and storage cost of full fine-tuning. Current research focuses on lightweight adapter architectures (e.g., LoRA) and alternative tuning strategies (e.g., prompt tuning, side networks, multiple-exit tuning) that improve both accuracy and inference efficiency across model types, including vision transformers and language models. The field matters because it enables powerful, large models to be deployed on resource-constrained devices and adapted efficiently to diverse downstream tasks, with impact on both research and practical applications.
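As a concrete illustration of the adapter idea behind LoRA, the sketch below wraps a frozen pre-trained linear layer with a trainable low-rank update W + (alpha/r)·BA, so only the small factors are updated during adaptation. This is a minimal sketch under assumed names (LoRALinear, rank, alpha), not the API of any specific library or of the papers listed here.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank correction (LoRA-style sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # Only these low-rank factors are trained for the new task.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update applied to the same input.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    out = layer(torch.randn(2, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable params: {trainable} / {total}")  # a small fraction of the full layer
```

Because the low-rank update is additive, it can be merged into the frozen weight after training, which is why adapters of this kind can add little or no inference overhead.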