Parameter-Efficient Transfer
Parameter-efficient transfer learning (PETL) aims to adapt large pre-trained models to new tasks using minimal additional parameters, addressing the high computational cost of full fine-tuning. Current research focuses on developing novel adapter architectures (e.g., LoRA, multiple-exit tuning) and training strategies (e.g., prompt tuning, side networks) to improve both accuracy and inference efficiency across various model types, including vision transformers and language models. This field is significant because it enables the deployment of powerful, large models on resource-constrained devices and facilitates more efficient adaptation to diverse downstream tasks, impacting both research and practical applications.
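As a concrete illustration of the adapter idea, the sketch below shows a LoRA-style low-rank update applied to a frozen linear layer. This is a minimal sketch, assuming PyTorch; the class name, rank, and scaling hyperparameters are illustrative choices, not the API of any particular LoRA library.

```python
# Minimal LoRA-style adapter around a frozen nn.Linear (PyTorch assumed).
# Only the low-rank factors A and B are trained; the pre-trained weight stays frozen.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights are not updated

        # A projects the input down to `rank` dimensions, B projects back up.
        # B starts at zero so the adapted layer initially matches the base layer.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Usage: only the LoRA factors (a small fraction of the parameters) require gradients.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} of {total}")
```

The design choice that makes this parameter-efficient is that the update has rank at most `rank`, so the number of trainable parameters grows with `rank * (in_features + out_features)` rather than with the full weight matrix.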
Papers
18 papers, May 24, 2023 – September 24, 2024