Parameter-Efficient Tuning
Parameter-efficient tuning (PET) focuses on adapting large pre-trained models to specific tasks by modifying only a small fraction of their parameters, thereby reducing computational costs and storage requirements. Current research emphasizes techniques like low-rank adaptation (LoRA), adapters, and prompt tuning, often applied within transformer architectures and increasingly explored for multimodal models and continual learning scenarios. This approach is significant because it enables the deployment of powerful models on resource-constrained devices and facilitates more efficient and scalable model personalization across diverse applications, including natural language processing, computer vision, and medical image analysis.
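To make the idea concrete, the sketch below shows one of the techniques named above, low-rank adaptation (LoRA), in minimal PyTorch: the pre-trained weight matrix is frozen and only a small low-rank update is trained. The class name `LoRALinear` and the hyperparameters (`r`, `alpha`, layer size 768) are illustrative assumptions, not taken from any specific paper in this collection.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Low-rank factors: A is small-random, B is zero so the update starts as a no-op.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pre-trained path plus the trainable low-rank path.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Example: adapt a single 768x768 projection; only A and B are trainable.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

In this hypothetical setup roughly 2% of the layer's parameters are updated, which illustrates why PET methods cut training memory and per-task storage: only the small adapter weights need to be optimized and saved, while the backbone is shared across tasks.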
Papers
Dated entries from February 13, 2023 through August 11, 2023.