Parameter-Efficient Tuning
Parameter-efficient tuning (PET) adapts large pre-trained models to specific tasks by modifying only a small fraction of their parameters, sharply reducing compute and storage costs relative to full fine-tuning. Current research emphasizes techniques such as low-rank adaptation (LoRA), adapters, and prompt tuning, typically applied within transformer architectures and increasingly explored for multimodal models and continual-learning scenarios. The approach matters because it enables powerful models to run on resource-constrained devices and supports efficient, scalable model personalization across diverse applications, including natural language processing, computer vision, and medical image analysis.
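To make the LoRA idea concrete, here is a minimal NumPy sketch of a single linear layer augmented with a trainable low-rank update. All dimensions, the scaling convention, and the initialization are illustrative assumptions, not a reference implementation; in practice the factors would be trained by gradient descent while the base weight stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a 768x768 projection with rank-8 update.
d_out, d_in, r, alpha = 768, 768, 8, 16

# Frozen pre-trained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in)) * 0.02

# Trainable low-rank factors. B starts at zero, so the adapted
# layer initially computes exactly the pre-trained projection.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    # Base projection plus the scaled low-rank correction B @ A @ x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# Only A and B are trainable: 2*r*d parameters vs. d*d for the full matrix.
full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.3%}")
```

With rank 8 on a 768x768 layer, the trainable factors hold roughly 2% of the full matrix's parameters, which is why a task-specific LoRA checkpoint is small enough to store and swap per task.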