Parameter-Efficient Tuning
Parameter-efficient tuning (PET) focuses on adapting large pre-trained models to specific tasks by modifying only a small fraction of their parameters, thereby reducing computational costs and storage requirements. Current research emphasizes techniques like low-rank adaptation (LoRA), adapters, and prompt tuning, often applied within transformer architectures and increasingly explored for multimodal models and continual learning scenarios. This approach is significant because it enables the deployment of powerful models on resource-constrained devices and facilitates more efficient and scalable model personalization across diverse applications, including natural language processing, computer vision, and medical image analysis.
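The core idea behind LoRA, the most widely used of these techniques, can be sketched in a few lines: a frozen weight matrix W is augmented with a trainable low-rank update (alpha/r) * B @ A, so only r * (d_in + d_out) parameters are trained instead of d_in * d_out. The sketch below is a minimal, pure-Python illustration (function names and the toy dimensions are our own, not from any particular library); B is zero-initialized, as in standard LoRA, so the adapted layer starts out identical to the frozen one.

```python
# Minimal LoRA sketch (illustrative, pure Python).
# Effective weight: W + (alpha / r) * B @ A, where
#   W : d_out x d_in  (frozen pre-trained weight)
#   A : r x d_in      (trainable, small random init in practice)
#   B : d_out x r     (trainable, zero init => no change at start)

def matmul(X, Y):
    """Naive matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass of a linear layer with a LoRA update.

    x is a column vector (d_in x 1). The frozen path W @ x is combined
    with the scaled low-rank path (alpha / r) * B @ (A @ x).
    """
    base = matmul(W, x)                 # frozen pre-trained path
    delta = matmul(B, matmul(A, x))     # low-rank trainable path
    scale = alpha / r
    return [[b + scale * d for b, d in zip(br, dr)]
            for br, dr in zip(base, delta)]
```

With rank r = 1 on a 2x2 layer, the trainable factors hold 4 values instead of the 4 of the full matrix; the savings become dramatic at realistic sizes (e.g. r = 8 on a 4096x4096 projection trains ~65K parameters instead of ~16.8M), which is why a LoRA checkpoint for a task is a small fraction of the full model.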