Low-Rank Adaptation
Low-rank adaptation (LoRA) is a parameter-efficient fine-tuning technique for large pre-trained models: it freezes the pre-trained weights and injects small trainable low-rank matrices into selected layers, reducing computational cost and memory requirements while maintaining performance on downstream tasks. Current research focuses on improving LoRA's efficiency and effectiveness through methods such as tensor decomposition, adaptive parameter allocation, and novel aggregation strategies for federated learning, often applied to transformer-based language and vision models. This approach holds significant promise for making large-model fine-tuning more accessible and for enabling personalized and specialized models across diverse applications with limited resources.
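As an illustration of the core idea, the sketch below wraps a frozen PyTorch linear layer with a trainable low-rank update W + (alpha/r)·BA. The class name, rank r, and scaling alpha here are illustrative defaults, not taken from any particular paper listed on this page.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank correction (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B starts at zero so the adapted layer initially matches the base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update; only lora_A and lora_B receive gradients.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: adapt a 768-dimensional projection with rank-8 factors.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```

Because only the two small factor matrices are trained, the number of updated parameters scales with r rather than with the full weight dimensions, which is what makes the approach parameter-efficient.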