Finetuning Method
Finetuning, the process of adapting pre-trained large language models (LLMs) to specific tasks, is a crucial area of research aimed at improving efficiency and performance. Current efforts center on parameter-efficient methods such as Low-Rank Adaptation (LoRA) and its variants, which update only a small fraction of model parameters, alongside techniques such as probabilistic finetuning and active learning that optimize data usage and reduce computational cost. These advances are significant because they enable powerful LLMs to run on resource-constrained devices and support continual learning, benefiting both research and practical applications that require efficient model adaptation.
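The core LoRA idea can be illustrated with a short sketch: the pretrained weight matrix is frozen, and a trainable low-rank update is added on top. This is a minimal numpy illustration under assumed dimensions and naming conventions, not the API of any particular library:

```python
import numpy as np

# Sketch of Low-Rank Adaptation (LoRA): instead of updating the full
# weight matrix W (d_out x d_in), train two small matrices
# A (r x d_in) and B (d_out x r) with rank r << min(d_out, d_in).
# The adapted forward pass computes W @ x + (alpha / r) * B @ A @ x.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # Base output plus the low-rank update, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)
# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(y, W @ x)

# Trainable parameters: r * (d_in + d_out) vs d_in * d_out for full finetuning.
print(r * (d_in + d_out), "vs", d_in * d_out)  # 512 vs 4096
```

The zero initialization of B is the standard trick that makes the adapted model behave identically to the pretrained one at the start of training, while the parameter count shows why only a small subset of weights needs to be stored per task.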