Run LoRA Run
Run LoRA Run focuses on improving the efficiency and effectiveness of Low-Rank Adaptation (LoRA), a technique for fine-tuning large language models (LLMs) by training only a small set of additional low-rank parameters while the base weights stay frozen. Current research emphasizes enhancing LoRA's performance in continual learning scenarios, mitigating catastrophic forgetting, and extending its use to federated learning and privacy-preserving settings. These advances aim to make LLMs more adaptable and resource-efficient, and to enable accurate fine-tuning on resource-constrained devices, with implications for both research and practical deployment of LLMs.
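To make the core idea concrete, below is a minimal PyTorch sketch of a LoRA update: the frozen base weight W is augmented with a trainable low-rank product scaled by alpha/r, so the output becomes Wx + (alpha/r)·BAx. The `LoRALinear` wrapper name and the hyperparameter values are illustrative, not drawn from any specific paper on this page.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: frozen base layer plus a trainable
    low-rank update, i.e. W + (alpha / r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank factors are trained

        # A gets small Gaussian noise, B starts at zero, so the adapter is
        # initially a no-op and the wrapped model's outputs are unchanged.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


# Usage: wrap an existing projection; only r * (d_in + d_out) parameters
# train, a tiny fraction of the d_in * d_out parameters in the base layer.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
out = layer(torch.randn(4, 768))
```

Because only the factors A and B are updated, the optimizer state and gradient memory shrink accordingly, which is what makes LoRA attractive for continual, federated, and on-device fine-tuning.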