Safe LoRA
Safe LoRA research focuses on mitigating the safety risks that arise when large language models (LLMs) are fine-tuned with the parameter-efficient Low-Rank Adaptation (LoRA) technique. Current work develops robust LoRA variants for federated learning, multi-modal applications, and various downstream tasks, often incorporating techniques such as alternating minimization, adaptive parameter allocation, and dropout regularization to improve performance and efficiency. This line of work matters because it enables safe, efficient LLM adaptation, broadening the use of powerful LLMs across diverse applications while limiting potential harms.
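To make the underlying mechanism concrete, here is a minimal sketch of the low-rank update that LoRA applies to a frozen weight matrix: the base weight W is left untouched and a trainable product (alpha / r) * B @ A is added on top. This assumes PyTorch; the class name `LoRALinear` and the hyperparameter values are illustrative, not taken from any particular paper in this collection.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: frozen base layer plus a trainable
    low-rank residual (alpha / r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the low-rank factors train.
        for p in self.base.parameters():
            p.requires_grad = False
        # A is small random, B is zero, so the wrapped layer initially
        # computes exactly the same function as the base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing projection layer and fine-tune only A and B.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 10, 768))  # shape: (2, 10, 768)
```

Because only A and B (2 * r * d parameters per layer instead of d^2) are updated, fine-tuning is cheap, but the same low-rank update can also shift the model away from its safety alignment, which is the risk the methods surveyed here aim to control.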