Federated Tuning
Federated tuning focuses on efficiently fine-tuning large language models (LLMs) across decentralized datasets while preserving data privacy. Current research emphasizes reducing the substantial communication overhead inherent in this setting, exploring techniques such as gradient compression, low-rank adaptation (LoRA), and zeroth-order optimization. These advances aim to improve the scalability and performance of federated learning for LLMs, enabling broader deployment of powerful language models while respecting user data privacy and reducing the computational burden on individual devices. The resulting gains in efficiency and privacy have significant implications for both NLP research and real-world applications.
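To make the communication-efficiency argument concrete, here is a minimal sketch of the zeroth-order idea mentioned above (in the spirit of SPSA/MeZO-style methods): a client perturbs its parameters along a seeded random direction and evaluates the loss twice, so the entire update can be reconstructed from just a random seed and one scalar, rather than a full gradient vector. The function name `zo_step`, the toy quadratic objective, and all hyperparameter values are illustrative assumptions, not any specific paper's implementation.

```python
import random

def zo_step(params, loss_fn, seed, eps=1e-3, lr=0.05):
    """One zeroth-order (SPSA-style) update step.

    Returns (updated_params, g), where g is the scalar projected
    gradient. In a federated setting, a client would transmit only
    (seed, g); the server regenerates the perturbation direction
    from the seed and applies the identical update.
    """
    # Seeded random perturbation direction (illustrative: Gaussian).
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in params]

    # Two forward passes: loss at params +/- eps * z.
    plus = [p + eps * zi for p, zi in zip(params, z)]
    minus = [p - eps * zi for p, zi in zip(params, z)]

    # Finite-difference estimate of the directional derivative.
    g = (loss_fn(plus) - loss_fn(minus)) / (2.0 * eps)

    # Apply the update along the same seeded direction.
    new_params = [p - lr * g * zi for p, zi in zip(params, z)]
    return new_params, g

# Toy usage: minimize a quadratic, communicating only (seed, g) per step.
def quadratic_loss(params):
    return sum(p * p for p in params)

params = [2.0, -3.0]
for step_seed in range(300):
    params, g = zo_step(params, quadratic_loss, seed=step_seed)
```

Note that the memory and communication cost per step is independent of model size apart from regenerating the perturbation, which is why this family of methods is attractive for federated LLM fine-tuning.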