Federated Large Language Models
Federated large language models (FedLLMs) aim to train powerful language models collaboratively across multiple decentralized datasets without directly sharing sensitive data, addressing the privacy concerns inherent in centralized training. Current research focuses on efficient training strategies such as federated pre-training, fine-tuning, and prompt learning, often using parameter-efficient methods to reduce communication overhead and improve convergence on heterogeneous data. This approach holds significant promise both for building more capable LLMs and for the responsible use of private data in applications such as time series forecasting and multilingual machine translation.
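As a rough illustration of the parameter-efficient idea described above, the sketch below shows FedAvg applied only to LoRA-style adapter weights, so clients exchange a small adapter instead of the full model. It is a minimal, self-contained PyTorch toy; the names (`LoRALinear`, `fedavg`, the synthetic client data) and hyperparameters are illustrative assumptions, not taken from the papers listed below.

```python
# Minimal sketch: parameter-efficient federated fine-tuning via FedAvg over
# LoRA-style adapters. Illustrative only; not the method of any specific paper.
import copy
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen (pretrained) linear layer plus a small trainable low-rank adapter."""
    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.lora_a.T @ self.lora_b.T

def adapter_state(model):
    """Only the adapter parameters are communicated, not the full model."""
    return {k: v.detach().clone() for k, v in model.state_dict().items() if "lora" in k}

def local_update(model, data, epochs=1, lr=1e-3):
    """One client's local fine-tuning pass over its private data."""
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return adapter_state(model)

def fedavg(states):
    """Server-side averaging of the clients' adapter updates (unweighted FedAvg)."""
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}

# Toy federation: 3 clients, each holding a small private regression dataset.
global_model = LoRALinear(16, 1)
clients = [[(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(5)] for _ in range(3)]

for round_ in range(10):
    updates = []
    for data in clients:
        local = copy.deepcopy(global_model)        # broadcast current global adapters
        updates.append(local_update(local, data))  # train locally, return adapters only
    global_model.load_state_dict(fedavg(updates), strict=False)  # aggregate adapters
```

Because only the low-rank adapter tensors cross the network each round, the per-round communication cost depends on the adapter rank rather than the full model size, which is the main motivation for parameter-efficient methods in this setting.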
Papers
Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning
Zachary Charles, Nicole Mitchell, Krishna Pillutla, Michael Reneer, Zachary Garrett
Integration of Large Language Models and Federated Learning
Chaochao Chen, Xiaohua Feng, Yuyuan Li, Lingjuan Lyu, Jun Zhou, Xiaolin Zheng, Jianwei Yin