CG-FedLLM
CG-FedLLM represents a research area focused on efficiently and privately training large language models (LLMs) with federated learning (FedLLM). Current research emphasizes reducing the substantial communication overhead inherent in FedLLM, often through gradient compression and training strategies that use encoder-decoder pairs to selectively preserve the most important gradient information. This work is significant because it addresses the privacy concerns and computational limitations of training LLMs on massive datasets, potentially enabling broader access to and development of advanced language models while protecting sensitive user data.
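To make the encoder-decoder idea concrete, the sketch below shows one way a client could compress a flattened gradient update with a small autoencoder before upload, with the server decoding it for aggregation. This is a minimal illustration under assumed dimensions and module names (GradientAutoencoder, grad_dim, code_dim), not the exact architecture or training procedure used in any of the listed papers.

import torch
import torch.nn as nn

class GradientAutoencoder(nn.Module):
    """Illustrative autoencoder: the encoder shrinks a flattened gradient
    on the client; the decoder reconstructs it on the server."""
    def __init__(self, grad_dim: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(grad_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, grad_dim)

    def forward(self, grad: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(grad))

# Assumed sizes, for illustration only.
grad_dim, code_dim = 4096, 256
ae = GradientAutoencoder(grad_dim, code_dim)

# Client side: compress the local gradient; only the code is communicated.
local_grad = torch.randn(grad_dim)
compressed = ae.encoder(local_grad)

# Server side: decode the received code and use it during aggregation.
reconstructed = ae.decoder(compressed)
rel_error = torch.norm(reconstructed - local_grad) / torch.norm(local_grad)
print(f"Uploaded {code_dim} floats instead of {grad_dim}; relative error {rel_error:.3f}")

In practice such an autoencoder would be trained (e.g., on recorded gradients) so that the decoder preserves the components most important for model updates; the random-gradient example above only illustrates the communication pattern and the compression ratio.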
Papers
September 24, 2024
June 7, 2024
May 22, 2024
April 18, 2024
February 10, 2024
October 16, 2023