Federated Continual Learning

Federated Continual Learning (FCL) addresses the challenge of training machine learning models across decentralized devices while adapting to continuously evolving data streams, all without compromising data privacy. Current research focuses on mitigating "catastrophic forgetting" (the loss of previously learned knowledge when a model trains on new tasks) through techniques such as buffer-based gradient projection, generative replay using diffusion models, and personalized learning via multi-granularity prompts. These advances are significant for enabling robust, privacy-preserving AI in dynamic real-world applications such as personalized healthcare, robotics, and online security systems, where data is both distributed and constantly changing.
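To make the buffer-based gradient projection idea concrete, here is a minimal sketch in the style of A-GEM: a reference gradient is computed on samples from a small replay buffer of past tasks, and the current-task gradient is projected away from any direction that would increase the buffer loss. The function names and the pure-Python vector representation are illustrative assumptions, not an API from any specific FCL framework.

```python
def dot(u, v):
    """Inner product of two gradient vectors (plain Python lists)."""
    return sum(a * b for a, b in zip(u, v))

def agem_project(g, g_ref):
    """A-GEM-style projection (illustrative sketch).

    g     -- gradient on the current task's batch
    g_ref -- gradient on a batch sampled from the replay buffer

    If g conflicts with g_ref (negative inner product), project g onto
    the half-space where it no longer increases the buffer loss:
        g' = g - (g . g_ref / ||g_ref||^2) * g_ref
    Otherwise g is returned unchanged.
    """
    d = dot(g, g_ref)
    if d >= 0:
        return list(g)  # no interference with past-task knowledge
    scale = d / dot(g_ref, g_ref)
    return [a - scale * b for a, b in zip(g, g_ref)]

# Example: the current gradient points against the buffer gradient's
# second component, so that component is projected out.
g_new = agem_project([1.0, -1.0], [0.0, 1.0])
```

After projection, `dot(g_new, g_ref)` is guaranteed to be non-negative, which is the constraint that prevents updates from undoing previously learned behavior; the projected gradient is then applied locally and only model updates (never raw data) leave the device, preserving the federated privacy setting.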

Papers