Confederated Learning

Confederated learning (CFL) extends federated learning by distributing the coordination of model training across multiple edge servers, each managing its own subset of devices, to improve scalability and reduce communication overhead. Current research focuses on efficient algorithms, such as those based on stochastic gradient methods and ADMM, that minimize communication between servers and devices while preserving model accuracy. By removing the single central server of standard federated learning, and with it the associated communication bottleneck and single point of failure, CFL offers a more robust and practical architecture for large-scale distributed machine learning applications that require privacy preservation.
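
The architecture can be made concrete with a small simulation. The sketch below is a minimal illustration, not any particular paper's algorithm: a few hypothetical servers each run local SGD with their own devices on a toy least-squares task, then average models with neighboring servers, with a simple gossip step standing in for the server-to-server coordination. All names (`local_sgd`, `gossip_average`), the server topology, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(model, data, lr=0.1, steps=5):
    """A few gradient steps on one device's data (toy least-squares task)."""
    X, y = data
    w = model.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def server_round(server_model, device_datasets):
    """Intra-server round: each device trains locally, the server averages."""
    updates = [local_sgd(server_model, d) for d in device_datasets]
    return np.mean(updates, axis=0)

def gossip_average(server_models, neighbors):
    """Server-to-server round: each server averages with its neighbors."""
    return [np.mean([server_models[j] for j in [i] + neighbors[i]], axis=0)
            for i in range(len(server_models))]

# Toy setup: 3 servers, each coordinating 4 devices (all hypothetical).
dim, n_servers, n_devices = 5, 3, 4
w_true = rng.normal(size=dim)

def make_device_data():
    X = rng.normal(size=(20, dim))
    return X, X @ w_true + 0.01 * rng.normal(size=20)

devices = [[make_device_data() for _ in range(n_devices)]
           for _ in range(n_servers)]
models = [np.zeros(dim) for _ in range(n_servers)]
topology = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # fully connected server graph

for _ in range(50):
    models = [server_round(m, devs) for m, devs in zip(models, devices)]
    models = gossip_average(models, topology)

print("error per server:",
      [round(float(np.linalg.norm(m - w_true)), 4) for m in models])
```

Note that devices only ever talk to their own server, and servers only exchange models with neighboring servers; no single node sees all updates, which is the communication pattern that distinguishes CFL from centralized federated averaging.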

Papers