Centralized Federated Learning
Centralized federated learning (CFL) trains machine learning models collaboratively across multiple devices, coordinated by a central server that aggregates locally computed model updates rather than raw data, preserving privacy while improving model accuracy. Current research emphasizes efficiency and scalability, exploring different aggregation methods and addressing the challenges posed by heterogeneous (non-IID) data distributions, often using deep learning models such as Long Short-Term Memory networks. The approach is significant for its potential to improve the performance of distributed machine learning applications while upholding data privacy, with impact on fields such as IoT and cybersecurity. Research is also actively exploring methods for fairly rewarding data contributors in CFL settings.
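The most common server-side aggregation in this setting is weighted parameter averaging in the style of FedAvg. Below is a minimal sketch under that assumption; the function and variable names (aggregate, client_weights, client_sizes) are illustrative and not taken from the papers listed here.

```python
# Minimal sketch of a centralized federated aggregation step
# (FedAvg-style). Assumes each client sends its model parameters
# as a flat vector along with its local dataset size; the server
# returns the size-weighted average as the new global model.
import numpy as np

def aggregate(client_weights, client_sizes):
    """Weighted average of client model parameters.

    client_weights: list of 1-D numpy arrays, one per client.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)        # shape: (n_clients, n_params)
    coeffs = np.array(client_sizes) / total   # proportional weighting
    return coeffs @ stacked                   # server-side global update

# Example: three clients with unequal data volumes.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 50, 50]
print(aggregate(weights, sizes))  # -> [2.5 3.5]
```

Weighting by local dataset size keeps the global update proportional to each client's contribution when data volumes differ; aggregation variants studied in CFL often adjust exactly these coefficients to better handle heterogeneous data.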
Papers
Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration
Qinglun Li, Miao Zhang, Yingqi Liu, Quanjun Yin, Li Shen, Xiaochun Cao
OledFL: Unleashing the Potential of Decentralized Federated Learning via Opposite Lookahead Enhancement
Qinglun Li, Miao Zhang, Mengzhu Wang, Quanjun Yin, Li Shen