Paper ID: 2204.02632

DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg

Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv

Federated learning (FL) is a crucial enabler for large-scale distributed machine learning (ML) without the need to share local raw datasets, substantially reducing privacy concerns and alleviating the data isolation problem. In practice, the prosperity of FL is largely due to a centralized framework called FedAvg, in which workers are in charge of model training and servers are in charge of model aggregation. However, FedAvg's centralized worker-server architecture has raised new concerns, such as low cluster scalability, the risk of data leakage, and the failure or even defection of the central server. To overcome these problems, we propose Decentralized Federated Trusted Averaging (DeFTA), a decentralized FL framework that serves as a plug-and-play replacement for FedAvg, instantly bringing better security, scalability, and fault tolerance to the federated learning process once installed. In principle, it resolves the above issues fundamentally and from an architectural perspective, without compromises or tradeoffs; it consists primarily of a new model aggregation formula with theoretical performance analysis and a decentralized trust system (DTS) that greatly improves system robustness. Note that since DeFTA is an alternative to FedAvg at the framework level, prevalent algorithms built on FedAvg can also be utilized in DeFTA with ease. Extensive experiments on six datasets and six basic models suggest that DeFTA not only achieves performance comparable to FedAvg in a more realistic setting, but also remains highly resilient even when 66% of workers are malicious. Furthermore, we present an asynchronous variant of DeFTA to further improve its practical usability.
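
To make the architectural contrast concrete, the minimal sketch below compares FedAvg-style central aggregation with a generic decentralized neighbor-averaging step performed locally by each worker. The function names and the plain data-size weighting are illustrative assumptions only; they are not the paper's actual DeFTA aggregation formula or its trust-based weighting.

# Illustrative sketch: central (FedAvg-style) vs. decentralized aggregation.
# Not the paper's DeFTA formula; weighting and trust handling are simplified.
from typing import Dict, List

Model = Dict[str, float]  # toy stand-in for a model's parameter vector

def fedavg_aggregate(models: List[Model], n_samples: List[int]) -> Model:
    """FedAvg-style step: a central server averages worker models,
    weighted by each worker's local data size."""
    total = sum(n_samples)
    keys = models[0].keys()
    return {k: sum(m[k] * n / total for m, n in zip(models, n_samples))
            for k in keys}

def decentralized_aggregate(own: Model, own_n: int,
                            neighbor_models: List[Model],
                            neighbor_n: List[int]) -> Model:
    """Decentralized step (hypothetical): each worker mixes its own model
    with models received from its peers, with no central server involved."""
    return fedavg_aggregate([own] + neighbor_models, [own_n] + neighbor_n)

In this toy view, removing the central server means every worker runs its own aggregation over whatever peer models it receives, which is where a trust mechanism such as DeFTA's DTS would decide how much weight, if any, each peer's contribution deserves.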

Submitted: Apr 6, 2022