Paper ID: 2203.09670
Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing
Dinh C. Nguyen, Seyyedali Hosseinalipour, David J. Love, Pubudu N. Pathirana, Christopher G. Brinton
In this paper, we study a new latency optimization problem for blockchain-based federated learning (BFL) in multi-server edge computing. In this system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to handle machine learning (ML) model training and block mining simultaneously. To assist ML model training for resource-constrained MDs, we develop an offloading strategy that enables MDs to transmit their data to one of the associated ESs. We then propose a new decentralized ML model aggregation solution at the edge layer, based on a consensus mechanism, to build a global ML model via peer-to-peer (P2P) blockchain communications. Blockchain builds trust among MDs and ESs to facilitate reliable ML model sharing and cooperative consensus formation, and enables rapid elimination of manipulated models caused by poisoning attacks. We formulate latency-aware BFL as an optimization problem that minimizes the system latency by jointly considering data offloading decisions, MDs' transmit power, channel bandwidth allocation for data offloading, MDs' computational resource allocation, and hash power allocation. Given the mixed action space of discrete offloading and continuous allocation variables, we propose a novel deep reinforcement learning scheme with a parameterized advantage actor-critic algorithm. We theoretically characterize the convergence properties of BFL in terms of the aggregation delay, mini-batch size, and number of P2P communication rounds. Our numerical evaluation demonstrates the superiority of the proposed scheme over baselines in terms of model training efficiency, convergence rate, system latency, and robustness against model poisoning attacks.
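The key algorithmic wrinkle named in the abstract is the mixed action space: the agent must jointly pick a discrete offloading target (which ES receives an MD's data) and continuous allocations (transmit power, bandwidth, CPU, and hash power). Below is a minimal PyTorch sketch of such a parameterized (hybrid-action) advantage actor-critic. The network sizes, the `act` helper, the sigmoid squashing of allocations, and the toy dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ParameterizedA2C(nn.Module):
    """Hybrid-action actor-critic: a discrete head picks the offloading
    target ES, continuous heads emit the allocation variables (transmit
    power, bandwidth share, CPU and hash power fractions). Layer sizes
    and the single shared trunk are illustrative assumptions."""

    def __init__(self, state_dim: int, n_servers: int, n_cont: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.offload_head = nn.Linear(128, n_servers)  # discrete action logits
        self.cont_mu = nn.Linear(128, n_cont)          # continuous action means
        self.log_std = nn.Parameter(torch.zeros(n_cont))
        self.value_head = nn.Linear(128, 1)            # critic V(s)

    def forward(self, state):
        h = self.trunk(state)
        return self.offload_head(h), self.cont_mu(h), self.value_head(h)


def act(model, state):
    """Sample a joint (discrete, continuous) action and its log-probability."""
    logits, mu, value = model(state)
    d_dist = torch.distributions.Categorical(logits=logits)
    c_dist = torch.distributions.Normal(mu, model.log_std.exp())
    d = d_dist.sample()              # which ES receives the offloaded data
    c_raw = c_dist.sample()
    c = torch.sigmoid(c_raw)         # squash allocations into (0, 1); the
                                     # caller rescales to feasible ranges
    log_prob = d_dist.log_prob(d) + c_dist.log_prob(c_raw).sum(-1)
    return d, c, log_prob, value


# Toy usage with made-up dimensions: 4 candidate ESs, 4 continuous variables.
model = ParameterizedA2C(state_dim=10, n_servers=4, n_cont=4)
d, c, log_prob, v = act(model, torch.randn(1, 10))

# One-step advantage actor-critic losses, given a reward r (e.g., the
# negative system latency) and the critic's value v_next at the next state:
r, gamma, v_next = -1.0, 0.99, torch.zeros(1, 1)
advantage = r + gamma * v_next - v
actor_loss = -(log_prob * advantage.detach()).mean()
critic_loss = advantage.pow(2).mean()
```

Summing the discrete and continuous log-probabilities yields one joint policy gradient covering both parts of the action, which is the motivation for a parameterized actor-critic over purely discrete or purely continuous DRL baselines.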
Submitted: Mar 18, 2022