Paper ID: 2312.17164
Securing NextG Systems against Poisoning Attacks on Federated Learning: A Game-Theoretic Solution
Yalin E. Sagduyu, Tugba Erpek, Yi Shi
This paper studies the interactions between poisoning attacks and defenses in a federated learning (FL) system, specifically in the context of wireless signal classification using deep learning for next-generation (NextG) communications. FL collectively trains a global model without requiring clients to exchange their data samples. By leveraging geographically dispersed clients, the trained global model can be used for incumbent user identification, facilitating spectrum sharing. However, in this distributed learning system, malicious clients may poison their training data and manipulate the global model through the falsified local models they exchange. To address this challenge, this paper employs a proactive defense mechanism that makes informed decisions on admitting or rejecting clients to the FL system. Accordingly, the attack-defense interactions are modeled as a game centered on the underlying admission and poisoning decisions. First, performance bounds are established, encompassing the best and worst strategies for attackers and defenders. Then, the attack and defense utilities are characterized at the Nash equilibrium, where no player can unilaterally improve its performance given the fixed strategies of the others. The results offer insights into novel operational modes that safeguard FL systems against poisoning attacks by quantifying attack and defense performance in the context of NextG communications.
Submitted: Dec 28, 2023
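
As an illustrative sketch of the game-theoretic framing described in the abstract (not the paper's actual utility model), the snippet below solves a two-player admit/poison game for its mixed-strategy Nash equilibrium using the standard indifference conditions. The payoff matrices D and A and the helper mixed_nash_2x2 are hypothetical assumptions chosen for illustration; the paper derives utilities from classification performance in the NextG spectrum-sharing setting.

```python
import numpy as np

# Hypothetical 2x2 payoff matrices (assumptions for illustration only).
# Rows: defender action (0 = admit client, 1 = reject client)
# Cols: attacker action (0 = poison,       1 = behave benignly)
D = np.array([[-1.0,  1.0],    # admitting a poisoner hurts; admitting a benign client helps
              [ 0.2, -0.5]])   # rejecting blocks poison but forfeits benign contributions
A = np.array([[ 1.0, -0.2],    # attacker gains when a poisoned update is admitted
              [-0.3,  0.0]])   # poisoning a rejected slot wastes attacker effort

def mixed_nash_2x2(D, A):
    """Interior mixed-strategy Nash equilibrium of a 2x2 game
    via the indifference conditions."""
    # Attacker's poison probability q makes the defender indifferent
    # between admitting and rejecting.
    q = (D[1, 1] - D[0, 1]) / (D[0, 0] - D[0, 1] - D[1, 0] + D[1, 1])
    # Defender's admit probability p makes the attacker indifferent
    # between poisoning and behaving benignly.
    p = (A[1, 1] - A[1, 0]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    return p, q

p, q = mixed_nash_2x2(D, A)
strat_d = np.array([p, 1 - p])
strat_a = np.array([q, 1 - q])
print(f"Defender admits with probability p = {p:.3f}")
print(f"Attacker poisons with probability q = {q:.3f}")
# At equilibrium, neither player can unilaterally improve its expected utility.
print(f"Defender utility: {strat_d @ D @ strat_a:.3f}")
print(f"Attacker utility: {strat_d @ A @ strat_a:.3f}")
```

With these example payoffs, neither player has a dominant pure strategy (each action is a best response to some opponent action), so the equilibrium is the interior mixed strategy computed above. This mirrors the abstract's setting in which the defender randomizes admission decisions and the attacker randomizes poisoning decisions, with utilities evaluated at the Nash equilibrium.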