Paper ID: 2212.01976

FedCC: Robust Federated Learning against Model Poisoning Attacks

Hyejun Jeong, Hamin Son, Seohu Lee, Jayun Hyun, Tai-Myoung Chung

Federated Learning (FL), designed to address privacy concerns in learning models, introduces a new distributed paradigm that safeguards data privacy but alters the attack surface: the server cannot inspect local datasets, and the protection objective shifts to the integrity of model parameters. Existing approaches, including robust aggregation algorithms, fail to effectively filter out malicious clients, especially those holding non-Independently and Identically Distributed (non-IID) data. Moreover, these approaches often tackle non-IID data and poisoning attacks separately. To address both challenges simultaneously, we present FedCC, a simple yet novel algorithm. It leverages the Centered Kernel Alignment (CKA) similarity of penultimate-layer representations for clustering, allowing it to identify and filter out malicious clients by selectively averaging chosen parameters, even in non-IID settings. Our extensive experiments demonstrate the effectiveness of FedCC in mitigating untargeted model poisoning and backdoor attacks. FedCC consistently reduces attack confidence to zero, whereas existing outlier-detection-based and first-order-statistics-based methods do not, and it reduces the average degradation of global performance by 65.5%. We believe this new perspective on assessing learning models makes FedCC a valuable contribution to FL security and privacy. The code will be made available upon paper acceptance.

Submitted: Dec 5, 2022
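
To make the similarity measure named in the abstract concrete, below is a minimal sketch of linear CKA between penultimate-layer representations. It assumes the server can evaluate client models on a small shared probe batch; the function names, the probe-set assumption, and the two-cluster filtering comment are illustrative and not the authors' exact procedure.

    import numpy as np

    def center_gram(K):
        # Double-center a Gram matrix: H K H with H = I - (1/n) * ones.
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def linear_cka(X, Y):
        # X, Y: (n_samples, n_features) penultimate-layer activations of
        # two models on the same probe inputs; feature widths may differ.
        # Returns a similarity in [0, 1]; 1 means the representations
        # match up to an orthogonal transform and isotropic scaling.
        Kx = center_gram(X @ X.T)
        Ky = center_gram(Y @ Y.T)
        hsic = np.sum(Kx * Ky)  # unnormalized HSIC estimate
        return hsic / (np.sqrt(np.sum(Kx * Kx)) * np.sqrt(np.sum(Ky * Ky)))

    # Sketch of the filtering step: score each client's CKA against a
    # reference (e.g., the current global model), cluster the scores into
    # two groups, and average only the parameters of the benign cluster.

Linear CKA is invariant to orthogonal transforms and isotropic scaling of the features, which is why comparing penultimate-layer representations can separate poisoned updates from honest but non-IID ones where raw parameter distances cannot.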