Paper ID: 2409.12769

The Robustness of Spiking Neural Networks in Communication and its Application towards Network Efficiency in Federated Learning

Manh V. Nguyen, Liang Zhao, Bobin Deng, William Severa, Honghui Xu, Shaoen Wu

Spiking Neural Networks (SNNs) have recently gained significant attention for on-chip learning on embedded devices and have emerged as an energy-efficient alternative to conventional Artificial Neural Networks (ANNs). However, when extending SNNs to a Federated Learning (FL) setting involving collaborative model training, communication between the local devices and the remote server remains the bottleneck, as it is often restricted and costly. In this paper, we first explore the inherent robustness of SNNs under noisy communication in FL. Building on this foundation, we propose a novel Federated Learning with Top-K Sparsification (FLTS) algorithm to reduce bandwidth usage during FL training. We find that the proposed scheme with SNNs allows greater bandwidth savings than ANNs without impacting model accuracy; the number of parameters to be communicated can be reduced to as little as 6 percent of the original model size. We further improve communication efficiency by enabling dynamic parameter compression during model training. Extensive experimental results demonstrate that our proposed algorithms significantly outperform the baselines in terms of communication cost and model accuracy, and are promising for practical network-efficient FL with SNNs.
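For orientation, the sketch below illustrates the general top-k sparsification technique that FLTS builds on, not the authors' exact implementation: each client transmits only the largest-magnitude entries of its model update, and the server reconstructs a dense update from the sparse payload. The function names, the flattened-update representation, and the default ratio of 0.06 (chosen to mirror the ~6 percent figure in the abstract) are illustrative assumptions.

```python
import math
import torch

def topk_sparsify(update: torch.Tensor, ratio: float = 0.06):
    """Select the largest-magnitude entries of a flattened model update.

    ratio: fraction of parameters to transmit (illustrative default of 6%,
    matching the compression level mentioned in the abstract).
    Returns (indices, values): the sparse payload sent to the server.
    """
    flat = update.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return indices, flat[indices]

def topk_reconstruct(indices: torch.Tensor, values: torch.Tensor, shape):
    """Rebuild a dense update on the server from the sparse payload."""
    dense = torch.zeros(math.prod(shape), dtype=values.dtype)
    dense[indices] = values
    return dense.reshape(shape)

# Example: compress a hypothetical client update to ~6% of its entries.
update = torch.randn(128, 256)
idx, vals = topk_sparsify(update, ratio=0.06)
restored = topk_reconstruct(idx, vals, update.shape)
```

Because only (index, value) pairs cross the network, the per-round communication cost scales with k rather than with the full parameter count; the dynamic compression described in the abstract would presumably adjust the ratio over the course of training.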

Submitted: Sep 19, 2024