Sparse Federated Learning
Sparse federated learning aims to reduce the communication and computation costs of federated learning, and to strengthen its privacy properties, by training sparse models on decentralized data. Current research focuses on algorithms that strategically prune model parameters during training, often guided by gradient congruity or parameter saliency, which lowers communication overhead and can improve generalization. These techniques, applied to model families ranging from neural networks to Kalman filters, are showing promise in applications such as object detection on IoT devices and personalized healthcare, where resource-constrained hardware and data heterogeneity are the central challenges.
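To make the pruning-for-communication idea concrete, here is a minimal sketch of one common scheme, top-k sparsification: each client keeps only the largest-magnitude entries of its model update (magnitude standing in for parameter saliency) and sends just those index/value pairs, and the server averages the sparse messages back into a dense update. All function names and the fraction kept are illustrative assumptions, not any specific paper's method.

```python
import numpy as np

def sparsify_update(update, keep_frac=0.1):
    """Keep only the largest-magnitude entries of a client's model update.

    Returns (indices, values): the compressed message actually sent,
    roughly keep_frac of the dense update's size.
    """
    flat = update.ravel()
    k = max(1, int(keep_frac * flat.size))
    # Top-k entries by absolute value; magnitude approximates saliency here.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def aggregate(sparse_updates, shape):
    """Server side: average sparse client updates into one dense update."""
    total = np.zeros(int(np.prod(shape)))
    counts = np.zeros_like(total)
    for idx, vals in sparse_updates:
        total[idx] += vals
        counts[idx] += 1
    # Average only where at least one client contributed a value.
    np.divide(total, counts, out=total, where=counts > 0)
    return total.reshape(shape)

# Example: three clients each transmit ~10% of a 1000-parameter update.
rng = np.random.default_rng(0)
shape = (1000,)
updates = [sparsify_update(rng.normal(size=shape), 0.1) for _ in range(3)]
merged = aggregate(updates, shape)
```

Each client here sends about 100 of 1000 values, a 10x reduction in uplink traffic; the trade-off is that dropped coordinates are lost unless the client accumulates them locally for later rounds (error feedback), which practical systems typically add.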