Federated Stochastic Methods
Federated stochastic methods train machine learning models collaboratively across decentralized devices without directly sharing sensitive data. Current research focuses on improving the efficiency and robustness of algorithms such as Federated Averaging (FedAvg) and Federated Stochastic Gradient Descent (FedSGD), tackling data heterogeneity and Byzantine attacks through techniques like gradient scaling, normalization, and dynamic resource allocation. These advances matter for privacy-preserving machine learning in applications such as medical image analysis and IoT data processing, where models must be trained collaboratively while data security and integrity are maintained.
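In its simplest form, the FedAvg aggregation step mentioned above is a dataset-size-weighted average of locally trained models. The sketch below illustrates one such training loop on a toy least-squares problem; the function names (`local_sgd`, `fedavg`), the learning rate, and the synthetic data setup are all illustrative assumptions, not the implementation from any particular paper.

```python
import numpy as np

def local_sgd(weights, data, lr=0.1, epochs=1):
    """Run a few epochs of full-batch gradient descent on one client's
    toy least-squares data (stands in for local SGD)."""
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * mean squared error
        w -= lr * grad
    return w

def fedavg(global_w, clients, rounds=10):
    """FedAvg loop: broadcast the global model, train locally on each
    client, then aggregate by dataset-size-weighted averaging."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    for _ in range(rounds):
        # Each client starts from the current global model.
        local_ws = [local_sgd(global_w, c) for c in clients]
        # Server aggregates: w <- sum_k (n_k / n) * w_k (the FedAvg rule).
        global_w = sum(n * w for n, w in zip(sizes, local_ws)) / sizes.sum()
    return global_w

# Toy usage: three clients holding synthetic linear-regression data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w = fedavg(np.zeros(2), clients, rounds=20)
print("Learned weights:", w)  # should move toward [2.0, -1.0]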
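The size-weighted average is the classic FedAvg rule; under strong data heterogeneity or Byzantine clients, plain averaging is typically augmented with the gradient-scaling, normalization, or robust-aggregation schemes the summary above refers to.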