Centralized Learning
Centralized learning, the traditional approach to training machine learning models, is increasingly challenged by privacy concerns and scalability limitations. Current research focuses on decentralized alternatives such as federated learning and its variants, employing techniques like gradient compression, knowledge distillation, and adaptive client selection to improve efficiency and robustness while preserving data privacy. These advances matter because they enable collaborative model training across distributed datasets, unlocking massive, privacy-sensitive data sources for applications including healthcare, IoT, and vehicular networks.
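To make the contrast with centralized training concrete, the sketch below simulates federated averaging (FedAvg), the canonical decentralized alternative mentioned above: each client runs local gradient steps on its own private data shard, and a server aggregates only the resulting model weights, never the raw data. The problem setup (a small linear-regression task split across five hypothetical clients) and all hyperparameters are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 clients, each holding a private shard of data
# for the same linear-regression task; the server never sees raw data.
n_clients, dim = 5, 3
true_w = np.array([2.0, -1.0, 0.5])
client_data = []
for _ in range(n_clients):
    X = rng.normal(size=(50, dim))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    client_data.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """One client's local gradient-descent steps on its private shard."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

# Federated averaging: clients train locally in parallel,
# then the server averages their weight vectors.
w_global = np.zeros(dim)
for _ in range(20):
    local_ws = [local_update(w_global, X, y) for X, y in client_data]
    w_global = np.mean(local_ws, axis=0)
```

After 20 communication rounds the averaged model recovers the underlying weights closely, despite no client ever sharing its data; in this simplified sketch only weight vectors cross the network, which is the core privacy argument for the federated approaches surveyed here.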