Centralized Learning
Centralized learning, the traditional approach in which training data is aggregated on a single server, is increasingly challenged by privacy concerns and scalability limitations. Current research therefore focuses on decentralized alternatives such as federated learning and its variants, which employ techniques like gradient compression, knowledge distillation, and adaptive client selection to improve communication efficiency and robustness while keeping raw data local. These advances matter because they enable collaborative model training across distributed datasets, unlocking massive, privacy-sensitive data sources for applications in healthcare, IoT, and vehicular networks.
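To make the contrast with centralized training concrete, the sketch below runs federated averaging with top-k gradient compression on a synthetic linear-regression task. It is a minimal illustration under assumed choices, not the method of any surveyed paper: the function names (top_k_sparsify, local_step, fed_avg_round), the learning rate, and the data setup are all hypothetical.

```python
# Minimal sketch: federated averaging with top-k gradient compression.
# Assumptions: squared loss, synthetic data, fixed learning rate, no error feedback.
import numpy as np

rng = np.random.default_rng(0)

def top_k_sparsify(grad, k):
    """Gradient compression: keep only the k largest-magnitude entries,
    so each client uploads a sparse update instead of the full vector."""
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse

def local_step(w, X, y):
    """Gradient of the squared loss on one client's private shard.
    Only this gradient leaves the client; the raw (X, y) never does."""
    return 2 * X.T @ (X @ w - y) / len(y)

def fed_avg_round(w, clients, k, lr=0.1):
    """One communication round: clients compute and compress gradients;
    the server averages the sparse updates and applies one step."""
    updates = [top_k_sparsify(local_step(w, X, y), k) for X, y in clients]
    return w - lr * np.mean(updates, axis=0)

# Synthetic setup: 5 clients, each holding a private shard of a shared task.
d = 20
w_true = rng.normal(size=d)
clients = []
for _ in range(5):
    X = rng.normal(size=(50, d))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(d)
for _ in range(200):
    w = fed_avg_round(w, clients, k=5)  # send 5 of 20 coordinates per round

print("parameter error:", np.linalg.norm(w - w_true))
```

Each client uploads only k of d gradient coordinates per round, which is the communication saving that gradient compression targets, while the server never observes the clients' raw data shards, which is the privacy property federated learning preserves.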