Federated Optimization
Federated optimization addresses the challenge of training machine learning models on decentralized data without compromising privacy: model updates from many clients must be aggregated efficiently while keeping communication overhead low. Current research focuses on improving convergence rates and communication efficiency through adaptive and asynchronous optimization methods, on handling data heterogeneity across clients, and on techniques such as zeroth-order optimization for non-differentiable objectives. The field is crucial for enabling large-scale machine learning in privacy-sensitive domains such as healthcare and IoT, advancing both the theoretical understanding of distributed optimization and the practical deployment of AI systems.
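The aggregation scheme described above can be sketched with federated averaging (FedAvg), the canonical baseline for this setting: each client runs local training on its own data, and the server averages the resulting models weighted by client data size. This is a minimal illustration on a toy least-squares problem, not any specific paper's method; the function names and hyperparameters are assumptions for the example.

```python
import numpy as np

def local_sgd(w, data, lr=0.1, epochs=1):
    # One or more epochs of plain gradient descent on this client's
    # local least-squares objective (stand-in for local training).
    X, y = data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, lr=0.1):
    # Each client trains locally from the current global model; the
    # server averages the results, weighted by local dataset size.
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_models = [local_sgd(w_global.copy(), c, lr) for c in clients]
    weights = sizes / sizes.sum()
    return sum(wk * m for wk, m in zip(weights, local_models))

# Toy decentralized data: three clients with unequal dataset sizes
# (a simple form of the heterogeneity mentioned above).
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
print(np.round(w, 2))
```

Only the aggregated model crosses the network each round, never the raw client data — which is the communication/privacy trade-off the research surveyed here tries to optimize further.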
Papers