Federated Optimization

Federated optimization tackles the challenge of training machine learning models on decentralized data without compromising privacy: model updates from many clients must be aggregated efficiently while keeping communication overhead low. Current research focuses on improving convergence rates and communication efficiency through adaptive and asynchronous optimization methods, on handling data heterogeneity across clients, and on techniques such as zeroth-order optimization for non-differentiable objectives. The field is crucial for enabling large-scale machine learning in privacy-sensitive domains such as healthcare and IoT, advancing both the theoretical understanding of distributed optimization and the practical deployment of AI systems.
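To make the aggregation step concrete, below is a minimal sketch of federated averaging (FedAvg-style), in which each client runs a few steps of local gradient descent on its own data and the server averages the resulting models weighted by client dataset size. The synthetic clients, the least-squares model, and all hyperparameters are illustrative assumptions, not drawn from any particular paper.

```python
# Minimal FedAvg-style sketch: clients train locally, server averages
# models weighted by client data size. All data/model choices here are
# illustrative assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    """A few epochs of full-batch gradient descent on one client's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: size-weighted average of client models."""
    total = sum(len(y) for _, y in clients)
    w_new = np.zeros_like(w_global)
    for X, y in clients:
        w_new += (len(y) / total) * local_train(w_global.copy(), X, y)
    return w_new

# Synthetic clients of different sizes sharing one underlying model,
# mimicking (mild) data heterogeneity across participants.
w_true = np.array([2.0, -1.0])
clients = []
for n in (20, 50, 80):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(30):          # 30 communication rounds
    w = fedavg_round(w, clients)
print(w)                     # approaches w_true without pooling raw data
```

Note that only model parameters cross the network; raw client data never leaves the client, which is the privacy property the paragraph above refers to. With strongly heterogeneous (non-IID) client data, plain averaging can drift from the global optimum, which is one motivation for the adaptive and asynchronous variants mentioned above.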

Papers