Federated Optimization
Federated optimization addresses the problem of training machine learning models on decentralized data without compromising privacy: a central server aggregates model updates from many clients while raw data stays local and communication overhead is kept low. Current research focuses on improving convergence rates and communication efficiency through adaptive and asynchronous optimization methods, on handling data heterogeneity across clients, and on techniques such as zeroth-order optimization for non-differentiable objectives. The field is crucial for enabling large-scale machine learning in privacy-sensitive domains such as healthcare and IoT, advancing both the theoretical understanding of distributed optimization and the practical deployment of AI systems.
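To make the server-side aggregation step concrete, below is a minimal sketch of FedAvg-style weighted averaging in Python with NumPy. The function name fedavg_aggregate and the weighting by local dataset size are illustrative assumptions for exposition, not the specific methods (e.g., FedNAR's normalized annealing regularization or FedAWARE's gradient-diversity objective) proposed in the papers listed here.

```python
import numpy as np

def fedavg_aggregate(client_updates, client_weights):
    """Combine client model updates into one server update (FedAvg-style).

    client_updates: list of flattened parameter deltas, one np.ndarray per client.
    client_weights: list of non-negative floats, e.g. local dataset sizes (assumed).
    Returns the weighted average of the client updates.
    """
    weights = np.asarray(client_weights, dtype=float)
    weights = weights / weights.sum()        # normalize to a convex combination
    stacked = np.stack(client_updates)       # shape: (num_clients, num_params)
    return np.tensordot(weights, stacked, 1) # weighted sum over the client axis

# Toy usage: three clients with different amounts of local data.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]  # stand-ins for local deltas
sizes = [100, 50, 50]                             # hypothetical dataset sizes
server_update = fedavg_aggregate(updates, sizes)
print(server_update)
```

Server-side optimization methods like those surveyed here typically keep this communication pattern but replace the plain weighted mean with, for example, adaptive, momentum-based, or variance-reduced server updates.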
Papers
FedNAR: Federated Optimization with Normalized Annealing Regularization
Junbo Li, Ang Li, Chong Tian, Qirong Ho, Eric P. Xing, Hongyi Wang
FedAWARE: Maximizing Gradient Diversity for Heterogeneous Federated Server-side Optimization
Dun Zeng, Zenglin Xu, Yu Pan, Qifan Wang, Xiaoying Tang
Enhanced Federated Optimization: Adaptive Unbiased Client Sampling with Reduced Variance
Dun Zeng, Zenglin Xu, Yu Pan, Xu Luo, Qifan Wang, Xiaoying Tang