Local Gradient
Local gradients, which measure how a model's loss changes with respect to small changes in its parameters, are central to many machine learning optimization strategies. Current research focuses on making algorithms that rely on local gradients more efficient and robust, particularly in distributed and federated learning settings, through techniques such as gradient compression, adaptive batch sizes, and novel aggregation methods. These advances aim to reduce communication overhead, speed up convergence, and improve model fairness and robustness against data heterogeneity and adversarial attacks, with direct impact on the scalability and reliability of large-scale machine learning applications.
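To make the distributed setting concrete, below is a minimal sketch (not drawn from any of the listed papers) of how local gradients, gradient compression, and aggregation typically interact: each worker computes a gradient on its own data shard, sparsifies it with top-k compression to cut communication, and a server averages the compressed updates before taking a step. The model (linear least squares), worker count, learning rate, and compression budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    """Gradient of the mean squared error 0.5*||Xw - y||^2 / n on one worker's shard."""
    residual = X @ w - y
    return X.T @ residual / len(y)

def top_k_compress(grad, k):
    """Keep only the k largest-magnitude entries (one common compression scheme)."""
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    sparse[idx] = grad[idx]
    return sparse

# Synthetic data split across 4 workers (assumption: i.i.d. shards for simplicity).
d, n_per_worker, num_workers = 20, 50, 4
w_true = rng.normal(size=d)
shards = []
for _ in range(num_workers):
    X = rng.normal(size=(n_per_worker, d))
    y = X @ w_true + 0.01 * rng.normal(size=n_per_worker)
    shards.append((X, y))

w = np.zeros(d)
lr, k = 0.1, 5  # learning rate and compression budget (hypothetical values)

for step in range(200):
    # Each worker computes and compresses its local gradient.
    compressed = [top_k_compress(local_gradient(w, X, y), k) for X, y in shards]
    # The server aggregates by simple averaging and applies a gradient step.
    w -= lr * np.mean(compressed, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```

Real systems layer further refinements on this loop, such as error feedback for the dropped gradient entries, adaptive local batch sizes, or robust aggregation rules in place of the plain average.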
Papers
Boosting Adversarial Transferability by Achieving Flat Local Maxima
Zhijin Ge, Hongying Liu, Xiaosen Wang, Fanhua Shang, Yuanyuan Liu
Communication-Efficient Gradient Descent-Ascent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates
Siqi Zhang, Sayantan Choudhury, Sebastian U Stich, Nicolas Loizou