Local Gradient
Local gradients, the gradients of a model's loss with respect to its parameters computed on a single worker's or client's local data, are central to many machine learning optimization strategies. Current research focuses on improving the efficiency and robustness of algorithms that use local gradients, particularly in distributed and federated learning settings, through techniques such as gradient compression, adaptive batch sizes, and novel aggregation methods. These advances aim to reduce communication overhead, accelerate convergence, and improve fairness and robustness against data heterogeneity and adversarial attacks, thereby strengthening the scalability and reliability of large-scale machine learning applications.
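The pipeline described above can be illustrated with a minimal sketch: each worker computes a gradient of the loss on its own data shard, compresses it (here with top-k sparsification, a common gradient-compression scheme), and a server averages the results FedAvg-style. The linear model, loss, learning rate, and shard sizes are illustrative assumptions, not taken from any specific paper listed below.

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of mean-squared-error loss for a linear model,
    # computed on one worker's local data shard.
    return X.T @ (X @ w - y) / len(y)

def top_k_compress(g, k):
    # Keep only the k largest-magnitude entries and zero the rest,
    # reducing the amount of data each worker must communicate.
    keep = np.argsort(np.abs(g))[-k:]
    out = np.zeros_like(g)
    out[keep] = g[keep]
    return out

def aggregate(gradients):
    # Server-side step: average the compressed local gradients.
    return np.mean(gradients, axis=0)

# Three workers, each with its own (hypothetical) local data shard.
rng = np.random.default_rng(0)
shards = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]

w = np.zeros(5)
for step in range(100):
    grads = [top_k_compress(local_gradient(w, X, y), k=3) for X, y in shards]
    w -= 0.1 * aggregate(grads)
```

Top-k compression makes the update biased, so practical systems often pair it with error feedback (accumulating the discarded residual locally); the sketch omits that for brevity.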
Papers
Neighborhood Gradient Clustering: An Efficient Decentralized Learning Method for Non-IID Data Distributions
Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
FedVeca: Federated Vectorized Averaging on Non-IID Data with Adaptive Bi-directional Global Objective
Ping Luo, Jieren Cheng, Zhenhao Liu, N. Xiong, Jie Wu