Gradient Tracking

Gradient tracking is a decentralized optimization technique in which multiple agents or nodes collaboratively minimize a global objective function whose data are distributed across the network. Alongside its local iterate, each agent maintains an auxiliary variable that tracks the network-wide average gradient through communication with its neighbors, which mitigates the data heterogeneity challenges inherent in distributed systems. Current research focuses on improving the efficiency and robustness of gradient tracking algorithms, particularly within federated learning frameworks and for non-convex optimization problems, often incorporating techniques such as heavy-ball momentum, natural gradients, and variance reduction. These advances improve the scalability and performance of distributed machine learning and other applications that require efficient information exchange and consensus among networked agents, leading to better model training and resource utilization.
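To make the idea concrete, below is a minimal sketch of the canonical gradient-tracking update (the DIGing-style recursion) on a toy decentralized least-squares problem. The ring mixing matrix W, step size alpha, and the synthetic local objectives (A_i, b_i) are illustrative assumptions, not taken from any particular paper; each agent mixes iterates with its neighbors, descends along its tracker y_i, and updates y_i so that it follows the network-average gradient.

```python
import numpy as np

# Toy decentralized least-squares: n agents each hold a private (A_i, b_i)
# and jointly minimize f(x) = sum_i 0.5 * ||A_i x - b_i||^2.
# Names and sizes (W, alpha, A, b, n_agents, dim) are illustrative assumptions.

rng = np.random.default_rng(0)
n_agents, dim = 4, 3
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]

def local_grad(i, x):
    """Gradient of agent i's private objective at its local iterate x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring topology: each agent averages
# with itself and its two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

alpha = 0.01                                  # step size (assumed)
x = np.zeros((n_agents, dim))                 # local iterates, one row per agent
y = np.array([local_grad(i, x[i]) for i in range(n_agents)])  # gradient trackers

for _ in range(500):
    # Consensus step on the iterates, then descend along the tracked gradient.
    x_next = W @ x - alpha * y
    # Tracker update: mix neighbors' trackers and add the change in the local
    # gradient, so each y_i tracks the average of all agents' gradients.
    grads_old = np.array([local_grad(i, x[i]) for i in range(n_agents)])
    grads_new = np.array([local_grad(i, x_next[i]) for i in range(n_agents)])
    y = W @ y + grads_new - grads_old
    x = x_next

# Agents' iterates should agree (consensus) and approach the global minimizer.
print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
```

The key difference from plain decentralized gradient descent is the second recursion: because y_i accumulates gradient differences through the mixing matrix, it converges to the average gradient of the global objective, which is what allows exact convergence under heterogeneous local data.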

Papers