Convergence Rate
Convergence rate analysis in machine learning determines how quickly an algorithm approaches an optimal solution, a crucial factor for efficiency and scalability. Current research investigates convergence rates across diverse algorithms, including stochastic gradient descent (SGD) and its variants, federated learning methods, and policy gradient approaches, often within specific settings such as high-dimensional optimization or heterogeneous data distributions. Understanding and improving convergence rates is vital for building more efficient machine learning models and extending them to increasingly complex problems; progress here advances both theoretical understanding and practical deployment.
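To make the notion of a convergence rate concrete, the sketch below is a minimal, illustrative example (with an assumed random least-squares problem and step-size schedule, not drawn from any of the listed papers): it runs SGD with a 1/t step size on a strongly convex objective, where classical theory predicts that the expected squared distance to the optimum decays as O(1/t), and then fits the empirical rate exponent from the error trajectory.

```python
import numpy as np

# Minimal sketch (assumed problem and parameters, not from any listed paper):
# estimate the empirical convergence rate of SGD on a random least-squares
# objective f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2, which is strongly
# convex, so theory predicts E||x_t - x*||^2 = O(1/t) with a 1/t step size.
rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # reference optimum

x = np.zeros(d)
errors = []
T = 5000
for t in range(1, T + 1):
    i = rng.integers(n)                    # sample one data point
    grad = (A[i] @ x - b[i]) * A[i]        # stochastic gradient at x
    x -= grad / (t + 10)                   # eta_t = 1/(t+10): O(1/t) schedule
    errors.append(np.linalg.norm(x - x_star) ** 2)

# Fit log(error) ~ alpha * log(t); alpha near -1 matches the O(1/t) rate.
ts = np.arange(1, T + 1)
alpha = np.polyfit(np.log(ts[100:]), np.log(errors[100:]), 1)[0]
print(f"estimated rate exponent: {alpha:.2f} (theory predicts about -1)")
```

The fitted exponent is an empirical proxy for the theoretical rate: a slope near -1 on a log-log plot of error versus iteration corresponds to the O(1/t) rate for strongly convex problems, while a slope near -0.5 would indicate the slower O(1/sqrt(t)) behavior typical of merely convex objectives.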
Papers
Convergence rates for Poisson learning to a Poisson equation with measure data
Leon Bungert, Jeff Calder, Max Mihailescu, Kodjo Houssou, Amber Yuan
Fast Distributed Optimization over Directed Graphs under Malicious Attacks using Trust
Arif Kerem Dayı, Orhan Eren Akgün, Stephanie Gil, Michal Yemini, Angelia Nedić