Convergence Rate
Convergence rate analysis in machine learning determines how quickly an algorithm approaches an optimal solution, a crucial factor for efficiency and scalability. Current research investigates convergence rates across diverse algorithms, including stochastic gradient descent (SGD) and its variants, federated learning methods, and policy gradient approaches, often in specific settings such as high-dimensional optimization or heterogeneous data distributions. Understanding and improving convergence rates are essential for building more efficient machine learning models and for applying them to increasingly complex problems, with consequences for both theory and practical deployment.
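As a concrete illustration of what a convergence rate means in practice, the sketch below measures the empirical rate of SGD on a synthetic strongly convex least-squares problem and compares the fitted exponent with the classical O(1/t) guarantee. This is a minimal illustrative example, not code from any of the listed papers; the objective, step-size schedule, and rate-fitting procedure are all assumptions chosen for simplicity.

```python
import numpy as np

# Minimal sketch (illustrative assumptions throughout): estimate the
# empirical convergence rate of SGD on a strongly convex least-squares
# problem and compare the fitted exponent with the O(1/t) theory.
rng = np.random.default_rng(0)
n, d = 2000, 10
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimizer of the empirical objective

x = np.zeros(d)
iters, errors = [], []
for t in range(1, 50001):
    i = rng.integers(n)                  # sample one data point uniformly
    grad = (A[i] @ x - b[i]) * A[i]      # stochastic gradient of 0.5*(A[i]@x - b[i])^2
    x -= grad / t                        # Theta(1/t) step size, standard under strong convexity
    if t % 500 == 0:
        iters.append(t)
        errors.append(np.sum((x - x_opt) ** 2))

# Fit log ||x_t - x*||^2 ~ c + p*log t; p near -1 matches the O(1/t) rate.
p = np.polyfit(np.log(iters), np.log(errors), 1)[0]
print(f"fitted rate exponent: {p:.2f}  (theory for strongly convex SGD: -1)")
```

The fitted slope on a log-log plot is the standard empirical proxy for the rate exponent: an O(1/t) bound on the squared distance to the optimum corresponds to a slope of roughly -1, while faster schemes (e.g., variance-reduced methods) would show steeper decay.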
Papers
Learning GFlowNets from partial episodes for improved convergence and stability
Kanika Madan, Jarrid Rector-Brooks, Maksym Korablyov, Emmanuel Bengio, Moksh Jain, Andrei Nica, Tom Bosc, Yoshua Bengio, Nikolay Malkin
Convergence rate of the (1+1)-evolution strategy on locally strongly convex functions with Lipschitz continuous gradient and their monotonic transformations
Daiki Morinaga, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
$O(T^{-1})$ Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games
Yuepeng Yang, Cong Ma