Early Stage Convergence
Research on early stage convergence in machine learning studies the initial phase of training algorithms, with the aim of accelerating convergence and improving generalization performance. Current work examines this question across optimization algorithms (e.g., Adam, SGD, FedProx), model architectures (e.g., transformers, diffusion models), and problem domains (e.g., federated learning, collaborative filtering). These studies draw on tools from dynamical systems theory and optimal transport to establish convergence guarantees and bounds, contributing to more efficient and robust machine learning systems across diverse applications.
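As a concrete illustration of what "early stage" behavior means in practice, the following minimal Python sketch (not drawn from any of the listed papers; the toy least-squares problem, step counts, and hyperparameters are all assumptions made here for illustration) compares the first iterations of minibatch SGD and Adam and prints the early loss trajectory that such analyses aim to characterize.

import numpy as np

# Minimal illustrative sketch (assumptions: toy least-squares problem, step
# counts, and hyperparameters chosen for demonstration only). It compares the
# first iterations of minibatch SGD and Adam on f(w) = (1/2n) * ||X w - y||^2
# to show the early-stage loss trajectory such analyses study.

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)


def full_loss(w):
    """Mean squared error over the whole dataset (used only for monitoring)."""
    return 0.5 * np.mean((X @ w - y) ** 2)


def minibatch_grad(w, batch):
    """Gradient of the loss on a sampled minibatch."""
    Xb, yb = X[batch], y[batch]
    return Xb.T @ (Xb @ w - yb) / len(batch)


def run_sgd(steps=50, lr=0.05, batch_size=32):
    w = np.zeros(d)
    losses = []
    for _ in range(steps):
        losses.append(full_loss(w))
        batch = rng.choice(n, size=batch_size, replace=False)
        w -= lr * minibatch_grad(w, batch)
    return losses


def run_adam(steps=50, lr=0.05, batch_size=32, beta1=0.9, beta2=0.999, eps=1e-8):
    w = np.zeros(d)
    m, v = np.zeros(d), np.zeros(d)
    losses = []
    for t in range(1, steps + 1):
        losses.append(full_loss(w))
        batch = rng.choice(n, size=batch_size, replace=False)
        g = minibatch_grad(w, batch)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g ** 2     # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return losses


if __name__ == "__main__":
    # Print the loss over the first ten iterations for each optimizer; the shape
    # of this early trajectory is the object of early-stage convergence analysis.
    for name, losses in [("SGD", run_sgd()), ("Adam", run_adam())]:
        print(name, [round(l, 4) for l in losses[:10]])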
Papers
Stability and Convergence of Distributed Stochastic Approximations with large Unbounded Stochastic Information Delays
Adrian Redder, Arunselvan Ramaswamy, Holger Karl
Convergence of Alternating Gradient Descent for Matrix Factorization
Rachel Ward, Tamara G. Kolda
On the convergence of the MLE as an estimator of the learning rate in the Exp3 algorithm
Julien Aubert, Luc Lehéricy, Patricia Reynaud-Bouret