Superlinear Convergence

Superlinear convergence describes algorithms whose rate of convergence accelerates as they approach a solution: formally, the ratio of successive errors ||x_{k+1} − x*|| / ||x_k − x*|| tends to zero, in contrast to linear convergence, where it approaches a constant strictly between zero and one. Current research focuses on establishing superlinear rates in a range of settings, including optimization methods (quasi-Newton, ADMM, and stochastic Newton approaches) and applications such as federated learning and deep neural network training. These advances are crucial for improving the efficiency and scalability of machine learning and optimization tasks, enabling faster model training and faster solution of large problems across scientific and engineering domains.
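
As a concrete illustration (a minimal sketch, not drawn from any specific paper listed below), the following Python snippet tracks the successive error ratio for SciPy's BFGS quasi-Newton method on the Rosenbrock function, whose minimizer (1, 1) is known in closed form; a ratio that shrinks toward zero over the final iterations is the hallmark of superlinear convergence. The choice of test function, starting point, and the use of scipy.optimize here are illustrative assumptions.

```python
# Sketch: observe superlinear convergence of BFGS (a quasi-Newton method)
# by tracking the error ratio ||x_{k+1} - x*|| / ||x_k - x*||, which tends
# toward zero for superlinearly convergent iterations.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x_star = np.array([1.0, 1.0])  # known minimizer of the Rosenbrock function
iterates = []                  # iterates recorded via the callback

minimize(
    rosen,                     # objective f(x)
    x0=np.array([-1.2, 1.0]),  # illustrative starting point
    jac=rosen_der,             # analytic gradient
    method="BFGS",
    callback=lambda xk: iterates.append(xk.copy()),
)

errors = [np.linalg.norm(xk - x_star) for xk in iterates]
for k in range(1, len(errors)):
    if errors[k - 1] > 0:      # guard against division by zero at convergence
        print(f"iter {k:2d}  error {errors[k]:.3e}  "
              f"ratio {errors[k] / errors[k - 1]:.3f}")
```

Running this typically shows the ratio hovering near a constant in the early iterations and then dropping sharply toward zero near the end, which is the superlinear regime that quasi-Newton theory (e.g., the Dennis–Moré characterization) predicts.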

Papers