Superlinear Convergence
Superlinear convergence describes iterative algorithms whose error shrinks at an accelerating rate as the iterates approach a solution: the ratio of successive errors tends to zero, in contrast to linear convergence, where it merely stays below a constant less than one. Current research focuses on establishing superlinear rates in a variety of settings, including optimization methods such as quasi-Newton, ADMM, and stochastic Newton approaches, and within specific applications such as federated learning and deep neural network training. These guarantees matter because they translate into markedly fewer iterations near the solution, improving the efficiency and scalability of machine learning and optimization tasks and yielding faster model training and solution finding across scientific and engineering domains.
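For reference, the textbook definition underlying these results can be stated in one line; this is the standard notion, not a formula taken from any particular paper listed here:

```latex
% Standard definition (general background, not from the source): a sequence
% (x_k) with limit x^* converges superlinearly when
\lim_{k \to \infty} \frac{\lVert x_{k+1} - x^{*} \rVert}{\lVert x_{k} - x^{*} \rVert} = 0,
% whereas linear convergence only requires the ratio to stay below some r < 1.
```

To make the behavior concrete, here is a minimal, self-contained Python sketch (an illustration of the general idea, not code from any surveyed paper) that runs Newton's method, a prototypical superlinearly convergent scheme, on a simple strongly convex function with known minimizer x* = 0 and prints successive error ratios, which shrink toward zero:

```python
# Minimal sketch: Newton's method on f(x) = sum(x^4)/4 + sum(x^2)/2,
# whose unique minimizer is x* = 0. Near the solution the error ratio
# ||x_{k+1} - x*|| / ||x_k - x*|| collapses toward zero -- the empirical
# signature of superlinear (here quadratic) convergence.
import numpy as np

def grad(x):
    # Gradient of f(x) = sum(x^4)/4 + sum(x^2)/2.
    return x**3 + x

def hess(x):
    # Hessian of the same function (diagonal).
    return np.diag(3 * x**2 + 1)

x = np.full(3, 2.0)        # starting point
x_star = np.zeros(3)       # known minimizer, used only to measure error
prev_err = np.linalg.norm(x - x_star)
for k in range(8):
    x = x - np.linalg.solve(hess(x), grad(x))  # full Newton step
    err = np.linalg.norm(x - x_star)
    print(f"iter {k}: error {err:.3e}, ratio {err / prev_err:.3e}")
    prev_err = err
```

A quasi-Newton method such as BFGS, which replaces the exact Hessian with a low-cost approximation, would show the same qualitative pattern of vanishing error ratios, only after a few more iterations.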