Adaptive Preconditioner
Adaptive preconditioning aims to accelerate the convergence of iterative methods used in solving large-scale linear systems and optimizing complex functions, particularly in machine learning and scientific computing. Current research focuses on developing efficient preconditioners using techniques like Kronecker product approximations, graph neural networks, and data-driven approaches to learn problem-specific preconditioners, often integrated into optimizers such as Adam and Shampoo. These advancements improve the efficiency and scalability of algorithms for diverse applications, including deep learning model training, solving partial differential equations, and accelerating Gaussian process regression.
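The linear-system side of this summary can be illustrated with a minimal sketch: a diagonal (Jacobi) preconditioner applied inside the preconditioned conjugate-gradient iteration on a badly scaled symmetric positive-definite system. This is a generic textbook construction, not the method of any particular paper surveyed here; the function and variable names are illustrative.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=5000):
    """Preconditioned conjugate gradient for a symmetric positive-definite A.

    M_inv is a callable applying the inverse of the preconditioner M to a
    vector; passing the identity recovers plain CG. Returns the approximate
    solution and the number of iterations performed.
    """
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()                  # search direction
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * b_norm:
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# A badly scaled SPD test system: diagonal entries span three orders of
# magnitude, plus a modest random SPD perturbation.
rng = np.random.default_rng(0)
n = 200
d = np.logspace(0, 3, n)
B = rng.standard_normal((n, n)) / np.sqrt(n)
A = np.diag(d) + B @ B.T
b = rng.standard_normal(n)

# Plain CG (identity preconditioner) versus Jacobi preconditioning, which
# rescales by the diagonal of A and largely removes the bad scaling.
x_id, it_id = pcg(A, b, lambda r: r)
x_jac, it_jac = pcg(A, b, lambda r: r / np.diag(A))
```

On systems like this, the Jacobi-preconditioned run typically converges in far fewer iterations than plain CG; data-driven approaches in the surveyed literature aim to learn richer, problem-specific preconditioners that improve on such simple diagonal scalings.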