Adaptive Preconditioner
Adaptive preconditioning aims to accelerate the convergence of iterative methods for solving large-scale linear systems and for optimizing complex objective functions, particularly in machine learning and scientific computing. Current research focuses on building efficient preconditioners with techniques such as Kronecker product approximations, graph neural networks, and data-driven approaches that learn problem-specific preconditioners, often integrated into optimizers such as Adam and Shampoo. These advances improve the efficiency and scalability of algorithms across diverse applications, including deep learning model training, solving partial differential equations, and accelerating Gaussian process regression.
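To make the idea concrete, the sketch below shows the simplest instance of preconditioning an iterative solver: conjugate gradient with a Jacobi (diagonal) preconditioner, where M = diag(A) adapts the solver to the scaling of the particular system. This is a minimal illustration of the general principle, not any specific method from the papers listed here; the function name and test system are invented for the example.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient with a Jacobi (diagonal) preconditioner.

    M = diag(A) is the simplest problem-adapted preconditioner: applying
    M^{-1} rescales each coordinate by its diagonal entry, which already
    helps markedly on badly scaled symmetric positive definite systems.
    """
    n = len(b)
    M_inv = 1.0 / np.diag(A)       # inverse of the diagonal preconditioner
    x = np.zeros(n)
    r = b - A @ x                  # residual
    z = M_inv * r                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # new conjugate search direction
        rz = rz_new
    return x

# Badly scaled SPD test system: row/column scaling by several orders of
# magnitude, which the diagonal preconditioner largely undoes.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A0 = B @ B.T + 50 * np.eye(50)             # well-conditioned SPD core
s = np.logspace(0, 2, 50)
A = np.diag(s) @ A0 @ np.diag(s)           # badly scaled SPD system
b = rng.standard_normal(50)
x = jacobi_pcg(A, b)
residual_ok = np.allclose(A @ x, b, atol=1e-6)
```

Learned or adaptive preconditioners generalize this pattern: they replace the fixed diagonal M with a richer operator (a Kronecker-factored matrix, or the output of a neural network) chosen from data so that M⁻¹A has a more favorable spectrum.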
Papers
February 1, 2024
January 16, 2024
December 28, 2023
December 20, 2023
December 6, 2023
December 4, 2023
November 16, 2023
October 27, 2023
October 18, 2023
October 3, 2023
September 29, 2023
September 20, 2023
September 12, 2023
August 28, 2023
August 21, 2023
June 30, 2023
June 9, 2023
May 30, 2023
May 25, 2023