Adaptive Preconditioner
Adaptive preconditioning aims to accelerate the convergence of iterative methods for solving large-scale linear systems and for optimizing complex functions, particularly in machine learning and scientific computing. Current research focuses on building efficient preconditioners with techniques such as Kronecker-product approximations, graph neural networks, and data-driven methods that learn problem-specific preconditioners, often integrated into optimizers such as Adam and Shampoo. These advances improve the efficiency and scalability of algorithms across diverse applications, including deep learning model training, the numerical solution of partial differential equations, and accelerated Gaussian process regression.
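As a minimal illustration of where a preconditioner enters an iterative solver, the sketch below implements conjugate gradient with a Jacobi (diagonal) preconditioner M = diag(A) in NumPy. The function name and structure are illustrative assumptions, not code from any of the surveyed papers; adaptive and learned approaches replace the fixed diagonal M with a problem-specific operator.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner M = diag(A).

    Illustrative sketch: each iteration applies M^{-1} to the residual,
    which for a diagonal M is an elementwise division. Assumes A is
    symmetric positive definite with nonzero diagonal.
    """
    n = len(b)
    x = np.zeros(n)
    Minv = 1.0 / np.diag(A)   # inverse of the diagonal preconditioner
    r = b - A @ x             # initial residual
    z = Minv * r              # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The same idea underlies adaptive optimizers: Adam's division of the gradient by the square root of a running second-moment estimate acts as a diagonal preconditioner that is updated from data at every step, while Shampoo maintains Kronecker-factored (per-dimension matrix) statistics to approximate a full-matrix preconditioner at tractable cost.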