Adaptive Preconditioner
Adaptive preconditioning aims to accelerate the convergence of iterative methods used in solving large-scale linear systems and optimizing complex functions, particularly in machine learning and scientific computing. Current research focuses on developing efficient preconditioners using techniques like Kronecker product approximations, graph neural networks, and data-driven approaches to learn problem-specific preconditioners, often integrated into optimizers such as Adam and Shampoo. These advancements improve the efficiency and scalability of algorithms for diverse applications, including deep learning model training, solving partial differential equations, and accelerating Gaussian process regression.
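To make the optimizer-side idea concrete, below is a minimal sketch (not taken from any of the listed papers) contrasting a diagonal, Adam-style preconditioner with a Kronecker-factored, Shampoo-style one for a matrix-shaped parameter. It assumes NumPy; all function names and hyperparameters are illustrative.

```python
# Illustrative sketch of two adaptive preconditioners for a matrix parameter W:
# a diagonal (Adam-style) preconditioner and a Kronecker-factored (Shampoo-style)
# preconditioner. Names and hyperparameters are assumptions, not the papers' code.
import numpy as np

def adam_style_step(W, grad, v, lr=1e-3, beta2=0.999, eps=1e-8):
    """Diagonal preconditioning: scale each coordinate by a running
    second-moment estimate, as in Adam (bias correction omitted)."""
    v = beta2 * v + (1.0 - beta2) * grad**2
    W = W - lr * grad / (np.sqrt(v) + eps)
    return W, v

def shampoo_style_step(W, grad, L, R, lr=1e-3, eps=1e-4):
    """Kronecker-factored preconditioning: accumulate left/right gradient
    statistics L += G G^T, R += G^T G and apply their inverse fourth roots
    on each side of the gradient."""
    L = L + grad @ grad.T
    R = R + grad.T @ grad

    def inv_fourth_root(M):
        # Symmetric eigendecomposition; eps keeps the root well defined.
        w, Q = np.linalg.eigh(M)
        return Q @ np.diag((w + eps) ** -0.25) @ Q.T

    W = W - lr * inv_fourth_root(L) @ grad @ inv_fourth_root(R)
    return W, L, R

# Tiny usage example with a toy gradient.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
v = np.zeros_like(W)
L, R = np.eye(4) * 1e-4, np.eye(3) * 1e-4
for _ in range(5):
    grad = W  # gradient of 0.5 * ||W||_F^2, purely for illustration
    W, v = adam_style_step(W, grad, v)
    W, L, R = shampoo_style_step(W, grad, L, R)
```

The diagonal variant is cheap but ignores correlations between parameters, while the Kronecker-factored variant approximates a full-matrix preconditioner at the cost of small eigendecompositions; methods such as SOAP build on this trade-off.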
Papers
PC-SRIF: Preconditioned Cholesky-based Square Root Information Filter for Vision-aided Inertial Navigation
Tong Ke, Parth Agrawal, Yun Zhang, Weikun Zhen, Chao X. Guo, Toby Sharp, Ryan C. Dutoit
SOAP: Improving and Stabilizing Shampoo using Adam
Nikhil Vyas, Depen Morwani, Rosie Zhao, Itai Shapira, David Brandfonbrener, Lucas Janson, Sham Kakade
Preconditioners for the Stochastic Training of Implicit Neural Representations
Shin-Fang Chng, Hemanth Saratchandran, Simon Lucey
Multi-Level GNN Preconditioner for Solving Large Scale Problems
Matthieu Nastorg, Jean-Marc Gratien, Thibault Faney, Michele Alessandro Bucci, Guillaume Charpiat, Marc Schoenauer