Krylov Subspace
Krylov subspace methods are iterative numerical techniques for approximating solutions to large-scale linear systems and eigenvalue problems; they work by building a low-dimensional subspace, spanned by successive matrix-vector products, that captures the essential action of the original operator. Current research applies these methods to accelerate machine learning tasks, including bilevel optimization, training neural operators for solving partial differential equations, and optimizing graph neural networks, often via algorithms such as the Lanczos process and the conjugate gradient method. These techniques yield significant gains in computational efficiency and scalability across diverse applications, from medical imaging reconstruction to large-scale data analysis and materials science simulations.
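To make the idea concrete, here is a minimal sketch of the conjugate gradient method mentioned above, written with NumPy. It is an illustrative implementation, not drawn from any particular library: each iteration implicitly extends the Krylov subspace span{b, Ab, A²b, …} by one matrix-vector product and selects the iterate minimizing the A-norm of the error over that subspace. The function name and the test matrix are illustrative choices.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive-definite matrix A.

    Each iteration performs one matrix-vector product, expanding
    the Krylov subspace span{b, A b, A^2 b, ...} by one dimension.
    """
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # residual small enough: done
            break
        p = r + (rs_new / rs_old) * p  # A-conjugate new direction
        rs_old = rs_new
    return x

# Hypothetical test system: a random well-conditioned SPD matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b, atol=1e-6))  # True
```

In exact arithmetic CG converges in at most n iterations, but for well-conditioned systems it typically reaches a tight tolerance in far fewer, which is the source of the efficiency gains described above.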