Iterative Method

Iterative methods are computational techniques that repeatedly refine an approximate solution until a desired level of accuracy is reached; they address diverse problems across scientific computing and machine learning. Current research emphasizes improving convergence speed and efficiency, particularly for large-scale datasets, through advances in algorithms such as conjugate gradient methods, variational Born iterative methods, and neural operator-based solvers. These improvements help accelerate computations in fields ranging from Gaussian process regression and inverse scattering problems to large-scale optimization and federated learning, ultimately enabling the analysis of larger and more complex datasets.
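
As a minimal sketch of the refine-until-tolerance loop described above, the snippet below implements a basic conjugate gradient solver for a symmetric positive-definite system (the kind of linear system that arises, for example, in Gaussian process regression). The matrix, tolerance, and problem size are illustrative assumptions, not drawn from any particular paper.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by iterative refinement.

    The residual norm is checked every iteration; the loop stops once it
    falls below `tol` (the desired level of accuracy) or `max_iter` is hit.
    """
    x = np.zeros_like(b)               # initial guess
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # optimal step length along p
        x += alpha * p                 # refine the approximate solution
        r -= alpha * Ap                # update residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # stopping criterion: accuracy reached
            break
        p = r + (rs_new / rs_old) * p  # next conjugate search direction
        rs_old = rs_new
    return x

# Illustrative usage on a small synthetic SPD system
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)          # symmetric positive definite
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))       # residual norm, ~1e-8 or smaller
```

The same pattern (update the iterate, measure the residual, stop at a tolerance) underlies the other methods mentioned above; they differ mainly in how each update step is computed.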

Papers