Anderson Acceleration

Anderson acceleration is a technique for speeding up the convergence of fixed-point iterations: rather than using only the latest iterate, it combines a short history of previous iterates and their residuals to extrapolate a better next estimate. Current research applies Anderson acceleration to diverse areas, including optimization algorithms (such as gradient descent and iteratively reweighted L1 minimization), diffusion models for generative AI, and physics-based models for inverse problems such as Electrical Impedance Tomography. The method shows promise for improving the efficiency of machine learning and scientific computing tasks, yielding faster training, lower computational cost, and, in some cases, more accurate solutions.
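
To make the extrapolation mechanism concrete, below is a minimal sketch of the standard (type-II) Anderson acceleration update for a fixed-point map g, written in Python with NumPy. The function name `anderson` and its parameter defaults (window size `m`, tolerance, iteration cap) are illustrative choices, not taken from any particular paper; practical implementations typically add regularization or restarts for robustness.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson acceleration for the fixed-point problem x = g(x).

    Keeps the last m differences of residuals f_k = g(x_k) - x_k,
    solves a small least-squares problem for mixing coefficients,
    and extrapolates the next iterate from that history.
    """
    x = np.asarray(x0, dtype=float)
    gx_prev = g(x)
    f_prev = gx_prev - x          # residual of the plain iteration
    x = gx_prev                   # first step: plain fixed-point update
    dF, dG = [], []               # residual / g-value difference histories
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x
        dF.append(f - f_prev)
        dG.append(gx - gx_prev)
        if len(dF) > m:           # sliding window of the last m differences
            dF.pop(0)
            dG.pop(0)
        # gamma minimizes ||f - F @ gamma||_2 over the window;
        # lstsq returns the minimum-norm solution if F is rank-deficient
        F = np.column_stack(dF)
        gamma, *_ = np.linalg.lstsq(F, f, rcond=None)
        x = gx - np.column_stack(dG) @ gamma   # extrapolated next iterate
        f_prev, gx_prev = f, gx
    return x

# Example: the fixed point of g(x) = cos(x), approximately 0.739085
print(anderson(np.cos, np.array([1.0])))
```

When `m = 0` this reduces to the plain fixed-point iteration; larger windows reuse more history and typically converge in far fewer evaluations of g, at the cost of a small least-squares solve per step.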

Papers