Incremental Gradient
Incremental gradient methods are optimization algorithms for large-scale machine learning that update model parameters using one training example (or one component function) at a time, cycling through the data rather than computing a full gradient at every step. Current research focuses on sharpening the convergence guarantees of these methods, particularly for the last iterate rather than the average iterate, and on data ordering strategies (for example, random reshuffling versus a fixed cyclic order) that can accelerate training. This work is significant because it addresses gaps in the theoretical understanding of widely used algorithms such as stochastic gradient descent and its variants, leading to more efficient and robust machine learning models across applications.
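The update rule described above can be sketched as follows. This is a minimal illustration, not any specific paper's method: it assumes a least-squares objective, a plain incremental gradient pass in a fixed cyclic order, and a constant step size; all names and values are illustrative.

```python
import numpy as np

# Synthetic least-squares problem: f(w) = (1/n) * sum_i (x_i . w - y_i)^2.
# (All sizes and the step size below are illustrative assumptions.)
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
step = 0.01
for epoch in range(50):
    # One pass over the data in a fixed cyclic order: each update uses
    # the gradient of a single component function, not the full gradient.
    for i in range(n):
        grad_i = 2.0 * (X[i] @ w - y[i]) * X[i]
        w -= step * grad_i

# Distance of the last iterate to the true parameters.
print(np.linalg.norm(w - w_true))
```

Replacing the fixed order with a fresh shuffle of the indices each epoch (random reshuffling) is one of the ordering strategies the literature compares against cyclic and fully i.i.d. sampling.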