First-Order Algorithms
First-order algorithms are iterative optimization methods that rely only on gradient (first-derivative) information, which keeps the per-iteration cost low and makes them well suited to large-scale problems. Current research focuses on improving their convergence rates and broadening their applicability to diverse problem classes, including non-convex and min-max problems, often via techniques such as variance reduction, adaptive step sizes, and entropy regularization within stochastic gradient descent and its variants. These advances are crucial for scaling machine learning models and for applications such as optimal transport and wildfire science, where efficient optimization is paramount.
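To make the gradient-only update concrete, below is a minimal sketch (not drawn from any of the listed papers) of plain stochastic gradient descent alongside an SVRG-style variance-reduced variant on a synthetic least-squares problem. The problem setup, step sizes, and function names are illustrative assumptions, not a reference implementation.

```python
import numpy as np

# Minimal sketch: stochastic gradient descent (SGD) and an SVRG-style
# variance-reduced variant on a least-squares objective
#   f(w) = (1/2n) * ||A w - b||^2,  f_i(w) = 0.5 * (a_i . w - b_i)^2.
# Only gradients are used (no Hessians), which is what makes these
# methods "first-order".

rng = np.random.default_rng(0)
n, d = 500, 5                              # illustrative problem size
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true + 0.1 * rng.standard_normal(n)


def grad_i(w, i):
    """Gradient of the i-th component f_i at w."""
    return A[i] * (A[i] @ w - b[i])


def full_grad(w):
    """Full (batch) gradient of f at w."""
    return A.T @ (A @ w - b) / n


def sgd(steps=5000, eta0=0.1):
    """Plain SGD with a 1/sqrt(t) decaying step size."""
    w = np.zeros(d)
    for t in range(1, steps + 1):
        i = rng.integers(n)
        w -= eta0 / np.sqrt(t) * grad_i(w, i)
    return w


def svrg(epochs=30, inner=n, eta=0.01):
    """SVRG: each epoch anchors a full gradient to reduce variance."""
    w_tilde = np.zeros(d)
    for _ in range(epochs):
        mu = full_grad(w_tilde)            # full gradient at the anchor point
        w = w_tilde.copy()
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced stochastic gradient estimate
            g = grad_i(w, i) - grad_i(w_tilde, i) + mu
            w -= eta * g
        w_tilde = w
    return w_tilde


if __name__ == "__main__":
    for name, w_hat in [("SGD", sgd()), ("SVRG", svrg())]:
        print(f"{name}: ||w - w_true|| = {np.linalg.norm(w_hat - w_true):.4f}")
```

The variance-reduced variant illustrates the trade-off mentioned above: it pays for occasional full-gradient passes in exchange for lower-variance stochastic updates, which typically allows a constant step size instead of a decaying one.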
Papers
Paper entries dated March 29, 2022 through October 8, 2024.