Gradient Method
Gradient methods are iterative optimization algorithms that seek a minimum (or maximum) of a function by repeatedly stepping in the direction of the negative (or positive) gradient. Current research focuses on improving the efficiency and robustness of these methods, particularly for the non-convex problems arising in deep learning, by exploring variants such as stochastic gradient descent and adaptive methods (e.g., Adam), and by incorporating second-order information or preconditioning. These advances are crucial for training complex models, enabling progress in fields such as scientific machine learning and improving performance across a wide range of machine learning tasks.
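As a minimal sketch of the basic update rule, the Python snippet below applies plain gradient descent to a simple quadratic; the objective, step size, and iteration count are illustrative assumptions, not taken from any particular paper on this topic.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Vanilla gradient descent: repeatedly step opposite the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)  # move in the direction of steepest descent
    return x

# Illustrative example: minimize f(x) = ||x - 3||^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=[0.0, 0.0])
print(minimum)  # converges toward [3.0, 3.0]
```

Stochastic and adaptive variants replace the exact gradient with a mini-batch estimate and scale the step per coordinate (as Adam does), but the core descent step shown here is unchanged.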
Papers