First-Order Gradient Methods
First-order gradient methods are fundamental optimization algorithms that seek a minimum of a function by iteratively stepping in the direction of the negative gradient. Current research focuses on improving their efficiency and robustness, particularly for high-dimensional and non-convex problems such as training neural networks, through techniques like variance reduction, adaptive sampling, and acceleration schemes that speed up convergence. These advances are crucial for tackling complex problems in machine learning, solving variational inequalities, and improving performance in applications ranging from image processing to scientific computing.
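As a minimal illustration of the basic first-order update rule described above, the sketch below applies plain gradient descent to a small quadratic objective. The test function, learning rate, and stopping tolerance are illustrative choices, not taken from any particular paper.

import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Plain first-order gradient descent: repeatedly step against the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # gradient nearly zero: close to a stationary point
            break
        x = x - lr * g                # move in the direction of the negative gradient
    return x

# Example: minimize f(x) = ||x - target||^2, whose gradient is 2 * (x - target).
target = np.array([3.0, -1.0])
grad_f = lambda x: 2.0 * (x - target)
print(gradient_descent(grad_f, x0=[0.0, 0.0]))   # converges to approximately [3., -1.]

Variance-reduced, adaptive-sampling, and accelerated variants mentioned above modify how the gradient estimate g or the step size lr is computed at each iteration, but keep this same first-order update structure.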