First-Order Gradient

First-order gradient methods are fundamental optimization algorithms that seek the minimum of a function by iteratively stepping in the direction of the negative gradient. Current research focuses on improving their efficiency and robustness, particularly for high-dimensional problems and the non-convex objectives that arise in training neural networks, through techniques such as variance reduction, adaptive sampling, and modifications that accelerate convergence. These advances are crucial for tackling complex problems in machine learning, solving variational inequalities, and improving performance in applications such as image processing and scientific computing.
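As a concrete illustration of the basic update rule described above, here is a minimal sketch of plain (full-batch) gradient descent. The function name, the constant step size `lr`, the iteration count, and the quadratic test objective are illustrative assumptions, not taken from any particular paper listed below.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, num_steps=100):
    """Minimize a function by repeatedly stepping along the negative gradient.

    grad      : callable returning the gradient of the objective at a point
    x0        : starting point (NumPy array)
    lr        : step size (learning rate), assumed constant here
    num_steps : number of iterations
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        # First-order update: move against the gradient direction.
        x = x - lr * grad(x)
    return x

# Example: minimize f(x) = ||x - 3||^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=np.zeros(2))
print(x_min)  # converges toward [3., 3.]
```

Methods referenced in the research summary, such as variance-reduced or accelerated variants, modify this basic iteration (for example, by using stochastic gradient estimates or momentum terms) rather than replacing it.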

Papers