Stochastic Line Search
Stochastic line search methods aim to improve the efficiency and robustness of gradient-based optimization algorithms, particularly stochastic gradient descent (SGD), by adaptively choosing the step size at each iteration instead of relying on a fixed or hand-tuned schedule. Current research focuses on adaptive line-search variants with convergence guarantees in various settings (e.g., convex objectives, over-parameterized models that interpolate the data), on incorporating variance-reduction techniques for improved efficiency, and on non-monotone acceptance rules that may accelerate convergence. These advances are significant for training large-scale machine learning models, offering improved performance and reduced computational cost across diverse applications, including quantum computing optimization.
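To make the mechanism concrete, the sketch below shows one common form of the idea: an SGD step whose step size is found by backtracking until a stochastic Armijo condition holds on the current mini-batch loss. This is a minimal NumPy illustration under simplifying assumptions, not a reference implementation of any particular paper's method; the function name stochastic_armijo_step and the default values of eta_max, c, and beta are illustrative choices.

```python
import numpy as np

def stochastic_armijo_step(w, loss_fn, grad_fn, batch,
                           eta_max=1.0, c=0.5, beta=0.7, max_backtracks=30):
    """One SGD step with a backtracking (Armijo) line search on the mini-batch loss.

    The trial step size starts at eta_max and is shrunk by beta until the sampled
    Armijo condition
        f_B(w - eta * g) <= f_B(w) - c * eta * ||g||^2
    holds, where f_B and g are the loss and gradient on the current mini-batch.
    (Parameter names and defaults here are illustrative, not from a specific paper.)
    """
    g = grad_fn(w, batch)
    f0 = loss_fn(w, batch)
    g_norm_sq = np.dot(g, g)
    eta = eta_max
    for _ in range(max_backtracks):
        if loss_fn(w - eta * g, batch) <= f0 - c * eta * g_norm_sq:
            break
        eta *= beta  # backtrack: shrink the trial step size
    return w - eta * g, eta

# Illustrative usage on a toy least-squares problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5)

def loss_fn(w, idx):
    r = X[idx] @ w - y[idx]
    return 0.5 * np.mean(r ** 2)

def grad_fn(w, idx):
    r = X[idx] @ w - y[idx]
    return X[idx].T @ r / len(idx)

w = np.zeros(5)
for step in range(100):
    batch = rng.choice(len(X), size=32, replace=False)  # sample a mini-batch
    w, eta = stochastic_armijo_step(w, loss_fn, grad_fn, batch)

print("final full-batch loss:", loss_fn(w, np.arange(len(X))))
```

Resetting the trial step size to eta_max at every iteration is one simple policy; many variants instead warm-start from (or slightly increase) the previously accepted step size to reduce the number of backtracking evaluations.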