Gradient Dominance Property

Gradient dominance is a property of functions whose suboptimality gap is bounded by a function of the gradient norm; it offers a weaker yet more broadly applicable alternative to strong convexity for analyzing optimization algorithms. Current research focuses on establishing convergence rates and sample complexities for stochastic first- and second-order methods, including stochastic gradient descent, variance-reduced methods, and Newton-type methods, under different gradient dominance assumptions for both convex and non-convex problems, with applications in machine learning and reinforcement learning. The aim is to provide tighter theoretical guarantees in practical settings where strong convexity often fails to hold, leading to improved algorithm design and more reliable performance predictions. The results are particularly relevant for training deep neural networks and for policy optimization in reinforcement learning.
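
As a concrete illustration of the property described above, the following is a minimal sketch of a standard gradient dominance condition and its classical consequence for gradient descent. The symbols (order alpha, constant c, PL constant mu, smoothness constant L, step size eta) are generic notation introduced here for exposition and are not taken from any specific paper listed below.

```latex
% Gradient dominance of order \alpha \in (1, 2]:
% the suboptimality gap is controlled by a power of the gradient norm.
f(x) - f^{\star} \;\le\; c \,\|\nabla f(x)\|^{\alpha} \qquad \text{for all } x.

% The case \alpha = 2 with c = \tfrac{1}{2\mu} is the Polyak--Lojasiewicz (PL)
% condition. For an L-smooth f satisfying PL, gradient descent with step size
% \eta = 1/L contracts the suboptimality gap linearly, even without convexity:
f(x_{k+1}) - f^{\star} \;\le\; \left(1 - \frac{\mu}{L}\right)\bigl(f(x_{k}) - f^{\star}\bigr).
```

This is why gradient dominance can substitute for strong convexity in convergence analyses: the linear contraction above relies only on the inequality between the gap and the gradient norm, not on convexity of f.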

Papers