Paper ID: 2404.00042
Stochastic Optimization with Constraints: A Non-asymptotic Instance-Dependent Analysis
Koulik Khamaru
We consider the problem of stochastic convex optimization under convex constraints. We analyze the behavior of a natural variance-reduced proximal gradient (VRPG) algorithm for this problem. Our main result is a non-asymptotic guarantee for the VRPG algorithm. In contrast to minimax worst-case guarantees, our result is instance-dependent: the guarantee captures the complexity of the loss function, the variability of the noise, and the geometry of the constraint set. We show that the non-asymptotic performance of the VRPG algorithm is governed by the distance, scaled by $\sqrt{N}$, between the solution of the given problem and that of a certain small perturbation of it, both solved under the given convex constraints; here, $N$ denotes the number of samples. Leveraging a well-established connection between local minimax lower bounds and solutions of perturbed problems, we show that as $N \rightarrow \infty$, the VRPG algorithm achieves the renowned local minimax lower bound of H\'{a}jek and Le Cam up to universal constants and a logarithmic factor of the sample size.
Submitted: Mar 24, 2024
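
The abstract describes a variance-reduced proximal gradient method for constrained stochastic convex optimization. The paper's exact VRPG algorithm is not reproduced here; the following is a minimal SVRG-style sketch in Python, assuming a per-sample gradient oracle `grad_i` and a Euclidean projection `project` onto the constraint set (both hypothetical names, not the paper's API), with the projection playing the role of the proximal operator of the constraint set's indicator function.

import numpy as np

def vrpg(grad_i, project, x0, N, step=0.05, epochs=20, seed=0):
    """Sketch of a variance-reduced proximal gradient loop (SVRG-style).

    Minimizes (1/N) * sum_i f_i(x) over x in a convex set C, where
    grad_i(x, i) returns the gradient of f_i at x and project(x) is the
    Euclidean projection onto C (the prox of the indicator of C).
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        # Snapshot point: full gradient over all N samples, used as the
        # variance-reduction anchor for the inner stochastic steps.
        x_snap = x.copy()
        full_grad = np.mean([grad_i(x_snap, i) for i in range(N)], axis=0)
        for _ in range(N):
            i = rng.integers(N)
            # Unbiased, variance-reduced gradient estimate.
            g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            # Proximal (here: projected) step keeps the iterate feasible.
            x = project(x - step * g)
    return x

# Hypothetical usage: least squares constrained to the unit Euclidean ball.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
project = lambda x: x / max(1.0, np.linalg.norm(x))
x_hat = vrpg(grad_i, project, np.zeros(5), N=100)

In this sketch the projection step is what encodes the geometry of the constraint set that the abstract's instance-dependent guarantee refers to; for a general convex regularizer one would replace `project` with the corresponding proximal operator.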