Stochastic Bilevel Optimization
Stochastic bilevel optimization addresses nested optimization problems in which the upper-level objective depends on the solution of a lower-level problem, with stochasticity, typically expectations over sampled data, entering at either or both levels; a standard formulation is given below. Current research focuses on efficient algorithms, such as single-loop methods and methods leveraging variance reduction or momentum, that improve sample complexity and reduce computational cost, particularly for problems with a non-convex upper level and a strongly convex lower level (see the sketch after the formulation). These advances matter for large-scale machine-learning applications, including hyperparameter optimization, meta-learning, and reinforcement learning, where the nested structure arises naturally. The field is also exploring decentralized methods to handle distributed data and zeroth-order methods for settings where gradient information is limited or unavailable.
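
For concreteness, one common way to write the problem (the notation here is illustrative, not tied to any single paper) is:

```latex
\min_{x \in \mathbb{R}^{d_x}} \; F(x) := \mathbb{E}_{\xi}\!\left[ f\!\left(x,\, y^{*}(x);\, \xi\right) \right]
\quad \text{s.t.} \quad
y^{*}(x) \in \operatorname*{arg\,min}_{y \in \mathbb{R}^{d_y}} \; \mathbb{E}_{\zeta}\!\left[ g(x, y; \zeta) \right].
```

When g is strongly convex in y, the implicit function theorem yields the hypergradient

```latex
\nabla F(x) \;=\; \nabla_x f\big(x, y^{*}(x)\big) \;-\; \nabla^2_{xy} g\big(x, y^{*}(x)\big)\,\big[\nabla^2_{yy} g\big(x, y^{*}(x)\big)\big]^{-1} \nabla_y f\big(x, y^{*}(x)\big),
```

and the inverse-Hessian term is precisely what efficient methods approximate with cheap stochastic iterations rather than exact solves.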
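To illustrate the single-loop idea, below is a minimal sketch, similar in spirit to recently proposed single-loop schemes, that alternates one stochastic update each for the lower-level variable, an auxiliary variable tracking the inverse-Hessian-vector product, and the upper-level variable. The toy instance (ridge regression with a learned penalty lam), all helper names (grad_y_g, hvp_yy_g, grad_y_f), and the step sizes are hypothetical choices for this sketch, not drawn from the literature summarized above.

```python
import numpy as np

# Hypothetical toy instance: tune a ridge penalty lam (upper level) so that
# the ridge solution y (lower level) fits held-out data well.
#   lower: g(lam, y) = 0.5*||A_tr y - b_tr||^2 / n_tr + 0.5*lam*||y||^2
#   upper: f(lam, y) = 0.5*||A_val y - b_val||^2 / n_val
rng = np.random.default_rng(0)
d, n_tr, n_val = 10, 200, 100
w = rng.normal(size=d)  # ground-truth weights for synthetic data
A_tr = rng.normal(size=(n_tr, d));  b_tr = A_tr @ w + 0.5 * rng.normal(size=n_tr)
A_val = rng.normal(size=(n_val, d)); b_val = A_val @ w + 0.5 * rng.normal(size=n_val)

def grad_y_g(lam, y, idx):
    # Minibatch gradient of the lower-level objective in y.
    A, b = A_tr[idx], b_tr[idx]
    return A.T @ (A @ y - b) / len(idx) + lam * y

def hvp_yy_g(lam, v, idx):
    # Minibatch Hessian-vector product of g in y (no explicit Hessian formed).
    A = A_tr[idx]
    return A.T @ (A @ v) / len(idx) + lam * v

def grad_y_f(y, idx):
    # Minibatch gradient of the upper-level (validation) objective in y.
    A, b = A_val[idx], b_val[idx]
    return A.T @ (A @ y - b) / len(idx)

alpha, beta, batch = 1e-2, 1e-2, 32
lam, y, v = 1.0, np.zeros(d), np.zeros(d)
for t in range(5000):
    i_tr = rng.integers(n_tr, size=batch)
    i_val = rng.integers(n_val, size=batch)
    # One lower-level SGD step.
    y = y - beta * grad_y_g(lam, y, i_tr)
    # One step of the auxiliary variable toward [grad^2_yy g]^{-1} grad_y f.
    v = v - beta * (hvp_yy_g(lam, v, i_tr) - grad_y_f(y, i_val))
    # Hypergradient estimate: grad_lam f = 0 here, and the cross derivative
    # d(grad_y g)/d lam = y, so the correction term reduces to -y.v.
    hypergrad = -y @ v
    # Projected upper-level step (keep lam > 0 for strong convexity).
    lam = max(lam - alpha * hypergrad, 1e-6)
print(f"learned lam = {lam:.4f}")
```

Each iteration costs only a few minibatch gradient and Hessian-vector products, which is the appeal of single-loop methods over nested loops that solve the lower-level problem and the linear system to high accuracy at every upper-level step.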