First-Order Methods
First-order methods are optimization algorithms that utilize only gradient information to iteratively find solutions to complex problems, offering computational efficiency compared to higher-order methods. Current research focuses on extending their applicability to challenging problem settings, including bilevel optimization, minimax problems, and online learning scenarios, often employing techniques like accelerated gradient descent and adaptive step sizes to improve convergence rates. These advancements are significant because they enable efficient solutions for large-scale problems in diverse fields such as machine learning, robotics, and resource allocation, where computational cost is a major constraint.
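The defining feature described above, iterating using only gradient evaluations, can be sketched in a few lines. Below is a minimal illustration of plain gradient descent alongside Nesterov's accelerated variant on a simple quadratic; the objective, step size, and momentum parameter are illustrative choices, not taken from any of the papers listed here.

```python
# Sketch of two first-order methods on f(x) = (x - 3)^2.
# Only the gradient grad_f is used -- no second-order (Hessian) information.

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def nesterov(grad, x0, step=0.1, momentum=0.9, iters=100):
    """Nesterov's accelerated gradient: take the gradient step from an
    extrapolated point y_k, then update the momentum term."""
    x, y = x0, x0
    for _ in range(iters):
        x_next = y - step * grad(y)          # gradient step at lookahead point
        y = x_next + momentum * (x_next - x)  # extrapolate using the last move
        x = x_next
    return x

grad_f = lambda x: 2.0 * (x - 3.0)  # gradient of f(x) = (x - 3)^2

x_gd = gradient_descent(grad_f, x0=0.0)
x_nag = nesterov(grad_f, x0=0.0)
# Both iterates approach the minimizer x* = 3.
```

On smooth convex problems, acceleration improves the worst-case convergence rate from O(1/k) to O(1/k^2), which is one reason accelerated variants recur in the work surveyed above.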
Papers
Decoupling Learning and Decision-Making: Breaking the $\mathcal{O}(\sqrt{T})$ Barrier in Online Resource Allocation with First-Order Methods
Wenzhi Gao, Chunlin Sun, Chenyu Xue, Dongdong Ge, Yinyu Ye
On the Complexity of First-Order Methods in Stochastic Bilevel Optimization
Jeongyeol Kwon, Dohyun Kwon, Hanbaek Lyu