Last-Iterate Convergence
Last-iterate convergence in optimization and game theory concerns whether the final iterate of an algorithm itself converges to a desired solution (e.g., a Nash equilibrium), rather than only the average of the iterates. Current research investigates this property across various models, including mean field games, Markov decision processes, and zero-sum games, using algorithms such as optimistic gradient descent-ascent, multiplicative weights update, and policy gradient methods. Last-iterate convergence is valuable in practice because it yields a stable, directly usable solution, avoiding the memory and computational overhead of iterate averaging in large-scale problems. Furthermore, understanding the conditions under which last-iterate convergence holds provides insight into the dynamics of learning algorithms and their behavior in complex systems.
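The distinction can be illustrated on the standard bilinear zero-sum game min_x max_y f(x, y) = xy, whose unique Nash equilibrium is (0, 0). The sketch below (not taken from the listed papers; step size and iteration count are arbitrary choices) compares plain simultaneous gradient descent-ascent, whose last iterate spirals away from the equilibrium, with optimistic gradient descent-ascent, whose last iterate converges:

```python
# Plain gradient descent-ascent (GDA) vs. optimistic GDA (OGDA) on the
# bilinear zero-sum game min_x max_y f(x, y) = x * y, with Nash
# equilibrium (0, 0). Illustrative sketch only; eta and the number of
# steps are assumed values chosen for demonstration.

def gda(x, y, eta, steps):
    """Simultaneous GDA: iterates rotate outward, so the last iterate diverges
    even though the time-average of the iterates converges to (0, 0)."""
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def ogda(x, y, eta, steps):
    """Optimistic GDA: uses the extrapolated gradient 2*g_t - g_{t-1}, which
    damps the rotation; the last iterate converges linearly to (0, 0)."""
    x_prev, y_prev = x, y  # previous iterate, supplies the g_{t-1} term
    for _ in range(steps):
        x_new = x - eta * (2 * y - y_prev)
        y_new = y + eta * (2 * x - x_prev)
        x_prev, y_prev, x, y = x, y, x_new, y_new
    return x, y

if __name__ == "__main__":
    eta, steps = 0.1, 1000
    gx, gy = gda(1.0, 1.0, eta, steps)
    ox, oy = ogda(1.0, 1.0, eta, steps)
    print(f"GDA  last-iterate norm: {(gx**2 + gy**2) ** 0.5:.3e}")  # grows
    print(f"OGDA last-iterate norm: {(ox**2 + oy**2) ** 0.5:.3e}")  # shrinks
```

Averaging the GDA iterates would also recover the equilibrium, but only OGDA's last iterate is directly usable, which is the property the papers below study.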
Papers
Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms
Yang Cai, Gabriele Farina, Julien Grand-Clément, Christian Kroer, Chung-Wei Lee, Haipeng Luo, Weiqiang Zheng
Last-iterate Convergence Separation between Extra-gradient and Optimism in Constrained Periodic Games
Yi Feng, Ping Li, Ioannis Panageas, Xiao Wang