Markov Game
Markov games model strategic interactions among multiple agents in a shared, dynamic environment; the goal is typically to find equilibrium solutions, such as Nash equilibria, that represent stable joint behavior. Current research focuses on efficient algorithms for learning these equilibria, particularly at scale, drawing on techniques such as mean-field approximations, actor-critic methods, and policy gradients, while addressing challenges posed by incomplete information, asymmetry between players, and robustness to uncertainty. The field is central to multi-agent reinforcement learning and has significant implications for diverse applications, including robotics, economics, and energy systems.
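As a concrete illustration of equilibrium computation in a Markov game, here is a minimal sketch of Shapley-style value iteration for a two-player zero-sum case: each state's one-shot game is a 2x2 matrix solved in closed form, and values are backed up through the transitions. The state names, payoffs, and deterministic transitions below are invented for the example and are not taken from the listed papers.

```python
def matrix_game_value(A):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    maximin = max(min(row) for row in A)
    minimax = min(max(A[0][j], A[1][j]) for j in range(2))
    if maximin == minimax:          # pure saddle point exists
        return maximin
    # Otherwise the equilibrium is fully mixed; use the 2x2 closed form.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    denom = A[0][0] + A[1][1] - A[0][1] - A[1][0]
    return det / denom

# Toy Markov game: a matching-pennies stage alternating with a
# coordination-like stage, with deterministic transitions between them.
payoffs = {
    "match": [[1, -1], [-1, 1]],
    "bonus": [[2, 0], [0, 2]],
}
next_state = {"match": "bonus", "bonus": "match"}
gamma = 0.9                          # discount factor

V = {s: 0.0 for s in payoffs}
for _ in range(400):                 # contraction: error shrinks by gamma per sweep
    V = {
        s: matrix_game_value([[payoffs[s][i][j] + gamma * V[next_state[s]]
                               for j in range(2)] for i in range(2)])
        for s in payoffs
    }
# Fixed point: V["match"] = 0.9/0.19 ≈ 4.7368, V["bonus"] = 1/0.19 ≈ 5.2632
```

Since adding a constant shifts a matrix game's value by that constant, the fixed point can be checked by hand here: V(match) = 0.9·V(bonus) and V(bonus) = 1 + 0.9·V(match). Learning-theoretic work on Markov games, including the papers below, studies how to reach such equilibria from samples rather than from a known model.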
Papers
$\widetilde{O}(T^{-1})$ Convergence to (Coarse) Correlated Equilibria in Full-Information General-Sum Markov Games
Weichao Mao, Haoran Qiu, Chen Wang, Hubertus Franke, Zbigniew Kalbarczyk, Tamer Başar
Near-Optimal Reinforcement Learning with Self-Play under Adaptivity Constraints
Dan Qiao, Yu-Xiang Wang