Zero-Sum Markov Games

Zero-sum Markov games model strategic interactions between two agents with diametrically opposed interests in a dynamic environment: one agent's gain is exactly the other's loss at every step. Current research focuses on developing efficient algorithms, such as Q-learning variants and policy iteration methods, often incorporating techniques like entropy regularization or variance reduction to improve convergence and sample efficiency, particularly in settings with incomplete information or adversarial data. These advances are central to multi-agent reinforcement learning and have implications for robust control, security games, and other applications that require analyzing competitive interactions.
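The classical solution method underlying many of these algorithms is Shapley's value iteration: at each state, back up the value of the zero-sum stage game formed by the immediate reward plus the discounted continuation value. Below is a minimal illustrative sketch, not drawn from any particular paper, restricted to 2x2 stage games so the matrix-game value can be computed in closed form; the function and variable names are my own.

```python
import numpy as np

def matrix_game_value(M):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    # Pure-strategy saddle point: maximin equals minimax.
    row_security = M.min(axis=1).max()
    col_security = M.max(axis=0).min()
    if np.isclose(row_security, col_security):
        return row_security
    # Closed-form mixed-strategy value for a 2x2 game without a saddle point.
    a, b = M[0]
    c, d = M[1]
    return (a * d - b * c) / (a + d - b - c)

def shapley_value_iteration(R, P, gamma=0.9, tol=1e-8):
    """Value iteration for a discounted zero-sum Markov game (Shapley, 1953).

    R[s, a, b]  : stage reward to the maximizer
    P[s, a, b, s']: transition probabilities
    """
    n_states = R.shape[0]
    V = np.zeros(n_states)
    while True:
        V_new = np.empty(n_states)
        for s in range(n_states):
            # Stage game: immediate reward plus discounted continuation value.
            Q = R[s] + gamma * P[s] @ V
            V_new[s] = matrix_game_value(Q)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

For example, a single-state "matching pennies" game with a self-loop transition has value 0, since the stage game is symmetric and the continuation value shifts all payoffs equally. General stage games with more actions would require solving a linear program in place of the 2x2 closed form.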

Papers