Zero-Sum Markov Games
Zero-sum Markov games model strategic interactions between two agents with diametrically opposed interests in a dynamic environment. Current research focuses on developing efficient algorithms, such as Q-learning variants and policy iteration methods, often incorporating techniques like entropy regularization or variance reduction to improve convergence and sample efficiency, particularly in settings with incomplete information or adversarial data. These advances underpin progress in multi-agent reinforcement learning and have direct applications in robust control, security games, and other domains that require analyzing competitive interactions.
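The dynamic-programming backbone behind the algorithms mentioned above is Shapley's value iteration: at each state, the two players face a stage game whose payoffs are the immediate reward plus the discounted value of the next state, and the state's value is the minimax value of that stage game. Below is a minimal sketch under simplifying assumptions: 2x2 action spaces (so the matrix-game value has a closed form), deterministic transitions, and a small hypothetical example game; the reward matrices and transition tables are illustrative, not drawn from any of the referenced papers.

```python
def matrix_game_value(M):
    """Value of a 2x2 zero-sum matrix game for the row (max) player.
    Returns the pure saddle-point value if one exists; otherwise the
    closed-form mixed value (a*d - b*c) / (a + d - b - c)."""
    (a, b), (c, d) = M
    maximin = max(min(a, b), min(c, d))   # row player's guaranteed payoff
    minimax = min(max(a, c), max(b, d))   # column player's guaranteed cap
    if maximin == minimax:                # pure saddle point
        return maximin
    return (a * d - b * c) / (a + d - b - c)

def shapley_value_iteration(R, T, gamma=0.9, tol=1e-10):
    """Value iteration for a zero-sum Markov game.
    R[s] is the 2x2 reward matrix at state s (row player maximizes);
    T[s][a][b] is the (deterministic) successor state. Iterates
    V(s) <- value of the stage game [R[s][a][b] + gamma * V(T[s][a][b])]
    until the sup-norm change falls below tol."""
    n = len(R)
    V = [0.0] * n
    while True:
        V_new = []
        for s in range(n):
            Q = [[R[s][a][b] + gamma * V[T[s][a][b]] for b in range(2)]
                 for a in range(2)]
            V_new.append(matrix_game_value(Q))
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new

# Hypothetical two-state game: state 0 self-loops under every joint
# action; every joint action in state 1 moves play to state 0.
R = [[[3, 1], [0, 2]],     # state 0: mixed-strategy stage value 1.5
     [[2, -1], [0, 1]]]    # state 1: mixed-strategy stage value 0.5
T = [[[0, 0], [0, 0]],
     [[0, 0], [0, 0]]]
V = shapley_value_iteration(R, T, gamma=0.9)
# Fixed point: V(0) = 1.5 / (1 - 0.9) = 15, V(1) = 0.5 + 0.9 * 15 = 14
```

Since each stage-game value is a gamma-contraction of the continuation values, the iteration converges geometrically to the unique discounted value of the game; the Q-learning variants studied in this literature estimate the same stage-game payoffs from samples instead of from a known model.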