Zero-Sum Markov Games
Zero-sum Markov games model strategic interactions between two agents with diametrically opposed interests in a dynamic environment. Current research focuses on developing efficient algorithms, such as Q-learning variants and policy iteration methods, often incorporating techniques like entropy regularization or variance reduction to improve convergence and sample efficiency, particularly in settings with incomplete information or adversarial data. These advancements are crucial for tackling challenges in multi-agent reinforcement learning and have implications for robust control, security games, and other applications requiring the analysis of competitive interactions.
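The dynamic-programming backbone behind many of these algorithms is Shapley's value iteration: at each state, the one-step lookahead defines a zero-sum matrix game, and the state's value is the minimax value of that matrix game. The sketch below illustrates this under simplifying assumptions (two actions per player, so the stage game is 2x2 and its value has a closed form); the state/reward/transition layout and function names are illustrative, not from any particular paper.

```python
def matrix_game_value(M):
    """Minimax value of a 2x2 zero-sum matrix game (row player maximizes).

    M[a][b] is the row player's payoff for actions (a, b).
    """
    (a, b), (c, d) = M
    # Pure-strategy saddle point exists when maximin == minimax.
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:
        return maximin
    # Otherwise both players mix; closed-form value for the 2x2 case.
    return (a * d - b * c) / (a + d - b - c)


def shapley_iteration(R, P, gamma=0.9, tol=1e-8):
    """Value iteration for a discounted two-player zero-sum Markov game.

    R[s][a][b]    : stage reward to the row player in state s.
    P[s][a][b][t] : probability of moving from state s to state t.
    Returns the equilibrium value of each state.
    """
    n = len(R)
    V = [0.0] * n
    while True:
        V_new = []
        for s in range(n):
            # One-step lookahead matrix game at state s.
            Q = [[R[s][a][b] + gamma * sum(P[s][a][b][t] * V[t] for t in range(n))
                  for b in range(2)] for a in range(2)]
            V_new.append(matrix_game_value(Q))
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new


# Usage: repeated matching pennies as a one-state Markov game.
# The stage game has value 0, so the discounted game value is 0 as well.
R = [[[1, -1], [-1, 1]]]
P = [[[[1.0], [1.0]], [[1.0], [1.0]]]]
print(shapley_iteration(R, P))  # → [0.0]
```

Q-learning variants for this setting (e.g. Minimax-Q) replace the exact lookahead with sampled transitions but keep the same matrix-game value operator in the update; for more than two actions per player, the stage-game value is obtained by linear programming rather than the 2x2 closed form.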