Cooperative Multi-Agent Reinforcement Learning
Cooperative multi-agent reinforcement learning (MARL) focuses on training multiple agents to collaborate effectively toward shared goals, a challenge amplified by the complexity of decentralized decision-making. Current research emphasizes centralized training with decentralized execution (CTDE), employing techniques such as value function factorization (e.g., QMIX variants) and communication mechanisms (e.g., graph-based networks and attention) to improve coordination and scalability. The field is significant for its potential to advance autonomous systems in domains such as robotics, traffic control, and resource management by enabling efficient, robust teamwork in complex environments. Addressing challenges such as power imbalances, adversarial agents, and privacy concerns within these frameworks is a key focus of ongoing work.
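Since QMIX-style value function factorization anchors much of this line of work, a minimal PyTorch sketch may help make the idea concrete. The class name `QMixer`, the layer sizes, and the hypernetwork layout below are illustrative assumptions, not code from any of the listed papers: per-agent utilities are mixed into a joint value Q_tot by a state-conditioned hypernetwork whose weights are forced to be non-negative.

```python
# Minimal sketch of QMIX-style value function factorization (illustrative).
# Each agent's chosen-action Q-value is mixed into a joint Q_tot by weights
# produced from the global state; abs() keeps those weights non-negative,
# so dQ_tot/dQ_i >= 0 and per-agent greedy actions match the joint argmax.
import torch
import torch.nn as nn

class QMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks: map the global state to mixing weights and biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1))

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        batch = agent_qs.size(0)
        # Non-negative first-layer weights enforce monotonic mixing.
        w1 = torch.abs(self.hyper_w1(state)).view(batch, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(batch, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(batch, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(batch, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2  # (batch, 1, 1)
        return q_tot.view(batch, 1)

# Usage: mix 4 agents' Q-values under a 16-dimensional global state.
mixer = QMixer(n_agents=4, state_dim=16)
q_tot = mixer(torch.randn(8, 4), torch.randn(8, 16))  # -> shape (8, 1)
```

The `abs()` on the hypernetwork outputs is what realizes the monotonicity constraint: since every mixing weight is non-negative, increasing any agent's utility can only increase Q_tot, so at execution time each agent can greedily maximize its own Q_i without access to the global state, which is the CTDE property the summary above refers to.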
Papers
Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation
Yifei Min, Jiafan He, Tianhao Wang, Quanquan Gu
Robust multi-agent coordination via evolutionary generation of auxiliary adversarial attackers
Lei Yuan, Zi-Qian Zhang, Ke Xue, Hao Yin, Feng Chen, Cong Guan, Li-He Li, Chao Qian, Yang Yu
A Theory of Mind Approach as Test-Time Mitigation Against Emergent Adversarial Communication
Nancirose Piazza, Vahid Behzadan
Adaptive Value Decomposition with Greedy Marginal Contribution Computation for Cooperative Multi-Agent Reinforcement Learning
Shanqi Liu, Yujing Hu, Runze Wu, Dong Xing, Yu Xiong, Changjie Fan, Kun Kuang, Yong Liu