Multi-Agent Reinforcement Learning
Multi-agent reinforcement learning (MARL) focuses on developing algorithms that enable multiple independent agents to learn optimal strategies within a shared environment, often to achieve a common goal. Current research emphasizes improving sample efficiency and generalization, exploring novel architectures like equivariant graph neural networks and specialized network structures (e.g., Bottom-Up Networks), and addressing challenges posed by non-stationarity and partial observability through techniques such as auxiliary prioritization and global state inference with diffusion models. MARL's significance lies in its potential to solve complex real-world problems across diverse domains, including robotics, traffic control, and healthcare, by enabling effective coordination and collaboration among multiple agents.
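As a rough illustration of the "multiple independent agents learning in a shared environment" setting described above, the sketch below shows two independent Q-learners in a toy cooperative matrix game with a shared team reward. The environment, payoff table, and all names are illustrative assumptions for this page only and are not drawn from any of the papers listed below; real MARL systems use richer environments and algorithms (e.g., centralized training with decentralized execution).

```python
# Minimal sketch: independent Q-learning in a shared, cooperative
# two-agent matrix game. Everything here (payoffs, hyperparameters,
# function names) is an illustrative assumption, not a reference
# implementation from any listed paper.
import random
from collections import defaultdict

ACTIONS = [0, 1]                      # each agent picks action 0 or 1
PAYOFF = {(0, 0): 1.0, (1, 1): 1.0,   # shared reward: coordination pays +1
          (0, 1): 0.0, (1, 0): 0.0}   # miscoordination pays 0


def train(episodes=5000, alpha=0.1, epsilon=0.1):
    # One independent Q-table per agent; each agent treats the other
    # agent as part of its (non-stationary) environment.
    q = [defaultdict(float), defaultdict(float)]
    for _ in range(episodes):
        joint = []
        for i in range(2):
            if random.random() < epsilon:                  # epsilon-greedy exploration
                joint.append(random.choice(ACTIONS))
            else:
                joint.append(max(ACTIONS, key=lambda a: q[i][a]))
        r = PAYOFF[tuple(joint)]                           # common team reward
        for i in range(2):                                 # fully independent updates
            a = joint[i]
            q[i][a] += alpha * (r - q[i][a])
    return q


if __name__ == "__main__":
    q_tables = train()
    print([dict(t) for t in q_tables])
```

The independent updates are exactly what makes the problem non-stationary from each agent's point of view, which is the challenge that techniques such as auxiliary prioritization and centralized training aim to mitigate.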
Papers
Putting Data at the Centre of Offline Multi-Agent Reinforcement Learning
Claude Formanek, Louise Beyers, Callum Rhys Tilbury, Jonathan P. Shock, Arnu Pretorius
XP-MARL: Auxiliary Prioritization in Multi-Agent Reinforcement Learning to Address Non-Stationarity
Jianye Xu, Omar Sobhy, Bassam Alrifaee
HARP: Human-Assisted Regrouping with Permutation Invariant Critic for Multi-Agent Reinforcement Learning
Huawen Hu, Enze Shi, Chenxi Yue, Shuocun Yang, Zihao Wu, Yiwei Li, Tianyang Zhong, Tuo Zhang, Tianming Liu, Shu Zhang
An Introduction to Centralized Training for Decentralized Execution in Cooperative Multi-Agent Reinforcement Learning
Christopher Amato
A Survey on Emergent Language
Jannik Peters, Constantin Waubert de Puiseau, Hasan Tercan, Arya Gopikrishnan, Gustavo Adolpho Lucas De Carvalho, Christian Bitter, Tobias Meisen
Cooperative Path Planning with Asynchronous Multiagent Reinforcement Learning
Jiaming Yin, Weixiong Rao, Yu Xiao, Keshuang Tang
Multi-Agent Reinforcement Learning from Human Feedback: Data Coverage and Algorithmic Techniques
Natalia Zhang, Xinqi Wang, Qiwen Cui, Runlong Zhou, Sham M. Kakade, Simon S. Du