Neural MMO
Neural MMO is a massively multi-agent reinforcement learning environment, inspired by massively multiplayer online games, designed to benchmark agent generalization and robustness in complex, dynamic scenarios. Research on the platform centers on training agents to master diverse tasks within the environment, typically using standard reinforcement learning algorithms augmented with domain-specific engineering. The platform supports the study of how agents adapt to unforeseen challenges and coordinate in large-scale multi-agent systems, and its open-source environment and tooling foster community collaboration and accelerate progress in the field.
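To make the "massively multi-agent" interaction pattern concrete, the sketch below shows the parallel, dict-keyed step loop common to environments of this kind, where observations, actions, and rewards are all dictionaries keyed by agent id. The `ToyMultiAgentEnv` class and its parameters are hypothetical stand-ins for illustration only, not Neural MMO's actual API.

```python
import random

class ToyMultiAgentEnv:
    """Hypothetical stub of a multi-agent environment (illustration only).

    Mimics the common pattern in which each call returns one entry per
    agent, keyed by agent id.
    """

    def __init__(self, num_agents=8, horizon=10):
        self.agents = list(range(num_agents))
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        # One observation per live agent, keyed by agent id.
        return {a: 0.0 for a in self.agents}

    def step(self, actions):
        self.t += 1
        obs = {a: random.random() for a in self.agents}
        # Toy reward: 1.0 whenever an agent chose action 1.
        rewards = {a: float(actions[a] == 1) for a in self.agents}
        done = self.t >= self.horizon
        dones = {a: done for a in self.agents}
        return obs, rewards, dones, {}

env = ToyMultiAgentEnv()
obs = env.reset()
totals = {a: 0.0 for a in env.agents}
done = False
while not done:
    # A trained policy would map each agent's observation to an action;
    # random actions stand in for one here.
    actions = {a: random.choice([0, 1]) for a in env.agents}
    obs, rewards, dones, _ = env.step(actions)
    for a, r in rewards.items():
        totals[a] += r
    done = all(dones.values())
```

The same loop structure scales to hundreds of agents because every per-step quantity is a mapping over the currently live agent ids rather than a fixed-size array.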
Papers
Neural MMO 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning
Joseph Suárez, Phillip Isola, Kyoung Whan Choe, David Bloomin, Hao Xiang Li, Nikhil Pinnaparaju, Nishaanth Kanna, Daniel Scott, Ryan Sullivan, Rose S. Shuman, Lucas de Alcântara, Herbie Bradley, Louis Castricato, Kirsty You, Yuhao Jiang, Qimai Li, Jiaxin Chen, Xiaolong Zhu
The NeurIPS 2022 Neural MMO Challenge: A Massively Multiagent Competition with Specialization and Trade
Enhong Liu, Joseph Suárez, Chenhui You, Bo Wu, Bingcheng Chen, Jun Hu, Jiaxin Chen, Xiaolong Zhu, Clare Zhu, Julian Togelius, Sharada Mohanty, Weijun Hong, Rui Du, Yibing Zhang, Qinwen Wang, Xinhang Li, Zheng Yuan, Xiang Li, Yuejia Huang, Kun Zhang, Hanhui Yang, Shiqi Tang, Phillip Isola