Paper ID: 2209.06904
Forecasting Evolution of Clusters in Game Agents with Hebbian Learning
Beomseok Kang, Saibal Mukhopadhyay
Large multi-agent systems such as real-time strategy games are often driven by the collective behavior of agents. For example, in StarCraft II, human players group spatially nearby agents into a team and control the team to defeat opponents. In this light, clustering the agents in the game has been used for various purposes, such as the efficient control of agents in multi-agent reinforcement learning and game analytics tools for players. However, despite the useful information provided by clustering, learning the dynamics of multi-agent systems at the cluster level has rarely been studied. In this paper, we present a hybrid AI model that couples unsupervised and self-supervised learning to forecast the evolution of clusters in StarCraft II. We develop an unsupervised Hebbian learning method in a set-to-cluster module that efficiently creates a variable number of clusters with lower inference time complexity than K-means clustering. In addition, a long short-term memory (LSTM) based prediction module is designed to recursively forecast the state vectors generated by the set-to-cluster module, which define the cluster configuration. We experimentally demonstrate that the proposed model successfully predicts the complex movement of the clusters in the game.
Submitted: Aug 19, 2022
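The abstract describes two coupled components: a Hebbian-learning set-to-cluster module that groups agents without K-means, and an LSTM that recursively forecasts the resulting cluster state vectors. The sketch below is an illustrative reading of that pipeline, not the authors' implementation; all class names, dimensions, learning rates, and the specific competitive Hebbian update rule are assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of a Hebbian set-to-cluster
# step and an LSTM-based recursive forecaster over cluster state vectors.
import numpy as np
import torch
import torch.nn as nn

class HebbianSetToCluster:
    """Competitive Hebbian layer: each of `max_clusters` units learns a
    prototype of agent positions; units that attract no agents can be
    ignored, yielding a variable number of active clusters."""
    def __init__(self, max_clusters=8, dim=2, lr=0.1):
        self.W = np.random.randn(max_clusters, dim) * 0.1  # cluster prototypes
        self.lr = lr

    def step(self, agents):
        # agents: (n_agents, dim) positions for one frame
        for x in agents:
            winner = np.argmin(np.linalg.norm(self.W - x, axis=1))
            # Hebbian-style update: pull the winning prototype toward the input
            self.W[winner] += self.lr * (x - self.W[winner])
        # Assign each agent to its nearest prototype; O(n_agents * max_clusters)
        dists = np.linalg.norm(agents[:, None] - self.W[None], axis=2)
        return np.argmin(dists, axis=1)

class ClusterForecaster(nn.Module):
    """LSTM that maps a sequence of cluster state vectors to the next one,
    applied recursively to roll out multi-step forecasts."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, states):            # states: (batch, time, state_dim)
        out, _ = self.lstm(states)
        return self.head(out[:, -1])      # predicted next-step state vector

    @torch.no_grad()
    def rollout(self, states, horizon):
        seq, preds = states, []
        for _ in range(horizon):
            nxt = self.forward(seq).unsqueeze(1)
            preds.append(nxt)
            seq = torch.cat([seq, nxt], dim=1)  # feed prediction back in
        return torch.cat(preds, dim=1)
```

Under these assumptions, the Hebbian step replaces iterative K-means refinement with a single pass of winner-take-all prototype updates per frame, and the forecaster consumes whatever per-cluster state vectors the set-to-cluster stage produces (e.g., centroid and size), so the two modules only interact through that vector interface.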