Paper ID: 2208.10676
Entropy Enhanced Multi-Agent Coordination Based on Hierarchical Graph Learning for Continuous Action Space
Yining Chen, Ke Wang, Guanghua Song, Xiaohong Jiang
Most existing studies on large-scale multi-agent coordination learn discrete policies that restrict agents to a finite set of choices. They rarely select actions directly from continuous action spaces, which would provide more precise control, and are therefore unsuitable for more complex tasks. To address the control problem posed by large-scale multi-agent systems with continuous action spaces, we propose a novel multi-agent reinforcement learning (MARL) coordination method that derives stable continuous policies. By optimizing policies with maximum entropy learning, agents explore more effectively during execution and achieve strong performance after training. We also employ hierarchical graph attention networks (HGAT) and gated recurrent units (GRU) to improve the scalability and transferability of our method. Experiments show that our method consistently outperforms all baselines in large-scale multi-agent cooperative reconnaissance tasks.
Submitted: Aug 23, 2022
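For context, maximum entropy learning (as referenced in the abstract) typically augments the expected return with a policy-entropy bonus; the sketch below shows the standard entropy-regularized objective (e.g., as used in soft actor-critic style methods), not necessarily the paper's exact formulation. Here $\alpha$ is a temperature coefficient, $\gamma$ the discount factor, $\rho_\pi$ the state-action distribution induced by policy $\pi$, and $\mathcal{H}$ the policy entropy.

\begin{equation*}
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\!\Big[ \gamma^{t} \big( r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \big) \Big],
\qquad
\mathcal{H}\big(\pi(\cdot \mid s_t)\big) = -\,\mathbb{E}_{a \sim \pi(\cdot \mid s_t)}\big[ \log \pi(a \mid s_t) \big].
\end{equation*}

The entropy term rewards stochastic policies over continuous actions, which is what drives the improved exploration claimed in the abstract.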