Multi-Agent Actor-Critic
Multi-agent actor-critic (MAAC) methods are a class of reinforcement learning algorithms designed to enable coordinated behavior among multiple agents interacting in a shared environment. Current research focuses on improving the scalability and efficiency of MAAC methods, addressing challenges such as high variance in gradient estimates and the need for efficient communication and coordination; common techniques include centralized training with decentralized execution, attention mechanisms, and transformer architectures. These advances are driving progress in diverse applications, including traffic control, power grid management, and robotic manipulation, where learning effective cooperative strategies in complex multi-agent systems is crucial.
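The centralized-training, decentralized-execution pattern can be illustrated with a minimal sketch: each agent keeps its own policy (actor) and acts on local information only, while a shared critic trained on the team reward supplies a variance-reducing baseline for every agent's gradient update. The toy coordination task, agent count, and learning rates below are illustrative assumptions, not taken from any of the listed papers, and the critic is reduced to a single scalar baseline for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 2
N_ACTIONS = 2

# Decentralized actors: independent per-agent softmax logits.
logits = [np.zeros(N_ACTIONS) for _ in range(N_AGENTS)]
# Centralized critic, reduced here to a scalar baseline on the team reward.
baseline = 0.0

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def team_reward(actions):
    # Toy cooperative task (an assumption): reward 1 when all agents agree.
    return 1.0 if len(set(actions)) == 1 else 0.0

lr_actor, lr_critic = 0.5, 0.1
for episode in range(500):
    probs = [softmax(l) for l in logits]
    # Decentralized execution: each agent samples from its own policy.
    actions = [int(rng.choice(N_ACTIONS, p=p)) for p in probs]
    r = team_reward(actions)
    # Centralized training: the shared baseline reduces gradient variance.
    advantage = r - baseline
    baseline += lr_critic * (r - baseline)
    for i in range(N_AGENTS):
        grad = -probs[i]
        grad[actions[i]] += 1.0   # gradient of log pi_i(a_i) w.r.t. the logits
        logits[i] += lr_actor * advantage * grad

final_probs = [softmax(l) for l in logits]
print([p.round(2) for p in final_probs])
```

After training, the positive feedback between matched actions and positive advantages drives both actors to concentrate probability on the same action, so the decentralized policies coordinate without communicating at execution time.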
Papers
A Scalable Network-Aware Multi-Agent Reinforcement Learning Framework for Decentralized Inverter-based Voltage Control
Han Xu, Jialin Zheng, Guannan Qu
Multi Actor-Critic DDPG for Robot Action Space Decomposition: A Framework to Control Large 3D Deformation of Soft Linear Objects
Mélodie Daniel, Aly Magassouba, Miguel Aranda, Laurent Lequièvre, Juan Antonio Corrales Ramon, Roberto Iglesias Rodriguez, Youcef Mezouar