Safe Multi-Agent Reinforcement Learning

Safe multi-agent reinforcement learning (MARL) focuses on training multiple agents to collaborate and achieve goals while adhering to safety constraints, which is crucial for real-world applications like autonomous driving and power grid management. Current research incorporates safety through constrained optimization (e.g., primal-dual methods and bilevel optimization), model predictive control, and natural language constraints that let practitioners specify safety rules more intuitively. These advances aim to improve the reliability and trustworthiness of MARL systems, enabling their deployment in high-stakes scenarios where safety is paramount. The field's impact spans many domains, offering solutions to complex coordination problems that demand both efficiency and robust safety guarantees.
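To make the primal-dual idea concrete, here is a minimal sketch of a Lagrangian-based constrained update on a toy one-parameter problem. The reward and cost models, parameter names, and learning rates are all illustrative assumptions, not taken from any specific paper; real safe-MARL methods apply the same primal-dual structure to policy-gradient updates over multiple agents.

```python
# Toy primal-dual (Lagrangian) constrained optimization sketch.
# Hypothetical setting: a scalar "policy parameter" theta controls both
# expected reward R(theta) and expected safety cost C(theta).

def expected_reward(theta):
    # Illustrative model: reward grows linearly with theta.
    return 2.0 * theta

def expected_cost(theta):
    # Illustrative model: cost grows quadratically, creating a
    # tension between reward and the safety constraint.
    return theta ** 2

def primal_dual(cost_limit=1.0, lr_theta=0.05, lr_lam=0.1, steps=2000):
    """Maximize R(theta) subject to C(theta) <= cost_limit via the
    Lagrangian L(theta, lam) = R(theta) - lam * (C(theta) - cost_limit)."""
    theta, lam = 0.0, 0.0
    for _ in range(steps):
        # Primal step: gradient ascent on L with respect to theta.
        grad_theta = 2.0 - lam * 2.0 * theta  # dL/dtheta for the toy models
        theta += lr_theta * grad_theta
        # Dual step: gradient ascent on lam, projected to lam >= 0,
        # so the multiplier grows while the constraint is violated.
        lam = max(0.0, lam + lr_lam * (expected_cost(theta) - cost_limit))
    return theta, lam

theta, lam = primal_dual()
# At convergence the cost sits near the limit: C(theta) ~ 1.0.
```

The dual variable acts as an adaptive penalty: it rises whenever the constraint is violated and relaxes once the policy is safe, which is the core mechanism behind Lagrangian safe-RL methods.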

Papers