Value Function Decomposition
Value function decomposition simplifies learning in multi-agent reinforcement learning (MARL) by breaking the joint value function into per-agent components. Current research focuses on algorithms that keep these individual components consistent with the jointly optimal strategy — i.e., each agent's greedy action should match its part of the best joint action — often via techniques such as Tchebycheff aggregation or greedy-based value representations within transformer-based or actor-critic architectures. Decomposition improves scalability and interpretability in MARL, particularly in challenging domains like StarCraft micromanagement, offering insight into agent decision-making and enabling more efficient agent design.
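The consistency requirement above can be illustrated with the simplest decomposition: an additive (VDN-style) factorization, where the team value is the sum of per-agent values. This is a minimal sketch under that assumption (not the method of any specific paper listed here); the Q-tables are randomly generated stand-ins for learned values. Additivity guarantees that each agent acting greedily on its own component yields the jointly optimal action:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions = 3, 4

# Hypothetical per-agent Q-values for one state: shape (n_agents, n_actions).
q_individual = rng.normal(size=(n_agents, n_actions))

def joint_q(qs, actions):
    """Team value under an additive decomposition: Q_tot = sum_i Q_i(a_i)."""
    return sum(qs[i, a] for i, a in enumerate(actions))

# Decentralized greedy selection: each agent argmaxes its own component.
greedy_actions = q_individual.argmax(axis=1)

# Consistency check: brute-force the maximum of Q_tot over all 4^3 = 64
# joint actions and confirm the decentralized choice attains it.
best = max(joint_q(q_individual, a)
           for a in np.ndindex(*(n_actions,) * n_agents))
assert np.isclose(joint_q(q_individual, greedy_actions), best)
print("decentralized greedy joint action:", greedy_actions.tolist())
```

More expressive decompositions (e.g., monotonic mixing networks) relax strict additivity while preserving this same greedy-consistency property.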