Paper ID: 2202.11188

sIPOMDPLite-net: Lightweight, Self-Interested Learning and Planning in POSGs with Sparse Interactions

Gengyu Zhang, Prashant Doshi

This work introduces sIPOMDPLite-net, a deep neural network (DNN) architecture for decentralized, self-interested agent control in partially observable stochastic games (POSGs) with sparse interactions between agents. The network learns to plan in contexts modeled by the interactive partially observable Markov decision process (I-POMDP) Lite framework and uses hierarchical value iteration networks to simulate the solution of the nested MDPs that I-POMDP Lite attributes to the other agent in order to model its behavior and predict its intentions. We train sIPOMDPLite-net with expert demonstrations on small two-agent Tiger-grid tasks, for which it accurately learns the underlying I-POMDP Lite model and a near-optimal policy, and the learned policy continues to perform well on larger grids and real-world maps. As such, sIPOMDPLite-net shows good transfer capabilities and offers a lighter learning and planning approach for individual, self-interested agents in multiagent settings.
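For context, the nested MDP that I-POMDP Lite attributes to the other agent is solved by value iteration, which the network's hierarchical value iteration layers approximate. The sketch below is a minimal, illustrative tabular version of that planning computation; the transition tensor, reward matrix, and discount factor are made up for the example and are not taken from the paper or its architecture.

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, eps=1e-6):
    """Solve a tabular MDP by value iteration.

    T: transition tensor, shape (A, S, S), where T[a, s, s'] = P(s' | s, a)
    R: reward matrix, shape (A, S), where R[a, s] = expected reward of a in s
    Returns the optimal value function V (shape (S,)) and a greedy policy (shape (S,)).
    """
    A, S, _ = T.shape
    V = np.zeros(S)
    while True:
        # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_s' T[a, s, s'] * V[s']
        Q = R + gamma * (T @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < eps:
            break
        V = V_new
    return V, Q.argmax(axis=0)

# Illustrative 2-state, 2-action MDP (numbers are hypothetical).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V, pi = value_iteration(T, R)
print("V* =", V, "greedy policy =", pi)
```

In the network itself, this iterative backup is unrolled as a fixed number of differentiable recurrence steps rather than run to convergence, which is what allows the model and policy to be learned end to end from expert demonstrations.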

Submitted: Feb 22, 2022