Dynamic Games
Dynamic games model strategic interactions among multiple agents whose actions influence one another over time, with the goal of predicting and optimizing agent behavior. Current research focuses on efficient solution algorithms, including reinforcement learning methods (e.g., DQN, PPO, A2C), sequential quadratic programming, and iterative linear quadratic regulators (iLQR), to solve these often complex games, particularly in partially observable or noisy environments. These advances are crucial for improving the safety and efficiency of autonomous systems such as self-driving cars and robots, enabling them to better predict and react to the behavior of humans and other agents in shared spaces. Research also explores methods for inferring agent objectives and handling multimodal behaviors, leading to more robust and realistic models of multi-agent interaction.
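The iLQR- and SQP-style solvers mentioned above generally work by repeatedly solving linear-quadratic approximations of each player's problem. As a minimal illustrative sketch (not the method of any paper listed below), the following NumPy code solves a two-player linear-quadratic dynamic game by iterated best response: each player computes a finite-horizon LQR feedback law while treating the other player's current strategy as part of the dynamics. All function names and the toy matrices are hypothetical choices for this example.

```python
import numpy as np

def best_response_gains(A, B_self, B_other, K_other, Q, R, Qf, T):
    """Finite-horizon LQR best response: the other player's feedback law
    u_other = -K_other[t] x is folded into the dynamics, then a standard
    backward Riccati recursion yields this player's gains K[t]."""
    P = Qf
    K = [None] * T
    for t in reversed(range(T)):
        A_cl = A - B_other @ K_other[t]  # dynamics seen by this player
        K[t] = np.linalg.solve(R + B_self.T @ P @ B_self,
                               B_self.T @ P @ A_cl)
        P = Q + A_cl.T @ P @ (A_cl - B_self @ K[t])
    return K

def iterated_best_response(A, B1, B2, Q1, R1, Q2, R2, Qf1, Qf2, T, iters=30):
    """Alternate LQR best responses between two players of a linear-quadratic
    dynamic game; when the iteration converges, the resulting feedback laws
    approximate a feedback Nash equilibrium of the LQ game."""
    n = A.shape[0]
    K1 = [np.zeros((B1.shape[1], n)) for _ in range(T)]
    K2 = [np.zeros((B2.shape[1], n)) for _ in range(T)]
    for _ in range(iters):
        K1 = best_response_gains(A, B1, B2, K2, Q1, R1, Qf1, T)
        K2 = best_response_gains(A, B2, B1, K1, Q2, R2, Qf2, T)
    return K1, K2

if __name__ == "__main__":
    # Toy example: two players pushing a shared second-order state.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B1 = np.array([[0.0], [0.1]])
    B2 = np.array([[0.0], [0.05]])
    Q1, R1 = np.diag([1.0, 0.1]), np.array([[0.1]])
    Q2, R2 = np.diag([0.5, 0.5]), np.array([[0.2]])
    K1, K2 = iterated_best_response(A, B1, B2, Q1, R1, Q2, R2, Q1, Q2, T=20)
    print("Player 1 first-step gain:", K1[0])
    print("Player 2 first-step gain:", K2[0])
```

Nonlinear game solvers typically wrap a recursion like this inside an outer loop that re-linearizes the dynamics and re-quadratizes each player's cost around the current trajectory, in the same spirit as iLQR for single-agent optimal control.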
Papers
Emergent Coordination through Game-Induced Nonlinear Opinion Dynamics
Haimin Hu, Kensuke Nakamura, Kai-Chieh Hsu, Naomi Ehrich Leonard, Jaime Fernández Fisac
Dynamic Adversarial Resource Allocation: the dDAB Game
Daigo Shishika, Yue Guan, Jason R. Marden, Michael Dorothy, Panagiotis Tsiotras, Vijay Kumar