Partially Observable Markov Decision Process
Partially Observable Markov Decision Processes (POMDPs) provide a principled framework for modeling and solving sequential decision-making problems under uncertainty, where the agent cannot fully observe the state of the environment. Current research focuses on developing efficient algorithms for solving POMDPs, particularly in complex applications such as autonomous driving, robotics, and human-robot interaction, often employing hierarchical planning, deep reinforcement learning, and Monte Carlo tree search. These advances are enabling more robust and adaptable autonomous systems capable of operating effectively in uncertain and dynamic environments.
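The mechanism that distinguishes a POMDP from a fully observable MDP is the belief state: since the true state is hidden, the agent maintains a probability distribution over states and updates it with Bayes' rule after each action and observation, via b'(s') ∝ Z(o | s', a) Σ_s T(s' | s, a) b(s). Below is a minimal sketch of this belief update in Python; the function name, the toy two-state models, and all numbers are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

def belief_update(belief, action, observation, T, Z):
    """Posterior belief after taking `action` and seeing `observation`.

    belief: (n_states,) prior distribution over hidden states
    T: dict action -> (n_states, n_states) transition matrix, T[a][s, s']
    Z: dict action -> (n_states, n_obs) observation matrix, Z[a][s', o]
    """
    # Prediction step: sum_s T(s' | s, a) * b(s)
    predicted = belief @ T[action]
    # Correction step: weight by the observation likelihood Z(o | s', a)
    posterior = Z[action][:, observation] * predicted
    norm = posterior.sum()
    if norm == 0.0:
        raise ValueError("observation has zero probability under the model")
    return posterior / norm

# Hypothetical two-state example in the style of the classic tiger problem:
# the "listen" action leaves the state unchanged but yields a noisy observation.
T = {"listen": np.eye(2)}
Z = {"listen": np.array([[0.85, 0.15],   # state 0: observe 0 with prob 0.85
                         [0.15, 0.85]])} # state 1: observe 1 with prob 0.85
b = np.array([0.5, 0.5])                 # uniform prior over the two states
b = belief_update(b, "listen", 0, T, Z)  # observe signal 0
print(b)                                 # -> [0.85 0.15]
```

Planners such as Monte Carlo tree search for POMDPs operate on exactly these belief states (or on sampled state particles approximating them), which is what makes the update above the common core of the methods surveyed here.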
Papers
Multi-Objective Multi-Agent Planning for Discovering and Tracking Multiple Mobile Objects
Hoa Van Nguyen, Ba-Ngu Vo, Ba-Tuong Vo, Hamid Rezatofighi, Damith C. Ranasinghe
Cooperative Trajectory Planning in Uncertain Environments with Monte Carlo Tree Search and Risk Metrics
Philipp Stegmaier, Karl Kurzer, J. Marius Zöllner