Partially Observable Markov Decision Process

Partially Observable Markov Decision Processes (POMDPs) provide a principled framework for modeling and solving sequential decision-making problems under uncertainty, where the agent cannot directly observe the complete state of the environment and must instead maintain a belief over possible states. Current research focuses on efficient algorithms for solving POMDPs, particularly in complex applications such as autonomous driving, robotics, and human-robot interaction, often combining hierarchical planning, deep reinforcement learning, and Monte Carlo tree search. These advances enable more robust and adaptable autonomous systems that can operate effectively in uncertain and dynamic environments.
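
The core mechanic behind the framework described above is the Bayesian belief update: since the true state is hidden, the agent tracks a probability distribution over states and revises it after each action and observation. The following is a minimal illustrative sketch, not any paper's implementation; the two-state "tiger"-style problem and all probability values are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical two-state POMDP: tiger-left (0) vs. tiger-right (1).
# Single action "listen"; observations hear-left (0) / hear-right (1).
# All numbers below are illustrative assumptions.

# T[s, s']: transition probabilities under "listen" (the state is static)
T = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# O[s', o]: probability of observation o when the next state is s'
O = np.array([[0.85, 0.15],   # tiger-left: usually heard on the left
              [0.15, 0.85]])  # tiger-right: usually heard on the right

def belief_update(b, o):
    """Bayes filter: b'(s') ∝ O(o | s') * sum_s T(s, s') * b(s)."""
    b_pred = b @ T              # predict step (apply transition model)
    b_new = b_pred * O[:, o]    # correct step (weight by observation likelihood)
    return b_new / b_new.sum()  # normalize to a valid distribution

b = np.array([0.5, 0.5])        # uniform initial belief
b = belief_update(b, o=0)       # agent hears the tiger on the left
print(b)                        # belief shifts toward tiger-left: [0.85 0.15]
```

A POMDP policy then maps beliefs (rather than states) to actions; solvers such as point-based value iteration or Monte Carlo tree search operate over this belief space.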

Papers