Partially Observable Markov Decision Process
Partially Observable Markov Decision Processes (POMDPs) provide a powerful framework for modeling and solving sequential decision-making problems under uncertainty, where the complete state of the environment cannot be directly observed. Current research focuses on developing efficient algorithms for solving POMDPs, particularly in complex applications such as autonomous driving, robotics, and human-robot interaction, often employing hierarchical planning, deep reinforcement learning, and Monte Carlo tree search. These advances are enabling more robust and adaptable autonomous systems that operate effectively in uncertain and dynamic environments.
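To make the framework concrete, the sketch below shows the belief update that sits at the heart of POMDP planning: since the agent cannot see the true state, it maintains a probability distribution (belief) over states and revises it with Bayes' rule after each action and observation. This is a minimal illustrative example, not code from any of the papers listed; the function name `belief_update` and the toy transition/observation models are assumptions made for demonstration.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One Bayes-filter step of a POMDP belief update (illustrative sketch).

    b : (S,)      current belief over states
    a : int       action index taken
    o : int       observation index received
    T : (A, S, S) transition model, T[a, s, s'] = P(s' | s, a)
    O : (A, S, Z) observation model, O[a, s', o] = P(o | s', a)

    Implements b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s).
    """
    predicted = T[a].T @ b             # predict: sum_s T(s'|s,a) b(s)
    updated = O[a, :, o] * predicted   # correct: weight by observation likelihood
    return updated / updated.sum()     # normalize back to a distribution

# Toy two-state example (hypothetical numbers): one "listen" action that
# leaves the state unchanged but yields a noisy observation of it.
T = np.array([[[1.0, 0.0],
               [0.0, 1.0]]])            # listening does not move the state
O = np.array([[[0.85, 0.15],
               [0.15, 0.85]]])          # observation is correct 85% of the time
b = np.array([0.5, 0.5])                # uniform prior belief

b = belief_update(b, a=0, o=0, T=T, O=O)
print(b)  # belief shifts toward state 0 after observing o=0 -> [0.85, 0.15]
```

Planners such as Monte Carlo tree search over beliefs repeatedly apply this kind of update to simulated action-observation sequences, which is what allows them to plan despite never knowing the true state exactly.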
Papers
A POMDP-based hierarchical planning framework for manipulation under pose uncertainty
Muhammad Suhail Saleem, Rishi Veerapaneni, Maxim Likhachev
BoT-Drive: Hierarchical Behavior and Trajectory Planning for Autonomous Driving using POMDPs
Xuanjin Jin, Chendong Zeng, Shengfa Zhu, Chunxiao Liu, Panpan Cai