Partially Observable Markov Decision Process
Partially Observable Markov Decision Processes (POMDPs) provide a powerful framework for modeling and solving sequential decision-making problems under uncertainty, where the complete state of the environment is not directly observable. Current research focuses on developing efficient algorithms to solve POMDPs, particularly for complex applications like autonomous driving, robotics, and human-robot interaction, often employing hierarchical planning, deep reinforcement learning, and Monte Carlo tree search. These advances are enabling more robust and adaptable autonomous systems capable of operating effectively in uncertain and dynamic environments.
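Because the true state is hidden, a POMDP agent maintains a *belief* (a probability distribution over states) and updates it with a Bayes filter after each action and observation. The sketch below illustrates this update on a toy two-state problem; all transition and observation probabilities here are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

# Toy two-state POMDP under a single fixed action (numbers are illustrative).
T = np.array([[0.9, 0.1],    # T[s, s'] = P(s' | s, a)
              [0.2, 0.8]])
O = np.array([[0.85, 0.15],  # O[s', o] = P(o | s', a)
              [0.30, 0.70]])

def belief_update(b, obs):
    """Bayes filter: b'(s') ∝ P(o | s') * sum_s P(s' | s) * b(s)."""
    predicted = T.T @ b              # predict step over the hidden state
    unnorm = O[:, obs] * predicted   # correct with the observation likelihood
    return unnorm / unnorm.sum()     # renormalize to a distribution

b = np.array([0.5, 0.5])             # uniform prior belief
b = belief_update(b, obs=0)
print(b)                             # belief shifts toward state 0
```

Exact POMDP solvers plan over this continuous belief space, which is why approximate methods such as Monte Carlo tree search (e.g., POMCP-style sampling of belief particles) dominate in the applications mentioned above.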