Partially Observable Markov Decision Process
Partially Observable Markov Decision Processes (POMDPs) provide a powerful framework for modeling and solving sequential decision-making problems under uncertainty, where the complete state of the environment is not directly known and must be inferred from observations. Current research focuses on developing efficient algorithms to solve POMDPs, particularly for complex applications such as autonomous driving, robotics, and human-robot interaction, often employing hierarchical planning, deep reinforcement learning, and Monte Carlo tree search. These advances are enabling more robust and adaptable autonomous systems capable of operating effectively in uncertain and dynamic environments.
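At the heart of most POMDP solvers is belief tracking: since the true state is hidden, the agent maintains a probability distribution over states and updates it with Bayes' rule after each observation. The sketch below illustrates this update on a hypothetical two-state "tiger"-style problem; the transition and observation matrices and the 85% sensor accuracy are illustrative assumptions, not values from any of the papers listed here.

```python
# Minimal POMDP belief-update sketch (hypothetical two-state "tiger" problem;
# all probabilities below are illustrative assumptions).

# States: 0 = tiger-left, 1 = tiger-right. The "listen" action leaves the
# state unchanged, so T[s][s'] is the identity.
T = [[1.0, 0.0],
     [0.0, 1.0]]

# O[s_next][z]: probability of observing z (0 = hear-left, 1 = hear-right)
# given the next state; listening is assumed to be 85% accurate.
O = [[0.85, 0.15],
     [0.15, 0.85]]

def belief_update(b, z):
    """Bayes-filter update: b'(s') ∝ O[s'][z] * sum_s T[s][s'] * b(s)."""
    unnormalized = [
        O[sp][z] * sum(T[s][sp] * b[s] for s in range(len(b)))
        for sp in range(len(T))
    ]
    total = sum(unnormalized)
    return [p / total for p in unnormalized]

b = [0.5, 0.5]            # start fully uncertain about the tiger's location
b = belief_update(b, 0)   # observe "hear-left"
print(b)                  # → [0.85, 0.15]: belief shifts toward tiger-left
```

Planners such as Monte Carlo tree search for POMDPs build on exactly this update, searching over sequences of actions and simulated observations while propagating the belief forward at each step.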