Hierarchical MDP
Hierarchical Markov Decision Processes (MDPs) decompose complex tasks into nested sub-tasks, enabling efficient planning and learning in large, intricate environments. Current research focuses on deep reinforcement learning algorithms that synthesize hierarchical controllers, often employing generative models to improve training efficiency and robustness. These methods address challenges such as sparse rewards and resource constraints, and find applications in diverse areas including robotics, mobile edge computing, and multi-task reinforcement learning. The resulting improvements in scalability and performance have significant implications for the design and control of autonomous systems and resource-constrained applications.
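As a concrete illustration of the decomposition idea, the sketch below uses the options framework (Sutton, Precup, and Singh, 1999), one standard formalization of hierarchical MDPs: temporally extended actions ("options") with their own policies and termination conditions, over which a high-level learner performs semi-MDP (SMDP) Q-learning. The corridor environment, the `Option` class, and the `run_option` helper are hypothetical names introduced for this example, not from any of the surveyed papers.

```python
import random

# Minimal sketch of a hierarchical MDP via the options framework.
# All environment and helper names here are illustrative assumptions.

N_STATES = 10          # states 0..9 on a corridor; reward only at the goal
GOAL = N_STATES - 1

def corridor_step(state, action):
    """Primitive MDP: action -1 moves left, +1 moves right."""
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

class Option:
    """A temporally extended action: an intra-option policy plus a
    termination condition beta(s)."""
    def __init__(self, policy, beta):
        self.policy = policy   # state -> primitive action
        self.beta = beta       # state -> probability of terminating

def run_option(state, option, gamma=0.95):
    """Execute an option to termination; return the SMDP-level
    (next_state, cumulative discounted reward, elapsed steps)."""
    total, discount, steps = 0.0, 1.0, 0
    while True:
        action = option.policy(state)
        state, r = corridor_step(state, action)
        total += discount * r
        discount *= gamma
        steps += 1
        if random.random() < option.beta(state) or state == GOAL:
            return state, total, steps

# Two hand-coded options: "go left until the wall", "go right until the goal".
go_left  = Option(policy=lambda s: -1, beta=lambda s: 1.0 if s == 0 else 0.0)
go_right = Option(policy=lambda s: +1, beta=lambda s: 1.0 if s == GOAL else 0.0)
options = [go_left, go_right]

# SMDP Q-learning over options: the high-level learner only decides
# *which* option to launch, shrinking the effective decision horizon.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        if random.random() < eps:
            o = random.randrange(len(options))
        else:
            o = max(range(len(options)), key=lambda i: Q[s][i])
        s2, r, k = run_option(s, options[o], gamma)
        # SMDP backup: the bootstrap is discounted by gamma**k, since the
        # option ran for k primitive steps before returning control.
        Q[s][o] += alpha * (r + gamma**k * max(Q[s2]) - Q[s][o])
        s = s2

print("Learned option values at state 0:", Q[0])  # go_right should dominate
```

The key design point is the `gamma**k` term in the backup: because each option consumes a variable number of primitive steps, the high-level problem is a semi-MDP, and discounting by the option's duration is what keeps the value estimates consistent with the underlying flat MDP.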