Kernel-Based Reinforcement Learning
Kernel-based reinforcement learning (KBRL) uses kernel methods for function approximation within reinforcement learning algorithms, addressing the challenge of large and complex state-action spaces. Current research focuses on theoretically grounded algorithms, such as variants of policy iteration and Q-learning, that represent value functions in reproducing kernel Hilbert spaces (RKHSs) and achieve order-optimal regret bounds or finite-sample complexity guarantees. The approach provides a practical framework for real-world problems, with applications in areas such as adaptive filtering and motion planning for autonomous vehicles, where it has shown improved performance over traditional methods.
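To make the idea concrete, the sketch below shows one common instantiation of kernel-based value-function approximation: fitted Q-iteration in which each action's Q-function is a kernel ridge regressor (i.e., a function in an RKHS induced by a Gaussian kernel). This is a minimal illustration, not the algorithm of any particular paper; the toy environment, the `KernelFQI` class, and the kernel width and regularization settings are assumptions chosen for readability.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=0.5):
    """Gaussian (RBF) kernel matrix between rows of X and Y."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-0.5 * d2 / length_scale**2)

class KernelFQI:
    """Kernel-based fitted Q-iteration: one kernel ridge regressor per discrete action."""
    def __init__(self, n_actions, gamma=0.95, reg=1e-3):
        self.n_actions = n_actions
        self.gamma = gamma
        self.reg = reg
        self.support = [None] * n_actions   # states observed under each action
        self.alpha = [None] * n_actions     # dual coefficients per action

    def q_values(self, X):
        """Approximate Q(x, a) for all actions at the query states X."""
        q = np.zeros((len(X), self.n_actions))
        for a in range(self.n_actions):
            if self.alpha[a] is not None:
                q[:, a] = rbf_kernel(X, self.support[a]) @ self.alpha[a]
        return q

    def fit(self, s, a, r, s_next, n_iters=50):
        """Iterate Bellman backups over a fixed batch, refitting each action's regressor."""
        for _ in range(n_iters):
            v_next = self.q_values(s_next).max(axis=1)   # greedy one-step backup
            y = r + self.gamma * v_next                  # Bellman targets
            for act in range(self.n_actions):
                mask = (a == act)
                Sa = s[mask]
                K = rbf_kernel(Sa, Sa)
                self.support[act] = Sa
                self.alpha[act] = np.linalg.solve(K + self.reg * np.eye(len(Sa)), y[mask])
        return self

# Toy usage on random 1-D transitions (purely illustrative data):
rng = np.random.default_rng(0)
s = rng.uniform(0, 1, size=(200, 1))
a = rng.integers(0, 2, size=200)
s_next = np.clip(s + np.where(a[:, None] == 1, 0.1, -0.1), 0, 1)
r = (s_next[:, 0] > 0.9).astype(float)                   # reward near the right end of the chain
model = KernelFQI(n_actions=2).fit(s, a, r, s_next)
print(model.q_values(np.array([[0.2], [0.8]])))           # Q-values at two query states
```

The per-action dual coefficients play the role of the RKHS weights: each backup solves a regularized least-squares problem in closed form, which is what gives kernel-based methods their appeal for finite-sample analysis.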