Kernel-Based Reinforcement Learning

Kernel-based reinforcement learning (KBRL) uses kernel methods for function approximation within reinforcement learning algorithms, addressing the challenge of handling large and complex state-action spaces. Current research focuses on developing theoretically grounded algorithms, such as variants of policy iteration and Q-learning, that operate in reproducing kernel Hilbert spaces (RKHSs) and come with order-optimal regret bounds or finite-sample complexity guarantees. This approach offers a practical framework for real-world problems, as demonstrated by applications in areas such as adaptive filtering and motion planning for autonomous vehicles, where it has shown improved performance over traditional methods.
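
To make the kernel-averaging idea concrete, the sketch below performs value iteration over sampled transitions using normalized Gaussian-kernel weights, in the spirit of classic KBRL. It is a minimal illustration under assumed conventions: the function names (`rbf_kernel`, `fit_kbrl`), the bandwidth and discount parameters, and the `(state, reward, next_state)` data layout are illustrative choices, not taken from any particular paper's implementation.

```python
import numpy as np


def rbf_kernel(x, y, bandwidth=0.5):
    """Gaussian (RBF) kernel between two state vectors."""
    diff = np.atleast_1d(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return np.exp(-(diff @ diff) / (2.0 * bandwidth ** 2))


def fit_kbrl(transitions, gamma=0.95, n_iters=200, bandwidth=0.5):
    """Kernel-based value iteration over sampled transitions (illustrative sketch).

    transitions: dict mapping each action to a list of (state, reward, next_state)
                 triples sampled from the environment.
    Returns q(state, action): a kernel-smoothed approximation of the optimal
    action-value function, defined for arbitrary query states.
    """
    actions = list(transitions)
    rewards = {a: np.array([r for _, r, _ in transitions[a]]) for a in actions}

    def norm_weights(state, a):
        # Normalized kernel weights from a query state to the sampled source
        # states of action a (Nadaraya-Watson style smoothing).
        k = np.array([rbf_kernel(state, s, bandwidth) for s, _, _ in transitions[a]])
        total = k.sum()
        return k / total if total > 0.0 else np.full(len(k), 1.0 / len(k))

    # Weights from every sampled successor state to every action's source
    # states; the kernel-weighted Bellman backup only evaluates the value
    # function at these finitely many points.
    succ_weights = {
        a: {b: np.array([norm_weights(s_next, b) for _, _, s_next in transitions[a]])
            for b in actions}
        for a in actions
    }

    # v[a][i] approximates max over actions b of Q(next_state_i of action a, b).
    v = {a: np.zeros(len(transitions[a])) for a in actions}
    for _ in range(n_iters):
        v = {
            a: np.max(
                np.stack([succ_weights[a][b] @ (rewards[b] + gamma * v[b])
                          for b in actions]),
                axis=0,
            )
            for a in actions
        }

    def q(state, action):
        # Kernel-weighted one-step backup at an arbitrary query state.
        return float(norm_weights(state, action) @ (rewards[action] + gamma * v[action]))

    return q
```

Because the kernel-weighted backup only ever evaluates the value function at the finitely many sampled successor states, the iteration is equivalent to value iteration in a finite MDP and, for a discount factor below one, converges to a unique fixed point.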

Papers