Bellman Residual
Bellman residual methods improve value-function estimation in reinforcement learning and related optimization problems by directly minimizing the difference between the estimated value function and its Bellman backup. Current research emphasizes minimizing this residual with a range of techniques, including Krylov subspace methods, energy-based approaches for distributional reinforcement learning, and orthogonalization strategies for offline reinforcement learning. These advances improve the robustness and sample efficiency of the resulting algorithms, particularly in complex visual control tasks and multi-objective optimization, yielding better performance and interpretability across diverse applications.
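To make the central quantity concrete, here is a minimal sketch that computes and drives down the mean-squared Bellman residual on a small random tabular MDP. It is a generic illustration, not drawn from any of the papers surveyed here; all names (n_states, n_actions, gamma, bellman_backup) are assumptions made for the example.

```python
import numpy as np

# Illustrative only: a small random tabular MDP, not from any surveyed paper.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# Transition kernel P[s, a, s'] (each row sums to 1) and reward table R[s, a].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))

def bellman_backup(Q):
    # (TQ)(s, a) = R(s, a) + gamma * E_{s'|s,a}[ max_{a'} Q(s', a') ]
    return R + gamma * P @ Q.max(axis=1)

for step in range(200):
    residual = bellman_backup(Q) - Q              # Bellman residual: TQ - Q
    if step % 50 == 0:
        print(f"step {step:3d}  mean-squared residual {np.mean(residual**2):.3e}")
    # Damped fixed-point step: move Q toward its backup. Because T is a
    # gamma-contraction, repeating this drives the residual toward zero.
    Q += 0.5 * residual

print(f"final mean-squared residual {np.mean((bellman_backup(Q) - Q)**2):.3e}")
```

In function-approximation settings the residual cannot be zeroed exactly, which is what motivates the minimization strategies named above; the tabular case is used here only because the fixed point is reachable and the residual's role is easy to see.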