Distributional Bellman
Distributional Bellman methods in reinforcement learning aim to learn not just the expected return, but the entire distribution of possible future returns, enabling more nuanced decision-making, especially in risk-sensitive settings. Current research focuses on developing theoretically sound algorithms, including model-based approaches and those using mean embeddings or categorical representations of distributions, and on addressing challenges such as high-dimensional rewards and the biased exploration that optimism-based methods can introduce. These advances improve performance in applications where understanding the uncertainty of future outcomes is crucial, such as robotics and financial modeling.
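To make the categorical representation concrete, here is a minimal sketch of a C51-style distributional Bellman update in NumPy. It is an illustrative implementation under simplifying assumptions (a fixed, evenly spaced support and a single scalar reward), not any specific paper's method; the function name and signature are hypothetical.

```python
import numpy as np

def categorical_bellman_update(probs, reward, gamma, support):
    """Project the target distribution r + gamma * Z back onto `support`.

    probs   : (n_atoms,) next-state return distribution (sums to 1)
    reward  : scalar immediate reward
    gamma   : discount factor
    support : (n_atoms,) evenly spaced atom locations (hypothetical fixed grid)
    """
    v_min, v_max = support[0], support[-1]
    n_atoms = len(support)
    delta_z = (v_max - v_min) / (n_atoms - 1)

    # Apply the Bellman operator to each atom, clipping to the support range.
    tz = np.clip(reward + gamma * support, v_min, v_max)

    # Split each atom's probability mass between its two nearest grid points.
    b = (tz - v_min) / delta_z
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)
    projected = np.zeros(n_atoms)
    np.add.at(projected, lower, probs * (upper - b))
    np.add.at(projected, upper, probs * (b - lower))
    # Atoms landing exactly on a grid point (lower == upper) contribute zero
    # in both lines above; assign their full mass directly.
    exact = lower == upper
    np.add.at(projected, lower[exact], probs[exact])
    return projected
```

The projection step is what distinguishes the categorical approach: applying the Bellman operator shifts and shrinks the atoms off the fixed grid, so the resulting mass must be redistributed back onto the support before the next update.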