Model-Free Algorithms
Model-free algorithms in reinforcement learning learn optimal policies directly from experience, without explicitly modeling the environment's dynamics. Current research focuses on improving their sample efficiency and asymptotic performance, exploring techniques such as optimistic Q-learning, entropy regularization, and ensemble methods within frameworks like soft actor-critic and variants of Q-learning. These advances matter because they enable efficient learning in complex, high-dimensional settings, with applications ranging from robotics and game playing to resource management and personalized medicine.
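To make the model-free recipe concrete, the sketch below implements tabular Q-learning with a simple optimistic value initialization on a toy chain environment. It is a minimal illustration under stated assumptions: the ChainEnv class, reward structure, and hyperparameters are made up for this example and do not reproduce any specific published method.

```python
import numpy as np

# Toy deterministic chain MDP, included only to make the sketch runnable:
# states 0..4, actions {0: left, 1: right}, reward 1 for reaching state 4.
# This environment and its interface are assumptions for illustration.
class ChainEnv:
    n_states, n_actions = 5, 2

    def reset(self):
        self.s = 0
        return self.s

    def step(self, action):
        self.s = max(0, self.s - 1) if action == 0 else min(4, self.s + 1)
        done = self.s == 4
        return self.s, float(done), done


def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1, optimistic_init=1.0):
    """Tabular Q-learning: model-free TD control from sampled transitions."""
    # Optimistic initialization (a simple form of optimism, not any specific
    # paper's algorithm) encourages the greedy policy to try untested actions.
    Q = np.full((n_states, n_actions), optimistic_init)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration on top of the optimistic values.
            if np.random.rand() < epsilon:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = env.step(action)
            # Model-free update: bootstrap from the greedy next-state value,
            # never querying transition probabilities or a learned model.
            target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q


if __name__ == "__main__":
    env = ChainEnv()
    Q = q_learning(env, ChainEnv.n_states, ChainEnv.n_actions)
    # Greedy policy per state; non-terminal states 0..3 should choose 1 (right).
    # State 4 is terminal and keeps its untouched initial values.
    print(np.argmax(Q, axis=1))
```

Entropy-regularized variants such as soft actor-critic modify this bootstrap target by replacing the greedy max with a soft value, roughly E_{a'~π}[Q(s', a') − α log π(a'|s')], trading pure exploitation for sustained exploration; ensemble methods instead combine several such value estimates to reduce overestimation bias.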