Policy Distillation
Policy distillation in reinforcement learning aims to transfer knowledge from a complex, often computationally expensive "teacher" policy to a simpler, more efficient "student" policy. Current research focuses on improving sample efficiency, enhancing robustness to imperfect teacher policies, and achieving interpretability through distillation into models like decision trees, gradient boosting machines, or neuro-fuzzy systems. This technique is proving valuable across diverse applications, including robotics (manipulation, locomotion, grasping), finance (portfolio management), and healthcare (drug dosing), by enabling faster training, reduced computational cost, and improved explainability of learned behaviors.
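As a concrete illustration of the basic recipe, the sketch below distills a larger "teacher" policy network into a smaller "student" by minimizing the KL divergence between their action distributions on a batch of states. It is a minimal sketch under illustrative assumptions: the network sizes, STATE_DIM, N_ACTIONS, and the random states standing in for environment rollouts are all hypothetical and not drawn from any specific paper.

```python
# Minimal policy-distillation sketch in PyTorch.
# All architectures and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 8, 4

def make_policy(hidden):
    # Policy head producing unnormalized action logits.
    return nn.Sequential(
        nn.Linear(STATE_DIM, hidden), nn.ReLU(),
        nn.Linear(hidden, N_ACTIONS),
    )

teacher = make_policy(hidden=256)  # complex, pretrained policy (kept frozen)
student = make_policy(hidden=32)   # smaller, cheaper policy being trained
teacher.eval()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    # Stand-in for states collected by rolling out a policy in the
    # environment; here we simply sample random states for illustration.
    states = torch.randn(64, STATE_DIM)

    with torch.no_grad():
        teacher_logits = teacher(states)
    student_logits = student(states)

    # Distillation objective: KL(teacher || student) over action
    # distributions, averaged over the batch.
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the random states would be replaced by states actually visited under the teacher's (or student's) policy, so the student is matched on the state distribution that matters at deployment time; the same supervised objective also applies when the student is an interpretable model such as a decision tree, with the tree fit to the teacher's action labels instead of trained by gradient descent.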