Interpretable Policy
Interpretable policy research focuses on developing machine learning models whose decision-making processes are transparent and understandable, addressing the "black box" problem of many AI systems. Current research emphasizes tree-based models, including decision trees and variants such as Optimal MDP Decision Trees and Interpretable Continuous Control Trees, as well as neuro-symbolic approaches that combine neural networks with symbolic reasoning to produce more explainable policies. This work is crucial for building trust in AI systems, particularly in high-stakes applications such as autonomous driving and healthcare, where understanding the reasoning behind a decision is paramount for safety and accountability.
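To make the tree-based idea concrete, here is a minimal sketch of what an interpretable tree-structured policy looks like in practice. The task, state features, and thresholds below are hypothetical illustrations for a cart-pole-style balancing problem, not taken from any specific paper; the point is that every action can be traced to an explicit, human-readable rule.

```python
# Minimal sketch of an interpretable decision-tree policy.
# All feature names and thresholds are hypothetical illustrations.

def tree_policy(angle, angular_velocity):
    """Tree-structured policy for a cart-pole-style balancing task.

    Each branch is an explicit, auditable rule, so the reason for any
    action can be read directly, unlike an opaque neural policy.
    """
    if angle > 0.05:                 # pole leaning clearly right
        return "push_right"
    elif angle < -0.05:              # pole leaning clearly left
        return "push_left"
    else:                            # near upright: damp residual spin
        return "push_right" if angular_velocity > 0 else "push_left"

print(tree_policy(0.1, 0.0))    # → push_right
print(tree_policy(0.0, -0.3))   # → push_left
```

Because the policy is just a shallow tree of threshold tests, it can be inspected, verified, or even formally checked, which is the core appeal of tree-based policies over neural ones.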