Interpretable Rule Learning
Interpretable rule learning aims to create understandable and transparent decision-making models, addressing the "black box" nature of many powerful machine learning systems. Current research focuses on developing efficient algorithms, such as those based on decision trees, integer programming, and submodular optimization, to extract concise and accurate rule sets from complex models like neural networks and ensembles. This pursuit is crucial for building trust in AI systems, facilitating human understanding of their predictions, and enabling responsible deployment in high-stakes applications like healthcare and finance.
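One common approach the paragraph alludes to, distilling a complex model into a rule set, can be sketched as surrogate modeling: relabel the training data with a black-box model's predictions, then fit a shallow decision tree whose root-to-leaf paths read as if-then rules. The use of scikit-learn, the iris dataset, a random forest as the "black box", and the depth limit of 3 are all illustrative choices, not a method from any specific paper.

```python
# A minimal sketch of surrogate rule extraction, assuming scikit-learn.
# A random forest stands in for the opaque "black box" model.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit an interpretable surrogate tree on the black box's predictions,
# not the original labels, so the tree mimics the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Each root-to-leaf path is a human-readable if-then rule.
rules = export_text(surrogate, feature_names=data.feature_names)
print(rules)

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity: {fidelity:.2f}")
```

The fidelity score quantifies how faithfully the extracted rules reproduce the original model's behavior; a concise rule set is only useful for trust and auditing if this agreement stays high.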