Rule-Based Explanation
Rule-based explanation methods aim to make the decisions of complex machine learning models more transparent and understandable by representing them as a set of easily interpretable rules. Current research focuses on improving the fidelity and efficiency of these rule-based explanations, often integrating them with other explanation techniques such as feature importance analysis and counterfactual examples, and generating them with algorithms such as decision trees and reinforcement learning. This work is crucial for building trust in AI systems, particularly in high-stakes applications where understanding model decisions is paramount, and for facilitating better model selection and debugging.
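As a concrete illustration of the decision-tree route mentioned above, the sketch below fits a shallow surrogate tree to a black-box classifier's predictions and reads it off as a set of IF-THEN rules, then reports fidelity (agreement between the rules and the original model). The specific black-box model, dataset, and depth limit are illustrative assumptions, not a method from any particular paper listed here.

```python
# Minimal sketch: global surrogate decision tree as a rule-based explanation.
# Assumptions: a RandomForestClassifier stands in for the black-box model,
# the iris dataset stands in for the data, and max_depth=3 is an arbitrary
# choice trading rule simplicity against fidelity.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# 1. Train the opaque model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Fit a shallow tree to mimic the black-box *predictions*, not the
#    original labels -- this is what makes it a surrogate explanation.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Read the surrogate off as human-readable rules.
print(export_text(surrogate, feature_names=list(data.feature_names)))

# 4. Fidelity: how often the extracted rules agree with the black-box model.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Fidelity to the black-box model: {fidelity:.2%}")
```

Reporting fidelity alongside the rules makes explicit how faithfully the surrogate mirrors the original model, which is precisely the trade-off (simple, readable rules versus faithful coverage of the model's behavior) that current work on rule-based explanation tries to improve.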