Interpretable Machine Learning
Interpretable machine learning (IML) aims to develop machine learning models that are not only accurate but also transparent and understandable, addressing the "black box" problem of many high-performing models. Current research focuses on developing inherently interpretable models like generalized additive models (GAMs) and decision trees, as well as post-hoc methods that explain the predictions of complex models (e.g., using feature importance, Shapley values, or LLM-based explanations). This field is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, where understanding model decisions is paramount for responsible and effective use.
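To make the contrast between the two strategies concrete, here is a minimal sketch (assuming scikit-learn is installed; the built-in diabetes dataset and all model settings are illustrative choices, not taken from any paper above). It fits a shallow decision tree as an inherently interpretable model whose rules can be printed directly, then applies a post-hoc feature-importance explanation (permutation importance) to a random forest standing in for a black-box model.

```python
# Sketch: inherently interpretable model vs. post-hoc explanation of a black box.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Illustrative dataset; any tabular regression task would do.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Inherently interpretable: a shallow tree whose decision rules are readable as-is.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# 2) Post-hoc explanation: permutation feature importance for a more complex model.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

The tree trades some accuracy for a structure a domain expert can audit line by line, while the permutation scores only summarize which inputs the forest relies on; Shapley-value methods (e.g., the SHAP library) play a similar post-hoc role with per-prediction attributions.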
Papers
Papers on this topic, dated November 26, 2022 through August 15, 2023.