Interpretable Machine Learning
Interpretable machine learning (IML) aims to develop models that are not only accurate but also transparent and understandable, addressing the "black box" problem of many high-performing models. Current research focuses on inherently interpretable models such as generalized additive models (GAMs) and decision trees, as well as post-hoc methods that explain the predictions of complex models (e.g., feature importance, Shapley values, or LLM-based explanations). The field is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, where understanding model decisions is paramount for responsible and effective use.
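As an illustration of the two approaches mentioned above, the sketch below contrasts an inherently interpretable model (a shallow decision tree whose rules can be printed and read) with a post-hoc explanation of a black-box model (permutation feature importance). It is a minimal example using scikit-learn; the dataset, tree depth, and number of permutation repeats are illustrative choices, not taken from any specific paper listed here.

```python
# Minimal sketch: interpretable-by-design vs. post-hoc explanation.
# Assumptions: breast-cancer dataset, depth-3 tree, 10 permutation repeats.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable: a shallow tree whose decision rules can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Post-hoc: explain a "black box" gradient-boosted model by measuring how much
# shuffling each feature degrades held-out accuracy.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean),
              key=lambda t: -t[1])[:5]
for name, importance in top5:
    print(f"{name}: {importance:.3f}")
```

The same pattern extends to other post-hoc explainers (e.g., Shapley-value estimators), which similarly take a fitted model and a dataset and return per-feature attributions.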
Papers