Interpretable Machine Learning
Interpretable machine learning (IML) aims to develop machine learning models that are not only accurate but also transparent and understandable, addressing the "black box" problem of many high-performing models. Current research focuses on developing inherently interpretable models like generalized additive models (GAMs) and decision trees, as well as post-hoc methods that explain the predictions of complex models (e.g., using feature importance, Shapley values, or LLM-based explanations). This field is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, where understanding model decisions is paramount for responsible and effective use.
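To make the distinction between inherently interpretable models and post-hoc explanations concrete, the following minimal sketch (an illustrative assumption, not drawn from any of the papers referenced here) uses scikit-learn: a shallow decision tree whose rules can be printed directly, contrasted with a gradient-boosted black-box model explained after the fact via permutation feature importance. The dataset and model choices are placeholders for illustration.

```python
# Minimal sketch: inherently interpretable model vs. post-hoc explanation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative dataset; any tabular classification task would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable model: a shallow decision tree whose entire
# decision logic can be rendered as human-readable if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Black-box model explained post hoc: permutation importance measures how
# much held-out accuracy drops when each feature's values are shuffled.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(
    black_box, X_test, y_test, n_repeats=10, random_state=0
)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Shapley-value methods (e.g., the SHAP library) play a similar post-hoc role but attribute each individual prediction to features rather than summarizing global importance.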