Interpretable Machine Learning
Interpretable machine learning (IML) aims to develop machine learning models that are not only accurate but also transparent and understandable, addressing the "black box" problem of many high-performing models. Current research focuses on developing inherently interpretable models like generalized additive models (GAMs) and decision trees, as well as post-hoc methods that explain the predictions of complex models (e.g., using feature importance, Shapley values, or LLM-based explanations). This field is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, where understanding model decisions is paramount for responsible and effective use.
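To make the post-hoc side of this concrete, below is a minimal sketch of explaining a "black box" model with permutation feature importance using scikit-learn. The dataset and model choices are illustrative assumptions for this sketch, not drawn from any particular paper on this page.

```python
# Minimal sketch: post-hoc explanation of an opaque model via permutation
# feature importance (scikit-learn). Dataset/model are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a high-performing but opaque model on a tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature on held-out data and measure the
# drop in predictive performance; larger drops indicate features the model
# relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```

Inherently interpretable alternatives such as GAMs or shallow decision trees avoid this extra explanation step by exposing their structure directly, at the possible cost of some accuracy.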