Local Interpretability
Local interpretability in machine learning focuses on explaining the individual predictions of complex models, such as deep neural networks and time-series transformers, so that users can understand and verify specific decisions rather than only aggregate behavior. Current research emphasizes methods that produce faithful and robust explanations, commonly building on SHAP values, LIME, and rule-based ensembles, and on how to rigorously evaluate explanation quality. This work is crucial for deploying machine learning models in high-stakes domains such as healthcare and finance, where understanding why a model made a particular decision is a prerequisite for responsible use and trust.
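To make the idea concrete, below is a minimal sketch of a LIME-style local surrogate explanation: perturb a single instance, query the black-box model on the perturbations, weight the perturbed points by proximity, and fit a weighted linear model whose coefficients act as local feature attributions. This is an illustrative assumption-laden example, not the actual LIME or SHAP package implementation; the function name local_surrogate_explanation, the dataset, and the kernel-width heuristic are all hypothetical choices made for the sketch.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque model on a small tabular dataset (stand-in for any black box).
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)


def local_surrogate_explanation(model, x, X_background, num_samples=2000,
                                kernel_width=None, rng=None):
    """LIME-style sketch: fit a locally weighted linear surrogate around a
    single instance x and return signed per-feature contributions."""
    rng = rng or np.random.default_rng(0)
    mu = X_background.mean(axis=0)
    sigma = X_background.std(axis=0) + 1e-12

    # 1. Perturb the instance by sampling features around it.
    Z = rng.normal(loc=x, scale=sigma, size=(num_samples, x.shape[0]))

    # 2. Query the black-box model on the perturbed points.
    preds = model.predict_proba(Z)[:, 1]

    # 3. Weight perturbations by proximity to x (RBF kernel on standardized distance).
    d = np.linalg.norm((Z - x) / sigma, axis=1)
    width = kernel_width or 0.75 * np.sqrt(x.shape[0])
    weights = np.exp(-(d ** 2) / (width ** 2))

    # 4. Fit an interpretable weighted linear model as the local surrogate.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit((Z - mu) / sigma, preds, sample_weight=weights)
    return surrogate.coef_  # signed local feature importances


# Explain one prediction and print the five most influential features.
x = X[0]
contributions = local_surrogate_explanation(model, x, X)
for i in np.argsort(np.abs(contributions))[::-1][:5]:
    print(f"{data.feature_names[i]:<25s} {contributions[i]:+.4f}")
```

Evaluating such explanations for faithfulness (does the surrogate track the model locally?) and robustness (do nearby instances get consistent explanations?) is exactly the open problem the research described above addresses.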