Local Interpretable Model-Agnostic Explanations (LIME)
Local interpretable model-agnostic explanations (LIME) aim to make the predictions of complex, "black-box" machine learning models understandable by explaining individual predictions locally: the method perturbs the instance of interest, queries the black box on the perturbed samples, and fits a simple, proximity-weighted surrogate model (typically a sparse linear model) whose coefficients approximate each feature's local contribution. Current research focuses on improving the stability, accuracy, and fidelity of LIME and related methods such as SHAP, often by employing alternative surrogate architectures (e.g., decision trees, Bayesian regression) within the explanation framework or by developing inherently interpretable models. This work is crucial for building trust in AI systems, particularly in high-stakes domains such as medicine, where understanding the reasoning behind a prediction is essential for responsible deployment and clinical decision-making.
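To make the perturb-and-fit idea concrete, below is a minimal sketch of a LIME-style local linear explanation for tabular data. It is not the lime library's implementation; the function name local_linear_explanation, the Gaussian perturbation scheme, and the kernel-width heuristic are illustrative choices, and the random-forest model stands in for an arbitrary black box.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

def local_linear_explanation(predict_proba, x, X_train, num_samples=5000,
                             kernel_width=None, rng=None):
    """Fit a proximity-weighted linear surrogate around instance x.

    Returns per-feature coefficients approximating each feature's local
    contribution to the black box's probability of class 1.
    """
    rng = np.random.default_rng(rng)
    n_features = x.shape[0]
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(n_features)  # assumed heuristic default

    # 1. Perturb the instance: sample around x using the training data's scale.
    scale = X_train.std(axis=0)
    Z = x + rng.normal(0.0, 1.0, size=(num_samples, n_features)) * scale
    Z[0] = x  # keep the original instance in the sample

    # 2. Query the black box on the perturbed points.
    y = predict_proba(Z)[:, 1]

    # 3. Weight samples by proximity to x (exponential kernel on scaled distance).
    d = np.sqrt(np.sum(((Z - x) / (scale + 1e-12)) ** 2, axis=1))
    w = np.exp(-(d ** 2) / (kernel_width ** 2))

    # 4. Fit a weighted ridge regression; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_

# Usage: explain one prediction of a random-forest "black box".
data = load_breast_cancer()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)
coefs = local_linear_explanation(clf.predict_proba, data.data[0], data.data, rng=0)
top = np.argsort(np.abs(coefs))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:>25s}: {coefs[i]:+.4f}")
```

The full LIME method additionally maps instances to an interpretable (often binary) representation and selects a small number of features (e.g., via LASSO) before fitting the surrogate; this sketch keeps only the sampling, kernel weighting, and weighted linear fit that the explanation quality ultimately depends on.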