Local Interpretable Model-Agnostic Explanations (LIME)
Local Interpretable Model-Agnostic Explanations (LIME) is a technique for making the predictions of complex "black box" machine learning models more understandable. It explains an individual prediction by perturbing the input, querying the black-box model on the perturbed samples, and fitting a simple interpretable surrogate (typically a weighted sparse linear model) that approximates the model's behavior in the neighborhood of that input. Current research focuses on improving LIME's stability, its fidelity (how faithfully explanations reflect the model's behavior), and its applicability to diverse data types (images, text, time series) and model architectures, including deep learning models such as transformers and convolutional neural networks. This work matters because it addresses the need for transparency and trust in AI systems, particularly in high-stakes domains such as healthcare and finance, by providing more reliable and insightful explanations of model decisions.
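The perturb-query-weight-fit loop described above can be made concrete in a few lines. Below is a minimal sketch of the core LIME procedure for tabular data, assuming scikit-learn and NumPy are available; the function name `explain_instance` and its parameters are illustrative choices for this sketch, not the `lime` library's actual API.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

def explain_instance(predict_fn, x, scale, num_samples=5000, top_k=5, seed=0):
    """Fit a locally weighted linear surrogate around instance x.

    predict_fn: black-box function returning P(class = 1) for each row.
    scale: per-feature standard deviations used to generate perturbations.
    Returns the top_k (feature index, coefficient) pairs of the surrogate.
    """
    rng = np.random.default_rng(seed)
    # 1. Sample a neighborhood of x with per-feature Gaussian noise.
    Z = x + rng.normal(size=(num_samples, x.size)) * scale
    # 2. Query the black-box model on the perturbed samples.
    y = predict_fn(Z)
    # 3. Weight samples by proximity to x with an exponential kernel
    #    (LIME's default tabular kernel width is 0.75 * sqrt(num_features)).
    d = np.linalg.norm((Z - x) / scale, axis=1)
    kernel_width = 0.75 * np.sqrt(x.size)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit a simple, interpretable linear model to the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    # 5. The largest-magnitude coefficients form the local explanation.
    order = np.argsort(-np.abs(surrogate.coef_))[:top_k]
    return [(int(i), float(surrogate.coef_[i])) for i in order]

# Example: explain one prediction of a random-forest "black box".
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explanation = explain_instance(
    lambda Z: black_box.predict_proba(Z)[:, 1], X[0], X.std(axis=0))
for idx, coef in explanation:
    print(f"feature {idx}: {coef:+.4f}")
```

The reference implementation, the `lime` Python package, layers feature discretization and feature selection on top of this core loop (via `LimeTabularExplainer` for tabular data), but the underlying structure is the same.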