Local Interpretable Model-Agnostic Explanations (LIME)
Local Interpretable Model-Agnostic Explanations (LIME) makes the predictions of complex "black box" machine learning models more understandable by approximating the model locally, around a single prediction, with a simple interpretable surrogate such as a sparse linear model. Current research focuses on improving LIME's stability, its fidelity (how well explanations reflect the model's actual behavior), and its applicability to various data types (images, text, time series) and model architectures, including deep learning models such as transformers and convolutional neural networks. This work matters because it addresses the critical need for transparency and trust in AI systems, particularly in high-stakes domains like healthcare and finance, by providing more reliable and insightful explanations of model decisions.
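To make the local-surrogate idea concrete, here is a minimal sketch of LIME's core loop for tabular data: perturb the instance, query the black-box model, weight the perturbations by proximity, and fit a weighted linear model whose coefficients serve as the explanation. The function name, Gaussian perturbation scheme, and ridge surrogate are illustrative assumptions, not the exact procedure of any particular paper or of the official `lime` package.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, num_samples=1000, kernel_width=0.75):
    """Explain predict_fn's output at point x with a local linear surrogate.

    A simplified sketch of the LIME procedure; perturbation and weighting
    choices here are illustrative assumptions.
    """
    d = x.shape[0]
    # 1. Perturb the instance of interest with Gaussian noise around x.
    Z = x + np.random.normal(0.0, 1.0, size=(num_samples, d))
    # 2. Query the black-box model on the perturbed samples.
    y = predict_fn(Z)
    # 3. Weight samples by proximity: closer perturbations count more.
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable (linear) surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    # The coefficients give per-feature local importance around x.
    return surrogate.coef_
```

Because the surrogate is only fit on points near x, its coefficients describe the model's behavior locally, not globally; this locality is also the source of the stability issues (explanations varying across random perturbation draws) that much of the current research targets.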