Local Explanation
Local explanation methods aim to make the individual predictions of complex, "black-box" machine learning models transparent and understandable. Current research focuses on improving the faithfulness and reliability of these explanations, particularly for large language models, for deep architectures such as convolutional neural networks, and for gradient-boosted trees, often employing surrogate models, counterfactual analysis, and topological data analysis to compare and visualize model behavior. This work is crucial for building trust in AI systems across domains from healthcare to finance, giving users insight into how a model reaches a decision and helping to identify potential biases or vulnerabilities.
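The surrogate-model approach mentioned above is easiest to see in miniature: approximate the black box around a single instance with a simple, interpretable model, then read the explanation off that model's coefficients. The sketch below illustrates the idea in the style of LIME; it is a minimal sketch, not a reference implementation, and assumes a scikit-learn-style classifier with `predict_proba`, tabular inputs, Gaussian perturbations, and an illustrative kernel width and sample count.

```python
# Minimal sketch of a LIME-style local surrogate explanation.
# Assumptions (not from the source text): a scikit-learn-style
# black-box classifier exposing predict_proba, tabular features,
# Gaussian perturbations, and illustrative hyperparameters.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_explanation(black_box, x, n_samples=1000,
                                kernel_width=0.75, seed=0):
    """Fit a locally weighted linear model around instance x and
    return per-feature coefficients as the local explanation."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance of interest with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    # 2. Query the black box for predictions on the perturbations.
    y = black_box.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to x (exponential kernel),
    #    so the surrogate prioritizes the local neighborhood.
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable surrogate; its coefficients
    #    approximate the black box's behavior near x.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_
```

The returned coefficients indicate how much each feature locally pushes the prediction up or down; the faithfulness concerns raised in the research above amount to asking how well such a surrogate actually tracks the black box in that neighborhood.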