Local Explanation
Local explanation methods aim to make the decisions of complex, "black-box" machine learning models more transparent and understandable. Current research focuses on improving the faithfulness and reliability of these explanations, particularly for large language models, deep learning architectures such as convolutional neural networks, and gradient-boosted tree ensembles, often employing techniques such as surrogate models, counterfactual analysis, and topological data analysis for comparison and visualization. This work is crucial for building trust in AI systems across domains from healthcare to finance, as it gives users insight into model behavior and helps identify potential biases or vulnerabilities.
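To make the surrogate-model idea concrete, below is a minimal Python sketch of a local linear surrogate explanation in the spirit of LIME: perturb an instance, query the black box, and fit a proximity-weighted linear model whose coefficients act as local feature attributions. All names here (black_box_predict, local_linear_explanation, sigma) are illustrative assumptions, not code from the papers listed below.

```python
# Minimal sketch of a local linear surrogate explanation (LIME-style).
# Assumes a generic black-box prediction function; all names are illustrative.
import numpy as np
from sklearn.linear_model import Ridge


def black_box_predict(X):
    # Hypothetical black box: a nonlinear function standing in for any
    # trained classifier's probability output.
    return 1.0 / (1.0 + np.exp(-(np.sin(3 * X[:, 0]) + X[:, 1] ** 2)))


def local_linear_explanation(predict, x0, n_samples=500, sigma=0.3, seed=0):
    """Fit a proximity-weighted linear surrogate around the instance x0.

    Perturbs x0 with Gaussian noise, queries the black box, and fits a
    ridge regression weighted by closeness to x0. The surrogate's
    coefficients serve as local per-feature attributions.
    """
    rng = np.random.default_rng(seed)
    # Sample perturbations in a neighborhood of the instance.
    X = x0 + rng.normal(scale=sigma, size=(n_samples, x0.shape[0]))
    y = predict(X)
    # Weight samples by an exponential kernel on distance to x0.
    dists = np.linalg.norm(X - x0, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X - x0, y, sample_weight=weights)
    return surrogate.coef_  # local importance of each feature at x0


x0 = np.array([0.5, -0.2])
print(local_linear_explanation(black_box_predict, x0))
```

The kernel width sigma controls the trade-off at the heart of local explanation: a small neighborhood makes the linear surrogate more faithful to the black box near x0 but more sensitive to sampling noise, which is exactly the kind of reliability concern the work below studies.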
Papers
Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal
Laura State, Hadrien Salat, Stefania Rubrichi, Zbigniew Smoreda
REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study
Iván Sevillano-García, Julián Luengo-Martín, Francisco Herrera