Explainable Models
Explainable models aim to make the decision-making processes of machine learning systems transparent and understandable, addressing the "black box" problem inherent in many complex algorithms. Current research follows two complementary paths: developing inherently interpretable models, such as those based on additive models, decision trees, and prototypical networks, and applying post-hoc explanation techniques, such as SHAP values and counterfactual analysis, to interpret existing models. This pursuit of explainability is crucial for building trust in AI systems across fields such as healthcare, finance, and environmental science, enabling more reliable decision-making and a deeper understanding of complex phenomena.
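As an illustration of the post-hoc approach, the minimal sketch below computes SHAP values to attribute a tree ensemble's prediction to its input features. It assumes the `shap` and `scikit-learn` packages are installed; the diabetes dataset and random-forest model are illustrative choices, not taken from any particular paper.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an ordinary "black box" tree ensemble.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes SHAP values efficiently for tree-based models,
# assigning each feature an additive contribution to the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # explain one prediction

# Per-feature contributions for the explained sample; they sum (with the
# explainer's expected value) to the model's output for that sample.
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

The same pattern extends to classifiers and, via `shap.KernelExplainer`, to models without tree structure, though at higher computational cost.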