Explainable Model
Explainable models aim to make the decision-making processes of machine learning systems transparent and understandable, addressing the "black box" problem inherent in many complex algorithms. Current research focuses both on inherently interpretable models, such as additive models, decision trees, and prototypical networks, and on post-hoc explanation techniques, such as SHAP values and counterfactual analysis, that interpret already-trained models (a brief sketch of the post-hoc approach is given below). This pursuit of explainability is crucial for building trust in AI systems across fields ranging from healthcare and finance to environmental science, supporting more reliable decision-making and a deeper understanding of the complex phenomena being modeled.
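As a minimal illustration of the post-hoc approach, the sketch below computes SHAP values for a tree-ensemble regressor. It assumes the open-source shap and scikit-learn packages; the toy dataset and model are illustrative placeholders, not taken from the papers listed here.

```python
# Post-hoc explanation of a trained "black box" model with SHAP values.
# Illustrative sketch only: the dataset and model are stand-ins.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit an opaque tree-ensemble model on a toy tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to per-feature contributions
# (SHAP values), yielding one value per feature per sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Global summary: which features drive the model's predictions overall.
shap.summary_plot(shap_values, X.iloc[:100])
```

The same idea extends to other model classes via model-agnostic explainers (e.g., kernel-based SHAP estimators), at higher computational cost.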
Papers
A Data-driven Case-based Reasoning in Bankruptcy Prediction
Wei Li, Wolfgang Karl Härdle, Stefan Lessmann
Interpretable estimation of the risk of heart failure hospitalization from a 30-second electrocardiogram
Sergio González, Wan-Ting Hsieh, Davide Burba, Trista Pei-Chun Chen, Chun-Li Wang, Victor Chien-Chia Wu, Shang-Hung Chang