Explainable Machine Learning

Explainable Machine Learning, often discussed under the broader banner of Explainable AI (XAI), aims to make the decision-making processes of machine learning models more transparent and understandable, addressing the "black box" problem. Current research focuses on developing and evaluating explanation methods, frequently applied to tree-based models such as decision trees and Random Forests, and on techniques such as SHAP values and related game-theoretic approaches that quantify feature importance and characterize model behavior. The field is crucial for building trust in AI systems across diverse applications, from healthcare and finance to cybersecurity and environmental modeling, because it provides insight into individual model predictions and improves human-AI collaboration.
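
As a concrete illustration of the SHAP workflow mentioned above, here is a minimal sketch using the open-source `shap` package together with a scikit-learn Random Forest. The dataset, model settings, and the mean-absolute-SHAP summary are illustrative choices for this overview, not prescribed by any particular paper.

```python
# Minimal sketch: game-theoretic feature importance via SHAP values
# for a tree-based model. Assumes the `shap` package (pip install shap)
# and scikit-learn are installed; dataset and settings are illustrative.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a Random Forest on a small tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The per-sample SHAP values also satisfy the Shapley efficiency property: for each row they sum to the model's prediction minus the explainer's expected value, which is why the same quantities serve both for explaining individual predictions and, when aggregated as above, for global feature importance.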

Papers