Explainable Machine Learning
Explainable machine learning, part of the broader field of explainable AI (XAI), aims to make the decision-making of machine learning models transparent and understandable, addressing the "black box" problem. Current research focuses on developing and evaluating explanation methods, often for tree-based models such as random forests and decision trees, and on techniques such as SHAP values and other game-theoretic approaches that quantify feature importance and characterize model behavior. The field is crucial for building trust in AI systems across applications ranging from healthcare and finance to cybersecurity and environmental modeling: by making model predictions interpretable, it supports auditing and improves human-AI collaboration.
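The game-theoretic idea behind SHAP can be illustrated with a brute-force computation of exact Shapley values for a toy model. This is a minimal sketch, not any library's API: the function names and the toy model below are illustrative, and "removing" a feature is modeled by replacing it with a baseline value, one common convention.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for each feature of `x`.

    v(S), the value of a coalition S of features, is the model output
    when features in S keep their values from x and all other features
    are set to the baseline. The Shapley value of feature i is the
    weighted average of its marginal contribution v(S ∪ {i}) - v(S)
    over all coalitions S not containing i.
    """
    n = len(x)

    def value(coalition):
        z = [x[j] if j in coalition else baseline[j] for j in range(n)]
        return model(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy model with an interaction term (illustrative, not from any paper).
model = lambda z: 3 * z[0] + 2 * z[1] + z[0] * z[1]
phi = shapley_values(model, x=[1, 1], baseline=[0, 0])
# phi == [3.5, 2.5]: the interaction credit of 1 is split equally,
# and the attributions sum to f(x) - f(baseline) = 6 (efficiency).
```

The double loop over coalitions costs O(2^n) model evaluations, which is why practical SHAP implementations rely on sampling or on model-specific shortcuts such as TreeSHAP for tree ensembles.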