Explainable Machine Learning
Explainable machine learning, often discussed under the broader umbrella of explainable AI (XAI), aims to make the decision-making processes of machine learning models transparent and understandable, addressing the "black box" problem. Current research focuses on developing and evaluating explanation methods, often applied to tree-based models such as Random Forests and decision trees, and on techniques like SHAP values and other game-theoretic approaches that quantify feature importance and characterize model behavior. This field is crucial for building trust in AI systems across diverse applications, from healthcare and finance to cybersecurity and environmental modeling, by providing insight into model predictions and improving human-AI collaboration.
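To make the game-theoretic idea behind SHAP concrete, the sketch below computes exact Shapley values for a toy model with three features. The model, the input, and the background (baseline) values are all hypothetical choices for illustration; practical SHAP implementations approximate this computation, since the exact sum is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: additive terms plus one interaction.
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[0] * x[2]

def shapley_values(model, x, background):
    """Exact Shapley attributions: each feature's average marginal
    contribution over all subsets of the remaining features.
    "Absent" features are replaced by their background value."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else background[i] for i in range(n)]
        return model(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

x = [1.0, 2.0, 3.0]
background = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, background)
# Efficiency property: attributions sum to model(x) - model(background).
```

By the efficiency axiom, `sum(phi)` equals `model(x) - model(background)` (here 5.5), so the attributions fully account for the prediction; the interaction term's credit is split between features 0 and 2.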