Explainable Machine Learning
Explainable machine learning, part of the broader field of explainable AI (XAI), aims to make the decision-making processes of machine learning models transparent and understandable, addressing the "black box" problem. Current research focuses on developing and evaluating explanation methods, often for tree-based models such as random forests and decision trees, and on game-theoretic techniques such as SHAP values that quantify feature importance and characterize model behavior. This work is crucial for building trust in AI systems across diverse applications, from healthcare and finance to cybersecurity and environmental modeling, by providing insight into model predictions and improving human-AI collaboration.
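The game-theoretic idea behind SHAP can be illustrated directly: each feature is treated as a player in a cooperative game, and its attribution is its average marginal contribution over all feature coalitions. The sketch below computes exact Shapley values for a tiny hypothetical payoff function (the `coalition_value` function and its numbers are made up for illustration; real SHAP libraries approximate this over a trained model's predictions).

```python
from itertools import combinations
from math import factorial

def coalition_value(features):
    """Hypothetical 'model payoff' for a coalition of feature indices.
    Feature 0 contributes 1.0, feature 1 contributes 2.0, feature 2
    contributes 0.5, plus an interaction bonus of 1.0 when features
    0 and 1 appear together."""
    base = {0: 1.0, 1: 2.0, 2: 0.5}
    value = sum(base[f] for f in features)
    if 0 in features and 1 in features:
        value += 1.0  # interaction term
    return value

def shapley_values(n_features, value_fn):
    """Exact Shapley values: for each feature i, average its marginal
    contribution value_fn(S + {i}) - value_fn(S) over all subsets S of
    the remaining features, with the standard combinatorial weights."""
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [p for p in range(n_features) if p != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = (factorial(k) * factorial(n_features - k - 1)
                          / factorial(n_features))
                s = set(subset)
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

phi = shapley_values(3, coalition_value)
# Efficiency: attributions sum to the full coalition's value, 4.5,
# and the interaction bonus is split evenly between features 0 and 1.
print(phi)  # → [1.5, 2.5, 0.5]
```

Exact computation enumerates all 2^n coalitions, so it is only feasible for a handful of features; libraries such as `shap` exploit model structure (e.g. TreeSHAP for random forests) or sampling to scale this to real models.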
Papers
Efficient Milling Quality Prediction with Explainable Machine Learning
Dennis Gross, Helge Spieker, Arnaud Gotlieb, Ricardo Knoblauch, Mohamed Elmansori
Global Lightning-Ignited Wildfires Prediction and Climate Change Projections based on Explainable Machine Learning Models
Assaf Shmuel, Teddy Lazebnik, Oren Glickman, Eyal Heifetz, Colin Price