Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating XAI methods such as feature attribution (e.g., SHAP values), counterfactual explanations, and the use of large language models to generate human-interpretable explanations across diverse data types (images, text, and time series). XAI matters because it can improve trust in AI systems, facilitate debugging and model improvement, and enable responsible deployment in high-stakes domains such as healthcare and finance.
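The summary above names SHAP-based feature attribution as one of the main XAI method families. Below is a minimal sketch of how such attributions are computed and read, assuming the open-source `shap` package is installed; the synthetic data and random-forest model are illustrative placeholders, not drawn from any paper listed here.

```python
# Minimal sketch of SHAP feature attribution (illustrative, not from the
# papers below). Requires the `shap` and `scikit-learn` packages.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic regression data: y depends strongly on feature 0, weakly on feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])        # shape: (5 samples, 4 features)
base = np.ravel(explainer.expected_value)[0]      # model's average prediction

# Shapley values are additive: base value + per-feature attributions
# reconstructs each prediction, which is what makes them interpretable.
for i, row in enumerate(shap_values):
    print(f"sample {i}: attributions={np.round(row, 3)}, "
          f"base + sum = {base + row.sum():.3f}, "
          f"prediction = {model.predict(X[i:i+1])[0]:.3f}")
```

In this sketch, large attributions should concentrate on feature 0, matching how the synthetic target was constructed; on real models the same additive decomposition is what lets practitioners audit which inputs drove a given prediction.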
Papers
Widespread Increases in Future Wildfire Risk to Global Forest Carbon Offset Projects Revealed by Explainable AI
Tristan Ballard, Matthew Cooper, Chris Lowrie, Gopal Erinjippurath
Calibrated Explanations: with Uncertainty Information and Counterfactuals
Helena Löfström, Tuwe Löfström, Ulf Johansson, Cecilia Sönströd
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
Subrato Bharati, M. Rubaiyat Hossain Mondal, Prajoy Podder
Artificial Intelligence/Operations Research Workshop 2 Report Out
John Dickerson, Bistra Dilkina, Yu Ding, Swati Gupta, Pascal Van Hentenryck, Sven Koenig, Ramayya Krishnan, Radhika Kulkarni, Catherine Gill, Haley Griffin, Maddy Hunter, Ann Schwartz