XAI Methods
Explainable AI (XAI) methods aim to make the decision-making processes of complex machine learning models more transparent and understandable. Current research focuses on developing robust evaluation frameworks for existing XAI techniques, including those based on feature attribution, surrogate models, and concept-based explanations, and on addressing known pitfalls such as the out-of-distribution samples generated by perturbation-based explainers and the distortion of attribution scores under multicollinearity. This work is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, where interpretability and accountability are paramount. Standardized evaluation metrics and user-centric, context-aware evaluation approaches are key areas of ongoing investigation.
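To make two of the technique families named above concrete, here is a minimal, illustrative sketch (not taken from any of the papers below): permutation-based feature attribution and a global linear surrogate, both applied to the same black-box classifier. It uses standard scikit-learn APIs on synthetic data; the dataset, model choices, and hyperparameters are arbitrary assumptions for demonstration only.

```python
# Hedged sketch of two common XAI method families on a "black-box" model.
# All data and parameter choices here are illustrative, not from the papers listed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real task.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The "black box" to be explained.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# 1) Feature attribution via permutation importance: how much does test score
#    drop when each feature is shuffled? Note that correlated features can
#    share or mask importance, one facet of the multicollinearity issue
#    mentioned above.
perm = permutation_importance(black_box, X_te, y_te, n_repeats=20, random_state=0)
for i in np.argsort(perm.importances_mean)[::-1]:
    print(f"feature {i}: {perm.importances_mean[i]:+.3f} +/- {perm.importances_std[i]:.3f}")

# 2) Global surrogate: fit an interpretable model to mimic the black box's
#    predictions, then read its coefficients as an approximate explanation.
#    "Fidelity" measures how faithfully the surrogate tracks the black box.
surrogate = LogisticRegression(max_iter=1000).fit(X_tr, black_box.predict(X_tr))
fidelity = surrogate.score(X_te, black_box.predict(X_te))
print(f"surrogate fidelity: {fidelity:.2%}")
print("surrogate coefficients:", np.round(surrogate.coef_[0], 3))
```

A low fidelity score would signal that the surrogate's coefficients should not be trusted as an explanation, which is exactly the kind of check the evaluation frameworks discussed above aim to formalize.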
Papers
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto, Tiago Gonçalves, João Ribeiro Pinto, Wilson Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
Carefully choose the baseline: Lessons learned from applying XAI attribution methods for regression tasks in geoscience
Antonios Mamalakis, Elizabeth A. Barnes, Imme Ebert-Uphoff
Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
Liu Zhendong, Wenyu Jiang, Yi Zhang, Chongjun Wang
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser
Do We Need Another Explainable AI Method? Toward Unifying Post-hoc XAI Evaluation Methods into an Interactive and Multi-dimensional Benchmark
Mohamed Karim Belaid, Eyke Hüllermeier, Maximilian Rabus, Ralf Krestel