XAI Methods
Explainable AI (XAI) methods aim to make the decision-making processes of complex machine learning models more transparent and understandable. Current research focuses on developing robust evaluation frameworks for existing XAI techniques, including those based on feature attribution, surrogate models, and concept-based explanations, and on addressing challenges such as the generation of out-of-distribution samples and the impact of multicollinearity. This work is crucial for building trust in AI systems across domains, particularly in high-stakes applications such as healthcare and finance, where interpretability and accountability are paramount. The development of standardized evaluation metrics and the exploration of user-centric approaches are key areas of ongoing investigation.
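To make the feature-attribution idea and the out-of-distribution pitfall mentioned above concrete, the sketch below computes permutation importance for a generic black-box classifier. It is an illustrative example only, not a method from the listed papers; the dataset, model, and helper functions (make_classification, RandomForestClassifier, accuracy_score) are standard scikit-learn utilities chosen purely for demonstration.

```python
# Minimal sketch of a perturbation-based feature-attribution method
# (permutation importance) applied to a black-box classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data and a black-box model used only for illustration.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
baseline = accuracy_score(y_te, model.predict(X_te))

rng = np.random.default_rng(0)
importances = []
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    # Shuffling one column breaks its link to the target; note that the
    # shuffled rows can be out-of-distribution, which is one of the
    # evaluation pitfalls noted in the summary above.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - accuracy_score(y_te, model.predict(X_perm)))

print(np.round(importances, 3))  # larger accuracy drop = more important feature
```

A robust evaluation framework would probe exactly these weak points, for example by checking how attributions change when perturbations stay on the data manifold or when correlated (multicollinear) features are permuted together.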
Papers
The Thousand Faces of Explainable AI Along the Machine Learning Life Cycle: Industrial Reality and Current State of Research
Thomas Decker, Ralf Gross, Alexander Koebler, Michael Lebacher, Ronald Schnitzer, Stefan H. Weber
Human-Centered Evaluation of XAI Methods
Karam Dawoud, Wojciech Samek, Peter Eisert, Sebastian Lapuschkin, Sebastian Bosse
Towards Feasible Counterfactual Explanations: A Taxonomy Guided Template-based NLG Method
Pedram Salimi, Nirmalie Wiratunga, David Corsar, Anjana Wijekoon
Extending CAM-based XAI methods for Remote Sensing Imagery Segmentation
Abdul Karim Gizzini, Mustafa Shukor, Ali J. Ghandour
Trainable Noise Model as an XAI evaluation method: application on Sobol for remote sensing image segmentation
Hossein Shreim, Abdul Karim Gizzini, Ali J. Ghandour