XAI Methods
Explainable AI (XAI) methods aim to make the decision-making processes of complex machine learning models more transparent and understandable. Current research focuses on developing robust evaluation frameworks for existing XAI techniques, including those based on feature attribution, surrogate models, and concept-based explanations, and on addressing challenges such as out-of-distribution samples introduced by input perturbations and the impact of multicollinearity among features. This work is crucial for building trust in AI systems, particularly in high-stakes domains such as healthcare and finance, where interpretability and accountability are paramount. Standardized evaluation metrics and user-centric approaches to explanation remain key areas of ongoing investigation. A minimal, illustrative sketch of the feature-attribution workflow appears below.
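The sketch below is an assumption-laden illustration, not taken from the papers listed: it uses scikit-learn's permutation importance as a stand-in for the feature-attribution family (SHAP and LIME, discussed in the papers, follow the same fit-a-model-then-attribute workflow), and it hints at why correlated features complicate attribution.

```python
# Minimal sketch of model-agnostic feature attribution via permutation importance.
# Illustrative only; SHAP/LIME would replace the attribution step with their own
# estimators, but the overall workflow (fit, then attribute) is the same.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and hold out a test split for attribution.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permute each feature on held-out data and measure the drop in score.
# Note: multicollinear features can "share" importance and appear individually
# unimportant, one of the evaluation pitfalls mentioned above.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Permutation importance is chosen here only because it is simple and self-contained; swapping in a SHAP or LIME explainer changes the attribution estimator but not the overall structure of the example.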
Papers
Metric Tools for Sensitivity Analysis with Applications to Neural Networks
Jaime Pizarroso, David Alfaya, José Portela, Antonio Muñoz
A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME
Ahmed Salih, Zahra Raisi-Estabragh, Ilaria Boscolo Galazzo, Petia Radeva, Steffen E. Petersen, Gloria Menegaz, Karim Lekadir
Characterizing the contribution of dependent features in XAI methods
Ahmed Salih, Ilaria Boscolo Galazzo, Zahra Raisi-Estabragh, Steffen E. Petersen, Gloria Menegaz, Petia Radeva
A Brief Review of Explainable Artificial Intelligence in Healthcare
Zahra Sadeghi, Roohallah Alizadehsani, Mehmet Akif Cifci, Samina Kausar, Rizwan Rehman, Priyakshi Mahanta, Pranjal Kumar Bora, Ammar Almasri, Rami S. Alkhawaldeh, Sadiq Hussain, Bilal Alatas, Afshin Shoeibi, Hossein Moosaei, Milan Hladik, Saeid Nahavandi, Panos M. Pardalos