XAI Method
Explainable AI (XAI) methods aim to make the decision-making processes of complex machine learning models more transparent and understandable. Current research focuses on developing robust evaluation frameworks for existing XAI techniques, including those based on feature attribution, surrogate models, and concept-based explanations, and on addressing challenges such as the generation of out-of-distribution samples and the impact of multicollinearity. This work is crucial for building trust in AI systems across domains, particularly in high-stakes applications like healthcare and finance, where interpretability and accountability are paramount. The development of standardized evaluation metrics and the exploration of user-centric approaches are key areas of ongoing investigation.
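To make the terms above concrete, the following is a minimal illustrative sketch, not taken from any of the listed papers, of two common XAI idioms using scikit-learn: a global surrogate model (an interpretable tree fit to a black-box model's predictions, scored for fidelity) and permutation-based feature attribution, whose scores can be distorted by multicollinearity. The dataset and model choices are assumptions made purely for illustration.

```python
# Illustrative sketch only: generic surrogate-model and feature-attribution
# examples with scikit-learn; dataset and models are arbitrary choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black-box" model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Global surrogate: fit an interpretable tree to the black box's predictions,
# then measure how faithfully it mimics them on held-out data (fidelity).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = surrogate.score(X_test, black_box.predict(X_test))
print(f"surrogate fidelity to black box: {fidelity:.3f}")

# Permutation feature attribution: importance = drop in score when a feature
# is shuffled; correlated features can share or mask importance, which is one
# way multicollinearity complicates attribution-based explanations.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.4f} "
          f"+/- {result.importances_std[i]:.4f}")
```

A surrogate with high fidelity but low depth is easy to inspect; a low fidelity score signals that the simple explanation does not actually reflect the black-box behaviour, which is exactly the kind of property evaluation frameworks for XAI methods try to quantify.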
Papers
EXACT: Towards a platform for empirically benchmarking Machine Learning model explanation methods
Benedict Clark, Rick Wilming, Artur Dox, Paul Eschenbach, Sami Hached, Daniel Jin Wodke, Michias Taye Zewdie, Uladzislau Bruila, Marta Oliveira, Hjalmar Schulz, Luca Matteo Cornils, Danny Panknin, Ahcène Boubekki, Stefan Haufe
Improving the Explain-Any-Concept by Introducing Nonlinearity to the Trainable Surrogate Model
Mounes Zaval, Sedat Ozer
XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development
Zerui Wang, Yan Liu, Abishek Arumugam Thiruselvi, Abdelwahab Hamou-Lhadj
Revealing Vulnerabilities of Neural Networks in Parameter Learning and Defense Against Explanation-Aware Backdoors
Md Abdul Kadir, GowthamKrishna Addluri, Daniel Sonntag