XAI Attribution
Explainable AI (XAI) attribution methods aim to decipher the decision-making processes of complex machine learning models, particularly deep learning models, by identifying which input features most influence the model's output. Current research focuses on improving the accuracy and interpretability of attribution methods, addressing challenges like handling different model types (classification vs. regression), defining appropriate baselines for comparison, and developing robust evaluation metrics. This work is crucial for building trust in AI systems across diverse fields, from medical diagnosis to climate modeling, by providing insights into model behavior and facilitating more reliable and responsible AI deployment.
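To make "identifying which input features most influence the model's output" concrete, here is a minimal sketch of one common attribution baseline, gradient × input, applied to a tiny hand-built logistic-regression "model". The weights, bias, and input values are illustrative assumptions, not drawn from any paper listed here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w, b):
    """Scalar prediction for input feature vector x."""
    return sigmoid(np.dot(w, x) + b)

def gradient_x_input(x, w, b):
    """Per-feature attribution: d(output)/d(x_i) * x_i.

    For sigmoid(w.x + b), the gradient w.r.t. x is p*(1-p)*w,
    so the attribution can be computed in closed form here.
    """
    p = model(x, w, b)
    grad = p * (1.0 - p) * w
    return grad * x

# Hypothetical model and input for illustration only.
w = np.array([2.0, -1.0, 0.0])   # feature weights
b = 0.0
x = np.array([1.0, 1.0, 5.0])    # example input

attr = gradient_x_input(x, w, b)
print(attr)
```

Note that the third feature receives zero attribution despite its large input value, because the model's output does not depend on it: attribution reflects influence on the prediction, not feature magnitude. Real XAI toolkits (e.g., Captum for PyTorch) compute the gradient automatically rather than in closed form.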