XAI Attribution

Explainable AI (XAI) attribution methods aim to explain the decision-making processes of complex machine learning models, particularly deep neural networks, by identifying which input features most influence a model's output. Current research focuses on improving the accuracy and interpretability of attribution methods, addressing challenges such as handling different model types (classification vs. regression), defining appropriate baselines against which attributions are computed, and developing robust evaluation metrics. This work is crucial for building trust in AI systems across diverse fields, from medical diagnosis to climate modeling, because it provides insight into model behavior and supports more reliable and responsible AI deployment.
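To make the role of a baseline concrete, the sketch below implements one widely used baseline-dependent attribution method, Integrated Gradients, in PyTorch. It is a minimal illustration, not a reference implementation: the toy model, random input, and all-zeros baseline are assumptions chosen for brevity, and in practice a maintained library such as Captum would be used instead.

```python
import torch
import torch.nn as nn

# Toy differentiable classifier standing in for any real model (illustrative only).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Approximate Integrated Gradients: average the gradient of the target
    class score at points along a straight line from the baseline to the
    input, then scale by (input - baseline)."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # all-zeros baseline: one common (and debated) choice
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolated point on the baseline -> input path.
        point = (baseline + alpha * (x - baseline)).clone().requires_grad_(True)
        score = model(point)[0, target]  # score of the class being explained
        score.backward()
        total_grad += point.grad
    # Riemann-sum approximation of the path integral of gradients.
    return (x - baseline) * total_grad / steps

x = torch.randn(1, 4)                   # one input with 4 features
target = model(x).argmax(dim=1).item()  # explain the predicted class
attributions = integrated_gradients(model, x, target)
print("per-feature attributions:", attributions.squeeze(0))
```

A common sanity check is completeness: the attributions should sum approximately to the score difference between the input and the baseline, with larger `steps` values tightening the approximation.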

Papers