Paper ID: 2209.03433
Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour, Arghasree Banerjee, Matthew Guzdial
Explainable Artificial Intelligence (XAI) methods are intended to help human users better understand the decision-making of an AI agent. However, many modern XAI approaches are unintuitive to end users, particularly those without prior AI or ML knowledge. In this paper, we present a novel XAI approach we call Responsibility that identifies the most responsible training example for a particular decision. This example can then be shown as an explanation: "this is what I (the AI) learned that led me to do that". We present experimental results across a number of domains along with the results of an Amazon Mechanical Turk user study, comparing Responsibility with existing XAI methods on an image classification task. Our results demonstrate that Responsibility can help improve accuracy for both human end users and secondary ML models.
Submitted: Sep 7, 2022
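
The abstract does not give implementation details, but the core idea ("identify the most responsible training example by inspecting the training process") can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' implementation: it attributes responsibility to the training example whose per-example gradient update most increased the model's logits for each class, accumulated over training. All names (`responsibility`, the toy model and data) are hypothetical, and recomputing logits over the full dataset after every update is done here only for clarity; a practical implementation would track a cheaper statistic.

```python
# Hedged sketch of a "responsibility" tracker: which training example's
# updates most increased the model's confidence in each class?
# This is an illustration inferred from the abstract, not the paper's method.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy data: 100 two-dimensional points, 3 classes.
X = torch.randn(100, 2)
y = torch.randint(0, 3, (100,))

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# responsibility[c, i] accumulates how much training example i's updates
# increased the mean logit of class c across the training set.
responsibility = torch.zeros(3, len(X))

for epoch in range(3):
    for i in range(len(X)):
        with torch.no_grad():
            logits_before = model(X).mean(dim=0)  # mean class logits pre-update
        opt.zero_grad()
        loss = loss_fn(model(X[i:i + 1]), y[i:i + 1])
        loss.backward()
        opt.step()  # single-example SGD update
        with torch.no_grad():
            logits_after = model(X).mean(dim=0)  # mean class logits post-update
        responsibility[:, i] += logits_after - logits_before

# Explain a decision: surface the training example deemed most responsible
# for the class the model predicts on a query point.
query = torch.randn(1, 2)
pred = model(query).argmax(dim=1).item()
most_responsible = responsibility[pred].argmax().item()
print(f"Predicted class {pred}; most responsible training example: {most_responsible}")
```

Under this sketch, the retrieved training example can be shown to the user as the abstract suggests: "this is what I learned that led me to do that."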