Paper ID: 2210.15675

Feature Necessity and Relevancy in ML Classifier Explanations

Xuanxiang Huang, Martin C. Cooper, Antonio Morgado, Jordi Planes, Joao Marques-Silva

Given a machine learning (ML) model and a prediction, an explanation can be defined as a set of features that is sufficient for the prediction. In some applications, besides asking for an explanation, it is also critical to understand whether a sensitive feature can occur in some explanation, or whether a non-interesting feature must occur in all explanations. This paper starts by relating these two queries, respectively, to the problems of relevancy and necessity in logic-based abduction. The paper then proves membership and hardness results for several families of ML classifiers. Afterwards, the paper proposes concrete algorithms for two classes of classifiers. The experimental results confirm the scalability of the proposed algorithms.
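
To make the two queries concrete, below is a minimal brute-force sketch (not the paper's algorithms) over a hypothetical toy Boolean classifier: an explanation is a subset-minimal set of features whose values force the prediction, a feature is relevant if it occurs in some such explanation, and necessary if it occurs in every one. All names and the classifier are illustrative assumptions, not taken from the paper.

from itertools import combinations, product

def is_sufficient(f, v, subset, n_features, domain=(0, 1)):
    """Check whether fixing the features in `subset` to their values in `v`
    forces the prediction f(v) for every assignment to the free features."""
    target = f(v)
    free = [i for i in range(n_features) if i not in subset]
    for assignment in product(domain, repeat=len(free)):
        x = list(v)
        for i, val in zip(free, assignment):
            x[i] = val
        if f(x) != target:
            return False
    return True

def minimal_explanations(f, v, n_features):
    """Enumerate subset-minimal sufficient feature sets (abductive explanations)."""
    axps = []
    for size in range(n_features + 1):
        for subset in combinations(range(n_features), size):
            s = set(subset)
            if any(a <= s for a in axps):
                continue  # a strictly smaller explanation is already contained
            if is_sufficient(f, v, s, n_features):
                axps.append(s)
    return axps

# Toy classifier: predicts 1 iff (x0 AND x1) OR x2; instance v = (1, 1, 1).
f = lambda x: int((x[0] and x[1]) or x[2])
v = [1, 1, 1]
axps = minimal_explanations(f, v, 3)                    # [{2}, {0, 1}]
relevant = {i for a in axps for i in a}                 # in SOME explanation: {0, 1, 2}
necessary = set.intersection(*axps) if axps else set()  # in ALL explanations: {}
print(axps, relevant, necessary)

Here every feature is relevant but none is necessary, since {2} and {0, 1} are disjoint explanations. Such enumeration is exponential in the number of features, which is why the complexity results and dedicated algorithms studied in the paper matter.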

Submitted: Oct 27, 2022