Paper ID: 2203.16073
Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models
Alexander Stevens, Johannes De Smedt
Although the field of predictive process monitoring has recently shifted towards models from explainable artificial intelligence, evaluation still occurs mainly through performance-based metrics, which do not account for the actionability and implications of the explanations. In this paper, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction. The introduced properties are analysed along the event, case, and control flow perspectives that are typical for a process-based analysis. This allows comparing inherently created explanations with post-hoc explanations. We benchmark seven classifiers on thirteen real-life event logs, covering a range of transparent and non-transparent machine learning and deep learning models, further complemented with explainability techniques. Next, this paper contributes a set of guidelines named X-MOP that enables selecting the appropriate model based on the event log specifications, by providing insight into how the varying preprocessing, model complexity, and explainability techniques typical in process outcome prediction influence the explainability of the model.
Submitted: Mar 30, 2022