Paper ID: 2310.05468

Enhancing Interpretability and Generalizability in Extended Isolation Forests

Alessio Arcudi, Davide Frizzo, Chiara Masiero, Gian Antonio Susto

Anomaly Detection (AD) focuses on identifying unusual behaviors in complex datasets. Machine Learning (ML) algorithms and Decision Support Systems (DSSs) provide effective solutions for AD, but detecting anomalies alone may not be enough, especially in engineering, where diagnostics and maintenance are crucial. Users need clear explanations to support root cause analysis and build trust in the model. The unsupervised nature of AD, however, makes interpretability a challenge. This paper introduces Extended Isolation Forest Feature Importance (ExIFFI), a method that explains predictions made by Extended Isolation Forest (EIF) models, which split data using hyperplanes. ExIFFI provides explanations at both the global and local level by leveraging feature importance. We also present an improved version, Enhanced Extended Isolation Forest (EIF+), designed to enhance the model's ability to detect unseen anomalies through a revised splitting strategy. Using five synthetic and eleven real-world datasets, we conduct a comparative analysis, evaluating unsupervised AD methods with the Average Precision metric. EIF+ consistently outperforms EIF across all datasets when trained without anomalies, demonstrating better generalization. To assess ExIFFI's interpretability, we introduce the Area Under the Curve of Feature Selection (AUC_FS), a novel metric that uses feature selection as a proxy task. ExIFFI outperforms other unsupervised interpretability methods on 8 of the 11 real-world datasets and correctly identifies the anomalous features in the synthetic datasets. When trained only on inliers, ExIFFI again outperforms competing models on real-world data and accurately detects the anomalous features in the synthetic datasets. We provide open-source code to encourage further research and reproducibility.
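
For context, the sketch below illustrates the kind of hyperplane-based split that EIF-style isolation models rely on: each branching step draws a random normal vector and a random intercept point, then partitions the data into two half-spaces. This is a minimal, self-contained illustration of the general mechanism under those assumptions, not the authors' implementation; the function name `random_hyperplane_split` and the toy data are hypothetical.

```python
import numpy as np

def random_hyperplane_split(X, seed=None):
    """Split points with a random hyperplane, in the spirit of
    Extended Isolation Forest branching: draw a random normal vector n
    and a random intercept point p inside the data range, then separate
    points by the sign of (x - p) @ n."""
    rng = np.random.default_rng(seed)
    # Random direction defining the splitting hyperplane.
    normal = rng.normal(size=X.shape[1])
    # Random intercept point drawn inside the bounding box of X.
    intercept = rng.uniform(X.min(axis=0), X.max(axis=0))
    # Assign each point to one side of the hyperplane.
    side = (X - intercept) @ normal < 0
    return X[side], X[~side]

# Toy usage: 2-D inliers plus one obvious outlier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(100, 2)), [[6.0, 6.0]]])
left, right = random_hyperplane_split(X, seed=0)
print(left.shape, right.shape)
```

Repeating such oblique splits recursively tends to isolate anomalous points in few steps, which is the intuition the standard axis-aligned Isolation Forest shares and that EIF generalizes with hyperplanes.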

Submitted: Oct 9, 2023