Explainable Anomaly Detection

Explainable anomaly detection (XAD) aims to make anomaly detection systems transparent and interpretable, addressing the "black box" nature of many machine learning models. Current research focuses on methods that explain *why* a particular point is flagged as anomalous, using techniques such as counterfactual explanations, isolation-based approaches, and saliency maps, applied within architectures ranging from temporal convolutional networks to transformers. Such explanations are crucial for building trust and supporting human oversight in high-stakes applications across diverse domains, including industrial process monitoring, cybersecurity, and text analysis, where understanding the *reason* for an anomaly is as important as detecting it.
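
For concreteness, the sketch below combines two of the ideas mentioned above: an isolation-based detector (scikit-learn's IsolationForest) explained with a simple counterfactual-style attribution, where each feature of a flagged point is replaced in turn by the training median to see how much the anomaly score recovers. The toy data, model settings, and median-substitution heuristic are illustrative assumptions, not a specific method from the papers surveyed here.

```python
# Minimal sketch: counterfactual-style feature attribution for an
# isolation-based anomaly detector. Hypothetical setup, not a
# published XAD method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Mostly-normal 2-D training data, plus one point that is anomalous
# only in feature 1.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
x_anom = np.array([0.1, 8.0])

model = IsolationForest(random_state=0).fit(X_train)
# score_samples: lower values mean more anomalous.
base_score = model.score_samples(x_anom.reshape(1, -1))[0]

# Counterfactual substitution: set one feature at a time to the
# training median and measure how much the anomaly score recovers.
# The feature whose replacement normalizes the score the most is
# the main driver of the flag.
medians = np.median(X_train, axis=0)
for j in range(x_anom.shape[0]):
    x_cf = x_anom.copy()
    x_cf[j] = medians[j]
    cf_score = model.score_samples(x_cf.reshape(1, -1))[0]
    print(f"feature {j}: score change if set to median = {cf_score - base_score:+.3f}")
```

Run as written, the substitution on feature 1 (the 8.0 value) should yield a large positive score change while feature 0 barely moves, yielding a human-readable explanation: the point was flagged because of feature 1. Gradient-based saliency maps play an analogous role for differentiable detectors, attributing the anomaly score to input dimensions instead of median substitutions.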

Papers