Paper ID: 2306.12507

Investigating Poor Performance Regions of Black Boxes: LIME-based Exploration in Sepsis Detection

Mozhgan Salimiparsa, Surajsinh Parmar, San Lee, Choongmin Kim, Yonghwan Kim, Jang Yong Kim

Interpreting machine learning models remains a challenge that hinders their adoption in clinical settings. This paper proposes leveraging Local Interpretable Model-Agnostic Explanations (LIME) to provide interpretable descriptions of black-box classification models in high-stakes sepsis detection. By analyzing misclassified instances with LIME, we identify the features that contribute most to suboptimal performance. The analysis reveals regions of the feature space where the classifier performs poorly, allowing error rates to be computed within these regions. This knowledge supports cautious decision-making in sepsis detection and other critical applications. The proposed approach is demonstrated on the eICU dataset, where it effectively identifies and visualizes regions in which the classifier underperforms. By enhancing interpretability, our method promotes the adoption of machine learning models in clinical practice, empowering informed decision-making and mitigating risk in critical scenarios.

Submitted: Jun 21, 2023
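
The core idea in the abstract can be sketched in code: fit a locally weighted linear surrogate (LIME's central mechanism) around misclassified instances, aggregate the surrogate weights to rank features, and then measure the classifier's error rate inside a region defined by the top feature. This is a minimal illustrative sketch, not the authors' implementation: the synthetic data, the Gaussian perturbation/proximity kernel, and the quartile-based region definition are all assumptions standing in for the paper's eICU pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic tabular stand-in for clinical data (eICU itself is access-controlled).
X = rng.normal(size=(2000, 5))
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=2000)) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
miscls = X_te[pred != y_te]  # instances the black box gets wrong

def lime_style_weights(x, predict_proba, n_samples=500, width=1.0):
    """Fit a locally weighted linear surrogate around x (LIME's core idea)."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # local perturbations
    p = predict_proba(Z)[:, 1]                               # black-box outputs
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                       # proximity kernel
    return Ridge(alpha=1.0).fit(Z, p, sample_weight=w).coef_

# Aggregate surrogate weights over misclassified instances to rank features.
W = np.array([lime_style_weights(x, clf.predict_proba) for x in miscls[:50]])
top_feature = int(np.abs(W).mean(axis=0).argmax())

# Define a "poor-performance region" along the top feature (interquartile range
# of the misclassified points) and compare its error rate with the overall rate.
lo, hi = np.percentile(miscls[:, top_feature], [25, 75])
in_region = (X_te[:, top_feature] >= lo) & (X_te[:, top_feature] <= hi)
region_err = float((pred[in_region] != y_te[in_region]).mean())
overall_err = float((pred != y_te).mean())
print(top_feature, round(region_err, 3), round(overall_err, 3))
```

In practice one would use the `lime` package's `LimeTabularExplainer` on the trained model and aggregate its per-instance explanations, but the surrogate-fitting step above captures what those explanations compute.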