Paper ID: 2404.16064
Transparent AI: Developing an Explainable Interface for Predicting Postoperative Complications
Yuanfang Ren, Chirayu Tripathi, Ziyuan Guan, Ruilin Zhu, Victoria Hough, Yingbo Ma, Zhenhong Hu, Jeremy Balch, Tyler J. Loftus, Parisa Rashidi, Benjamin Shickel, Tezcan Ozrazgat-Baslanti, Azra Bihorac
Given the high volume of surgical procedures and the substantial rate of postoperative mortality, assessing and managing surgical complications has become a critical public health concern. Existing artificial intelligence (AI) tools for risk surveillance and diagnosis often lack adequate interpretability, fairness, and reproducibility. To address this, we proposed an Explainable AI (XAI) framework designed to answer five critical questions: why, why not, how, what if, and what else, with the goal of enhancing the explainability and transparency of AI models. We incorporated techniques including Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), counterfactual explanations, model cards, an interactive feature manipulation interface, and the identification of similar patients to address these questions. We showcased a prototype XAI interface, built on this framework, for predicting major postoperative complications. This prototype has provided valuable insights into the explanatory potential of our XAI framework and represents a first step toward its clinical adoption.
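As a concrete illustration of the "why" question the framework addresses, the sketch below shows how SHAP feature attributions might be computed for a single patient's predicted complication risk. This is not the paper's actual pipeline: the model choice, feature names, and synthetic cohort are all hypothetical assumptions for demonstration.

```python
# Hedged sketch of a SHAP-based "why" explanation for a tabular risk model.
# The features, data, and model below are illustrative assumptions only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "preop_creatinine", "surgery_duration_min"]  # hypothetical

# Synthetic cohort: risk increases with age, creatinine, and surgery duration
X = rng.normal(loc=[60.0, 1.0, 180.0], scale=[15.0, 0.3, 60.0], size=(500, 3))
risk_score = X[:, 0] + 100 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 20, 500)
y = (risk_score > np.median(risk_score)).astype(int)  # 1 = complication occurred

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP attributions answer "why this prediction?" for an individual patient
explainer = shap.TreeExplainer(model)
patient = X[:1]
attributions = explainer.shap_values(patient)
print("Predicted complication probability:", model.predict_proba(patient)[0, 1])
print("Per-feature SHAP attributions:", attributions)
```

In an interface like the one described, these per-feature attributions would typically be rendered as a ranked bar chart so clinicians can see which preoperative variables drove a given patient's risk estimate up or down.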
Submitted: Apr 18, 2024