Explainable AI Framework

Explainable AI (XAI) frameworks aim to make the decision-making processes of complex machine learning models transparent and understandable, addressing the "black box" problem. Current research focuses on integrating XAI techniques such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Layer-wise Relevance Propagation (LRP) with a range of model architectures, including deep networks (e.g., CNNs, BiLSTMs) and tree-based ensembles, across diverse applications. This work is crucial for building trust in AI systems, particularly in high-stakes domains such as healthcare and scientific discovery, because it provides insight into individual model predictions and supports human-in-the-loop decision-making. The improved interpretability, in turn, enhances both the reliability and the practical utility of AI in these fields.
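
As a concrete illustration, below is a minimal sketch of applying one of the techniques named above, SHAP, to a tree-based classifier. The `shap` and `scikit-learn` packages, the dataset, and all variable names are illustrative assumptions, not details drawn from the papers summarized here.

```python
# Minimal sketch: SHAP attributions for a tree-based model (illustrative only).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a tree ensemble on a standard tabular dataset.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes exact Shapley values efficiently for tree ensembles,
# avoiding the sampling approximations needed for arbitrary black-box models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])

# Each explanation attributes one prediction across all input features;
# larger magnitudes mark features that pushed the prediction harder.
# (Depending on the shap version, binary classifiers may return a list of
# per-class arrays or a single 3-D array.)
print(np.array(shap_values).shape)
```

For architectures where exact tree-based attribution does not apply (e.g., the CNNs and BiLSTMs mentioned above), model-agnostic tools such as `shap.KernelExplainer` or LIME play the analogous role, trading exactness for generality.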

Papers