Post-Hoc XAI Methods
Post-hoc explainable AI (XAI) methods aim to interpret the predictions of complex, "black-box" machine learning models after they are trained, primarily addressing concerns about transparency and trustworthiness. Current research focuses on evaluating the reliability and effectiveness of various post-hoc techniques, such as LIME and SHAP, often using benchmark datasets and novel metrics to assess explanation quality and user comprehension. This work is crucial for building trust in AI systems across diverse applications, particularly in high-stakes domains like medicine, where understanding model decisions is paramount for both accountability and effective knowledge discovery.
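To make the idea concrete, here is a minimal, illustrative sketch of one model-agnostic post-hoc technique, permutation feature importance, applied to a model treated purely as a black box. It is not LIME or SHAP themselves; the synthetic data, the `predict` function, and the helper name `permutation_importance` are assumptions introduced only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (hypothetical setup): y depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# "Black-box" model: fitted via least squares here for simplicity, but
# the explanation step below only uses predict(), never the internals.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ w

def permutation_importance(predict, X, y, n_repeats=10, rng=rng):
    """Post-hoc, model-agnostic importance: how much does the squared
    error grow when one feature's column is shuffled, severing its
    relationship to the target?"""
    base_error = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            importances[j] += np.mean((predict(Xp) - y) ** 2) - base_error
    return importances / n_repeats

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 should dominate
```

Because the explanation consumes only inputs and predictions, the same routine applies unchanged to any trained model, which is the defining property of the post-hoc, model-agnostic family that LIME and SHAP also belong to.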