Post Hoc Explainability
Post-hoc explainability aims to understand the decision-making processes of already-trained "black box" machine learning models, particularly deep neural networks, without altering their structure or performance. Current research focuses on developing model-agnostic methods, such as those based on Shapley values, gradients, and distillation, to generate explanations across various data modalities (images, audio, text) and improve the faithfulness and stability of these explanations. This field is crucial for building trust in AI systems used in high-stakes applications like healthcare and finance, where understanding model decisions is paramount for responsible deployment and effective auditing.
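As a concrete illustration of the Shapley-value family of methods mentioned above, the sketch below computes exact Shapley attributions for a tiny "black box" by enumerating feature coalitions, with absent features replaced by baseline values. This is a minimal, illustrative implementation (the function name, the linear toy model, and the baseline convention are assumptions for the example, not any specific library's API); practical tools such as SHAP use sampling or model-specific shortcuts because exact enumeration is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for each feature of f at input x.

    Features outside a coalition S are set to their baseline value.
    Cost is exponential in len(x), so this only works for small inputs.
    """
    d = len(x)
    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            # Shapley kernel weight |S|! (d - |S| - 1)! / d!
            weight = factorial(size) * factorial(d - size - 1) / factorial(d)
            for S in combinations(others, size):
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(d)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(d)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy "black box": a linear model. For linear models the exact Shapley
# value of feature i reduces to w[i] * (x[i] - baseline[i]).
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print([round(p, 6) for p in phi])
# → [2.0, -2.0, 1.5]
```

A useful sanity check on any Shapley implementation is the efficiency property: the attributions sum to the difference between the model's output at the input and at the baseline.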