Post Hoc Explanation
Post-hoc explanation methods aim to make the decision-making processes of "black box" machine learning models more transparent, primarily by identifying which input features most influence a model's predictions. Current research focuses on improving the accuracy, efficiency, and interpretability of these explanations, often employing attribution techniques such as Shapley values and LIME, applied to a range of model architectures (e.g., transformers, CNNs) and data modalities (audio, images, text, graphs). This work is crucial for building trust in AI systems and enabling better understanding of model behavior, particularly in high-stakes applications like healthcare and finance, where model transparency is paramount.
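As a concrete illustration of the feature-attribution idea mentioned above, the sketch below estimates per-feature Shapley values for a single prediction by Monte Carlo sampling over feature coalitions. The model, dataset, and function names are illustrative assumptions chosen for a self-contained example, not taken from any particular paper in this collection; library methods such as SHAP or LIME implement more refined versions of the same idea.

```python
# A minimal sketch of post-hoc feature attribution via Monte Carlo Shapley
# value estimation. The model and dataset below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

def shapley_attribution(predict_fn, x, background, n_samples=200, seed=0):
    """Estimate per-feature Shapley values for one instance.

    For each random permutation of the features, reveal the instance's
    features one at a time on top of a randomly drawn background point and
    record the marginal change in the model output attributable to each
    feature; averaging these marginal contributions approximates the
    Shapley value of each feature.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()  # random reference point
        prev = predict_fn(z[None, :])[0]
        for j in perm:
            z[j] = x[j]                      # reveal feature j from the instance
            cur = predict_fn(z[None, :])[0]
            phi[j] += cur - prev             # marginal contribution of feature j
            prev = cur
    return phi / n_samples

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
predict_pos = lambda a: model.predict_proba(a)[:, 1]  # explain the positive-class probability

phi = shapley_attribution(predict_pos, X[0], X[:100])
top = np.argsort(-np.abs(phi))[:5]
for j in top:
    print(f"{data.feature_names[j]:<25s} {phi[j]:+.4f}")
```

The same post-hoc recipe applies to any black-box `predict_fn`; dedicated implementations mainly differ in how coalitions are sampled and how absent features are imputed from the background data.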