Post Hoc Explanation
Post-hoc explanation methods aim to make the decision-making processes of "black box" machine learning models more transparent, primarily by identifying which input features most influence a model's predictions. Current research focuses on improving the accuracy, efficiency, and interpretability of these explanations, often employing techniques such as Shapley values and LIME, and applying them to models built on various neural architectures (e.g., transformers, CNNs) across different data modalities (audio, images, text, and graphs). This work is crucial for building trust in AI systems and for understanding model behavior, particularly in high-stakes applications such as healthcare and finance, where model transparency is paramount.
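To make the feature-attribution idea concrete, the sketch below computes exact Shapley values for a small model by enumerating every feature coalition; each feature's value is the weighted average of its marginal contribution to the model's output. This is a minimal illustration, not any specific library's implementation (practical tools such as SHAP approximate this, since exact enumeration is exponential in the number of features); the `model`, `x`, and `baseline` names here are illustrative assumptions.

```python
import itertools
import math

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    model: callable taking a feature vector (list) and returning a score.
    x: the input to explain; baseline: reference values used for "absent"
    features. Exponential in len(x) -- suitable for toy models only.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                # Model output with coalition `subset`, without vs. with feature i.
                without = [x[j] if j in subset else baseline[j] for j in range(n)]
                with_i = list(without)
                with_i[i] = x[i]
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                phi[i] += weight * (model(with_i) - model(without))
    return phi

# Toy linear model: for linear models the Shapley value of feature j
# reduces to w_j * (x_j - baseline_j), which makes the result easy to check.
model = lambda v: 2.0 * v[0] + 1.0 * v[1] - 3.0 * v[2]
print(shapley_values(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0]))
# approximately [2.0, 2.0, -9.0]
```

The linear-model check works because Shapley values satisfy the efficiency axiom: the attributions sum to the difference between the model's output at `x` and at the baseline.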