Feature Explanation Using Contrasting Concepts
Feature explanation using contrasting concepts aims to improve the interpretability of complex machine learning models by identifying the features that drive predictions, often at the level of feature groups rather than individual features. Current research emphasizes methods that align with expert knowledge and reconcile inconsistencies across different explanation techniques, drawing on axiomatic characterizations of explainers as well as channel attention and orthogonalization for time series analysis. This work is crucial for building trust in AI systems and for understanding model behavior across diverse applications, from cybersecurity to medical diagnosis.
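As a concrete illustration of group-level attribution, the sketch below scores feature groups by jointly permuting their columns and measuring the resulting drop in model accuracy, a common proxy for group importance. It is a minimal sketch, not the method of any of the papers surveyed here; the model choice, the group assignments, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch: group-level feature importance via joint permutation.
# Features in a group are shuffled together (preserving their joint
# distribution) and the drop in test accuracy is taken as the group's score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
# Synthetic target depends on columns 0, 1, and 4 (groups "a" and "c" below).
y = (X[:, 0] + X[:, 1] - X[:, 4] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Hypothetical grouping of columns into semantically related feature groups.
groups = {"group_a": [0, 1], "group_b": [2, 3], "group_c": [4, 5]}
for name, cols in groups.items():
    X_perm = X_test.copy()
    perm = rng.permutation(len(X_perm))
    X_perm[:, cols] = X_perm[perm][:, cols]  # permute the whole group jointly
    print(name, "importance ~", round(baseline - model.score(X_perm, y_test), 3))
```

In this toy setup, group_a and group_c should receive high scores while group_b stays near zero, since only the former influence the label.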
Papers
Related papers published between November 14, 2021 and October 28, 2024.