Explanation-Based Methods
Explanation-based methods aim to make machine learning models more transparent and trustworthy by exposing the reasoning behind their predictions. Current research focuses on developing robust, faithful explanation methods, evaluating explanation quality with statistical and information-theoretic measures, and integrating explanations into model training to improve both accuracy and interpretability. This work is crucial for building trust in AI systems across domains from medical diagnosis to autonomous driving: it gives users understandable justifications for model predictions and helps identify and mitigate biases.
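The overview above does not name a specific technique, but a gradient-based input saliency map is a representative baseline for this family of methods: the gradient of the predicted-class score with respect to the input indicates which features most influenced the decision. Below is a minimal sketch in PyTorch; the model, dimensions, and data are hypothetical placeholders, not drawn from any particular paper.

```python
# Minimal sketch of gradient-based input saliency, one common
# explanation technique. Model and data are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one input example
logits = model(x)
pred = logits.argmax(dim=1).item()         # predicted class index

# Gradient of the predicted-class score w.r.t. the input: features
# with larger absolute gradients influenced the prediction more.
logits[0, pred].backward()
saliency = x.grad.abs().squeeze()
print(saliency)                            # per-feature attribution scores
```

Faithfulness evaluations of the kind mentioned above typically test whether perturbing the features this score ranks highest actually changes the model's prediction.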