Explanation Algorithm
Explanation algorithms aim to make the decision-making processes of complex machine learning models, such as random forests and graph neural networks, more transparent and understandable. Current research focuses on developing algorithms that generate faithful and plausible explanations, often using techniques like prototype selection, rule extraction, and feature attribution, while also addressing challenges like quantifying uncertainty in explanations and handling feature interactions. This work is crucial for building trust in AI systems across various scientific domains and practical applications, enabling better decision-making and facilitating the responsible use of machine learning.
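Of the techniques mentioned above, feature attribution is the simplest to illustrate. The sketch below shows permutation-based feature importance, a common model-agnostic attribution method: shuffle one feature column and measure how much the model's accuracy drops. The toy model, data, and function names here are illustrative, not taken from any specific paper.

```python
import random

def model(x):
    # Toy "trained" model: predicts 1 when feature 0 exceeds a threshold.
    # Feature 1 is ignored, so its attribution should be zero.
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, rng):
    # Shuffle one feature column and measure the drop in accuracy;
    # a large drop means the model relies heavily on that feature.
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return accuracy(X, y) - accuracy(X_perm, y)

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(500)]
y = [1 if x[0] > 0.5 else 0 for x in X]  # labels depend on feature 0 only

imp0 = permutation_importance(X, y, 0, rng)  # large: model uses feature 0
imp1 = permutation_importance(X, y, 1, rng)  # zero: feature 1 is ignored
```

Because the toy model ignores feature 1 entirely, shuffling that column leaves accuracy unchanged, while shuffling feature 0 reduces accuracy to roughly chance level. This drop-in-accuracy score is the attribution assigned to each feature.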