Interpretable Prediction
Interpretable prediction focuses on machine learning models that achieve high predictive accuracy while also providing understandable explanations for their decisions. Current research emphasizes novel model architectures, such as rule-set models, variational autoencoders, and neural-symbolic approaches, that either are inherently interpretable or generate easily understandable explanations alongside their predictions. This work is crucial for building trust in AI systems, particularly in high-stakes domains like healthcare and finance, where understanding the reasoning behind a prediction is essential for responsible decision-making.
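To make the idea of an inherently interpretable model concrete, the sketch below shows a minimal rule-set classifier in Python. It is an illustrative assumption, not the method of any particular paper listed here: each prediction is returned together with the human-readable rule that produced it, and the feature names and thresholds are hypothetical.

```python
# Minimal sketch (illustrative, not from any specific paper) of an inherently
# interpretable rule-set classifier: each prediction is returned together with
# the human-readable rule that produced it.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    description: str                   # human-readable explanation of the rule
    condition: Callable[[dict], bool]  # test applied to a feature dictionary
    prediction: int                    # class predicted when the rule fires


class RuleSetClassifier:
    """Applies rules in order; the first matching rule explains the prediction."""

    def __init__(self, rules: list[Rule], default: int = 0):
        self.rules = rules
        self.default = default

    def predict(self, x: dict) -> tuple[int, str]:
        for rule in self.rules:
            if rule.condition(x):
                return rule.prediction, rule.description
        return self.default, "default rule (no condition matched)"


# Hypothetical clinical-style rules; feature names and cutoffs are illustrative only.
rules = [
    Rule("systolic_bp > 180 -> high risk",
         lambda x: x["systolic_bp"] > 180, 1),
    Rule("age > 65 and glucose > 140 -> high risk",
         lambda x: x["age"] > 65 and x["glucose"] > 140, 1),
]

clf = RuleSetClassifier(rules, default=0)
label, explanation = clf.predict({"systolic_bp": 150, "age": 70, "glucose": 160})
print(label, "-", explanation)  # 1 - age > 65 and glucose > 140 -> high risk
```

The explanation is the model itself rather than a post-hoc approximation, which is the property that distinguishes inherently interpretable architectures from black-box models paired with external explainers.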