Interpretable Ensemble
Interpretable ensemble methods aim to combine the predictive power of multiple machine learning models with transparency into their decision-making processes. Current research focuses on designing ensemble architectures built from interpretable base learners, such as decision trees and hyper-rectangles, and on employing techniques like feature graphs and gradient-based attribution to explain model predictions. This work is significant because it addresses the critical need for trustworthy AI in high-stakes domains such as healthcare and autonomous driving, where understanding model behavior is as important as prediction accuracy.
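To make the idea concrete, the following is a minimal sketch of one common pattern: a bagging ensemble whose base learners are shallow decision trees, so each member remains human-readable and feature importances can be aggregated across the ensemble for a global explanation. It uses scikit-learn; the dataset and hyperparameters are illustrative choices, not drawn from any particular paper surveyed here.

```python
# Sketch: interpretable ensemble of shallow decision trees (scikit-learn).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Shallow trees keep each base learner small enough to read as a rule set.
ensemble = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=25,
    random_state=0,
).fit(X_train, y_train)

print("test accuracy:", ensemble.score(X_test, y_test))

# Global explanation: average feature importances over all base trees.
importances = np.mean(
    [tree.feature_importances_ for tree in ensemble.estimators_], axis=0
)
for idx in np.argsort(importances)[::-1][:5]:
    print(f"{data.feature_names[idx]}: {importances[idx]:.3f}")

# Local transparency: any single base learner prints as readable rules.
print(export_text(ensemble.estimators_[0], feature_names=list(data.feature_names)))
```

The design trade-off is typical of this line of work: constraining base learners (here, depth-3 trees) sacrifices some raw accuracy in exchange for members whose decisions can be inspected directly.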