Interpretable Ensemble

Interpretable ensemble methods aim to combine the predictive power of multiple machine learning models with transparency into how the combined model reaches its decisions. Current research focuses on ensemble architectures built from interpretable base learners, such as decision trees and hyper-rectangles, and on techniques like feature graphs and gradient-based attribution for explaining model predictions. This work is significant because it addresses the need for trustworthy AI in high-stakes domains such as healthcare and autonomous driving, where understanding model behavior is as important as predictive accuracy.
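
As a concrete, simplified illustration of the idea (a minimal sketch assuming scikit-learn is installed, not a method from any particular paper listed here), the snippet below builds an ensemble of shallow decision trees as interpretable base learners, prints the rule set of one base learner as a local explanation, and reports feature importances averaged over the ensemble as a global explanation.

```python
# Minimal sketch of an interpretable ensemble (assumes scikit-learn;
# illustrative only, not tied to a specific paper from this collection).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, list(data.feature_names)

# Shallow trees keep each base learner individually readable.
forest = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=0)
forest.fit(X, y)

# Local interpretability: each base learner is a small, human-readable rule set.
print(export_text(forest.estimators_[0], feature_names=feature_names))

# Global interpretability: feature importances averaged over the ensemble.
top = sorted(zip(feature_names, forest.feature_importances_), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```

Limiting tree depth trades some accuracy for readability; research in this area explores how to recover that accuracy while keeping the base learners, or the aggregated explanation, understandable.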

Papers