Ensemble Pruning
Ensemble pruning improves the efficiency and performance of machine learning models by selecting a smaller, well-chosen subset of classifiers from a larger ensemble. Current research focuses on efficient selection algorithms, such as those inspired by liquid democracy or hierarchical pruning, often incorporating diversity metrics so that the selected models are complementary rather than redundant. This work is significant because it reduces the computational cost of large ensembles, particularly in deep learning: inference becomes faster and resource consumption drops, while prediction accuracy is maintained or improved across model families including deep neural networks and gradient-boosted trees.
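As a concrete illustration, the subset-selection step can be sketched as a greedy forward search on a validation set. The minimal Python example below is a hypothetical sketch, not any specific published algorithm: each classifier is represented only by its vector of validation predictions, and the pruned ensemble grows while majority-vote accuracy keeps improving.

```python
from collections import Counter

def majority_vote(preds):
    # preds: list of per-classifier prediction lists; vote column-wise
    return [Counter(col).most_common(1)[0][0] for col in zip(*preds)]

def accuracy(pred, y):
    return sum(p == t for p, t in zip(pred, y)) / len(y)

def prune_ensemble(all_preds, y, max_size):
    """Greedy forward selection: repeatedly add the classifier whose
    inclusion yields the best majority-vote validation accuracy,
    stopping when no candidate improves on the current subset."""
    selected, best_acc = [], 0.0
    remaining = list(range(len(all_preds)))
    while remaining and len(selected) < max_size:
        gains = [
            (accuracy(majority_vote([all_preds[j] for j in selected + [i]]), y), i)
            for i in remaining
        ]
        acc, idx = max(gains)
        if acc < best_acc:  # no candidate helps; stop early
            break
        best_acc = acc
        selected.append(idx)
        remaining.remove(idx)
    return selected, best_acc

# Illustrative toy data: four classifiers' predictions on four examples
y = [0, 1, 1, 0]
preds = [
    [0, 1, 1, 0],  # accurate classifier
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 1],  # poor classifier
]
subset, acc = prune_ensemble(preds, y, max_size=3)
```

Real pruning methods typically add a diversity term to the selection criterion (e.g., penalizing pairwise prediction agreement) so the search does not simply collect near-duplicates of the strongest model; this sketch optimizes validation accuracy alone for brevity.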