Model Ensembling
Model ensembling combines predictions from multiple machine learning models to improve accuracy, robustness, and uncertainty quantification. Current research focuses on efficient ensembling techniques for resource-constrained environments such as edge devices; novel weighting schemes that exploit model diversity and complementarity (e.g., via topological data analysis or gradient-free optimization); and applications across diverse fields including image geolocation, natural language processing, and medical image analysis. These advances matter because they improve the reliability and performance of machine learning systems in real-world settings, particularly where a single model is insufficient or too computationally expensive.
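As a minimal sketch of the core idea, the snippet below combines the class-probability outputs of several models by weighted averaging (soft voting). The function name, the toy probability arrays, and the example weights are illustrative assumptions, not drawn from any of the papers listed; real weighting schemes (e.g., diversity-aware ones) would set the weights differently.

```python
import numpy as np

def ensemble_predict(prob_list, weights=None):
    """Combine per-model class-probability arrays by (weighted) averaging.

    prob_list: list of arrays, each of shape (n_samples, n_classes).
    weights: optional per-model weights; defaults to a uniform average.
    Returns the predicted class index for each sample.
    """
    probs = np.stack(prob_list)              # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # normalize so weights sum to 1
    avg = np.tensordot(weights, probs, axes=1)  # weighted average over models
    return avg.argmax(axis=1)                # predicted class per sample

# Three toy "models" that disagree on the second sample.
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.7, 0.3]])
p3 = np.array([[0.7, 0.3], [0.2, 0.8]])

print(ensemble_predict([p1, p2, p3]))            # equal weights -> [0 1]
print(ensemble_predict([p1, p2, p3], [1, 3, 1])) # trust model 2 more -> [0 0]
```

Note how upweighting the second model flips the ensemble's decision on the second sample, which is the basic mechanism that learned or diversity-based weighting schemes exploit.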
Papers
Consistent Explanations in the Face of Model Indeterminacy via Ensembling
Dan Ley, Leonard Tang, Matthew Nazari, Hongjin Lin, Suraj Srinivas, Himabindu Lakkaraju
A Boosted Model Ensembling Approach to Ball Action Spotting in Videos: The Runner-Up Solution to CVPR'23 SoccerNet Challenge
Luping Wang, Hao Guo, Bin Liu
Two Independent Teachers are Better Role Model
Afifa Khaled, Ahmed A. Mubarak, Kun He