Ensemble Diversity
Ensemble diversity, the degree of difference among the models within an ensemble, is a key focus in improving machine learning performance and robustness. Current research seeks to enhance this diversity through techniques such as orthogonalizing model layers, dynamically weighting model predictions, and applying specialized pruning algorithms that select the most diverse and effective sub-ensembles. This pursuit is driven by the need for more reliable predictions, particularly under adversarial attacks, out-of-distribution data, and long-tailed distributions, with applications spanning computer vision, natural language processing, and medical diagnostics. The ultimate goal is to create ensembles that are not only accurate but also well-calibrated and robust against various forms of uncertainty.
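A minimal sketch of how ensemble diversity can be quantified, using the classic pairwise disagreement measure (the fraction of inputs on which two members predict different labels, averaged over all member pairs). This is a standard illustration, not a method from the papers below; the `pairwise_disagreement` helper and toy labels are assumptions for demonstration:

```python
import numpy as np

def pairwise_disagreement(predictions):
    """Mean fraction of samples on which each pair of ensemble
    members predicts a different class label.

    predictions: array-like of shape (n_models, n_samples) of class labels.
    Returns a value in [0, 1]; higher means a more diverse ensemble.
    """
    preds = np.asarray(predictions)
    n_models = preds.shape[0]
    total, pairs = 0.0, 0
    for i in range(n_models):
        for j in range(i + 1, n_models):
            total += float(np.mean(preds[i] != preds[j]))
            pairs += 1
    return total / pairs

# Three toy ensemble members predicting labels for five samples.
members = [
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
]
print(pairwise_disagreement(members))
```

Pruning approaches of the kind mentioned above can use such a score to pick a sub-ensemble that balances individual accuracy against mutual disagreement.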
Papers
Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness
Yanzhao Wu, Ka-Ho Chow, Wenqi Wei, Ling Liu
Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts in Underspecified Visual Tasks
Luca Scimeca, Alexander Rubinstein, Armand Mihai Nicolicioiu, Damien Teney, Yoshua Bengio