Diverse Ensemble
Diverse ensemble methods combine predictions from multiple models to improve accuracy, robustness, and uncertainty quantification across machine learning tasks. Current research focuses on developing efficient ensemble techniques, including those based on AdaBoost, weight averaging, and contrastive learning, and on applying them to areas such as natural language processing, image classification, and time series forecasting. These methods are proving valuable for addressing challenges like limited training data, adversarial attacks, and concept drift, leading to improved performance and reliability in real-world applications.
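To make the basic idea concrete, the sketch below shows a simple diverse (heterogeneous) ensemble that averages the predicted class probabilities of several structurally different scikit-learn classifiers, including an AdaBoost model. The dataset and the particular base learners are illustrative assumptions for demonstration, not taken from any of the papers listed here.

    # Minimal sketch of a diverse ensemble via soft voting:
    # average predicted class probabilities from heterogeneous base models.
    # Dataset and model choices are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic binary classification data stands in for a real task.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Three structurally different base learners supply the "diversity".
    models = [
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=200, random_state=0),
        AdaBoostClassifier(n_estimators=200, random_state=0),
    ]

    probas = []
    for model in models:
        model.fit(X_train, y_train)
        probas.append(model.predict_proba(X_test))
        print(f"{model.__class__.__name__}: "
              f"{accuracy_score(y_test, model.predict(X_test)):.3f}")

    # Soft voting: average the probability estimates, then take the argmax.
    ensemble_proba = np.mean(probas, axis=0)
    ensemble_pred = ensemble_proba.argmax(axis=1)
    print(f"Soft-voting ensemble: {accuracy_score(y_test, ensemble_pred):.3f}")

Averaging probabilities rather than hard labels lets confident models outweigh uncertain ones; more elaborate schemes (weighted blending, stacking) replace the simple mean with learned combination weights.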
Papers
Blending Ensemble for Classification with Genetic-algorithm generated Alpha factors and Sentiments (GAS)
Quechen Yang
Graph-DPEP: Decomposed Plug and Ensemble Play for Few-Shot Document Relation Extraction with Graph-of-Thoughts Reasoning
Tao Zhang, Ning Yan, Masood Mortazavi, Hoang H. Nguyen, Zhongfen Deng, Philip S. Yu
Theoretical Limitations of Ensembles in the Age of Overparameterization
Niclas Dern, John P. Cunningham, Geoff Pleiss
1024m at SMM4H 2024: Tasks 3, 5 & 6 -- Ensembles of Transformers and Large Language Models for Medical Text Classification
Ram Mohan Rao Kadiyala, M.V.P. Chandra Sekhara Rao
A Comprehensive Comparative Study of Individual ML Models and Ensemble Strategies for Network Intrusion Detection Systems
Ismail Bibers, Osvaldo Arreche, Mustafa Abdallah