Multiple Model
Multiple-model approaches in machine learning aim to improve performance, robustness, and fairness by combining the strengths of several individual models. Current research focuses on efficient model-merging techniques, such as layer-wise integration and canonical correlation analysis, as well as strategies for selecting and weighting models within ensembles, including dynamic model selection and multilingual arbitrage. This line of work matters because it addresses the limitations of any single model, yielding higher accuracy, reduced bias, and more efficient resource utilization across diverse applications, from autonomous driving to medical diagnosis.
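As a minimal illustration of the ensemble-weighting idea described above (not a method from any of the papers listed below), the sketch here combines several models' class-probability outputs via weighted soft voting; the function name and the choice of weights are assumptions for demonstration only.

```python
# Illustrative sketch: weighted soft voting over several models' outputs.
# Weights might come from, e.g., each model's validation accuracy.
def weighted_ensemble(probs_per_model, weights):
    """Average per-class probabilities across models with normalized weights.

    probs_per_model: one probability vector (list of floats) per model.
    weights: one non-negative weight per model.
    """
    total = sum(weights)
    n_classes = len(probs_per_model[0])
    combined = [0.0] * n_classes
    for probs, w in zip(probs_per_model, weights):
        for i, p in enumerate(probs):
            combined[i] += (w / total) * p
    return combined

# Three models disagree on a binary task; weighting favors the stronger ones,
# so the combined prediction leans toward class 0.
print(weighted_ensemble([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]], [3, 2, 1]))
```

Because the weights are normalized, the combined vector remains a valid probability distribution; dynamic model selection can be viewed as the special case where one model's weight is set to 1 and the rest to 0.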
Papers
SemEval-2024 Task 8: Multidomain, Multimodel and Multilingual Machine-Generated Text Detection
Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem Shelmanov, Akim Tsvigun, Osama Mohammed Afzal, Tarek Mahmoud, Giovanni Puccetti, Thomas Arnold, Chenxi Whitehouse, Alham Fikri Aji, Nizar Habash, Iryna Gurevych, Preslav Nakov
Fair Concurrent Training of Multiple Models in Federated Learning
Marie Siew, Haoran Zhang, Jong-Ik Park, Yuezhou Liu, Yichen Ruan, Lili Su, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong