Model Performance
Model performance research focuses on improving the accuracy, efficiency, and robustness of machine learning models across diverse applications. Current efforts concentrate on optimizing ensemble methods, particularly for large language models (LLMs), and on challenges such as model drift and the effect of data quality and quantity on performance. Common techniques include network deconvolution, adaptive sampling, and low-rank adaptation. These advances matter for deploying reliable AI systems in fields ranging from healthcare diagnostics to resource-constrained IoT devices, and for establishing robust evaluation methodologies that support trustworthy AI.
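Of the techniques named above, low-rank adaptation (LoRA) is easy to illustrate: a frozen weight matrix is augmented with a trainable rank-r update, so only a small fraction of parameters are tuned. The sketch below uses NumPy with hypothetical dimensions; it is a minimal illustration of the idea, not any specific paper's implementation.

```python
import numpy as np

# Hypothetical dimensions: a frozen weight matrix adapted with a
# rank-r update, the core idea of low-rank adaptation (LoRA).
d_out, d_in, r = 64, 32, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized
alpha = 1.0                             # scaling factor for the adapter

def forward(x):
    # Adapted layer: W stays frozen; only A and B would be trained.
    return W @ x + alpha * (B @ (A @ x))

x = rng.normal(size=d_in)
y = forward(x)

# The update alpha * B @ A has rank at most r, so it adds far fewer
# trainable parameters than a full d_out x d_in matrix would.
delta = alpha * B @ A
print(y.shape, np.linalg.matrix_rank(delta) <= r)
```

Because B starts at zero, the adapter is a no-op at initialization and training departs smoothly from the pretrained model, which is the usual LoRA initialization choice.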
Papers
Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition
Christian Jacobsen, Jiayuan Dong, Mehdi Khalloufi, Xun Huan, Karthik Duraisamy, Maryam Akram, Wanjiao Liu
Estimating Model Performance Under Covariate Shift Without Labels
Jakub Białek, Wojtek Kuberski, Nikolaos Perrakis, Albert Bifet
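The second paper above concerns estimating performance without labels. One common label-free approach, sketched here under the assumption of well-calibrated predicted probabilities, is confidence-based estimation: the mean of the per-sample maximum class probability estimates expected accuracy. All names and data below are illustrative, not the paper's actual method or results.

```python
import numpy as np

rng = np.random.default_rng(42)

def estimate_accuracy(probs):
    # For each sample the predicted class is the argmax. If the
    # probabilities are calibrated, P(prediction correct) for a sample
    # equals its max probability, so averaging the max probabilities
    # over unlabeled data estimates expected accuracy.
    return float(np.mean(np.max(probs, axis=1)))

# Simulated calibrated binary-classifier scores on unlabeled data.
p1 = rng.uniform(size=1000)
probs = np.column_stack([1 - p1, p1])

est = estimate_accuracy(probs)
print(f"estimated accuracy: {est:.3f}")
```

Under covariate shift this estimate tracks true accuracy only to the extent that calibration holds on the shifted input distribution, which is why such estimators are typically paired with drift detection or recalibration.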