Model Performance
Model performance research focuses on improving the accuracy, efficiency, and robustness of machine learning models across diverse applications. Current efforts concentrate on optimizing ensemble methods, particularly for large language models (LLMs), and on addressing challenges such as model drift and the impact of data quality and quantity, often through techniques like network deconvolution, adaptive sampling, and low-rank adaptation. These advances are crucial for deploying reliable AI systems in fields ranging from healthcare diagnostics to resource-constrained IoT devices, and for establishing robust evaluation methodologies that ensure trustworthy AI.
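One of the techniques mentioned above, low-rank adaptation, can be illustrated with a minimal sketch: rather than updating a full weight matrix, training adjusts two small low-rank factors whose product is added to the frozen weights. The dimensions, scaling convention, and function names below are illustrative assumptions, not drawn from any specific paper in this list.

```python
import numpy as np

# Minimal sketch of low-rank adaptation (LoRA-style): freeze the pretrained
# weight matrix W (d_out x d_in) and train two small factors, B (d_out x r)
# and A (r x d_in) with r << min(d_out, d_in). All sizes here are assumptions.

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4              # rank r is much smaller than layer dims

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-init
alpha = 8.0                                  # scaling hyperparameter

def lora_forward(x):
    """Forward pass: frozen path plus scaled low-rank update."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, d_in))
y = lora_forward(x)

# With B zero-initialised, the adapted layer initially matches the frozen one.
assert np.allclose(y, x @ W.T)

# Parameter savings: a full update trains d_out*d_in values, the low-rank
# factors only r*(d_out + d_in).
full_params = d_out * d_in       # 8192
lora_params = r * (d_out + d_in)  # 768
```

The zero initialization of `B` is the standard trick that makes the adapted model start out identical to the frozen one, so fine-tuning begins from the pretrained behavior.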
Papers
A Meta-Learning Approach to Predicting Performance and Data Requirements
Achin Jain, Gurumurthy Swaminathan, Paolo Favaro, Hao Yang, Avinash Ravichandran, Hrayr Harutyunyan, Alessandro Achille, Onkar Dabeer, Bernt Schiele, Ashwin Swaminathan, Stefano Soatto
Safe AI for health and beyond -- Monitoring to transform a health service
Mahed Abroshan, Michael Burkhart, Oscar Giles, Sam Greenbury, Zoe Kourtzi, Jack Roberts, Mihaela van der Schaar, Jannetta S Steyn, Alan Wilson, May Yong