Model Performance
Model performance research focuses on improving the accuracy, efficiency, and robustness of machine learning models across diverse applications. Current efforts concentrate on optimizing ensemble methods, particularly for large language models (LLMs), and on addressing challenges such as model drift and the impact of data quality and quantity on performance; common techniques include network deconvolution, adaptive sampling, and low-rank adaptation. These advances are crucial for deploying reliable AI systems in fields ranging from healthcare diagnostics to resource-constrained IoT devices, and for establishing robust evaluation methodologies that ensure trustworthy AI.
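As one concrete illustration of the techniques named above, low-rank adaptation (LoRA) freezes a pretrained weight matrix and learns only a small low-rank update, which keeps the number of trainable parameters low. The sketch below is a minimal, generic NumPy illustration of that idea; the dimensions, names, and scaling convention are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2   # layer dimensions and adaptation rank (r << d)
alpha = 4.0                # LoRA scaling factor (assumed convention: alpha / r)

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A. Because B starts at
    # zero, the adapted layer initially reproduces the pretrained output.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((1, d_in))
# Before any training, the low-rank update is zero, so outputs match.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` (here 2 x (8 x 2) = 32 values) would be updated during fine-tuning, versus 64 values in the full matrix `W`; the savings grow quadratically with layer width.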
Papers
Türkçe Dil Modellerinin Performans Karşılaştırması (Performance Comparison of Turkish Language Models)
Eren Dogan, M. Egemen Uzun, Atahan Uz, H. Emre Seyrek, Ahmed Zeer, Ezgi Sevi, H. Toprak Kesgin, M. Kaan Yuce, M. Fatih Amasyali
Examining the robustness of LLM evaluation to the distributional assumptions of benchmarks
Melissa Ailem, Katerina Marazopoulou, Charlotte Siska, James Bono