Model Performance
Model performance research focuses on improving the accuracy, efficiency, and robustness of machine learning models across diverse applications. Current efforts concentrate on optimizing ensemble methods, particularly for large language models (LLMs), and on addressing challenges such as model drift and the influence of data quality and quantity, often through techniques such as network deconvolution, adaptive sampling, and low-rank adaptation. These advances are crucial for deploying reliable AI systems in fields ranging from healthcare diagnostics to resource-constrained IoT devices, and for establishing robust evaluation methodologies that ensure trustworthy AI.
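To make one of the named techniques concrete, the sketch below shows a minimal low-rank adaptation (LoRA) wrapper around a linear layer: the pretrained weights are frozen and only a small low-rank correction is trained. The class name LoRALinear and the rank/alpha values are illustrative assumptions for this sketch, not details taken from any of the papers listed below.

```python
# Minimal, hypothetical LoRA sketch: frozen base weights plus a trainable
# low-rank update W + (alpha / rank) * B @ A. Names and defaults are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained projection
        # A projects inputs down to the low-rank space; B projects back up.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Usage: wrap a 512->512 projection; only A and B (2 * 512 * 8 parameters) are trainable.
layer = LoRALinear(nn.Linear(512, 512), rank=8)
out = layer(torch.randn(4, 512))
```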
Papers
Cosmos-LLaVA: Chatting with the Visual
Ahmed Zeer, Eren Dogan, Yusuf Erdem, Elif Ince, Osama Shbib, M. Egemen Uzun, Atahan Uz, M. Kaan Yuce, H. Toprak Kesgin, M. Fatih Amasyali
CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs
Abhas Kumar, Kapil Pathak, Rajesh Kavuru, Prabhakar Srinivasan
ChemTEB: Chemical Text Embedding Benchmark, an Overview of Embedding Models Performance & Efficiency on a Specific Domain
Ali Shiraee Kasmaee, Mohammad Khodadad, Mohammad Arshi Saloot, Nick Sherck, Stephen Dokas, Hamidreza Mahyar, Soheila Samiee
Predictive Models in Sequential Recommendations: Bridging Performance Laws with Data Quality Insights
Tingjia Shen, Hao Wang, Chuhan Wu, Jin Yao Chin, Wei Guo, Yong Liu, Huifeng Guo, Defu Lian, Ruiming Tang, Enhong Chen