System Performance
System performance research focuses on optimizing the efficiency and accuracy of various computational systems, from machine learning models to robotic controllers and even quantum computers. Current research emphasizes improving model architectures (e.g., graph-oriented databases for language models, retention-based networks for multi-agent reinforcement learning) and training techniques (e.g., hard sample mining, co-optimization of design and control), while also addressing issues like fairness, robustness, and explainability. These advancements have significant implications for diverse fields, impacting the development of more efficient and reliable AI systems, improved medical diagnostics, and enhanced manufacturing processes.
Papers
InteractiveIE: Towards Assessing the Strength of Human-AI Collaboration in Improving the Performance of Information Extraction
Ishani Mondal, Michelle Yuan, Anandhavelu N, Aparna Garimella, Francis Ferraro, Andrew Blair-Stanek, Benjamin Van Durme, Jordan Boyd-Graber
Realistically distributing object placements in synthetic training data improves the performance of vision-based object detection models
Setareh Dabiri, Vasileios Lioutas, Berend Zwartsenberg, Yunpeng Liu, Matthew Niedoba, Xiaoxuan Liang, Dylan Green, Justice Sefas, Jonathan Wilder Lavington, Frank Wood, Adam Scibior
A Review of Vision-Language Models and their Performance on the Hateful Memes Challenge
Bryan Zhao, Andrew Zhang, Blake Watson, Gillian Kearney, Isaac Dale
Effects of sub-word segmentation on performance of transformer language models
Jue Hou, Anisia Katinskaia, Anh-Duc Vu, Roman Yangarber
An Exploration into the Performance of Unsupervised Cross-Task Speech Representations for "In the Wild" Edge Applications
Heitor Guimarães, Arthur Pimentel, Anderson Avila, Mehdi Rezagholizadeh, Tiago H. Falk
FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance
Lingjiao Chen, Matei Zaharia, James Zou