System Performance
System performance research focuses on optimizing the efficiency and accuracy of computational systems, from machine learning models to robotic controllers and quantum computers. Current work emphasizes improved model architectures (e.g., graph-oriented databases for language models, retention-based networks for multi-agent reinforcement learning) and training techniques (e.g., hard sample mining, co-optimization of design and control), while also addressing fairness, robustness, and explainability. These advances support the development of more efficient and reliable AI systems, improved medical diagnostics, and enhanced manufacturing processes.
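As a concrete illustration of one technique mentioned above, hard sample mining restricts each gradient update to the most difficult examples in a batch (those with the highest loss). The sketch below is a minimal, generic version of that selection step, not taken from any of the listed papers; the function name and the use of NumPy are assumptions for illustration.

```python
import numpy as np

def select_hard_samples(losses, k):
    """Return indices of the k samples with the highest loss.

    Hard sample mining keeps only the most difficult samples in a
    batch so that gradient updates focus on them.
    """
    losses = np.asarray(losses)
    k = min(k, losses.size)
    # argpartition finds the k largest losses in O(n) without a full sort
    idx = np.argpartition(-losses, k - 1)[:k]
    # order the selected indices by descending loss for readability
    return idx[np.argsort(-losses[idx])]

# Example: an 8-sample batch; keep the 3 hardest samples
batch_losses = [0.1, 2.3, 0.5, 1.8, 0.05, 0.9, 3.1, 0.2]
hard_idx = select_hard_samples(batch_losses, 3)
print(hard_idx)  # indices of the three largest losses: [6 1 3]
```

In a training loop, only the returned indices would contribute to the loss that is backpropagated.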
Papers
Quantifying the Importance of Data Alignment in Downstream Model Performance
Krrish Chawla, Aryan Sahai, Mario DePavia, Sudharsan Sundar, Brando Miranda
Predicting Performance of Object Detection Models in Electron Microscopy Using Random Forests
Ni Li, Ryan Jacobs, Matthew Lynch, Vidit Agrawal, Kevin Field, Dane Morgan
Exploring a Dataset's Statistical Effect Size Impact on Model Performance, and Data Sample-Size Sufficiency
Arya Hatamian, Lionel Levine, Haniyeh Ehsani Oskouie, Majid Sarrafzadeh
FOLDER: Accelerating Multi-modal Large Language Models with Enhanced Performance
Haicheng Wang, Zhemeng Yu, Gabriele Spadaro, Chen Ju, Victor Quétu, Enzo Tartaglione
Large Language Models on Small Resource-Constrained Systems: Performance Characterization, Analysis and Trade-offs
Liam Seymour, Basar Kutukcu, Sabur Baidya
Unveiling Uncertainty: A Deep Dive into Calibration and Performance of Multimodal Large Language Models
Zijun Chen, Wenbo Hu, Guande He, Zhijie Deng, Zheng Zhang, Richang Hong