System Performance
System performance research focuses on optimizing the efficiency and accuracy of computational systems, from machine learning models to robotic controllers and quantum computers. Current work emphasizes improved model architectures (e.g., graph-oriented databases for language models, retention-based networks for multi-agent reinforcement learning) and training techniques (e.g., hard sample mining, co-optimization of design and control), while also addressing fairness, robustness, and explainability. These advances support the development of more efficient and reliable AI systems, improved medical diagnostics, and enhanced manufacturing processes.
Papers
On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks
Zesen Liu, Tianshuo Cong, Xinlei He, Qi Li
Are Large Language Models Strategic Decision Makers? A Study of Performance and Bias in Two-Player Non-Zero-Sum Games
Nathan Herr, Fernando Acero, Roberta Raileanu, María Pérez-Ortiz, Zhibin Li
UAV-assisted Unbiased Hierarchical Federated Learning: Performance and Convergence Analysis
Ruslan Zhagypar, Nour Kouzayha, Hesham ElSawy, Hayssam Dahrouj, Tareq Y. Al-Naffouri
M5 -- A Diverse Benchmark to Assess the Performance of Large Multimodal Models Across Multilingual and Multicultural Vision-Language Tasks
Florian Schneider, Sunayana Sitaram
On the performance of sequential Bayesian update for database of diverse tsunami scenarios
Reika Nomura, Louise A. Hirao Vermare, Saneiki Fujita, Donsub Rim, Shuji Moriguchi, Randall J. LeVeque, Kenjiro Terada
Revisiting the Performance of Deep Learning-Based Vulnerability Detection on Realistic Datasets
Partha Chakraborty, Krishna Kanth Arumugam, Mahmoud Alfadel, Meiyappan Nagappan, Shane McIntosh
Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data
Younghun Lee, Sungchul Kim, Ryan A. Rossi, Tong Yu, Xiang Chen
GraphPipe: Improving Performance and Scalability of DNN Training with Graph Pipeline Parallelism
Byungsoo Jeon, Mengdi Wu, Shiyi Cao, Sunghyun Kim, Sunghyun Park, Neeraj Aggarwal, Colin Unger, Daiyaan Arfeen, Peiyuan Liao, Xupeng Miao, Mohammad Alizadeh, Gregory R. Ganger, Tianqi Chen, Zhihao Jia
BitNet b1.58 Reloaded: State-of-the-art Performance Also on Smaller Networks
Jacob Nielsen, Peter Schneider-Kamp
Seg-LSTM: Performance of xLSTM for Semantic Segmentation of Remotely Sensed Images
Qinfeng Zhu, Yuanzhi Cai, Lei Fan
How Many Parameters Does it Take to Change a Light Bulb? Evaluating Performance in Self-Play of Conversational Games as a Function of Model Characteristics
Nidhir Bhavsar, Jonathan Jordan, Sherzod Hakimov, David Schlangen
Modeling & Evaluating the Performance of Convolutional Neural Networks for Classifying Steel Surface Defects
Nadeem Jabbar Chaudhry, M. Bilal Khan, M. Javaid Iqbal, Siddiqui Muhammad Yasir
A New Approach for Evaluating and Improving the Performance of Segmentation Algorithms on Hard-to-Detect Blood Vessels
João Pedro Parella, Matheus Viana da Silva, Cesar Henrique Comin