System Performance
System performance research focuses on optimizing the efficiency and accuracy of computational systems, from machine learning models to robotic controllers and even quantum computers. Current research emphasizes improved model architectures (e.g., graph-oriented databases for language models, retention-based networks for multi-agent reinforcement learning) and training techniques (e.g., hard sample mining, co-optimization of design and control), while also addressing fairness, robustness, and explainability. These advances carry implications across diverse fields, supporting the development of more efficient and reliable AI systems, improved medical diagnostics, and enhanced manufacturing processes.
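As a rough illustration of one of the training techniques mentioned above, the following is a minimal sketch of online hard sample mining: each batch is scored by per-example loss and only the top-k hardest examples drive the update. This is a generic, assumed formulation for illustration, not the method of any paper listed below; the toy logistic-regression setup, the helper names, and the hyperparameters are all hypothetical.

```python
# Illustrative sketch of online hard sample mining (assumed generic form, not
# taken from any listed paper): per batch, keep only the top-k highest-loss
# examples and update the model on those, concentrating effort on hard samples.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_loss(w, X, y):
    # Binary cross-entropy per example (no reduction across the batch).
    p = sigmoid(X @ w)
    eps = 1e-9
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def hard_mining_step(w, X, y, k, lr=0.1):
    # 1. Score every example in the batch by its current loss.
    losses = per_sample_loss(w, X, y)
    # 2. Keep the indices of the k hardest (highest-loss) examples.
    hard_idx = np.argsort(losses)[-k:]
    Xh, yh = X[hard_idx], y[hard_idx]
    # 3. Take a gradient step using only the hard subset.
    grad = Xh.T @ (sigmoid(Xh @ w) - yh) / k
    return w - lr * grad

# Toy data: 2-D points, label is the sign of the coordinate sum.
X = rng.normal(size=(256, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
for _ in range(200):
    batch = rng.choice(len(X), size=64, replace=False)
    w = hard_mining_step(w, X[batch], y[batch], k=16)

print("accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())
```

The same selection-by-loss idea carries over to deep models, where it is typically applied per mini-batch before the backward pass.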
Papers
Cheap Learning: Maximising Performance of Language Models for Social Data Science Using Minimal Data
Leonardo Castro-Gonzalez, Yi-Ling Chung, Hannah Rose Kirk, John Francis, Angus R. Williams, Pica Johansson, Jonathan Bright
The Effect of Predictive Formal Modelling at Runtime on Performance in Human-Swarm Interaction
Ayodeji O. Abioye, William Hunt, Yue Gu, Eike Schneiders, Mohammad Naiseh, Joel E. Fischer, Sarvapali D. Ramchurn, Mohammad D. Soorati, Blair Archibald, Michele Sevegnani
Enhancing the Fairness and Performance of Edge Cameras with Explainable AI
Truong Thanh Hung Nguyen, Vo Thanh Khang Nguyen, Quoc Hung Cao, Van Binh Truong, Quoc Khanh Nguyen, Hung Cao
A Fast, Performant, Secure Distributed Training Framework For Large Language Model
Wei Huang, Yinggui Wang, Anda Cheng, Aihui Zhou, Chaofan Yu, Lei Wang
Comparative Study on the Performance of Categorical Variable Encoders in Classification and Regression Tasks
Wenbin Zhu, Runwen Qiu, Ying Fu