Evaluation Benchmark
Evaluation benchmarks are crucial for assessing large language models (LLMs) and other AI systems across diverse tasks: they provide objective measures of capability and expose areas for improvement. Current research focuses on building comprehensive benchmarks that address challenges such as data contamination and bias, and that target specific model functionalities (e.g., tool use, code generation, and test-time adaptation), often introducing novel metrics and datasets in the process. Such benchmarks are vital for fostering reproducible research, enabling fair comparisons between models, and ultimately driving the development of more robust and reliable AI systems with real-world applications.
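To make the common pattern behind such benchmarks concrete, here is a minimal, illustrative sketch of an evaluation loop: a dataset of prompt/reference pairs, a scoring metric, and a harness that averages per-example scores. The names (Example, exact_match, evaluate, toy_model) are assumptions for illustration and do not correspond to the API of any paper listed below.

```python
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Example:
    """One benchmark item: a prompt and its reference answer."""
    prompt: str
    reference: str


def exact_match(prediction: str, reference: str) -> float:
    """Simplest possible metric: 1.0 if the normalized strings agree, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())


def evaluate(model: Callable[[str], str],
             dataset: Iterable[Example],
             metric: Callable[[str, str], float] = exact_match) -> float:
    """Run the model on every example and average the per-example scores."""
    scores = [metric(model(ex.prompt), ex.reference) for ex in dataset]
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # Toy dataset and a stand-in "model", for demonstration only.
    dataset = [
        Example("What is 2 + 2?", "4"),
        Example("Capital of France?", "Paris"),
    ]
    toy_model = lambda prompt: "4" if "2 + 2" in prompt else "Paris"
    print(f"accuracy = {evaluate(toy_model, dataset):.2f}")
```

Real benchmarks differ mainly in what replaces exact_match (task-specific metrics, judges, or simulators) and in how the dataset is constructed to avoid contamination and bias, which is where much of the work in the papers below lies.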
Papers
FedAD-Bench: A Unified Benchmark for Federated Unsupervised Anomaly Detection in Tabular Data
Ahmed Anwar, Brian Moser, Dayananda Herurkar, Federico Raue, Vinit Hegiste, Tatjana Legler, Andreas Dengel
ToolSandbox: A Stateful, Conversational, Interactive Evaluation Benchmark for LLM Tool Use Capabilities
Jiarui Lu, Thomas Holleis, Yizhe Zhang, Bernhard Aumayer, Feng Nan, Felix Bai, Shuang Ma, Shen Ma, Mengyu Li, Guoli Yin, Zirui Wang, Ruoming Pang
UniTTA: Unified Benchmark and Versatile Framework Towards Realistic Test-Time Adaptation
Chaoqun Du, Yulin Wang, Jiayi Guo, Yizeng Han, Jie Zhou, Gao Huang
Beyond Metrics: A Critical Analysis of the Variability in Large Language Model Evaluation Frameworks
Marco AF Pimentel, Clément Christophe, Tathagata Raha, Prateek Munjal, Praveen K Kanithi, Shadab Khan
Qiskit HumanEval: An Evaluation Benchmark For Quantum Code Generative Models
Sanjay Vishwakarma, Francis Harkins, Siddharth Golecha, Vishal Sharathchandra Bajpe, Nicolas Dupuis, Luca Buratti, David Kremer, Ismael Faro, Ruchir Puri, Juan Cruz-Benito
CityBench: Evaluating the Capabilities of Large Language Model as World Model
Jie Feng, Jun Zhang, Junbo Yan, Xin Zhang, Tianjian Ouyang, Tianhui Liu, Yuwei Du, Siqi Guo, Yong Li