Evaluation Benchmark
Evaluation benchmarks are essential for assessing the performance of large language models (LLMs) and other AI systems across diverse tasks: they provide objective measures of capability and expose areas for improvement. Current research focuses on building comprehensive benchmarks that tackle challenges such as data contamination and bias, and that evaluate specific model functionalities (e.g., tool use, image editing, and video analysis), often introducing novel metrics and datasets. Such benchmarks support reproducible research, enable fair comparisons between models, and ultimately drive the development of more robust and reliable AI systems for real-world applications.
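Code-generation benchmarks in the HumanEval family (e.g., Qiskit HumanEval below) typically sample several completions per problem, execute them against unit tests, and report the pass@k metric. The sketch below shows the standard unbiased pass@k estimator from Chen et al. (2021); the per-problem sample counts are illustrative assumptions, not results from any of the listed papers.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total completions sampled for a problem
    c: completions that passed the unit tests
    k: evaluation budget
    """
    if n - c < k:
        # Every size-k subset must contain at least one correct sample.
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative aggregation over a small benchmark: each tuple is
# (samples generated, samples that passed) for one problem.
results = [(20, 3), (20, 0), (20, 12), (20, 1)]
for k in (1, 5, 10):
    score = float(np.mean([pass_at_k(n, c, k) for n, c in results]))
    print(f"pass@{k}: {score:.3f}")
```

Averaging the per-problem estimates, as above, yields the benchmark-level pass@k score that such leaderboards report.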
Papers
Qiskit HumanEval: An Evaluation Benchmark For Quantum Code Generative Models
Sanjay Vishwakarma, Francis Harkins, Siddharth Golecha, Vishal Sharathchandra Bajpe, Nicolas Dupuis, Luca Buratti, David Kremer, Ismael Faro, Ruchir Puri, Juan Cruz-Benito
CityBench: Evaluating the Capabilities of Large Language Models for Urban Tasks
Jie Feng, Jun Zhang, Tianhui Liu, Xin Zhang, Tianjian Ouyang, Junbo Yan, Yuwei Du, Siqi Guo, Yong Li
A Large-scale Universal Evaluation Benchmark For Face Forgery Detection
Yijun Bei, Hengrui Lou, Jinsong Geng, Erteng Liu, Lechao Cheng, Jie Song, Mingli Song, Zunlei Feng
Towards Reliable Detection of LLM-Generated Texts: A Comprehensive Evaluation Framework with CUDRT
Zhen Tao, Yanfang Chen, Dinghao Xi, Zhiyu Li, Wei Xu