Global Evaluation
Global evaluation across scientific domains focuses on developing robust, reliable methods for assessing the performance of models and systems, addressing challenges such as data diversity, shifting data distributions, and the need for human-centered metrics. Current research emphasizes comprehensive benchmarks and evaluation frameworks that incorporate techniques such as Item Response Theory and multi-faceted metrics beyond simple accuracy, applied to a range of model architectures including Large Language Models (LLMs), Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs). These advances are crucial for ensuring the trustworthiness and effectiveness of AI systems in applications from medical diagnosis to autonomous driving, and for fostering reproducible, comparable research within the scientific community.
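Item Response Theory, mentioned above, models the probability that a system answers a benchmark item correctly as a function of the system's latent ability and per-item difficulty and discrimination, which lets evaluations weight items rather than average raw accuracy. The sketch below shows the standard two-parameter logistic (2PL) form; the parameter values and the comparison of two hypothetical systems are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a system
    with latent ability `theta` answers an item with discrimination `a`
    and difficulty `b` correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical benchmark: three items of increasing difficulty, scored
# for two systems with different estimated abilities.
difficulties = np.array([-1.0, 0.0, 1.5])
discriminations = np.array([1.2, 0.8, 1.5])
for theta in (-0.5, 1.0):
    probs = irt_2pl(theta, discriminations, difficulties)
    print(f"theta={theta:+.1f}: per-item success probabilities {np.round(probs, 2)}")
```

In practice, the ability and item parameters are fitted jointly from a response matrix (systems by items), so harder, more discriminative items contribute more information to the final ranking than easy ones.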
Papers
ASTRID -- An Automated and Scalable TRIaD for the Evaluation of RAG-based Clinical Question Answering Systems
Mohita Chowdhury, Yajie Vera He, Aisling Higham, Ernest Lim
Unsupervised Feature Construction for Anomaly Detection in Time Series -- An Evaluation
Marine Hamon, Vincent Lemaire, Nour Eddine Yassine Nair-Benrekia, Samuel Berlemont, Julien Cumin
Evaluation of Artificial Intelligence Methods for Lead Time Prediction in Non-Cycled Areas of Automotive Production
Cornelius Hake (1, 2), Jonas Weigele (1, 3), Frederik Reichert (3), Christian Friedrich (2) ((1) Ing. h.c. F. Porsche AG, (2) Hochschule Karlsruhe, (3) Hochschule Esslingen)
Value Compass Leaderboard: A Platform for Fundamental and Validated Evaluation of LLMs Values
Jing Yao, Xiaoyuan Yi, Shitong Duan, Jindong Wang, Yuzhuo Bai, Muhua Huang, Peng Zhang, Tun Lu, Zhicheng Dou, Maosong Sun, Xing Xie
Automating Legal Concept Interpretation with LLMs: Retrieval, Generation, and Evaluation
Kangcheng Luo, Quzhe Huang, Cong Jiang, Yansong Feng
PSYCHE: A Multi-faceted Patient Simulation Framework for Evaluation of Psychiatric Assessment Conversational Agents
Jingoo Lee, Kyungho Lim, Young-Chul Jung, Byung-Hoon Kim
Setting Standards in Turkish NLP: TR-MMLU for Large Language Model Evaluation
M. Ali Bayram, Ali Arda Fincan, Ahmet Semih Gümüş, Banu Diri, Savaş Yıldırım, Öner Aytaş
LLM-Rubric: A Multidimensional, Calibrated Approach to Automated Evaluation of Natural Language Texts
Helia Hashemi, Jason Eisner, Corby Rosset, Benjamin Van Durme, Chris Kedzie
How Well Do LLMs Generate Code for Different Application Domains? Benchmark and Evaluation
Dewu Zheng, Yanlin Wang, Ensheng Shi, Hongyu Zhang, Zibin Zheng
DeepCRCEval: Revisiting the Evaluation of Code Review Comment Generation
Junyi Lu, Xiaojia Li, Zihan Hua, Lei Yu, Shiqi Cheng, Li Yang, Fengjun Zhang, Chun Zuo
Evaluation of radiomic feature harmonization techniques for benign and malignant pulmonary nodules
Claire Huchthausen, Menglin Shi, Gabriel L.A. de Sousa, Jonathan Colen, Emery Shelley, James Larner, Krishni Wijesooriya
HammerBench: Fine-Grained Function-Calling Evaluation in Real Mobile Device Scenarios
Jun Wang, Jiamu Zhou, Muning Wen, Xiaoyun Mo, Haoyu Zhang, Qiqiang Lin, Cheng Jin, Xihuai Wang, Weinan Zhang, Qiuying Peng, Jun Wang