Global Evaluation
Global evaluation across scientific domains focuses on developing robust, reliable methods for assessing the performance of models and systems, addressing challenges such as data diversity, evolving data distributions, and the need for human-centered metrics. Current research emphasizes comprehensive benchmarks and evaluation frameworks that incorporate techniques such as Item Response Theory and multi-faceted metrics beyond simple accuracy, applied to diverse model architectures including Large Language Models (LLMs), Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs). These advances are crucial for ensuring the trustworthiness and effectiveness of AI systems in applications ranging from medical diagnosis to autonomous driving, and for fostering reproducible, comparable research within the scientific community.
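To make the Item Response Theory idea concrete, the sketch below fits a two-parameter (2PL) IRT model to a binary model-vs-benchmark-item response matrix, jointly estimating model ability and item difficulty/discrimination rather than reporting raw accuracy alone. This is a minimal illustration under simplified assumptions (simulated responses, plain gradient descent, illustrative hyperparameters), not the method of any specific paper listed here.

```python
# Minimal 2PL IRT sketch for benchmark evaluation (illustrative only).
# responses[i, j] = 1 if model i answered item j correctly, else 0.
# All names and hyperparameters are assumptions, not drawn from the cited papers.
import numpy as np

rng = np.random.default_rng(0)

n_models, n_items = 5, 40
# Simulated data stands in for real benchmark runs.
responses = rng.integers(0, 2, size=(n_models, n_items)).astype(float)

# Latent parameters: ability per model, difficulty and discrimination per item.
theta = np.zeros(n_models)   # model ability
b = np.zeros(n_items)        # item difficulty
log_a = np.zeros(n_items)    # log item discrimination (kept positive via exp)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n_steps = 0.05, 2000
for _ in range(n_steps):
    a = np.exp(log_a)
    # 2PL model: P(correct | theta_i, a_j, b_j) = sigmoid(a_j * (theta_i - b_j))
    logits = a[None, :] * (theta[:, None] - b[None, :])
    p = sigmoid(logits)
    err = p - responses  # gradient of the Bernoulli NLL w.r.t. the logits

    # Chain rule through the 2PL parameterization.
    grad_theta = (err * a[None, :]).sum(axis=1)
    grad_b = (-err * a[None, :]).sum(axis=0)
    grad_log_a = (err * (theta[:, None] - b[None, :]) * a[None, :]).sum(axis=0)

    theta -= lr * grad_theta / n_items
    b -= lr * grad_b / n_models
    log_a -= lr * grad_log_a / n_models
    theta -= theta.mean()  # fix the location indeterminacy of the latent scale

print("Estimated model abilities:", np.round(theta, 2))
print("Hardest items (highest difficulty):", np.argsort(-b)[:5])
```

On real benchmark data, the estimated abilities give a ranking of models that discounts easy or uninformative items, and the per-item difficulty and discrimination estimates help identify which benchmark items actually separate strong systems from weak ones.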
Papers
The African Woman is Rhythmic and Soulful: Evaluation of Open-ended Generation for Implicit Biases
Serene Lim
Evaluation of Text-to-Video Generation Models: A Dynamics Perspective
Mingxiang Liao, Hannan Lu, Xinyu Zhang, Fang Wan, Tianyu Wang, Yuzhong Zhao, Wangmeng Zuo, Qixiang Ye, Jingdong Wang
Overcoming Common Flaws in the Evaluation of Selective Classification Systems
Jeremias Traub, Till J. Bungert, Carsten T. Lüth, Michael Baumgartner, Klaus H. Maier-Hein, Lena Maier-Hein, Paul F Jaeger
X-ray Made Simple: Radiology Report Generation and Evaluation with Layman's Terms
Kun Zhao, Chenghao Xiao, Chen Tang, Bohao Yang, Kai Ye, Noura Al Moubayed, Liang Zhan, Chenghua Lin
TALEC: Teach Your LLM to Evaluate in Specific Domain with In-house Criteria by Criteria Division and Zero-shot Plus Few-shot
Kaiqi Zhang, Shuai Yuan, Honghan Zhao
Evaluation of Language Models in the Medical Context Under Resource-Constrained Settings
Andrea Posada, Daniel Rueckert, Felix Meissen, Philip Müller
neuROSym: Deployment and Evaluation of a ROS-based Neuro-Symbolic Model for Human Motion Prediction
Sariah Mghames, Luca Castri, Marc Hanheide, Nicola Bellotto
Evaluation of Instruction-Following Ability for Large Language Models on Story-Ending Generation
Rem Hida, Junki Ohmura, Toshiyuki Sekiya
ADVSCORE: A Metric for the Evaluation and Creation of Adversarial Benchmarks
Yoo Yeon Sung, Eve Fleisig, Ishani Mondal, Jordan Lee Boyd-Graber
What Do VLMs NOTICE? A Mechanistic Interpretability Pipeline for Gaussian-Noise-free Text-Image Corruption and Evaluation
Michal Golovanevsky, William Rudman, Vedant Palit, Ritambhara Singh, Carsten Eickhoff
Depth $F_1$: Improving Evaluation of Cross-Domain Text Classification by Measuring Semantic Generalizability
Parker Seegmiller, Joseph Gatto, Sarah Masud Preum
EasyECR: A Library for Easy Implementation and Evaluation of Event Coreference Resolution Models
Yuncong Li, Tianhua Xu, Sheng-hua Zhong, Haiqin Yang