Global Evaluation
Global evaluation research across scientific domains focuses on developing robust, reliable methods for assessing the performance of models and systems, addressing challenges such as data diversity, evolving data distributions, and the need for human-centered metrics. Current work emphasizes comprehensive benchmarks and evaluation frameworks that incorporate techniques such as Item Response Theory and multi-faceted metrics beyond simple accuracy, applied across diverse model architectures including Large Language Models (LLMs), Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs). These advances are crucial for ensuring the trustworthiness and effectiveness of AI systems in applications ranging from medical diagnosis to autonomous driving, and for fostering reproducible, comparable research within the scientific community.
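As a concrete illustration of one technique mentioned above, the sketch below fits a two-parameter logistic (2PL) Item Response Theory model to a binary response matrix of evaluated models versus benchmark items. The response data, learning rate, and training loop are hypothetical and purely illustrative, showing how item difficulty and discrimination can be estimated from evaluation results; production IRT tools typically use marginal maximum likelihood or Bayesian estimation rather than this simple joint gradient ascent.

```python
import numpy as np

# Hypothetical binary response matrix: rows = evaluated models, columns = benchmark items
# (1 = item answered correctly, 0 = incorrect). A real benchmark would supply this.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(50, 20)).astype(float)

n_models, n_items = responses.shape

# 2PL IRT parameters: one ability per model, one difficulty and discrimination per item.
ability = np.zeros(n_models)
difficulty = np.zeros(n_items)
discrimination = np.ones(n_items)

def probability(ability, difficulty, discrimination):
    """P(correct) under the 2PL model: sigmoid(a_j * (theta_i - b_j))."""
    logits = discrimination[None, :] * (ability[:, None] - difficulty[None, :])
    return 1.0 / (1.0 + np.exp(-logits))

# Joint maximum-likelihood estimation by gradient ascent on the Bernoulli log-likelihood.
lr = 0.05
for _ in range(500):
    p = probability(ability, difficulty, discrimination)
    error = responses - p  # gradient of the log-likelihood w.r.t. the logits
    ability += lr * (error * discrimination[None, :]).sum(axis=1)
    difficulty += lr * (-error * discrimination[None, :]).sum(axis=0)
    discrimination += lr * (error * (ability[:, None] - difficulty[None, :])).sum(axis=0)

# Items with high estimated difficulty separate strong systems from weak ones.
print("hardest items:", np.argsort(difficulty)[-3:])
print("model abilities (first 5):", np.round(ability[:5], 2))
```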
Papers
Evaluation Of P300 Speller Performance Using Large Language Models Along With Cross-Subject Training
Nithin Parthasarathy, James Soetedjo, Saarang Panchavati, Nitya Parthasarathy, Corey Arnold, Nader Pouratian, William Speier
A Survey on All-in-One Image Restoration: Taxonomy, Evaluation and Future Trends
Junjun Jiang, Zengyuan Zuo, Gang Wu, Kui Jiang, Xianming Liu
Do LLMs Overcome Shortcut Learning? An Evaluation of Shortcut Challenges in Large Language Models
Yu Yuan, Lili Zhao, Kai Zhang, Guangting Zheng, Qi Liu
MCQG-SRefine: Multiple Choice Question Generation and Evaluation with Iterative Self-Critique, Correction, and Comparison Feedback
Zonghai Yao, Aditya Parashar, Huixue Zhou, Won Seok Jang, Feiyun Ouyang, Zhichao Yang, Hong Yu
Better to Ask in English: Evaluation of Large Language Models on English, Low-resource and Cross-Lingual Settings
Krishno Dey, Prerona Tarannum, Md. Arid Hasan, Imran Razzak, Usman Naseem
Evaluation of Attribution Bias in Retrieval-Augmented Large Language Models
Amin Abolghasemi, Leif Azzopardi, Seyyed Hadi Hashemi, Maarten de Rijke, Suzan Verberne
On A Scale From 1 to 5: Quantifying Hallucination in Faithfulness Evaluation
Xiaonan Jing, Srinivas Billa, Danny Godbout
DocETL: Agentic Query Rewriting and Evaluation for Complex Document Processing
Shreya Shankar, Tristan Chambers, Tarak Shah, Aditya G. Parameswaran, Eugene Wu
Dynamic Benchmarks: Spatial and Temporal Alignment for ADS Performance Evaluation
Yin-Hsiu Chen, John M. Scanlon, Kristofer D. Kusano, Timothy L. McMurry, Trent Victor
Convolutional Neural Network Design and Evaluation for Real-Time Multivariate Time Series Fault Detection in Spacecraft Attitude Sensors
Riccardo Gallon, Fabian Schiemenz, Alessandra Menicucci, Eberhard Gill
Block Induced Signature Generative Adversarial Network (BISGAN): Signature Spoofing Using GANs and Their Evaluation
Haadia Amjad, Kilian Goeller, Steffen Seitz, Carsten Knoll, Naseer Bajwa, Muhammad Imran Malik, Ronald Tetzlaff
Mitigating the Impact of Reference Quality on Evaluation of Summarization Systems with Reference-Free Metrics
Théo Gigant (L2S), Camille Guinaudeau (STL, LISN), Marc Decombas, Frédéric Dufaux (L2S)