Global Evaluation
Global evaluation in various scientific domains focuses on developing robust and reliable methods for assessing the performance of models and systems, addressing challenges such as data diversity, evolving data distributions, and the need for human-centered metrics. Current research emphasizes comprehensive benchmarks and evaluation frameworks that incorporate techniques such as Item Response Theory and multi-faceted metrics beyond simple accuracy, and that span a range of model architectures, including Large Language Models (LLMs), Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs). These advances are crucial for ensuring the trustworthiness and effectiveness of AI systems in applications from medical diagnosis to autonomous driving, and for fostering reproducible, comparable research within the scientific community.
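To make the Item Response Theory idea concrete, the sketch below shows how a one-parameter logistic (Rasch) IRT model could be fit to a binary model-versus-item correctness matrix, yielding a latent ability score per model and a difficulty score per benchmark item. The response matrix, parameter names, and gradient-ascent fitting procedure are illustrative assumptions for exposition, not the method of any paper listed here.

```python
# Minimal sketch: Rasch (1PL) IRT fit by gradient ascent on the Bernoulli
# log-likelihood, where P(model i answers item j correctly) = sigmoid(theta_i - b_j).
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def fit_rasch(responses, lr=0.05, n_iters=2000):
    """Estimate model abilities (theta) and item difficulties (b) from a
    binary correctness matrix of shape (n_models, n_items)."""
    n_models, n_items = responses.shape
    theta = np.zeros(n_models)   # latent ability per evaluated model
    b = np.zeros(n_items)        # latent difficulty per benchmark item
    for _ in range(n_iters):
        p = sigmoid(theta[:, None] - b[None, :])
        resid = responses - p            # gradient of the log-likelihood
        theta += lr * resid.sum(axis=1) / n_items
        b -= lr * resid.sum(axis=0) / n_models
        b -= b.mean()                    # fix the scale: mean difficulty = 0
    return theta, b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical correctness matrix: 4 models x 8 benchmark items.
    responses = (rng.random((4, 8)) < np.array([0.9, 0.7, 0.5, 0.3])[:, None]).astype(float)
    theta, b = fit_rasch(responses)
    print("model abilities  :", np.round(theta, 2))
    print("item difficulties:", np.round(b, 2))
```

Compared with raw accuracy, the fitted ability scores discount easy items and reward success on hard ones, which is one reason IRT-style analysis appears in recent benchmark-construction work.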
Papers
AdaptiX -- A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics
Max Pascher, Felix Ferdinand Goldau, Kirill Kronhardt, Udo Frese, Jens Gerken
MindLLM: Pre-training Lightweight Large Language Model from Scratch, Evaluations and Domain Applications
Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Heyan Huang, Yang Gao
Recent Advances in Multi-modal 3D Scene Understanding: A Comprehensive Survey and Evaluation
Yinjie Lei, Zixuan Wang, Feng Chen, Guoqing Wang, Peng Wang, Yang Yang
An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI
Ross Gruetzemacher, Alan Chan, Kevin Frazier, Christy Manning, Štěpán Los, James Fox, José Hernández-Orallo, John Burden, Matija Franklin, Clíodhna Ní Ghuidhir, Mark Bailey, Daniel Eth, Toby Pilditch, Kyle Kilian
Evaluating Subjective Cognitive Appraisals of Emotions from Large Language Models
Hongli Zhan, Desmond C. Ong, Junyi Jessy Li
Evaluation and improvement of Segment Anything Model for interactive histopathology image segmentation
SeungKyu Kim, Hyun-Jic Oh, Seonghui Min, Won-Ki Jeong
Large Language Models Meet Open-World Intent Discovery and Recognition: An Evaluation of ChatGPT
Xiaoshuai Song, Keqing He, Pei Wang, Guanting Dong, Yutao Mou, Jingang Wang, Yunsen Xian, Xunliang Cai, Weiran Xu
End-to-end Evaluation of Practical Video Analytics Systems for Face Detection and Recognition
Praneet Singh, Edward J. Delp, Amy R. Reibman
On the Evaluation and Refinement of Vision-Language Instruction Tuning Datasets
Ning Liao, Shaofeng Zhang, Renqiu Xia, Min Cao, Yu Qiao, Junchi Yan
Evaluation of ChatGPT Feedback on ELL Writers' Coherence and Cohesion
Su-Youn Yoon, Eva Miszoglad, Lisa R. Pierce
Findings of the 2023 ML-SUPERB Challenge: Pre-Training and Evaluation over More Languages and Beyond
Jiatong Shi, William Chen, Dan Berrebbi, Hsiu-Hsuan Wang, Wei-Ping Huang, En-Pei Hu, Ho-Lam Chuang, Xuankai Chang, Yuxun Tang, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Shinji Watanabe
Edge Computing-Enabled Road Condition Monitoring: System Development and Evaluation
Abdulateef Daud, Mark Amo-Boateng, Neema Jakisa Owor, Armstrong Aboah, Yaw Adu-Gyamfi