Automatic Evaluation
Automatic evaluation of generated text and other outputs from AI models, particularly large language models (LLMs), aims to provide objective, efficient alternatives to expensive and time-consuming human assessment. Current research focuses on developing metrics and frameworks that correlate better with human judgment, often using LLMs themselves as "judges" or incorporating techniques such as instruction tuning and preference optimization. By supplying reliable, scalable evaluation, these advances accelerate the development and deployment of AI systems across diverse fields, from scientific protocol generation to medical diagnosis and education.
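The "LLM-as-judge" idea described above can be illustrated with a minimal sketch: build a rubric-style prompt, send it to a judge model, and parse a numeric score from the reply. The `judge_fn` parameter and the `stub_judge` function here are assumptions for illustration, standing in for a call to any real LLM API; they are not part of any specific framework.

```python
import re

def build_judge_prompt(instruction: str, answer: str) -> str:
    # Construct a rubric-style prompt asking the judge model for a 1-5 rating.
    return (
        "You are an impartial judge. Rate the answer to the instruction "
        "on a 1-5 scale for helpfulness and accuracy.\n"
        f"Instruction: {instruction}\n"
        f"Answer: {answer}\n"
        "Reply with 'Score: <1-5>' followed by a brief justification."
    )

def parse_score(judge_reply: str):
    # Extract the numeric score; return None if the judge did not comply
    # with the requested format (a common failure mode in practice).
    m = re.search(r"Score:\s*([1-5])", judge_reply)
    return int(m.group(1)) if m else None

def evaluate(instruction: str, answer: str, judge_fn):
    # judge_fn is a placeholder for an LLM call (an assumption, not a real client).
    return parse_score(judge_fn(build_judge_prompt(instruction, answer)))

# Stubbed judge for demonstration only; a real system would query an LLM here.
def stub_judge(prompt: str) -> str:
    return "Score: 4. The answer is mostly correct and concise."

print(evaluate("Define recall.", "Recall = TP / (TP + FN).", stub_judge))  # prints 4
```

Parsing defensively (returning `None` on malformed replies) matters because judge models sometimes ignore the requested output format, and such cases must be handled rather than silently miscounted.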
Papers
See What LLMs Cannot Answer: A Self-Challenge Framework for Uncovering LLM Weaknesses
Yulong Chen, Yang Liu, Jianhao Yan, Xuefeng Bai, Ming Zhong, Yinghao Yang, Ziyi Yang, Chenguang Zhu, Yue Zhang
Evaluating the Evaluator: Measuring LLMs' Adherence to Task Evaluation Instructions
Bhuvanashree Murugadoss, Christian Poelitz, Ian Drosos, Vu Le, Nick McKenna, Carina Suzana Negreanu, Chris Parnin, Advait Sarkar