Human Evaluation
Human evaluation in artificial intelligence, particularly for large language models (LLMs), focuses on developing reliable and efficient methods to assess model performance against human expectations. Current research emphasizes standardized evaluation frameworks, often incorporating LLM-as-a-judge approaches to automate parts of the process, while addressing biases and inconsistencies in both human and automated assessments. This work improves the trustworthiness and practical applicability of LLMs across diverse domains, from medical diagnosis to scientific synthesis, by ensuring that AI systems align with human needs and values. Robust evaluation methods are therefore essential for responsible AI development and deployment.
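As a minimal sketch of the agreement analysis these evaluation frameworks rely on, the Python snippet below compares human ratings with ratings produced by an automated judge using Cohen's kappa. The `judge_with_llm` helper is hypothetical and only marks where a real chat-model call with a scoring rubric would go; the scores shown are illustrative, not taken from any of the papers listed.

```python
from collections import Counter

def judge_with_llm(prompt: str, response: str) -> int:
    """Hypothetical LLM-as-a-judge helper (assumption, not a real API).

    A real implementation would send a rubric-based judging prompt to a
    chat model and parse the returned 1-5 rating from its output.
    """
    raise NotImplementedError("Plug in an actual LLM API call here.")

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters on categorical labels."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both raters match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence of the two raters.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Illustrative ratings on a 1-5 scale; in practice judge_scores would come
# from calling judge_with_llm on each (prompt, response) pair.
human_scores = [3, 4, 2, 5, 4, 3]
judge_scores = [3, 4, 3, 5, 4, 2]
print(f"Cohen's kappa: {cohens_kappa(human_scores, judge_scores):.2f}")
```

A kappa near 1 indicates that the automated judge reproduces human judgments well beyond chance, while values near 0 suggest the judge adds little over random agreement, which is the kind of check used when validating LLM-as-a-judge pipelines against human raters.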
Papers
Level of agreement between emotions generated by Artificial Intelligence and human evaluation: a methodological proposal
Miguel Carrasco, Cesar Gonzalez-Martin, Sonia Navajas-Torrente, Raul Dastres
Multi-Facet Counterfactual Learning for Content Quality Evaluation
Jiasheng Zheng, Hongyu Lin, Boxi Cao, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun
MSEval: A Dataset for Material Selection in Conceptual Design to Evaluate Algorithmic Models
Yash Patawari Jain, Daniele Grandi, Allin Groom, Brandon Cramer, Christopher McComb
The Two Sides of the Coin: Hallucination Generation and Detection with LLMs as Evaluators for LLMs
Anh Thu Maria Bui, Saskia Felizitas Brech, Natalie Hußfeldt, Tobias Jennert, Melanie Ullrich, Timo Breuer, Narjes Nikzad Khasmakhi, Philipp Schaer