Human Score
Human-score research develops and evaluates automated methods for assessing outputs such as text, images, and code, with the goal of matching or exceeding human judgment. Current approaches range from adapting large language models (LLMs) as judges for scoring tasks, to contrastive fine-tuning of embedding models, to novel metrics that score generated content using the attention patterns of LLMs or generative models. This work matters because accurate, efficient automated scoring can replace slow, expensive human evaluation at scale, with applications across natural language processing, computer vision, and automated assessment of student work.
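The first two approaches are concrete enough to sketch. Below is a minimal illustration of the LLM-as-judge pattern: a rubric prompt plus score parsing. The rubric wording, the 1-5 scale, and the `call_llm` hook are hypothetical placeholders, not a specific system's API:

```python
import re

# Illustrative rubric; real systems tune the wording and scale carefully.
RUBRIC = (
    "Rate the following answer for accuracy and clarity on a scale of "
    "1 (poor) to 5 (excellent). Reply with only the integer score.\n\n"
    "Answer:\n{answer}"
)

def call_llm(prompt: str) -> str:
    """Hypothetical hook: swap in any chat-completion client here."""
    raise NotImplementedError

def score_answer(answer: str) -> int | None:
    """Ask the judge model for a score and parse the first valid digit."""
    reply = call_llm(RUBRIC.format(answer=answer))
    match = re.search(r"[1-5]", reply)
    return int(match.group()) if match else None  # None = unparseable reply
```

Contrastive fine-tuning of an embedding scorer typically optimizes an InfoNCE-style loss over in-batch negatives. A sketch in PyTorch, assuming pairs of texts that human raters judged similar serve as positives:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor: torch.Tensor, positive: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over in-batch negatives for (batch, dim) embedding pairs.

    Row i of `anchor` and row i of `positive` form a human-judged match;
    every other row in the batch acts as a negative for row i.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.T / temperature                     # cosine similarities
    labels = torch.arange(a.size(0), device=a.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```

Pulling human-matched pairs together and pushing unmatched pairs apart is what lets raw embedding similarity stand in as a proxy score at inference time.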