Essay Scoring

Automated essay scoring (AES) aims to evaluate written essays objectively and efficiently, typically by predicting scores that align with human graders' judgments. Current research focuses on improving the accuracy and fairness of AES systems, often employing large language models (LLMs) and transformer-based architectures alongside techniques like multi-trait scoring and adversarial training to mitigate biases and enhance interpretability. Robust and equitable AES systems hold significant implications for education: they streamline assessment and provide valuable feedback to students, while addressing concerns about bias and fairness in automated evaluation.
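
Agreement between predicted and human scores is conventionally measured with quadratic weighted kappa (QWK), the standard evaluation metric in AES research. As an illustration only (not drawn from any specific paper above), a minimal pure-Python sketch of QWK for integer score scales:

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa between two equal-length lists of integer scores.

    1.0 means perfect agreement; 0.0 means agreement no better than chance.
    """
    num = max_rating - min_rating + 1
    n = len(rater_a)
    # Observed confusion matrix of (human, predicted) score pairs.
    obs = [[0.0] * num for _ in range(num)]
    for a, b in zip(rater_a, rater_b):
        obs[a - min_rating][b - min_rating] += 1
    # Marginal score histograms, used to form the chance-agreement matrix.
    hist_a = Counter(a - min_rating for a in rater_a)
    hist_b = Counter(b - min_rating for b in rater_b)
    numerator = 0.0
    denominator = 0.0
    for i in range(num):
        for j in range(num):
            # Quadratic penalty: disagreements far apart on the scale cost more.
            w = ((i - j) ** 2) / ((num - 1) ** 2)
            expected = hist_a[i] * hist_b[j] / n  # chance agreement count
            numerator += w * obs[i][j]
            denominator += w * expected
    return 1.0 - numerator / denominator

# Perfect agreement yields kappa = 1.0; disagreement lowers it.
human = [1, 2, 3, 4, 2]
model = [1, 2, 3, 4, 2]
print(quadratic_weighted_kappa(human, model, 1, 4))  # → 1.0
```

Because the penalty grows quadratically with score distance, a prediction off by two points is weighted four times as heavily as one off by a single point, which matches how grading rubrics treat near-misses versus large errors.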

Papers