Paper ID: 2309.12546
Automatic Answerability Evaluation for Question Generation
Zifan Wang, Kotaro Funakoshi, Manabu Okumura
Conventional automatic evaluation metrics, such as BLEU and ROUGE, developed for natural language generation (NLG) tasks, are based on measuring the n-gram overlap between the generated and reference text. These simple metrics may be insufficient for more complex tasks, such as question generation (QG), which requires generating questions that are answerable by the reference answers. Developing a more sophisticated automatic evaluation metric thus remains an urgent problem in QG research. This work proposes PMAN (Prompting-based Metric on ANswerability), a novel automatic evaluation metric for QG that assesses whether a generated question is answerable by the reference answer. Extensive experiments demonstrate that its evaluation results are reliable and align with human evaluations. We further apply our metric to evaluate the performance of QG models, showing that our metric complements conventional metrics. Our implementation of a GPT-based QG model achieves state-of-the-art performance in generating answerable questions.
Submitted: Sep 22, 2023
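
As a rough illustration of the prompting-based idea described in the abstract, the sketch below asks a chat model whether a reference answer answers a generated question and maps the verdict to a binary score. This is a minimal sketch under stated assumptions: the OpenAI chat API, the answerability_score helper, and the yes/no prompt wording are illustrative choices, not the authors' actual prompt or scoring protocol.

    # Minimal sketch of a PMAN-style answerability check.
    # Assumes an OpenAI-style chat API; the paper's exact prompt may differ.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def answerability_score(question: str, reference_answer: str,
                            model: str = "gpt-4") -> int:
        """Return 1 if the model judges the reference answer to answer
        the generated question, else 0. (Hypothetical helper.)"""
        prompt = (
            f"Question: {question}\n"
            f"Answer: {reference_answer}\n"
            "Can the question be answered by the answer? "
            "Reply with only Yes or No."
        )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # deterministic judgment
        )
        verdict = response.choices[0].message.content.strip().lower()
        return 1 if verdict.startswith("yes") else 0

Averaging such binary judgments over a test set of (generated question, reference answer) pairs would yield a corpus-level answerability score, which is one plausible way a metric like this could be aggregated.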