Distractor Generation
Distractor generation is the task of automatically creating plausible but incorrect answer choices for multiple-choice questions (MCQs), a prerequisite for effective assessment and many educational applications. Current research relies heavily on large language models (LLMs) and transformer-based architectures, often combined with techniques such as retrieval augmentation, knowledge-graph integration, and variational error modeling to improve distractor quality and relevance. The field matters because high-quality distractors increase the assessment value of MCQs, reduce the need for time-consuming manual authoring, and enable more efficient, scalable testing across subjects and languages.
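To make the LLM-based approach concrete, the sketch below prompts a chat model to propose distractors for a question with a known correct answer. It is a minimal illustration, assuming the OpenAI Python SDK (v1+), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name and prompt; it is not the method of any specific paper, and production systems would typically add retrieval of related concepts, misconception modeling, and validation of the generated choices.

```python
# Minimal sketch of prompt-based distractor generation with an LLM.
# Assumptions: OpenAI Python SDK >= 1.0, OPENAI_API_KEY set in the environment,
# and an illustrative model name; the prompt wording is a hypothetical example.
import json

from openai import OpenAI

client = OpenAI()


def generate_distractors(question: str, correct_answer: str, n: int = 3) -> list[str]:
    """Ask the model for `n` plausible but incorrect answer choices."""
    prompt = (
        "You are writing a multiple-choice question.\n"
        f"Question: {question}\n"
        f"Correct answer: {correct_answer}\n"
        f"Write {n} plausible but clearly incorrect answer choices (distractors). "
        "They should reflect common misconceptions, match the correct answer in "
        "length and style, and must not be synonyms of it. "
        "Return a JSON array of strings only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-completion model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # some sampling diversity avoids near-duplicate distractors
    )
    # A real pipeline would validate the output (format, overlap with the key,
    # difficulty) before accepting it; here we simply parse the JSON array.
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    print(
        generate_distractors(
            question="Which planet has the largest volcano in the solar system?",
            correct_answer="Mars",
        )
    )
```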