Question Generation
Automatic question generation (QG) uses computational methods to create questions from a given text, with applications in education, question-answering systems, and fact-checking. Current research emphasizes improving question quality (e.g., clarity, relevance, diversity), exploring cross-lingual transfer to address data scarcity in many languages, and leveraging large language models (LLMs) such as GPT-3.5 and Llama 2, often with techniques like contrastive learning and reinforcement learning to enhance performance. Advances in QG support more effective educational materials, more robust conversational AI, and the automation of knowledge-based tasks.
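The LLM-based approach described above can be sketched as a simple prompt-and-parse loop. This is a minimal illustration, not any specific paper's method: the prompt wording and the `parse_questions` helper are assumptions, and a real chat-completion client would be needed to run it end to end.

```python
# Minimal sketch of prompting an LLM for question generation (QG).
# The actual model call is left out: any chat-completion API could be
# substituted where the prompt string is consumed.

def build_qg_prompt(passage: str, n_questions: int = 3) -> str:
    """Compose a prompt asking an LLM to write questions about a passage."""
    return (
        f"Read the passage below and write {n_questions} clear, diverse "
        "questions that it can answer. Number them, one per line.\n\n"
        f"Passage:\n{passage}\n\nQuestions:"
    )

def parse_questions(llm_output: str) -> list[str]:
    """Extract numbered questions ('1. ...' or '2) ...') from raw model output."""
    questions = []
    for line in llm_output.splitlines():
        line = line.strip()
        if line and line[0].isdigit():
            # Drop the leading "1." / "2)" style marker.
            questions.append(line.lstrip("0123456789.) ").strip())
    return questions
```

Quality criteria like diversity or relevance, as discussed in the surveyed work, would then be enforced by filtering or reranking the parsed questions.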
Papers
Give me Some Hard Questions: Synthetic Data Generation for Clinical QA
Fan Bai, Keith Harrigian, Joel Stremmel, Hamid Hassanzadeh, Ardavan Saeedi, Mark Dredze
Leveraging Large Language Models to Generate Course-specific Semantically Annotated Learning Objects
Dominic Lohr, Marc Berges, Abhishek Chugh, Michael Kohlhase, Dennis Müller
MIRROR: A Novel Approach for the Automated Evaluation of Open-Ended Question Generation
Aniket Deroy, Subhankar Maity, Sudeshna Sarkar
Expanding Chatbot Knowledge in Customer Service: Context-Aware Similar Question Generation Using Large Language Models
Mengze Hong, Yuanfeng Song, Di Jiang, Lu Wang, Zichang Guo, Chen Jason Zhang