Question Generation
Automatic question generation (QG) uses computational methods to create questions from given text, supporting applications such as education, question-answering systems, and fact-checking. Current research emphasizes improving question quality (e.g., clarity, relevance, diversity), exploring cross-lingual transfer to address data scarcity in many languages, and leveraging large language models (LLMs) such as GPT-3.5 and Llama 2, often incorporating techniques like contrastive learning and reinforcement learning to enhance performance. Advances in QG have significant implications for creating more effective educational materials, building more robust conversational AI, and automating a range of knowledge-based tasks.
Papers
Towards End-to-End Open Conversational Machine Reading
Sizhe Zhou, Siru Ouyang, Zhuosheng Zhang, Hai Zhao
Closed-book Question Generation via Contrastive Learning
Xiangjue Dong, Jiaying Lu, Jianling Wang, James Caverlee
Shortcomings of Question Answering Based Factuality Frameworks for Error Localization
Ryo Kamoi, Tanya Goyal, Greg Durrett
QA Is the New KR: Question-Answer Pairs as Knowledge Bases
Wenhu Chen, William W. Cohen, Michiel De Jong, Nitish Gupta, Alessandro Presta, Pat Verga, John Wieting
Conditional Generation with a Question-Answering Blueprint
Shashi Narayan, Joshua Maynez, Reinald Kim Amplayo, Kuzman Ganchev, Annie Louis, Fantine Huot, Anders Sandholm, Dipanjan Das, Mirella Lapata