Question Generation
Automatic question generation (QG) uses computational methods to create questions from a given text, with applications in education, question-answering systems, and fact-checking. Current research emphasizes improving question quality (e.g., clarity, relevance, and diversity), exploring cross-lingual transfer to address data scarcity in many languages, and leveraging large language models (LLMs) such as GPT-3.5 and Llama 2, often combined with techniques like contrastive learning and reinforcement learning to enhance performance. Advances in QG support more effective educational materials, more robust conversational AI, and the automation of a range of knowledge-based tasks.
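As a minimal illustration of LLM-based QG, the sketch below prompts an instruction-tuned model to turn a passage into candidate questions. It assumes the Hugging Face `transformers` library and the publicly available `google/flan-t5-base` checkpoint; any instruction-following LLM could be substituted, and the prompt wording and sampling settings are illustrative choices, not a method from the listed papers.

```python
# Minimal sketch of LLM-based question generation (assumes the Hugging Face
# `transformers` library and the `google/flan-t5-base` model are installed).
from transformers import pipeline

# Load a general-purpose text-to-text generation pipeline.
qg = pipeline("text2text-generation", model="google/flan-t5-base")

passage = (
    "The 3GPP standard defines the architecture and protocols "
    "used by modern cellular telecommunication networks."
)

# Prompt the model to turn the passage into a question; sampling with a
# moderate temperature encourages more diverse question phrasings.
prompt = (
    "Generate a question that can be answered by the following passage:\n"
    f"{passage}"
)
outputs = qg(
    prompt,
    max_new_tokens=32,
    do_sample=True,
    temperature=0.9,
    num_return_sequences=3,
)

for i, out in enumerate(outputs, 1):
    print(f"Q{i}: {out['generated_text']}")
```

In practice, the generated candidates would typically be filtered or reranked for clarity, relevance, and answerability, which is where techniques such as contrastive learning or reinforcement learning come into play.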
Papers
TeleQnA: A Benchmark Dataset to Assess Large Language Models Telecommunications Knowledge
Ali Maatouk, Fadhel Ayed, Nicola Piovesan, Antonio De Domenico, Merouane Debbah, Zhi-Quan Luo
Evaluating Large Language Models on Controlled Generation Tasks
Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma
Diversify Question Generation with Retrieval-Augmented Style Transfer
Qi Gou, Zehua Xia, Bowen Yu, Haiyang Yu, Fei Huang, Yongbin Li, Nguyen Cam-Tu