Clarification Question
Clarification questions in human-computer interaction aim to improve the accuracy and efficiency of information retrieval and task completion by resolving ambiguities in user requests. Current research focuses on methods that let large language models (LLMs) strategically generate and use clarification questions, often employing reinforcement learning and fine-tuning to optimize question selection and improve downstream task performance, such as code generation or open-domain question answering. These advances matter because they enable more natural and effective interactions between users and AI systems, improving both user experience and result accuracy across applications.
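To make the ask-or-answer decision concrete, here is a minimal illustrative sketch (not from any of the listed papers): a system that holds a probability distribution over candidate interpretations of a query can ask a clarification question when the distribution's entropy is high, i.e. when no single reading dominates. The function names, the entropy threshold, and the question template are all hypothetical choices for this example.

```python
import math

def interpretation_entropy(probs):
    """Shannon entropy (bits) over candidate interpretations of a query."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def should_ask_clarification(probs, threshold=0.9):
    """Ask a clarification question only when the system is genuinely
    uncertain which interpretation the user intended (hypothetical
    threshold on entropy in bits)."""
    return interpretation_entropy(probs) > threshold

def make_clarification_question(query, candidates):
    """Render a simple disambiguating question from the candidate readings."""
    options = " or ".join(candidates)
    return f"When you say '{query}', do you mean {options}?"

# Example: the query "bank" with two near-equally likely readings
candidates = ["a financial institution", "a river bank"]
probs = [0.55, 0.45]  # near-uniform distribution -> highly ambiguous
if should_ask_clarification(probs):
    print(make_clarification_question("bank", candidates))
```

In a real system the interpretation probabilities would come from the LLM itself (e.g. scores over paraphrased readings), and the decision policy could be learned with the reinforcement-learning approaches the summary mentions rather than a fixed threshold.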
Papers
Asking Clarification Questions to Handle Ambiguity in Open-Domain QA
Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung
Towards Asking Clarification Questions for Information Seeking on Task-Oriented Dialogues
Yue Feng, Hossein A. Rahmani, Aldo Lipani, Emine Yilmaz