Question Answering
Question answering (QA) research aims to build systems that respond accurately and efficiently to diverse questions posed in natural language. Current work focuses on improving the robustness and efficiency of QA models, particularly for long contexts, ambiguous queries, and knowledge conflicts, often by leveraging large language models (LLMs) and retrieval-augmented generation (RAG) architectures. These advances matter for applications such as information retrieval, conversational AI, and educational tools, improving both the accuracy and the accessibility of information.
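To make the RAG idea above concrete, here is a minimal toy sketch: a retriever ranks passages by word overlap with the question, and a stand-in "generate" step conditions the answer on the retrieved context. The corpus, scoring function, and generator are illustrative assumptions only, not taken from any of the papers listed below (a real system would use a dense retriever and an LLM).

```python
import re

def tokenize(text):
    """Lowercase word tokens, punctuation stripped (toy tokenizer)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, corpus, k=2):
    """Rank passages by word overlap with the question (toy retriever)."""
    q_words = tokenize(question)
    ranked = sorted(corpus,
                    key=lambda p: len(q_words & tokenize(p)),
                    reverse=True)
    return ranked[:k]

def generate(question, passages):
    """Stand-in for an LLM: the answer is grounded in retrieved context."""
    context = " ".join(passages)
    return f"Answer to {question!r}, grounded in: {context}"

corpus = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
    "Mount Everest is the highest mountain on Earth.",
]
question = "What is the capital of France?"
top = retrieve(question, corpus)
print(generate(question, top))
```

The two-stage structure is the point: retrieval narrows a large corpus to a few relevant passages, so the generation step can stay grounded in evidence rather than relying solely on parametric knowledge.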
Papers
Ditch the Gold Standard: Re-evaluating Conversational Question Answering
Huihan Li, Tianyu Gao, Manan Goenka, Danqi Chen
Long Context Question Answering via Supervised Contrastive Learning
Avi Caciularu, Ido Dagan, Jacob Goldberger, Arman Cohan
Explanation as Question Answering based on Design Knowledge
Ashok Goel, Vrinda Nandan, Eric Gregori, Sungeun An, Spencer Rugaber
DuQM: A Chinese Dataset of Linguistically Perturbed Natural Questions for Evaluating the Robustness of Question Matching Models
Hongyu Zhu, Yan Chen, Jing Yan, Jing Liu, Yu Hong, Ying Chen, Hua Wu, Haifeng Wang