Question Answering
Question answering (QA) research aims to develop systems that respond accurately and efficiently to diverse questions posed in natural language. Current efforts focus on improving the robustness and efficiency of QA models, particularly when handling long contexts, ambiguous queries, and knowledge conflicts, often by leveraging large language models (LLMs) and retrieval-augmented generation (RAG) architectures. These advances matter for applications such as information retrieval, conversational AI, and educational tools, improving both the accuracy and the accessibility of information.
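To make the RAG-style QA setup mentioned above concrete, here is a minimal sketch of the retrieve-then-generate loop. It assumes an in-memory passage store, a token-overlap retriever standing in for BM25 or dense retrieval, and a prompt that would be passed to an LLM; all names (`retrieve`, `build_prompt`, the sample passages) are illustrative, not from any of the papers listed below.

```python
# Minimal retrieval-augmented QA sketch (hypothetical names; the retriever and
# prompt format are stand-ins for what a production RAG system would use).
import math
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())


def score(query: str, passage: str) -> float:
    # Token-overlap score normalized by passage length; a simple stand-in
    # for BM25 or dense-embedding similarity.
    q = Counter(tokenize(query))
    p = Counter(tokenize(passage))
    overlap = sum(min(q[t], p[t]) for t in q)
    return overlap / math.sqrt(len(p) + 1)


def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    # Return the top-k passages most similar to the query.
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]


def build_prompt(query: str, contexts: list[str]) -> str:
    # Retrieved evidence is prepended so the LLM answers from the passages
    # rather than from parametric memory alone.
    ctx = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return f"Context:\n{ctx}\n\nQuestion: {query}\nAnswer:"


if __name__ == "__main__":
    passages = [
        "Retrieval-augmented generation grounds answers in retrieved documents.",
        "Long-context QA requires selecting relevant evidence from many passages.",
        "Knowledge conflicts arise when retrieved text contradicts model memory.",
    ]
    query = "How does retrieval-augmented generation improve answer accuracy?"
    contexts = retrieve(query, passages)
    print(build_prompt(query, contexts))  # this prompt would be sent to an LLM
```

In practice, the retriever, the number of passages k, and the prompt template are the main levers for handling long contexts and conflicting evidence; the sketch only shows where those choices plug in.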
Papers
KNIFE: Distilling Reasoning Knowledge From Free-Text Rationales
Aaron Chan, Zhiyuan Zeng, Wyatt Lake, Brihi Joshi, Hanjie Chen, Xiang Ren
Query Enhanced Knowledge-Intensive Conversation via Unsupervised Joint Modeling
Mingzhu Cai, Siqi Bao, Xin Tian, Huang He, Fan Wang, Hua Wu
Source-Free Domain Adaptation for Question Answering with Masked Self-training
M. Yin, B. Wang, Y. Dong, C. Ling