Question Answering
Question answering (QA) research aims to develop systems that accurately and efficiently respond to diverse questions posed in natural language. Current efforts focus on improving the robustness and efficiency of QA models, particularly in handling long contexts, ambiguous queries, and knowledge conflicts, often leveraging large language models (LLMs) and retrieval-augmented generation (RAG) architectures. These advancements are significant for various applications, including information retrieval, conversational AI, and educational tools, driving improvements in both the accuracy and accessibility of information.
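The retrieval-augmented generation (RAG) pattern mentioned above can be sketched in a few lines: retrieve passages relevant to the question, then condition the answer generator on them. The sketch below is purely illustrative and uses a toy word-overlap retriever and a prompt template in place of the dense retrievers and LLM generators used by real systems; all function names and the sample corpus are invented for this example.

```python
# Toy retrieval-augmented QA sketch (illustrative only; real RAG systems
# use dense retrievers and LLM generators rather than word overlap).

def retrieve(question, corpus, k=1):
    """Rank corpus passages by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, corpus):
    """Prepend the retrieved context to the question, as a RAG prompt would."""
    context = " ".join(retrieve(question, corpus))
    return f"Context: {context}\nQuestion: {question}"

corpus = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
]
prompt = build_prompt("What is the capital of France?", corpus)
```

In a production pipeline the `retrieve` step would typically score passages with learned embeddings, and `prompt` would be passed to a language model rather than returned directly.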
Papers
Optimizing Retrieval-augmented Reader Models via Token Elimination
Moshe Berchansky, Peter Izsak, Avi Caciularu, Ido Dagan, Moshe Wasserblat
Robust Training for Conversational Question Answering Models with Reinforced Reformulation Generation
Magdalena Kaiser, Rishiraj Saha Roy, Gerhard Weikum
Test-Time Self-Adaptive Small Language Models for Question Answering
Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park
Primacy Effect of ChatGPT
Yiwei Wang, Yujun Cai, Muhao Chen, Yuxuan Liang, Bryan Hooi