Question Answering
Question answering (QA) research aims to build systems that respond accurately and efficiently to diverse questions posed in natural language. Current work focuses on improving the robustness and efficiency of QA models, particularly in handling long contexts, ambiguous queries, and knowledge conflicts, often by leveraging large language models (LLMs) and retrieval-augmented generation (RAG) architectures. These advances matter for applications such as information retrieval, conversational AI, and educational tools, where they improve both the accuracy and the accessibility of information.
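Several of the papers listed below build on the RAG pattern mentioned above. As background only, here is a minimal sketch of a RAG-style QA loop, assuming a toy word-overlap retriever and a placeholder generate function standing in for an LLM call; every name and document in it is illustrative and not drawn from any listed paper.

```python
# Minimal retrieval-augmented QA sketch (illustrative only; the retriever and
# the stand-in "LLM" below are toy placeholders, not any paper's method).
from collections import Counter

DOCUMENTS = [
    "Retrieval-augmented generation grounds answers in retrieved passages.",
    "Long-context models can lose information placed in the middle of the input.",
    "Ambiguous questions may require clarification before answering.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Score documents by simple word overlap with the question; return the top k."""
    q_terms = Counter(question.lower().split())
    def overlap(doc: str) -> int:
        return sum(q_terms[w] for w in doc.lower().split() if w in q_terms)
    return sorted(docs, key=overlap, reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a model here."""
    return f"[model answer conditioned on a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    """Retrieve supporting passages, then condition generation on them."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    print(answer("Why do long-context models miss information in the middle?"))
```

The retrieve-then-generate split shown here is the core design choice of RAG systems: grounding the prompt in retrieved passages is what the listed work on long contexts, ambiguity, and knowledge conflicts aims to make more reliable.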
Papers
You don't need a personality test to know these models are unreliable: Assessing the Reliability of Large Language Models on Psychometric Instruments
Bangzhao Shu, Lechen Zhang, Minje Choi, Lavinia Dunagan, Lajanugen Logeswaran, Moontae Lee, Dallas Card, David Jurgens
SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU
Evgeniia Razumovskaia, Goran Glavaš, Anna Korhonen, Ivan Vulić
Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs
Michael J. Q. Zhang, Eunsol Choi
Long-form Question Answering: An Iterative Planning-Retrieval-Generation Approach
Pritom Saha Akash, Kashob Kumar Roy, Lucian Popa, Kevin Chen-Chuan Chang
Never Lost in the Middle: Improving Large Language Models via Attention Strengthening Question Answering
Junqing He, Kunhao Pan, Xiaoqun Dong, Zhuoyang Song, Yibo Liu, Yuxin Liang, Hao Wang, Qianguo Sun, Songxin Zhang, Zejian Xie, Jiaxing Zhang
Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation
Vaishnavi Shrivastava, Percy Liang, Ananya Kumar