Question Answering
Question answering (QA) research aims to develop systems that respond accurately and efficiently to diverse questions posed in natural language. Current efforts focus on improving the robustness and efficiency of QA models, particularly in handling long contexts, ambiguous queries, and knowledge conflicts, often leveraging large language models (LLMs) and retrieval-augmented generation (RAG) architectures. This work underpins applications such as information retrieval, conversational AI, and educational tools, improving both the accuracy and the accessibility of information.
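To make the RAG setup mentioned above concrete, the following is a minimal, illustrative Python sketch rather than any listed paper's method: passages are ranked by simple term overlap as a stand-in for a real retriever, and the assembled prompt would be sent to an LLM (the corpus, `retrieve`, and `build_prompt` are hypothetical names introduced here for illustration).

```python
# Minimal retrieval-augmented QA sketch: rank passages by term overlap,
# then assemble a grounded prompt for a downstream language model.
from collections import Counter

# Toy corpus standing in for a real document index (assumption for illustration).
CORPUS = [
    "Retrieval-augmented generation grounds answers in retrieved passages.",
    "Conversational QA systems must resolve references across dialogue turns.",
    "Knowledge conflicts arise when retrieved evidence contradicts model memory.",
]

def retrieve(question: str, corpus: list, k: int = 2) -> list:
    """Rank passages by term overlap with the question (stand-in for a dense or sparse retriever)."""
    q_terms = Counter(question.lower().split())

    def score(passage: str) -> int:
        p_terms = Counter(passage.lower().split())
        return sum((q_terms & p_terms).values())

    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(question: str, passages: list) -> str:
    """Concatenate retrieved evidence with the question; the result would be passed to an LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    question = "How does retrieval-augmented generation handle knowledge conflicts?"
    passages = retrieve(question, CORPUS)
    print(build_prompt(question, passages))
```

In a production pipeline the overlap scorer would be replaced by a trained retriever and the prompt by an actual LLM call, but the retrieve-then-generate structure is the same.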
Papers
KITLM: Domain-Specific Knowledge InTegration into Language Models for Question Answering
Ankush Agarwal, Sakharam Gawade, Amar Prakash Azad, Pushpak Bhattacharyya
Prompt Guided Copy Mechanism for Conversational Question Answering
Yong Zhang, Zhitao Li, Jianzong Wang, Yiming Gao, Ning Cheng, Fengying Yu, Jing Xiao
Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
KoBBQ: Korean Bias Benchmark for Question Answering
Jiho Jin, Jiseon Kim, Nayeon Lee, Haneul Yoo, Alice Oh, Hwaran Lee
No that's not what I meant: Handling Third Position Repair in Conversational Question Answering
Vevake Balaraman, Arash Eshghi, Ioannis Konstas, Ioannis Papaioannou