Question Answering
Question answering (QA) research aims to develop systems that accurately and efficiently respond to diverse questions posed in natural language. Current efforts focus on improving the robustness and efficiency of QA models, particularly in handling long contexts, ambiguous queries, and knowledge conflicts, often leveraging large language models (LLMs) and retrieval-augmented generation (RAG) architectures. These advancements are significant for various applications, including information retrieval, conversational AI, and educational tools, driving improvements in both the accuracy and accessibility of information.
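For readers unfamiliar with the retrieve-then-generate pattern mentioned above, the sketch below illustrates the basic RAG-style QA loop in minimal form: retrieve a few relevant passages, then condition the answer on them. The toy corpus, bag-of-words scoring, and the call_llm stub are assumptions made purely for illustration; they do not correspond to any specific paper listed here.

```python
# Minimal, illustrative sketch of a retrieve-then-generate QA loop.
# The corpus, the bag-of-words retriever, and call_llm are placeholders;
# a real system would use a learned retriever and an actual LLM API.

from collections import Counter
import math

CORPUS = [
    "Retrieval-augmented generation grounds answers in retrieved passages.",
    "Long-context QA models must locate relevant evidence in long documents.",
    "Conflicting contexts require the reader to reconcile contradictory evidence.",
]

def bow(text):
    """Lower-cased bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, k=2):
    """Return the top-k passages most similar to the question."""
    q = bow(question)
    ranked = sorted(CORPUS, key=lambda p: cosine(q, bow(p)), reverse=True)
    return ranked[:k]

def call_llm(prompt):
    """Placeholder for an LLM call; echoes the prompt so the sketch runs."""
    return f"[model answer conditioned on]\n{prompt}"

def answer(question):
    """Retrieve evidence, then ask the model to answer from that evidence."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("How does retrieval-augmented generation handle conflicting contexts?"))
```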
Papers
A Claim Decomposition Benchmark for Long-form Answer Verification
Zhihao Zhang, Yixing Fan, Ruqing Zhang, Jiafeng Guo
Open Domain Question Answering with Conflicting Contexts
Siyi Liu, Qiang Ning, Kishaloy Halder, Wei Xiao, Zheng Qi, Phu Mon Htut, Yi Zhang, Neha Anna John, Bonan Min, Yassine Benajiba, Dan Roth
Question-Answering System for Bangla: Fine-tuning BERT-Bangla for a Closed Domain
Subal Chandra Roy, Md Motaleb Hossen Manik
ALR²: A Retrieve-then-Reason Framework for Long-context Question Answering
Huayang Li, Pat Verga, Priyanka Sen, Bowen Yang, Vijay Viswanathan, Patrick Lewis, Taro Watanabe, Yixuan Su
Cross-lingual Transfer for Automatic Question Generation by Learning Interrogative Structures in Target Languages
Seonjeong Hwang, Yunsu Kim, Gary Geunbae Lee
CALF: Benchmarking Evaluation of LFQA Using Chinese Examinations
Yuchen Fan, Xin Zhong, Heng Zhou, Yuchen Zhang, Mingyu Liang, Chengxing Xie, Ermo Hua, Ning Ding, Bowen Zhou
PCQPR: Proactive Conversational Question Planning with Reflection
Shasha Guo, Lizi Liao, Jing Zhang, Cuiping Li, Hong Chen
Enhancing Retrieval in QA Systems with Derived Feature Association
Keyush Shah, Abhishek Goyal, Isaac Wasserman