Question Answering
Question answering (QA) research aims to develop systems that respond accurately and efficiently to diverse questions posed in natural language. Current work focuses on making QA models more robust and efficient, particularly when handling long contexts, ambiguous queries, and knowledge conflicts, often by leveraging large language models (LLMs) and retrieval-augmented generation (RAG) architectures. These advances underpin applications such as information retrieval, conversational AI, and educational tools, improving both the accuracy and the accessibility of information.
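In a typical RAG-style QA setup, a retriever first selects passages relevant to the question and a generator then conditions its answer on them. The sketch below is a minimal illustration of that flow under stated assumptions: the corpus, the lexical-overlap scorer, and the generate_answer stub are hypothetical stand-ins for a real dense retriever and an actual LLM, not the method of any paper listed here.

```python
# Minimal sketch of a retrieval-augmented QA pipeline (illustrative only).
# The corpus, scoring function, and generate_answer stub are hypothetical;
# production systems use dense/hybrid retrievers and a prompted LLM instead.

from collections import Counter

CORPUS = [
    "Retrieval-augmented generation (RAG) pairs a retriever with a generator.",
    "Question answering systems respond to questions posed in natural language.",
    "Knowledge conflicts arise when retrieved passages contradict model memory.",
]


def score(question: str, passage: str) -> int:
    """Toy lexical-overlap score standing in for a learned retriever."""
    q_tokens = Counter(question.lower().split())
    p_tokens = Counter(passage.lower().split())
    return sum((q_tokens & p_tokens).values())


def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the top-k passages ranked by the toy overlap score."""
    ranked = sorted(CORPUS, key=lambda p: score(question, p), reverse=True)
    return ranked[:k]


def generate_answer(question: str, passages: list[str]) -> str:
    """Placeholder for an LLM call conditioned on the retrieved context."""
    context = " ".join(passages)
    return f"[LLM answer to {question!r} given context: {context[:80]}...]"


if __name__ == "__main__":
    q = "What does retrieval-augmented generation pair together?"
    print(generate_answer(q, retrieve(q)))
```

The key design point the sketch preserves is the separation of retrieval from generation, which is what lets RAG systems trade off context length, query ambiguity, and knowledge conflicts at the retrieval stage rather than inside the language model.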
Papers
Few-shot Question Generation for Personalized Feedback in Intelligent Tutoring Systems
Devang Kulshreshtha, Muhammad Shayan, Robert Belfer, Siva Reddy, Iulian Vlad Serban, Ekaterina Kochmar
Explanation as Question Answering based on a Task Model of the Agent's Design
Ashok Goel, Harshvardhan Sikka, Vrinda Nandan, Jeonghyun Lee, Matt Lisle, Spencer Rugaber
Reasoning over Logically Interacted Conditions for Question Answering
Haitian Sun, William W. Cohen, Ruslan Salakhutdinov
Asking the Right Questions in Low Resource Template Extraction
Nils Holzenberger, Yunmo Chen, Benjamin Van Durme
Re-Examining Calibration: The Case of Question Answering
Chenglei Si, Chen Zhao, Sewon Min, Jordan Boyd-Graber
MEKER: Memory Efficient Knowledge Embedding Representation for Link Prediction and Question Answering
Viktoriia Chekalina, Anton Razzhigaev, Albert Sayapin, Evgeny Frolov, Alexander Panchenko
Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering
Yu-Jung Heo, Eun-Sol Kim, Woo Suk Choi, Byoung-Tak Zhang