Question Answering
Question answering (QA) research aims to develop systems that accurately and efficiently respond to diverse questions posed in natural language. Current efforts focus on improving the robustness and efficiency of QA models, particularly in handling long contexts, ambiguous queries, and knowledge conflicts, often leveraging large language models (LLMs) and retrieval-augmented generation (RAG) architectures. These advancements are significant for various applications, including information retrieval, conversational AI, and educational tools, driving improvements in both the accuracy and accessibility of information.
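The retrieval-augmented generation pattern mentioned above can be sketched in a few lines: retrieve passages relevant to the question, then hand them to a generator as context. This is a toy illustration, not any specific paper's method — the corpus, word-overlap scoring (standing in for embedding similarity), and prompt format are all assumptions for the example; a real system would use a dense retriever and an LLM.

```python
# Toy sketch of the retrieval half of a RAG pipeline.
# Word overlap stands in for embedding similarity; the "generator"
# is just prompt assembly, since no LLM is available here.
from collections import Counter

# Hypothetical mini-corpus standing in for a document store.
CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "Python is a programming language created by Guido van Rossum.",
    "Retrieval-augmented generation combines a retriever with a generator.",
]

def tokenize(text: str) -> list[str]:
    return [t.strip(".,?").lower() for t in text.split()]

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question and return top-k."""
    q_tokens = Counter(tokenize(question))
    scored = sorted(
        corpus,
        key=lambda doc: sum((q_tokens & Counter(tokenize(doc))).values()),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble the augmented prompt an LLM generator would receive."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

question = "Where is the Eiffel Tower located?"
passages = retrieve(question, CORPUS)
print(build_prompt(question, passages))
```

Swapping the overlap scorer for cosine similarity over sentence embeddings, and the print for an LLM call, yields the basic architecture the surveyed papers build on.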
Papers
Unimib Assistant: designing a student-friendly RAG-based chatbot for all their needs
Chiara Antico, Stefano Giordano, Cansu Koyuturk, Dimitri Ognibene
TQA-Bench: Evaluating LLMs for Multi-Table Question Answering with Scalable Context and Symbolic Extension
Zipeng Qiu, You Peng, Guangxin He, Binhang Yuan, Chen Wang
Natural Language Understanding and Inference with MLLM in Visual Question Answering: A Survey
Jiayi Kuang, Jingyou Xie, Haohao Luo, Ronghao Li, Zhe Xu, Xianfeng Cheng, Yinghui Li, Xika Lin, Ying Shen
Task Progressive Curriculum Learning for Robust Visual Question Answering
Ahmed Akl, Abdelwahed Khamis, Zhe Wang, Ali Cheraghian, Sara Khalifa, Kewen Wang
Lexicalization Is All You Need: Examining the Impact of Lexical Knowledge in a Compositional QALD System
David Maria Schmidt, Mohammad Fazleh Elahi, Philipp Cimiano
MEG: Medical Knowledge-Augmented Large Language Models for Question Answering
Laura Cabello, Carmen Martin-Turrero, Uchenna Akujuobi, Anders Søgaard, Carlos Bobed