Question Answering Task
Question answering (QA) research focuses on enabling computers to answer natural-language questions accurately and reliably. Current work emphasizes improving the accuracy and efficiency of QA systems, particularly by leveraging large language models (LLMs) and incorporating external knowledge sources through retrieval-augmented generation (RAG). Key areas of investigation include enhancing model interpretability, mitigating biases and hallucinations, and optimizing retrieval strategies across diverse question types and domains. Advances in QA have significant implications for applications such as information retrieval, education, and healthcare.
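To make the retrieve-then-generate pattern behind RAG concrete, here is a minimal, self-contained sketch. It assumes a toy in-memory corpus, a simple term-overlap retriever, and a placeholder generate() step standing in for an LLM call; none of these names or components come from the papers listed below, and a real system would use a dense retriever and an actual model API.

```python
# Minimal RAG-style QA sketch (illustrative assumptions only):
# a toy corpus, term-overlap retrieval, and a stub generation step.

from collections import Counter

# Hypothetical document collection (assumption: short plain-text passages).
CORPUS = [
    "Paris is the capital of France and its largest city.",
    "The Transformer architecture relies on self-attention layers.",
    "Retrieval-augmented generation grounds answers in retrieved passages.",
]


def score(question: str, passage: str) -> int:
    """Score a passage by simple term overlap with the question."""
    q_terms = Counter(question.lower().split())
    p_terms = Counter(passage.lower().split())
    return sum((q_terms & p_terms).values())


def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k passages most similar to the question."""
    ranked = sorted(CORPUS, key=lambda p: score(question, p), reverse=True)
    return ranked[:k]


def generate(question: str, context: list[str]) -> str:
    """Placeholder for an LLM call: build a grounded prompt.

    In a real system this prompt would be sent to a language model;
    here it is simply returned to show how retrieval conditions generation.
    """
    prompt = "Answer the question using only the context below.\n"
    prompt += "\n".join(f"- {passage}" for passage in context)
    prompt += f"\nQuestion: {question}\nAnswer:"
    return prompt


if __name__ == "__main__":
    question = "What is the capital of France?"
    context = retrieve(question, k=1)
    print(generate(question, context))
```

The key design point the sketch illustrates is the separation of concerns: retrieval narrows the model's input to passages likely to contain the answer, which is how RAG systems reduce hallucination relative to answering from parametric memory alone.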
Papers
Writing your own book: A method for going from closed to open book QA to improve robustness and performance of smaller LLMs
Giorgi Kokaia, Pratyush Sinha, Yutong Jiang, Nozha Boujemaa
mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences
David Uthus, Santiago Ontañón, Joshua Ainslie, Mandy Guo