Knowledge Retrieval
Knowledge retrieval aims to access and integrate relevant information from diverse sources efficiently, improving both the performance and the interpretability of large language models (LLMs). Current research focuses on improving retrieval accuracy and efficiency through retrieval-augmented generation (RAG), prompt engineering (including Chain-of-Thought and self-consistency methods), and novel architectures that use an LLM's internal states to decide when and what to retrieve (adaptive retrieval). These advances matter for the reliability and factual accuracy of LLMs in applications ranging from question answering and medical diagnosis to design and education.
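To make the core RAG loop concrete, here is a minimal sketch: retrieve the passages most similar to the query, then prepend them to the prompt before generation. Everything in it is illustrative, not any listed paper's method: the toy bag-of-words retriever stands in for a learned dense encoder, and the corpus, query, and helper names (`embed`, `cosine`, `retrieve`, `build_prompt`) are hypothetical.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned
    # dense encoder (e.g., a sentence-embedding model) instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank every document by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    # Augment the query with retrieved evidence before generation --
    # the "augmented" step in retrieval-augmented generation.
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")

# Hypothetical corpus and query, purely for illustration.
corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain above sea level.",
    "Paris is the capital of France.",
]
query = "Where is the Eiffel Tower?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)  # This prompt would then be sent to an LLM for generation.
```

Adaptive retrieval, as studied in some of the papers below, adds a gating step to this loop: the system consults signals such as the model's internal confidence to decide whether retrieval is needed at all before paying its cost.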
Papers
Knowledge-Aware Query Expansion with Large Language Models for Textual and Relational Retrieval
Yu Xia, Junda Wu, Sungchul Kim, Tong Yu, Ryan A. Rossi, Haoliang Wang, Julian McAuley
Enhancing Fact Retrieval in PLMs through Truthfulness
Paul Youssef, Jörg Schlötterer, Christin Seifert
A Systematic Investigation of Knowledge Retrieval and Selection for Retrieval Augmented Generation
Xiangci Li, Jessica Ouyang
KARL: Knowledge-Aware Retrieval and Representations aid Retention and Learning in Students
Matthew Shu, Nishant Balepur, Shi Feng, Jordan Boyd-Graber
BIDER: Bridging Knowledge Inconsistency for Efficient Retrieval-Augmented LLMs via Key Supporting Evidence
Jiajie Jin, Yutao Zhu, Yujia Zhou, Zhicheng Dou