Retrieval-Augmented Language Models
Retrieval-Augmented Language Models (RALMs) enhance large language models (LLMs) by incorporating external knowledge sources at inference time, with the aim of improving factual accuracy and mitigating limitations such as hallucination and outdated knowledge. Current research focuses on improving retrieval methods, refining how retrieved information is integrated into LLM generation (e.g., via in-context learning and knowledge rewriting), and developing robust evaluation frameworks to assess RALM performance across diverse tasks. The field is significant because RALMs offer a path toward more reliable, adaptable, and trustworthy LLMs, with potential applications ranging from clinical medicine to scientific research.
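To make the basic retrieve-then-generate pattern concrete, below is a minimal sketch of in-context retrieval augmentation. It assumes a toy bag-of-words retriever and a placeholder generate() function standing in for any LLM call; both are illustrative assumptions, not implementations from the papers listed here.

```python
# Minimal retrieve-then-generate sketch. The retriever (bag-of-words cosine)
# and generate() are illustrative placeholders, not any specific paper's method.
from collections import Counter
import math


def bow(text: str) -> Counter:
    """Lowercased bag-of-words representation of a passage or query."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = bow(query)
    return sorted(corpus, key=lambda p: cosine(q, bow(p)), reverse=True)[:k]


def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM inference call."""
    return f"[model output conditioned on {len(prompt)} prompt characters]"


def rag_answer(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages to the query (in-context) and generate."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)


if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower is located in Paris and was completed in 1889.",
        "Retrieval-augmented models condition generation on retrieved text.",
        "Mount Everest is the highest mountain above sea level.",
    ]
    print(rag_answer("When was the Eiffel Tower completed?", corpus))
```

In practice the bag-of-words scorer would be replaced by a dense or learned retriever and the placeholder by a real model call; the papers below study, among other things, how to make this integration robust to irrelevant retrieved context and efficient at inference time.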
Papers
Making Retrieval-Augmented Language Models Robust to Irrelevant Context
Ori Yoran, Tomer Wolfson, Ori Ram, Jonathan Berant
RA-DIT: Retrieval-Augmented Dual Instruction Tuning
Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih
BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models
Qingqing Cao, Sewon Min, Yizhong Wang, Hannaneh Hajishirzi