Retrieval Augmented Generation
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by incorporating external knowledge sources at inference time, improving factual accuracy and mitigating limitations such as hallucination. Current research focuses on three fronts: optimizing retrieval strategies (e.g., hierarchical graphs, attention mechanisms, or determinantal point processes for selecting diverse, relevant passages); improving how retrieved information is integrated into LLM generation (e.g., via prompting techniques and model adaptation); and mitigating bias to ensure fairness in RAG systems. The impact of RAG is significant: it improves performance on tasks such as question answering and enables more reliable, contextually aware applications across diverse domains, including healthcare and scientific research.
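The core RAG loop described above (embed the query, retrieve the most relevant documents, then augment the prompt before generation) can be sketched as follows. This is a minimal illustration only: the bag-of-words `embed` function is a toy stand-in for a real dense embedding model, and the corpus, function names, and prompt template are all invented for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding": a token-count vector. A real RAG system
    # would use a learned dense embedding model here (this is an assumption
    # made purely for a self-contained sketch).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank corpus documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    # Augment the prompt with retrieved context; an LLM call would follow.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG grounds LLM answers in retrieved documents.",
    "Transformers use attention over token sequences.",
    "Hallucinations are fabricated claims made by a model.",
]
prompt = build_prompt("How does RAG reduce hallucinations?", corpus)
```

In a production system the retrieval step would query a vector index (and possibly rerank), and `build_prompt` would feed the augmented prompt to an LLM; the structure of the loop, however, is the same.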
Papers
Knowing When to Ask -- Bridging Large Language Models and Data
Prashanth Radhakrishnan, Jennifer Chen, Bo Xu, Prem Ramaswami, Hannah Pho, Adriana Olmos, James Manyika, R. V. Guha
GroUSE: A Benchmark to Evaluate Evaluators in Grounded Question Answering
Sacha Muller, António Loison, Bilel Omrani, Gautier Viaud
You Only Use Reactive Attention Slice For Long Context Retrieval
Yun Joon Soh, Hanxian Huang, Yuandong Tian, Jishen Zhao
The Role of Large Language Models in Musicology: Are We Ready to Trust the Machines?
Pedro Ramoneda, Emilia Parada-Cabaleiro, Benno Weck, Xavier Serra
In Defense of RAG in the Era of Long-Context Language Models
Tan Yu, Anbang Xu, Rama Akkiraju
A Knowledge-Centric Benchmarking Framework and Empirical Study for Retrieval-Augmented Generation
Shuo Yu, Mingyue Cheng, Jiqian Yang, Jie Ouyang (Anhui Province Key Laboratory of Big Data Analysis and Application, University of Science and Technology of China; State Key Laboratory of Cognitive Intelligence)
Benchmarking Cognitive Domains for LLMs: Insights from Taiwanese Hakka Culture
Chen-Chi Chang, Ching-Yuan Chen, Hung-Shin Lee, Chih-Cheng Lee