Paper ID: 2409.13694

A Knowledge-Centric Benchmarking Framework and Empirical Study for Retrieval-Augmented Generation

Shuo Yu (1 and 2), Mingyue Cheng (1 and 2), Jiqian Yang (1 and 2), Jie Ouyang (1 and 2) ((1) Anhui Province Key Laboratory of Big Data Analysis and Application, University of Science and Technology of China, (2) State Key Laboratory of Cognitive Intelligence)

Retrieval-Augmented Generation (RAG) enhances generative models by integrating retrieval mechanisms that allow them to access and utilize external knowledge sources. Despite its advantages, RAG faces significant challenges, particularly in handling real-world queries effectively and mitigating hallucinations. The KDD Cup 2024 CRAG competition brings these issues to the forefront by incorporating both web pages and a mock API as knowledge sources, which adds the complexity of parsing HTML before large language models (LLMs) can process the information. In this paper, we propose a novel RAG benchmark designed to address these challenges. Our work provides a comprehensive set of experimental results, offering valuable insights for the study of RAG. We thoroughly examine the entire RAG process, including knowledge source selection, retrieval, organization, and reasoning. Key findings from our study include the impact of automated knowledge source selection using agents and the influence of noise chunks on RAG reasoning. Additionally, we conduct detailed experiments to analyze the effects of various hyperparameters on RAG performance. To support further research, we have made our results, the associated code, and a parsed version of the CRAG dataset publicly available (this https URL), contributing to the advancement of RAG methodologies and establishing a solid foundation for future work in this domain.
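
For readers unfamiliar with the four pipeline stages the abstract enumerates (knowledge source selection, retrieval, organization, and reasoning), the following is a minimal sketch of how such a pipeline fits together. All function names and the toy lexical-overlap scoring are hypothetical illustrations, not the benchmark's actual implementation, and the LLM call is stubbed out.

```python
# Illustrative sketch of the RAG stages named in the abstract:
# source selection -> retrieval -> organization -> reasoning.
# Names (select_source, overlap_score, ...) are hypothetical.

from typing import Callable, Dict, List


def overlap_score(query: str, chunk: str) -> int:
    """Toy lexical relevance: number of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))


def select_source(query: str, sources: Dict[str, List[str]]) -> str:
    """Pick the knowledge source whose best chunk overlaps most with the query."""
    def best(docs: List[str]) -> int:
        return max((overlap_score(query, d) for d in docs), default=0)
    return max(sources, key=lambda name: best(sources[name]))


def retrieve(query: str, chunks: List[str], top_k: int = 3) -> List[str]:
    """Return the top-k chunks by the toy relevance score."""
    return sorted(chunks, key=lambda c: overlap_score(query, c), reverse=True)[:top_k]


def organize(chunks: List[str]) -> str:
    """Concatenate retrieved chunks into a numbered context block."""
    return "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))


def reason(query: str, context: str, llm: Callable[[str], str]) -> str:
    """Prompt the (injected) LLM with the organized context and the query."""
    prompt = f"Answer using only the context below.\n{context}\n\nQuestion: {query}"
    return llm(prompt)


if __name__ == "__main__":
    sources = {
        "web_pages": ["parsed HTML text about a movie's box office ...",
                      "parsed HTML text about sports schedules ..."],
        "mock_api": ["structured record: movie, revenue, year ..."],
    }
    query = "What was the movie's box office revenue?"
    chosen = select_source(query, sources)
    context = organize(retrieve(query, sources[chosen]))
    # Stand-in for a real LLM call.
    answer = reason(query, context, llm=lambda p: f"(model output, prompt length {len(p)})")
    print(chosen, context, answer, sep="\n")
```

In practice each stage is where the abstract's findings apply: an agent can replace the heuristic source selector, the retriever determines how many noise chunks reach the organizer, and hyperparameters such as top_k and chunk size affect the reasoning step.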

Submitted: Sep 3, 2024