Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) and other machine learning models by incorporating external knowledge sources during inference, improving accuracy and mitigating limitations such as hallucinations and factual errors. Current research focuses on optimizing retrieval methods (e.g., using graph structures, determinantal point processes, or hierarchical representations), improving the integration of retrieved information with LLMs (e.g., through reasoning modules and adaptive retrieval strategies), and applying RAG across diverse domains, including autonomous vehicles, robotics, and biomedical applications. This approach improves the reliability and efficiency of AI systems, particularly in knowledge-intensive tasks where access to, and effective use of, external information are crucial.
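The core loop described above (retrieve relevant external passages, then condition generation on them) can be illustrated with a minimal sketch. Everything here is a toy assumption for clarity: the "embedding" is a bag-of-words counter, similarity is cosine over token counts, and the final prompt would be sent to an LLM of your choice; real systems use dense encoders, vector indexes, and the retrieval and integration strategies the papers below study.

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': lowercase bag-of-words token counts (not a real encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k passages most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, passages):
    """Assemble the retrieved context and question into a grounded prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer:")

corpus = [
    "RAG retrieves external documents and conditions generation on them.",
    "Determinantal point processes encourage diverse retrieval results.",
    "Autonomous vehicles use sensor fusion for perception.",
]
query = "What does RAG condition generation on?"
prompt = build_prompt(query, retrieve(query, corpus))
# `prompt` would then be passed to an LLM for the generation step.
```

The separation of `retrieve` and `build_prompt` mirrors the two axes of current research noted above: better retrieval (what ends up in `passages`) and better integration (how the context is presented to, and reasoned over by, the model).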
Papers
AutoRAG: Automated Framework for optimization of Retrieval Augmented Generation Pipeline
Dongkyu Kim, Byoungwook Kim, Donggeon Han, Matouš Eibich
LLMs are Biased Evaluators But Not Biased for Retrieval Augmented Generation
Yen-Shan Chen, Jing Jin, Peng-Ting Kuo, Chao-Wei Huang, Yun-Nung Chen
Plan×RAG: Planning-guided Retrieval Augmented Generation
Prakhar Verma, Sukruta Prakash Midigeshi, Gaurav Sinha, Arno Solin, Nagarajan Natarajan, Amit Sharma
MIRAGE-Bench: Automatic Multilingual Benchmark Arena for Retrieval-Augmented Generation Systems
Nandan Thakur, Suleman Kazi, Ge Luo, Jimmy Lin, Amin Ahmad
Failing Forward: Improving Generative Error Correction for ASR with Synthetic Data and Retrieval Augmentation
Sreyan Ghosh, Mohammad Sadegh Rasooli, Michael Levit, Peidong Wang, Jian Xue, Dinesh Manocha, Jinyu Li