Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by incorporating external knowledge sources, improving factual accuracy and mitigating limitations such as hallucination. Current research focuses on optimizing retrieval strategies (e.g., hierarchical graphs, attention mechanisms, or determinantal point processes for selecting diverse, relevant information), improving how retrieved information is integrated into LLM generation (e.g., via prompting techniques and model adaptation), and mitigating bias to ensure fairness in RAG systems. The impact of RAG is significant: it improves performance on tasks such as question answering and enables more reliable, contextually aware applications across diverse domains, including healthcare and scientific research.
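The retrieve-then-generate loop that these papers build on can be sketched in a few lines. The example below is a minimal, self-contained illustration, not any particular paper's method: a toy keyword-overlap scorer stands in for a real dense retriever, and `call_llm` is a hypothetical placeholder rather than an actual model API.

```python
# Minimal RAG sketch (illustrative only): toy retrieval plus prompt assembly.
# Real systems use dense embeddings and a vector store; `call_llm` is a
# hypothetical stand-in, not a real API.

from collections import Counter

CORPUS = [
    "RAG retrieves external documents and conditions generation on them.",
    "Hallucinations occur when an LLM generates unsupported statements.",
    "Dense retrievers embed queries and passages into a shared vector space.",
]

def score(query: str, passage: str) -> int:
    """Score a passage by word overlap with the query (toy retriever)."""
    q_words = Counter(query.lower().split())
    p_words = Counter(passage.lower().split())
    return sum((q_words & p_words).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k passages ranked by the toy overlap score."""
    return sorted(CORPUS, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Concatenate retrieved context with the user question."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for an actual LLM call.
    return f"[LLM response conditioned on {len(prompt)} prompt characters]"

if __name__ == "__main__":
    print(call_llm(build_prompt("How does RAG reduce hallucinations?")))
```

In practice, the overlap scorer would be replaced by an embedding model with approximate nearest-neighbor search, and the prompt template would be tuned to the downstream task; the control flow, however, remains retrieve, assemble context, then generate.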
Papers
Initial Nugget Evaluation Results for the TREC 2024 RAG Track with the AutoNuggetizer Framework
Ronak Pradeep, Nandan Thakur, Shivani Upadhyay, Daniel Campos, Nick Craswell, Jimmy Lin
Adopting RAG for LLM-Aided Future Vehicle Design
Vahid Zolfaghari, Nenad Petrovic, Fengjunjie Pan, Krzysztof Lebioda, Alois Knoll
Harnessing multiple LLMs for Information Retrieval: A case study on Deep Learning methodologies in Biodiversity publications
Vamsi Krishna Kommineni, Birgitta König-Ries, Sheeba Samuel
Comprehensive and Practical Evaluation of Retrieval-Augmented Generation Systems for Medical Question Answering
Nghia Trung Ngo, Chien Van Nguyen, Franck Dernoncourt, Thien Huu Nguyen
Towards Optimizing a Retrieval Augmented Generation using Large Language Model on Academic Data
Anum Afzal, Juraj Vladika, Gentrit Fazlija, Andrei Staradubets, Florian Matthes
Refining Translations with LLMs: A Constraint-Aware Iterative Prompting Approach
Shangfeng Chen, Xiayang Shi, Pu Li, Yinlin Li, Jingjing Liu
Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle
Hui Dai, Ryan Teehan, Mengye Ren
Retrieval Augmented Time Series Forecasting
Kutay Tire, Ege Onur Taga, Muhammed Emrullah Ildiz, Samet Oymak
Trustful LLMs: Customizing and Grounding Text Generation with Knowledge Bases and Dual Decoders
Xiaofeng Zhu, Jaya Krishna Mandivarapu
Query Optimization for Parametric Knowledge Refinement in Retrieval-Augmented Large Language Models
Youan Cong, Cheng Wang, Pritom Saha Akash, Kevin Chen-Chuan Chang
Likelihood as a Performance Gauge for Retrieval-Augmented Generation
Tianyu Liu, Jirui Qi, Paul He, Arianna Bisazza, Mrinmaya Sachan, Ryan Cotterell
Unlocking Legal Knowledge with Multi-Layered Embedding-Based Retrieval
João Alberto de Oliveira Lima
Leveraging Retrieval-Augmented Generation for University Knowledge Retrieval
Arshia Hemmat, Kianoosh Vadaei, Mohammad Hassan Heydari, Afsaneh Fatemi
Exploring Knowledge Boundaries in Large Language Models for Retrieval Judgment
Zhen Zhang, Xinyu Wang, Yong Jiang, Zhuo Chen, Feiteng Mu, Mengting Hu, Pengjun Xie, Fei Huang
Sufficient Context: A New Lens on Retrieval Augmented Generation Systems
Hailey Joren, Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, Cyrus Rashtchian
Audiobox TTA-RAG: Improving Zero-Shot and Few-Shot Text-To-Audio with Retrieval-Augmented Generation
Mu Yang, Bowen Shi, Matthew Le, Wei-Ning Hsu, Andros Tjandra
LLM-R: A Framework for Domain-Adaptive Maintenance Scheme Generation Combining Hierarchical Agents and RAG
Laifa Tao, Qixuan Huang, Xianjun Wu, Weiwei Zhang, Yunlong Wu, Bin Li, Chen Lu, Xingshuo Hai