Retrieval Augmented Generation
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by incorporating external knowledge sources to improve accuracy and address limitations such as hallucination. Current research focuses on optimizing retrieval strategies (e.g., hierarchical graphs, attention mechanisms, or determinantal point processes for selecting diverse and relevant information), improving the integration of retrieved information into LLM generation (e.g., through prompting techniques and model adaptation), and mitigating bias and ensuring fairness in RAG systems. RAG's impact is significant: it improves performance on tasks such as question answering and enables more reliable, contextually aware applications across diverse domains, including healthcare and scientific research. A minimal sketch of the retrieve-then-generate loop follows.
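The sketch below illustrates the basic RAG pattern described above: retrieve the passages most relevant to a query, then condition generation on them. It is an assumption-laden illustration, not the method of any listed paper; TF-IDF retrieval stands in for the dense or graph-based retrievers the papers study, and call_llm is a hypothetical placeholder for whatever generation backend is used.

```python
# Minimal retrieve-then-generate sketch (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy knowledge base; in practice this would be a document or passage index.
corpus = [
    "RAG grounds LLM answers in retrieved documents.",
    "Hallucination is reduced when answers cite retrieved evidence.",
    "Hierarchical graphs can organize long documents for retrieval.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call (hosted or local)."""
    return f"[model output conditioned on a prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    """Build a context-grounded prompt from retrieved passages and generate."""
    passages = retrieve(question)
    prompt = "Answer using only the context below.\n\n"
    prompt += "\n".join(f"- {p}" for p in passages)
    prompt += f"\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)

print(answer("How does RAG reduce hallucination?"))
```

The papers listed below vary this pipeline at every stage: what is retrieved (text, images, structured knowledge), how it is ranked, and how the generator is adapted to use it faithfully.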
Papers
Retrieval Augmented Spelling Correction for E-Commerce Applications
Xuan Guo, Rohit Patki, Dante Everaert, Christopher Potts
ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability
Zhongxiang Sun, Xiaoxue Zang, Kai Zheng, Yang Song, Jun Xu, Xiao Zhang, Weijie Yu, Yang Song, Han Li
Self-adaptive Multimodal Retrieval-Augmented Generation
Wenjia Zhai
SEER: Self-Aligned Evidence Extraction for Retrieval-Augmented Generation
Xinping Zhao, Dongfang Li, Yan Zhong, Boren Hu, Yibin Chen, Baotian Hu, Min Zhang
On the Capacity of Citation Generation by Large Language Models
Haosheng Qian, Yixing Fan, Ruqing Zhang, Jiafeng Guo
Can Structured Data Reduce Epistemic Uncertainty?
Shriram M S, Sushmitha S, Gayathri K S, Shahina A
FLARE: Faithful Logic-Aided Reasoning and Exploration
Erik Arakelyan, Pasquale Minervini, Pat Verga, Patrick Lewis, Isabelle Augenstein
Graph of Records: Boosting Retrieval Augmented Generation for Long-context Summarization with Graphs
Haozhen Zhang, Tao Feng, Jiaxuan You
VisRAG: Vision-based Retrieval-augmented Generation on Multi-modality Documents
Shi Yu, Chaoyue Tang, Bokai Xu, Junbo Cui, Junhao Ran, Yukun Yan, Zhenghao Liu, Shuo Wang, Xu Han, Zhiyuan Liu, Maosong Sun
STACKFEED: Structured Textual Actor-Critic Knowledge Base Editing with FeedBack
Naman Gupta, Shashank Kirtania, Priyanshu Gupta, Krishna Kariya, Sumit Gulwani, Arun Iyer, Suresh Parthasarathy, Arjun Radhakrishna, Sriram K. Rajamani, Gustavo Soares
KBLaM: Knowledge Base augmented Language Model
Xi Wang, Liana Mikaelyan, Taketomo Isazawa, James Hensman
Parenting: Optimizing Knowledge Selection of Retrieval-Augmented Language Models with Parameter Decoupling and Tailored Tuning
Yongxin Xu, Ruizhe Zhang, Xinke Jiang, Yujie Feng, Yuzhen Xiao, Xinyu Ma, Runchuan Zhu, Xu Chu, Junfeng Zhao, Yasha Wang
EasyRAG: Efficient Retrieval-Augmented Generation Framework for Automated Network Operations
Zhangchi Feng, Dongdong Kuang, Zhongyuan Wang, Zhijie Nie, Yaowei Zheng, Richong Zhang
FunnelRAG: A Coarse-to-Fine Progressive Retrieval Paradigm for RAG
Xinping Zhao, Yan Zhong, Zetian Sun, Xinshuo Hu, Zhenyu Liu, Dongfang Li, Baotian Hu, Min Zhang
Audio Captioning via Generative Pair-to-Pair Retrieval with Refined Knowledge Base
Choi Changin, Lim Sungjun, Rhee Wonjong
Beyond-RAG: Question Identification and Answer Generation in Real-Time Conversations
Garima Agrawal, Sashank Gummuluri, Cosimo Spera
Retrieval Instead of Fine-tuning: A Retrieval-based Parameter Ensemble for Zero-shot Learning
Pengfei Jin, Peng Shu, Sekeun Kim, Qing Xiao, Sifan Song, Cheng Chen, Tianming Liu, Xiang Li, Quanzheng Li
Honest AI: Fine-Tuning "Small" Language Models to Say "I Don't Know", and Reducing Hallucination in RAG
Xinxi Chen, Li Wang, Wei Wu, Qi Tang, Yiyao Liu