Retrieval Augmented Generation
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by incorporating external knowledge sources, improving factual accuracy and reducing limitations such as hallucination. Current research focuses on optimizing retrieval strategies (e.g., hierarchical graphs, attention mechanisms, or determinantal point processes for selecting diverse, relevant information), on better integrating retrieved evidence with LLM generation (e.g., through prompting techniques and model adaptation), and on mitigating bias and ensuring fairness in RAG systems. RAG's impact is significant: it improves performance on tasks such as question answering and enables more reliable, contextually aware applications across domains including healthcare and scientific research.
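The retrieve-then-generate loop behind these systems can be made concrete with a small sketch. The snippet below is a minimal, self-contained illustration and is not drawn from any of the papers listed here: it ranks a toy corpus against a query with cosine similarity over bag-of-words vectors, packs the top-k passages into a grounded prompt, and leaves the actual model call as a clearly marked placeholder (the corpus, helper names, and `generate_answer` stub are all assumptions for illustration).

```python
# Minimal RAG sketch: retrieve top-k passages by bag-of-words cosine
# similarity, then build a context-grounded prompt for an LLM.
# The corpus, helper names, and the generate_answer stub are illustrative only.
import math
from collections import Counter

CORPUS = [
    "RAG systems retrieve external documents to ground LLM answers.",
    "Hallucinations occur when a model generates unsupported claims.",
    "Fair ranking affects which sources a RAG pipeline exposes to users.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that instructs the model to stay grounded in the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}\nAnswer:"

def generate_answer(prompt: str) -> str:
    # Placeholder for a real LLM call (hosted API or local model).
    return "[LLM output would appear here]"

if __name__ == "__main__":
    question = "Why do RAG pipelines retrieve documents before generating?"
    print(generate_answer(build_prompt(question, retrieve(question))))
```

In a production pipeline the bag-of-words scorer would typically be replaced by a dense retriever over a vector index, and the papers below study exactly the points this sketch glosses over: which passages to select, how to compress and integrate them, and how to keep the generation faithful and fair.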
Papers
Retrieval-Augmented Test Generation: How Far Are We?
Jiho Shin, Reem Aleithan, Hadi Hemmati, Song Wang
RAD-Bench: Evaluating Large Language Models Capabilities in Retrieval Augmented Dialogues
Tzu-Lin Kuo, Feng-Ting Liao, Mu-Wei Hsieh, Fu-Chieh Chang, Po-Chun Hsu, Da-Shan Shiu
Should RAG Chatbots Forget Unimportant Conversations? Exploring Importance and Forgetting with Psychological Insights
Ryuichi Sumida, Koji Inoue, Tatsuya Kawahara
Familiarity-aware Evidence Compression for Retrieval Augmented Generation
Dongwon Jung, Qin Liu, Tenghao Huang, Ben Zhou, Muhao Chen
VERA: Validation and Enhancement for Retrieval Augmented Systems
Nitin Aravind Birur, Tanay Baswa, Divyanshu Kumar, Jatan Loya, Sahil Agarwal, Prashanth Harshangi
FLARE: Fusing Language Models and Collaborative Architectures for Recommender Enhancement
Liam Hebert, Marialena Kyriakidi, Hubert Pham, Krishna Sayana, James Pine, Sukhdeep Sodhi, Ambarish Jash
Towards Fair RAG: On the Impact of Fair Ranking in Retrieval-Augmented Generation
To Eun Kim, Fernando Diaz
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models
Mengfei Liang, Archish Arun, Zekun Wu, Cristian Munoz, Jonathan Lutch, Emre Kazim, Adriano Koshiyama, Philip Treleaven
P-RAG: Progressive Retrieval Augmented Generation For Planning on Embodied Everyday Task
Weiye Xu, Min Wang, Wengang Zhou, Houqiang Li
Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse
Maojia Song, Shang Hong Sim, Rishabh Bhardwaj, Hai Leong Chieu, Navonil Majumder, Soujanya Poria
Investigating Context-Faithfulness in Large Language Models: The Roles of Memory Strength and Evidence Style
Yuepei Li, Kang Zhou, Qiao Qiao, Bach Nguyen, Qing Wang, Qi Li
Trustworthiness in Retrieval-Augmented Generation Systems: A Survey
Yujia Zhou, Yan Liu, Xiaoxi Li, Jiajie Jin, Hongjin Qian, Zheng Liu, Chaozhuo Li, Zhicheng Dou, Tsung-Yi Ho, Philip S. Yu
SFR-RAG: Towards Contextually Faithful LLMs
Xuan-Phi Nguyen, Shrey Pandit, Senthil Purushwalkam, Austin Xu, Hailin Chen, Yifei Ming, Zixuan Ke, Silvio Savarese, Caiming Xiong, Shafiq Joty
A RAG Approach for Generating Competency Questions in Ontology Engineering
Xueli Pan, Jacco van Ossenbruggen, Victor de Boer, Zhisheng Huang
LA-RAG: Enhancing LLM-based ASR Accuracy with Retrieval-Augmented Generation
Shaojun Li, Hengchao Shang, Daimeng Wei, Jiaxin Guo, Zongyao Li, Xianghui He, Min Zhang, Hao Yang
Exploring Information Retrieval Landscapes: An Investigation of Novel Evaluation Techniques and Comparative Document Splitting Methods
Esmaeil Narimissa (Australian Taxation Office), David Raithel (Australian Taxation Office)
Retro-li: Small-Scale Retrieval Augmented Generation Supporting Noisy Similarity Searches and Domain Shift Generalization
Gentiana Rashiti, Geethan Karunaratne, Mrinmaya Sachan, Abu Sebastian, Abbas Rahimi
On the Vulnerability of Applying Retrieval-Augmented Generation within Knowledge-Intensive Application Domains
Xun Xian, Ganghua Wang, Xuan Bi, Jayanth Srinivasa, Ashish Kundu, Charles Fleming, Mingyi Hong, Jie Ding