Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by incorporating external knowledge sources, improving accuracy and mitigating limitations such as hallucination. Current research focuses on optimizing retrieval strategies (e.g., hierarchical graphs, attention mechanisms, or determinantal point processes for selecting diverse, relevant information), improving the integration of retrieved information into LLM generation (e.g., through prompting techniques and model adaptation), and mitigating bias to ensure fairness in RAG systems. The impact of RAG is significant: it improves performance on tasks such as question answering and enables more reliable, contextually aware applications across diverse domains, including healthcare and scientific research.
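The core loop described above — retrieve relevant passages, then condition the LLM's answer on them — can be sketched in a few lines. This is a toy illustration, not any specific system from the papers below: the corpus, the bag-of-words "retriever", and the prompt template are all illustrative placeholders, and the final LLM call is left out since any model could be plugged in.

```python
# Minimal RAG sketch: a toy lexical retriever plus a grounding prompt.
# All names and data here are hypothetical, for illustration only.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a term-frequency Counter over lowercased tokens."""
    return Counter(text.lower().replace("?", " ").replace(".", " ").split())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, passages):
    """Assemble a prompt that grounds the answer in retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")

corpus = [
    "RAG retrieves external documents to ground LLM answers.",
    "Paris is the capital of France.",
    "Determinantal point processes encourage diverse retrieval.",
]
query = "What grounds LLM answers in RAG?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
# `prompt` would then be passed to an LLM of choice for generation.
```

In practice the lexical retriever would be replaced by dense embeddings or one of the more advanced strategies surveyed below (re-ranking, graph-based retrieval, diversity-aware selection), but the retrieve-then-prompt structure stays the same.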
Papers
RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models
Peng Xia, Kangyu Zhu, Haoran Li, Hongtu Zhu, Yun Li, Gang Li, Linjun Zhang, Huaxiu Yao
How do you know that? Teaching Generative Language Models to Reference Answers to Biomedical Questions
Bojana Bašaragin, Adela Ljajić, Darija Medvecki, Lorenzo Cassano, Miloš Košprdić, Nikola Milošević
RAMO: Retrieval-Augmented Generation for Enhancing MOOCs Recommendations
Jiarui Rao, Jionghao Lin
Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge
Yuanze Lin, Yunsheng Li, Dongdong Chen, Weijian Xu, Ronald Clark, Philip Torr, Lu Yuan
GPT vs RETRO: Exploring the Intersection of Retrieval and Parameter-Efficient Fine-Tuning
Aleksander Ficek, Jiaqi Zeng, Oleksii Kuchaiev
AriGraph: Learning Knowledge Graph World Models with Episodic Memory for LLM Agents
Petr Anokhin, Nikita Semenov, Artyom Sorokin, Dmitry Evseev, Mikhail Burtsev, Evgeny Burnaev
Meta-prompting Optimized Retrieval-augmented Generation
João Rodrigues, António Branco
TongGu: Mastering Classical Chinese Understanding with Knowledge-Grounded Large Language Models
Jiahuan Cao, Dezhi Peng, Peirong Zhang, Yongxin Shi, Yang Liu, Kai Ding, Lianwen Jin
CaseGPT: a case reasoning framework based on language models and retrieval-augmented generation
Rui Yang
DSLR: Document Refinement with Sentence-Level Re-ranking and Reconstruction to Enhance Retrieval-Augmented Generation
Taeho Hwang, Soyeong Jeong, Sukmin Cho, SeungYoon Han, Jong C. Park
RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs
Yue Yu, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi, Bryan Catanzaro
Why does in-context learning fail sometimes? Evaluating in-context learning on open and closed questions
Xiang Li, Haoran Tang, Siyu Chen, Ziwei Wang, Ryan Chen, Marcin Abram
Ground Every Sentence: Improving Retrieval-Augmented LLMs with Interleaved Reference-Claim Generation
Sirui Xia, Xintao Wang, Jiaqing Liang, Yifei Zhang, Weikang Zhou, Jiaji Deng, Fei Yu, Yanghua Xiao
Retrieval-augmented generation in multilingual settings
Nadezhda Chirkova, David Rau, Hervé Déjean, Thibault Formal, Stéphane Clinchant, Vassilina Nikoulina
Searching for Best Practices in Retrieval-Augmented Generation
Xiaohua Wang, Zhenghua Wang, Xuan Gao, Feiran Zhang, Yixin Wu, Zhibo Xu, Tianyuan Shi, Zhengyuan Wang, Shizheng Li, Qi Qian, Ruicheng Yin, Changze Lv, Xiaoqing Zheng, Xuanjing Huang
Learning to Explore and Select for Coverage-Conditioned Retrieval-Augmented Generation
Takyoung Kim, Kyungjae Lee, Young Rok Jang, Ji Yong Cho, Gangwoo Kim, Minseok Cho, Moontae Lee
BERGEN: A Benchmarking Library for Retrieval-Augmented Generation
David Rau, Hervé Déjean, Nadezhda Chirkova, Thibault Formal, Shuai Wang, Vassilina Nikoulina, Stéphane Clinchant
Face4RAG: Factual Consistency Evaluation for Retrieval Augmented Generation in Chinese
Yunqi Xu, Tianchi Cai, Jiyan Jiang, Xierui Song
Exploring Advanced Large Language Models with LLMsuite
Giorgio Roffo