Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) and other machine learning models by incorporating external knowledge sources at inference time, improving factual accuracy and mitigating limitations such as hallucination. Current research focuses on optimizing retrieval methods (e.g., using graph structures, determinantal point processes, or hierarchical representations), improving the integration of retrieved information with LLMs (e.g., through reasoning modules and adaptive retrieval strategies), and applying RAG across diverse domains, including autonomous vehicles, robotics, and biomedical applications. By improving the reliability and efficiency of AI systems, this approach has significant impact on knowledge-intensive tasks, where access to and effective use of external information is crucial.
Papers
Language Modeling with Editable External Knowledge
Belinda Z. Li, Emmy Liu, Alexis Ross, Abbas Zeitoun, Graham Neubig, Jacob Andreas
Multimodal Structured Generation: CVPR's 2nd MMFM Challenge Technical Report
Franz Louis Cesista
SeRTS: Self-Rewarding Tree Search for Biomedical Retrieval-Augmented Generation
Minda Hu, Licheng Zong, Hongru Wang, Jingyan Zhou, Jingjing Li, Yichen Gao, Kam-Fai Wong, Yu Li, Irwin King
Few-Shot Recognition via Stage-Wise Augmented Finetuning
Tian Liu, Huixin Zhang, Shubham Parashar, Shu Kong