Augmented Generation
Augmented generation enhances large language models (LLMs) by incorporating external knowledge, improving accuracy, reducing hallucination, and extending capabilities beyond the limits of their training data. Current research focuses on improving retrieval methods, exploring architectures such as Table-Augmented Generation (TAG) and Induction-Augmented Generation (IAG), and developing techniques for efficiently managing large context windows. The field matters because it addresses critical limitations of LLMs, enabling more reliable and robust applications in question answering, IT support, and database interaction, among other areas.
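The core idea — retrieve relevant external passages, then condition generation on them — can be sketched in a few lines. The corpus, overlap-based scoring, and prompt format below are illustrative assumptions for this sketch, not the method of any particular paper; a real system would use dense embeddings and an actual LLM call.

```python
import re

# Minimal retrieval-augmented generation sketch. All names and the toy
# corpus here are hypothetical, chosen only to illustrate the pattern.

def tokenize(text):
    """Lowercase and split into word tokens, dropping punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=2):
    """Prepend retrieved passages so the model answers from external knowledge."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "TAG translates natural-language questions into database queries.",
    "IAG adds an inductive reasoning step before answer generation.",
    "Large context windows let models condition on long documents.",
]

print(build_prompt("How does TAG query a database?", corpus, k=1))
```

The retrieved context is what grounds the model's answer: instead of relying solely on parametric knowledge, the prompt carries the evidence, which is the mechanism behind the hallucination reduction described above.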
Papers
Papers collected under this topic were published between March 20, 2023 and November 5, 2024.