Knowledge-Grounded
Knowledge-grounded approaches enhance natural language processing systems by integrating external knowledge sources, improving the factual accuracy, coherence, and relevance of generated text in tasks such as question answering and dialogue. Current research focuses on methods for effective knowledge retrieval and integration, often combining retrieval-augmented generation (RAG), contrastive learning, and large language models (LLMs) to address challenges such as hallucination and the trade-off between specificity and attribution. These advances matter for building more reliable and informative AI systems, with applications ranging from better chatbots and virtual assistants to more trustworthy information retrieval and claim-verification tools.
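To make the retrieval-augmented generation pattern concrete, here is a minimal sketch of the idea: retrieve the knowledge passages most relevant to a query, then condition the generator on them. The tiny in-memory corpus, the word-overlap retriever, and the `call_llm` placeholder are illustrative assumptions for this sketch only, not the method of any paper listed below; a real system would use a dense or sparse retriever and an actual LLM call.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: toy corpus, word-overlap retrieval, placeholder LLM call.

from collections import Counter

KNOWLEDGE_CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Marie Curie won Nobel Prizes in both Physics and Chemistry.",
    "The Great Barrier Reef is the world's largest coral reef system.",
]


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query
    (a stand-in for a dense or sparse retriever)."""
    query_terms = Counter(query.lower().split())

    def score(passage: str) -> int:
        return sum((Counter(passage.lower().split()) & query_terms).values())

    return sorted(corpus, key=score, reverse=True)[:k]


def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved knowledge so the generator can ground its answer in it."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the knowledge below.\n"
        f"Knowledge:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (API or local model)."""
    return f"[model response conditioned on]\n{prompt}"


if __name__ == "__main__":
    question = "Where is the Eiffel Tower and when was it finished?"
    passages = retrieve(question, KNOWLEDGE_CORPUS)
    print(call_llm(build_grounded_prompt(question, passages)))
```

Grounding the prompt in retrieved passages is what gives RAG-style systems their handle on hallucination: the generator is asked to attribute its answer to explicit evidence rather than to rely solely on parametric knowledge.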
Papers
KCTS: Knowledge-Constrained Tree Search Decoding with Token-Level Hallucination Detection
Sehyun Choi, Tianqing Fang, Zhaowei Wang, Yangqiu Song
Large Language Models as Source Planner for Personalized Knowledge-grounded Dialogue
Hongru Wang, Minda Hu, Yang Deng, Rui Wang, Fei Mi, Weichao Wang, Yasheng Wang, Wai-Chung Kwan, Irwin King, Kam-Fai Wong
Well Begun is Half Done: Generator-agnostic Knowledge Pre-Selection for Knowledge-Grounded Dialogue
Lang Qin, Yao Zhang, Hongru Liang, Jun Wang, Zhenglu Yang
Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, Kam-Fai Wong