Paper ID: 2401.13444

Clue-Guided Path Exploration: Optimizing Knowledge Graph Retrieval with Large Language Models to Address the Information Black Box Challenge

Dehao Tao, Feng Huang, Congqi Wang, Yongfeng Huang, Minghu Jiang

In recent years, large language models (LLMs) have showcased remarkable capabilities. However, updating their knowledge poses challenges, potentially leading to inaccuracies when they confront unfamiliar queries. To address this issue, integrating external knowledge bases such as knowledge graphs with LLMs is a viable approach. The key challenge lies in extracting the required knowledge from knowledge graphs based on natural language, which demands a high level of semantic understanding. Therefore, researchers are considering leveraging LLMs directly for knowledge retrieval from these graphs. Current efforts typically rely on the comprehensive problem-solving capabilities of LLMs. We argue that a problem we term the 'information black box' can significantly impair the practical effectiveness of such methods. Moreover, these methods are less effective in scenarios where the questions are unfamiliar to the LLMs. In this paper, we propose a Clue-Guided Path Exploration (CGPE) framework to optimize LLM-based knowledge retrieval. By addressing the 'information black box' issue and replacing complex tasks with single, focused tasks, we improve the accuracy and efficiency of using LLMs to retrieve knowledge from knowledge graphs. Experiments on open-source datasets show that CGPE outperforms previous methods and is highly applicable to LLMs with fewer parameters. In some instances, even ChatGLM3, with its 6 billion parameters, can rival the performance of GPT-4. Furthermore, the results indicate that CGPE invokes the LLM only a small number of times, suggesting reduced computational overhead. For organizations and individuals facing constraints in computational resources, our research offers significant practical value.

Submitted: Jan 24, 2024
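
The abstract describes the general idea of clue-guided, stepwise exploration of a knowledge graph with narrowly scoped LLM calls. The sketch below is only an illustrative interpretation of that idea, not the authors' CGPE implementation: the toy graph `KG`, the placeholder `llm_choose` call, and the clue list are all assumptions introduced for this example.

```python
# Hedged illustration only: a minimal sketch of clue-guided path exploration over a
# knowledge graph, under assumed interfaces. `llm_choose` and the triple store `KG`
# are hypothetical placeholders, not the paper's actual CGPE implementation.

from typing import Dict, List, Tuple

# Toy knowledge graph: head entity -> list of (relation, tail entity) edges.
KG: Dict[str, List[Tuple[str, str]]] = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("currency", "Euro"), ("continent", "Europe")],
}


def llm_choose(instruction: str, options: List[str]) -> str:
    """Placeholder for a single, narrowly scoped LLM call.

    In a real system this would prompt an LLM to pick the option that best
    matches the instruction; here we simply return the first option so the
    sketch runs end to end.
    """
    return options[0] if options else ""


def clue_guided_exploration(question: str, clues: List[str],
                            start_entity: str) -> List[Tuple[str, str, str]]:
    """Follow one KG edge per clue, letting the LLM resolve each single-step choice.

    Each iteration exposes only the local neighborhood of the current entity,
    so the model never has to reason over the whole graph at once.
    """
    path: List[Tuple[str, str, str]] = []
    current = start_entity
    for clue in clues:
        edges = KG.get(current, [])
        if not edges:
            break
        options = [f"{rel} -> {tail}" for rel, tail in edges]
        choice = llm_choose(
            f"Question: {question}\nClue: {clue}\nPick the edge that matches the clue.",
            options,
        )
        rel, tail = choice.split(" -> ", 1)
        path.append((current, rel, tail))
        current = tail
    return path


if __name__ == "__main__":
    # Example: two clues, each resolved with one small LLM call.
    print(clue_guided_exploration(
        "What currency is used in the country whose capital is Paris?",
        clues=["capital of which country", "currency of that country"],
        start_entity="Paris",
    ))
```

In this reading, the per-step prompts keep each LLM invocation to a single, simple decision over a small candidate set, which is consistent with the abstract's claims of fewer invocations and applicability to smaller models, though the actual prompting and clue-extraction details are defined in the paper itself.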