Knowledge Graph Completion
Knowledge graph completion (KGC) aims to infer missing relationships within knowledge graphs, improving their completeness and utility. Current research emphasizes integrating diverse knowledge sources, such as common-sense reasoning, external ontologies, and large language models (LLMs), into KGC models, often employing graph neural networks, transformer architectures, and embedding methods. These advances improve the accuracy and efficiency of KGC, with impact on applications including question answering, recommendation systems, and risk assessment in cybersecurity. There is also a growing focus on improving the interpretability and trustworthiness of KGC models, addressing the "black box" nature of many existing approaches.
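To make the embedding-based formulation concrete, here is a minimal toy sketch in the spirit of translation-based models such as TransE (not the method of any paper listed below): entities and relations are vectors in the same space, a triple (h, r, t) is scored by how close h + r lands to t, and a missing tail is predicted by ranking all candidate entities. The entity count, dimension, and the relation name `capital_of` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, dim = 5, 4

# Random entity embeddings; the relation vector is hand-built so that
# entity 0 + "capital_of" lands exactly on entity 3 (toy assumption).
entity_emb = rng.normal(size=(num_entities, dim))
relation_emb = {"capital_of": entity_emb[3] - entity_emb[0]}

def score(h: int, rel: str, t: int) -> float:
    """Negative L2 distance: higher means a more plausible triple."""
    return -float(np.linalg.norm(entity_emb[h] + relation_emb[rel] - entity_emb[t]))

def predict_tail(h: int, rel: str) -> int:
    """Complete (h, rel, ?) by ranking every entity as a candidate tail."""
    return int(np.argmax([score(h, rel, t) for t in range(num_entities)]))

print(predict_tail(0, "capital_of"))  # entity 3 scores highest by construction
```

In a real model the embeddings would be trained with a margin or cross-entropy loss over observed triples and corrupted negatives, rather than constructed by hand as here.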
Papers
Start from Zero: Triple Set Prediction for Automatic Knowledge Graph Completion
Wen Zhang, Yajing Xu, Peng Ye, Zhiwei Huang, Zezhong Xu, Jiaoyan Chen, Jeff Z. Pan, Huajun Chen
Multilingual Knowledge Graph Completion from Pretrained Language Models with Knowledge Constraints
Ran Song, Shizhu He, Shengxiang Gao, Li Cai, Kang Liu, Zhengtao Yu, Jun Zhao
Assessing LLMs Suitability for Knowledge Graph Completion
Vasile Ionut Remus Iga, Gheorghe Cosmin Silaghi
Predicting from a Different Perspective: A Re-ranking Model for Inductive Knowledge Graph Completion
Yuki Iwamoto, Ken Kaneiwa
Multiple Heads are Better than One: Mixture of Modality Knowledge Experts for Entity Representation Learning
Yichi Zhang, Zhuo Chen, Lingbing Guo, Yajing Xu, Binbin Hu, Ziqi Liu, Wen Zhang, Huajun Chen