Paper ID: 2203.11639
Learning Relation-Specific Representations for Few-shot Knowledge Graph Completion
Yuling Li, Kui Yu, Yuhong Zhang, Xindong Wu
Recent years have witnessed increasing interest in few-shot knowledge graph completion (FKGC), which aims to infer unseen query triples for a few-shot relation given only a few reference triples about that relation. The primary focus of existing FKGC methods lies in learning relation representations that can reflect the common information shared by the query and reference triples. To this end, these methods learn entity-pair representations from the direct neighbors of head and tail entities, and then aggregate the representations of reference entity pairs. However, entity-pair representations learned only from direct neighbors may have low expressiveness when the involved entities have sparse direct neighbors or share a common local neighborhood with other entities. Moreover, merely modeling the semantic information of head and tail entities is insufficient for accurately inferring their relational information, especially when they are involved in multiple relations. To address these issues, we propose a Relation-Specific Context Learning (RSCL) framework, which exploits graph contexts of triples to learn global and local relation-specific representations for few-shot relations. Specifically, we first extract graph contexts for each triple, which can provide long-term entity-relation dependencies. To encode the extracted graph contexts, we then present a hierarchical attention network that captures contextualized information of triples and highlights valuable local neighborhood information of entities. Finally, we design a hybrid attention aggregator to evaluate the likelihood of the query triples at the global and local levels. Experimental results on two public datasets demonstrate that RSCL outperforms state-of-the-art FKGC methods.
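The core FKGC pattern described above (aggregate reference entity-pair representations with attention, then score a query pair against the result) can be illustrated with a minimal sketch. This is not the paper's RSCL architecture; the function names, dimensions, and toy embeddings are all hypothetical, and a simple dot-product softmax attention stands in for the hierarchical/hybrid attention the paper proposes.

```python
import numpy as np

def attention_aggregate(references, query):
    """Aggregate K reference entity-pair vectors (K x d) into one
    relation representation, weighting each reference by its
    dot-product similarity to the query pair (softmax attention).
    Hypothetical stand-in for a learned attention aggregator."""
    scores = references @ query                  # (K,)
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    return weights @ references                  # (d,)

def score_query(references, query):
    """Likelihood-style score for a query entity pair: cosine
    similarity between the query vector and the attention-aggregated
    relation representation."""
    rel = attention_aggregate(references, query)
    denom = np.linalg.norm(rel) * np.linalg.norm(query) + 1e-9
    return float(rel @ query / denom)

# Toy example: 3 reference pairs, 4-dim embeddings (made-up numbers).
refs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
q_match = np.array([1.0, 0.05, 0.0, 0.0])   # resembles first two references
q_mismatch = np.array([0.0, 0.0, 0.0, 1.0]) # orthogonal to all references
print(score_query(refs, q_match) > score_query(refs, q_mismatch))
```

In a real FKGC model the entity-pair vectors would come from a neighbor encoder and the attention weights from learned parameters; the sketch only shows why a query that resembles the references receives a higher score.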
Submitted: Mar 22, 2022