RulE: Neural-Symbolic Knowledge Graph Reasoning with Rule Embedding. (arXiv:2210.14905v1 [cs.AI])

Knowledge graph (KG) reasoning predicts missing links in a knowledge graph by
reasoning over its existing facts. Knowledge graph embedding (KGE) is one of
the most popular methods for this problem: it embeds entities and relations
into low-dimensional vectors and uses the learned entity/relation embeddings
to predict missing facts.
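
To make triplet scoring concrete, here is a minimal sketch in the style of
TransE, one common KGE model (the random embeddings, dimension, and names
below are illustrative assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # illustrative embedding dimension

# Toy lookup tables: in practice these embeddings are learned, not random.
entities = {name: rng.normal(size=dim) for name in ["Alice", "Bob"]}
relations = {name: rng.normal(size=dim) for name in ["wife_of"]}

def transe_score(head: str, relation: str, tail: str) -> float:
    """TransE-style plausibility of a triplet (h, r, t): -||h + r - t||.
    Higher (closer to zero) means more plausible."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return -float(np.linalg.norm(h + r - t))

# After training, true triplets should outscore corrupted ones.
print(transe_score("Alice", "wife_of", "Bob"))
```
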
However, KGE uses only zeroth-order (propositional) logic to encode existing
triplets (e.g., “Alice is Bob’s wife.”); it cannot leverage first-order
(predicate) logic to represent generally applicable logical \textbf{rules}
(e.g., “$\forall x,y\colon x ~\text{is}~ y\text{'s wife} \rightarrow y
~\text{is}~ x\text{'s husband}$”). On the other hand, traditional rule-based
KG reasoning methods usually rely on hard logical rule inference, which makes
them brittle and hardly competitive with KGE.
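
The brittleness of hard inference is easy to see in code: a grounded rule
either fires or it does not, so there is no way to discount an unreliable
rule (a minimal forward-chaining sketch with illustrative names):

```python
# Hard logical rule inference: every grounded rule fires with full force.
facts = {("Alice", "wife_of", "Bob")}

def apply_wife_husband_rule(facts):
    """forall x, y: wife_of(x, y) -> husband_of(y, x)"""
    derived = {(t, "husband_of", h) for (h, r, t) in facts if r == "wife_of"}
    return facts | derived

print(apply_wife_husband_rule(facts))
# {('Alice', 'wife_of', 'Bob'), ('Bob', 'husband_of', 'Alice')}
```
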
In this paper, we propose RulE, a novel and principled framework to represent
and model logical rules and triplets. RulE jointly represents entities,
relations, and logical rules in a unified embedding space. By learning an
embedding for each logical rule, RulE can perform logical rule inference in a
soft way and assign a confidence score to each grounded rule, similar to how
KGE assigns each triplet a confidence score.
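
One way to picture this rule scoring (the dot-product form below is an
illustrative stand-in, not the paper's actual energy function): give each
rule its own learned vector and measure how compatible it is with the
embeddings of the relations it connects.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50

relations = {r: rng.normal(size=dim) for r in ["wife_of", "husband_of"]}
# One embedding per logical rule, learned jointly with relations (schematic).
rule_emb = rng.normal(size=dim)  # rule: wife_of(x, y) -> husband_of(y, x)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def rule_confidence(body: list, head: str, rule_emb: np.ndarray) -> float:
    """Soft confidence of a rule: compatibility between the rule embedding
    and the body-to-head relation offset (an illustrative energy)."""
    body_sum = sum(relations[r] for r in body)
    return float(sigmoid(rule_emb @ (relations[head] - body_sum)))

print(rule_confidence(["wife_of"], "husband_of", rule_emb))
```
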
Compared to KGE alone, RulE allows injecting prior logical rule information
into the embedding space, which improves the generalization of knowledge
graph embedding. In addition, the learned confidence scores of rules improve
the logical rule inference process by softly controlling the contribution of
each rule, which alleviates the brittleness of hard logic. We evaluate our
method on link prediction tasks.
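
As a sketch of what that soft control could look like at prediction time
(the additive aggregation below is an assumption for illustration, not the
paper's exact grounding-stage model):

```python
# Each grounded rule votes with its learned confidence instead of firing
# all-or-nothing, so a dubious rule barely moves the final score.

def link_score(kge_score: float, rule_votes) -> float:
    """kge_score: embedding-model plausibility of a candidate triplet.
    rule_votes: (confidence, fired) pairs for rules grounded at that triplet."""
    return kge_score + sum(conf for conf, fired in rule_votes if fired)

print(link_score(kge_score=-1.2, rule_votes=[(0.9, True), (0.1, True), (0.7, False)]))
```
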
Experimental results on multiple benchmark KGs demonstrate the effectiveness of
RulE.

Source: https://arxiv.org/abs/2210.14905
