Learning through structure: towards deep neuromorphic knowledge graph embeddings. (arXiv:2109.10376v1 [cs.NE])

Computing latent representations for graph-structured data is a ubiquitous
learning task in many industrial and academic applications, ranging from
molecule synthesis to social network analysis and recommender systems.
Knowledge graphs are among the most popular and widely used data
representations related to the Semantic Web. In addition to structuring
factual knowledge in a machine-readable format, knowledge graphs serve as the
backbone
of many artificial intelligence applications and allow the ingestion of context
information into various learning algorithms. Graph neural networks attempt to
encode graph structures in low-dimensional vector spaces via a message passing
heuristic between neighboring nodes. In recent years, a multitude of graph
neural network architectures have demonstrated ground-breaking performance on
many learning tasks. In this work, we propose a strategy to map
deep graph learning architectures for knowledge graph reasoning to neuromorphic
architectures. Based on the insight that randomly initialized and untrained
(i.e., frozen) graph neural networks are able to preserve local graph
structures, we compose a frozen neural network with shallow knowledge graph
embedding models. We show experimentally that, already on conventional
computing hardware, this approach yields a significant speedup and a reduced
memory footprint while maintaining competitive performance. Moreover, we
extend the frozen
architecture to spiking neural networks, introducing a novel, event-based and
highly sparse knowledge graph embedding algorithm that is suitable for
implementation in neuromorphic hardware.
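
To make the frozen-encoder idea concrete, below is a minimal NumPy sketch of
the general recipe, not the paper's implementation: trainable shallow entity
and relation embeddings are propagated through a randomly initialized, never
trained message passing layer, and triples are scored with a DistMult-style
decoder. The toy graph, the function names, and the choice of decoder are all
illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph: 5 entities, 2 relations, triples of (head, relation, tail).
num_entities, num_relations, dim = 5, 2, 8
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 4)]

# Shallow (trainable) entity and relation embeddings.
entity_emb = rng.normal(size=(num_entities, dim))
relation_emb = rng.normal(size=(num_relations, dim))

# Frozen GNN weights: randomly initialized and never updated.
W_frozen = rng.normal(size=(dim, dim)) / np.sqrt(dim)

# Symmetric adjacency with self-loops, row-normalized for mean aggregation.
A = np.eye(num_entities)
for h, _, t in triples:
    A[h, t] = A[t, h] = 1.0
A /= A.sum(axis=1, keepdims=True)

def frozen_message_passing(X, A, W, hops=2):
    """Aggregate neighbor features and apply a fixed random projection;
    with W frozen, this only injects local graph structure into X."""
    for _ in range(hops):
        X = np.tanh(A @ X @ W)
    return X

# Structure-aware entity representations from the frozen encoder.
H = frozen_message_passing(entity_emb, A, W_frozen)

def distmult_score(h, r, t):
    """DistMult-style plausibility score for a triple (h, r, t)."""
    return float(np.sum(H[h] * relation_emb[r] * H[t]))

print(distmult_score(0, 0, 1))  # higher scores indicate more plausible triples
```

In a full training loop, gradients would flow only into entity_emb and
relation_emb; W_frozen stays fixed, which is the property the abstract
attributes the speedup and memory reduction to.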

Source: https://arxiv.org/abs/2109.10376
