A Review on Language Models as Knowledge Bases. (arXiv:2204.06031v1 [cs.CL])

Recently, there has been a surge of interest in the NLP community in using
pretrained Language Models (LMs) as Knowledge Bases (KBs). Researchers have
shown that LMs trained on a sufficiently large (web) corpus encode a
significant amount of knowledge implicitly in their parameters. The resulting
LM can be probed for different kinds of knowledge, thus acting as a KB. This
approach has a major advantage over traditional KBs: it requires no human
supervision. In this paper, we present a set of aspects that we deem an LM
should have to fully act as a KB, and review the recent literature with
respect to those aspects.
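
The kind of probing the abstract refers to is typically done with cloze-style
queries, where a relational fact is turned into a fill-in-the-blank prompt and
the masked LM ranks candidate tokens for the blank (as in the LAMA benchmark).
Below is a minimal sketch of this idea; the Hugging Face transformers library,
the bert-base-cased checkpoint, and the example prompt are illustrative
choices, not prescribed by the paper.

```python
# Minimal sketch of cloze-style knowledge probing of a masked LM.
# Assumptions: `transformers` is installed and `bert-base-cased` is the
# probe model; both are illustrative, not the paper's specific setup.
from transformers import pipeline

# The fill-mask pipeline asks the masked LM to rank candidate tokens
# for the [MASK] slot, which is how implicitly stored facts are queried.
unmasker = pipeline("fill-mask", model="bert-base-cased")

# A cloze query for the relational fact (France, capital, Paris).
for prediction in unmasker("The capital of France is [MASK].", top_k=3):
    print(f"{prediction['token_str']!r}  score={prediction['score']:.3f}")
```

If the fact is stored in the model's parameters, the correct filler should
rank near the top; aggregate accuracy over many such queries is what is used
to judge how well the LM functions as a KB.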

Source: https://arxiv.org/abs/2204.06031
