Metaphors We Learn By. (arXiv:2211.06441v1 [cs.LG])
Gradient-based learning using error back-propagation (“backprop”) is a
well-known contributor to much of the recent progress in AI. A less obvious,
but arguably equally important, ingredient is parameter sharing, best known
from convolutional networks. In this essay we relate
parameter sharing (“weight sharing”) to analogy making and to the
cognitive-metaphor school of thought. We discuss how recurrent and auto-regressive
models can be thought of as extending analogy making from static features to
dynamic skills and procedures. We also discuss corollaries of this perspective,
for example, how it can challenge the currently entrenched dichotomy between
connectionist and “classic” rule-based views of computation.
Source: https://arxiv.org/abs/2211.06441
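The abstract's central mechanism, parameter sharing, is easy to make concrete. The following is a minimal NumPy sketch (an illustration written for this listing, not code from the paper): a single convolution kernel is reused at every spatial position, and a single recurrent weight matrix is reused at every time step, mirroring the essay's move from static features to dynamic skills and procedures.

import numpy as np

rng = np.random.default_rng(0)

# Spatial parameter sharing: one kernel, applied at every position.
kernel = rng.standard_normal(3)           # a single shared 1-D kernel
signal = rng.standard_normal(10)
conv_out = np.array([
    kernel @ signal[i:i + 3]              # identical weights at each offset
    for i in range(len(signal) - 2)
])

# Temporal parameter sharing: one weight matrix, applied at every time step.
W_h = 0.1 * rng.standard_normal((4, 4))   # shared recurrent weights
W_x = 0.1 * rng.standard_normal((4, 2))   # shared input weights
xs = rng.standard_normal((6, 2))          # a length-6 input sequence
h = np.zeros(4)
for x_t in xs:                            # the same W_h and W_x at every step
    h = np.tanh(W_h @ h + W_x @ x_t)

print(conv_out.shape, h.shape)            # (8,) and (4,)

In both cases one set of weights is fit to many contexts at once, which is the sense in which the essay relates weight sharing to analogy making.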