Understanding AI Cognition: A Neural Module for Inference Inspired by Human Memory Mechanisms. (arXiv:2310.09297v1 [cs.LG])
How humans and machines make sense of current inputs for relational reasoning
and question answering, while placing perceived information in the context of
past memories, has been a longstanding conundrum in cognitive science and
artificial intelligence. Inspired by the human brain's memory system and
cognitive architectures, we propose a PMI framework consisting of perception,
memory, and inference components. Notably, the memory module comprises working
and long-term memory, with the latter endowed with a higher-order structure to
retain accumulated knowledge and experience. Through differentiable competitive
write access, current perceptions update working memory, which is later merged
with long-term memory via outer-product associations, averting memory overflow
and minimizing information conflicts. In the inference module, relevant
information is retrieved from the two memory sources and associatively
integrated to attain a more comprehensive and precise interpretation of current
perceptions. We exploratively apply PMI to improve prevailing Transformer and
CNN models on question-answering tasks such as the bAbI-20k and Sort-of-CLEVR
datasets, as well as on relation calculation and image classification tasks;
in each case, the PMI-enhanced models consistently and significantly outperform
their original counterparts. Visualization analyses reveal that memory
consolidation, together with the interaction and integration of information
from distinct memory sources, contributes substantially to model effectiveness
on inference tasks.
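The abstract's mention of merging working memory into long-term memory "via outer product associations" can be illustrated with a minimal Hebbian-style associative memory. This is a hedged sketch, not the paper's actual PMI implementation: the dimension `d`, the `consolidate`/`retrieve` helpers, and the use of random key/value vectors are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # feature dimension (illustrative choice, not from the paper)

# Long-term memory as an associative matrix accumulated from outer products.
M_long = np.zeros((d, d))

def consolidate(M_long, working_items):
    """Merge working-memory (key, value) pairs into long-term memory
    via outer-product (Hebbian-style) associations."""
    for key, value in working_items:
        key = key / (np.linalg.norm(key) + 1e-8)  # normalize the cue
        M_long = M_long + np.outer(value, key)
    return M_long

def retrieve(M_long, query):
    """Read from long-term memory by matrix-vector association."""
    query = query / (np.linalg.norm(query) + 1e-8)
    return M_long @ query

# Store two associations, then cue the memory with the first key.
k1, v1 = rng.standard_normal(d), rng.standard_normal(d)
k2, v2 = rng.standard_normal(d), rng.standard_normal(d)
M_long = consolidate(M_long, [(k1, v1), (k2, v2)])

recalled = retrieve(M_long, k1)
# In high dimensions random keys are nearly orthogonal, so `recalled`
# is close to v1 plus a small cross-talk term from the (k2, v2) pair.
```

Because distinct high-dimensional keys interfere only weakly, many associations can share one matrix without overwriting each other, which is the sense in which outer-product consolidation averts memory overflow.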
Source: https://arxiv.org/abs/2310.09297