Turning large language models into cognitive models. (arXiv:2306.03917v1 [cs.CL])
Large language models are powerful systems that excel at many tasks, ranging
from translation to mathematical reasoning. Yet, at the same time, these models
often exhibit characteristics that are not human-like. In the present paper, we address this
gap and ask whether large language models can be turned into cognitive models.
We find that, after finetuning them on data from psychological experiments,
these models offer accurate representations of human behavior, even
outperforming traditional cognitive models in two decision-making domains. In
addition, we show that their representations contain the information necessary
to model behavior at the level of individual subjects. Finally, we demonstrate
that finetuning on multiple tasks enables large language models to predict
human behavior in a previously unseen task. Taken together, these results
suggest that large, pre-trained models can be adapted to become generalist
cognitive models, thereby opening up new research directions that could
transform cognitive psychology and the behavioral sciences as a whole.
Source: https://arxiv.org/abs/2306.03917
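Below is a minimal, illustrative sketch of the general idea of adapting a pretrained language model to predict human choices; it is not the paper's actual pipeline. The model name ("gpt2"), the gamble prompts, the choice labels, and the logistic-regression readout (standing in for full finetuning) are all assumptions made for the example.

```python
# Illustrative sketch: predict human choices from frozen LLM embeddings.
# Model, prompts, and labels are hypothetical placeholders, not the
# dataset or architecture used in the paper.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

# Text descriptions of two-option gambles and the option a participant
# chose (0 = option A, 1 = option B); invented for illustration.
prompts = [
    "Option A: 50% chance of $100, otherwise $0. Option B: $45 for sure.",
    "Option A: 10% chance of $500, otherwise $0. Option B: $40 for sure.",
    "Option A: 90% chance of $10, otherwise $0. Option B: $8 for sure.",
    "Option A: 5% chance of $1000, otherwise $0. Option B: $60 for sure.",
]
choices = [1, 1, 0, 1]

def embed(text: str) -> torch.Tensor:
    """Mean-pool the final hidden states into a fixed-size embedding."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

X = torch.stack([embed(p) for p in prompts]).numpy()

# A linear readout on frozen embeddings stands in for finetuning here.
readout = LogisticRegression(max_iter=1000).fit(X, choices)
print(readout.predict_proba(X)[:, 1])  # predicted probability of choosing B
```

In practice, the readout (or the finetuned model) would be trained on large behavioral datasets and evaluated on held-out participants or tasks, which is the kind of generalization the abstract describes.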