Judgments of research co-created by generative AI: experimental evidence. (arXiv:2305.11873v1 [cs.HC])
The introduction of ChatGPT has fuelled a public debate on generative AI
(large language models; LLMs), including its use by researchers.
In the current work, we test whether delegating parts of the research process
to LLMs leads people to distrust and devalue researchers and scientific output.
Participants (N=402) considered a researcher who delegates elements of the
research process to a PhD student or LLM, and rated (1) moral acceptability,
(2) trust in the scientist to oversee future projects, and (3) the accuracy and
quality of the output. People judged delegating to an LLM as less acceptable
than delegating to a human (d = -0.78). Delegation to an LLM also decreased
trust in the scientist to oversee future research projects (d = -0.80), and
people thought the results would be less accurate and of lower quality
(d = -0.85). We discuss how this devaluation might translate into
underreporting of generative AI use.
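A note on the effect sizes: the abstract does not define d, but in this literature it conventionally denotes Cohen's d, the standardized difference between the two condition means. Under the usual pooled-variance convention,

$$d = \frac{\bar{x}_{\text{LLM}} - \bar{x}_{\text{human}}}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},$$

so d = -0.78 would mean the LLM condition was rated roughly 0.8 pooled standard deviations lower than the human condition, a large effect by Cohen's common benchmarks.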
Source: https://arxiv.org/abs/2305.11873