AI model GPT-3 (dis)informs us better than humans. (arXiv:2301.11924v1 [cs.CY])

Artificial intelligence is changing the way we create and evaluate
information, and this is happening during an infodemic, which has been having
dramatic effects on global health. In this paper we evaluate whether recruited
individuals can distinguish disinformation from accurate information,
structured in the form of tweets, and determine whether a tweet is organic or
synthetic, i.e., whether it has been written by a Twitter user or by the AI
model GPT-3. Our results show that GPT-3 is a double-edged sword: in
comparison with humans, it can produce accurate information that is easier to
understand, but it can also produce more compelling disinformation. We also
show that humans cannot distinguish tweets generated by GPT-3 from those
written by real Twitter users. Building on these results, we reflect on the
dangers of AI for disinformation, and on how information campaigns can be
improved to benefit global health.
