Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making. (arXiv:2101.05303v1 [cs.AI])

Although AI holds promise for improving human decision making in societally
critical domains, it remains an open question how human-AI teams can reliably
outperform AI alone and human alone in challenging prediction tasks (also known
as complementary performance). We explore two directions to understand the gaps
in achieving complementary performance. First, we argue that the typical
experimental setup limits the potential of human-AI teams. Because
distribution shift lowers AI performance on out-of-distribution examples
relative to in-distribution ones, we design experiments with different
distribution types and investigate human performance on both in-distribution
and out-of-distribution examples.
Second, we develop novel interfaces to support interactive explanations so that
humans can actively engage with AI assistance. Using an in-person user study
and large-scale randomized experiments across three tasks, we demonstrate a
clear difference between in-distribution and out-of-distribution examples,
and observe mixed
results for interactive explanations: while interactive explanations improve
human perception of AI assistance’s usefulness, they may magnify human biases
and lead to limited performance improvement. Overall, our work points out
critical challenges and future directions towards complementary performance.
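To make the in-distribution/out-of-distribution gap concrete, here is a minimal sketch (not from the paper; the synthetic Gaussian data, the shift amount, and the logistic-regression model are all illustrative assumptions) of how covariate shift can lower a trained classifier's accuracy on out-of-distribution examples relative to in-distribution ones:

```python
# Illustrative sketch: a classifier trained on one distribution loses
# accuracy when evaluated on a shifted (out-of-distribution) test set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes in 2D; `shift` displaces both class means
    (a simple covariate shift)."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 2.0 + shift, scale=1.0, size=(n, 2))
    return X, y

X_train, y_train = sample(2000)          # training distribution
X_id, y_id = sample(1000)                # in-distribution test set
X_ood, y_ood = sample(1000, shift=1.5)   # shifted, out-of-distribution test set

model = LogisticRegression().fit(X_train, y_train)
print(f"in-distribution accuracy:     {model.score(X_id, y_id):.3f}")
print(f"out-of-distribution accuracy: {model.score(X_ood, y_ood):.3f}")
```

On this toy setup, in-distribution accuracy is typically well above 0.9, while accuracy on the shifted test set drops sharply because the learned decision boundary no longer separates the displaced classes; this is the kind of AI performance gap the paper's distribution-aware experimental design is meant to expose to human-AI teams.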

Source: https://arxiv.org/abs/2101.05303
