CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation. (arXiv:2206.08931v1 [cs.HC])

Human annotated data plays a crucial role in machine learning (ML) research
and development. However, the ethical considerations around the processes and
decisions that go into dataset annotation have not received nearly enough
attention. In this paper, we survey an array of literature that provides
insights into ethical considerations around crowdsourced dataset annotation. We
synthesize these insights, and lay out the challenges in this space along two
layers: (1) who the annotator is, and how the annotators’ lived experiences can
impact their annotations, and (2) the relationship between the annotators and
the crowdsourcing platforms, and what that relationship affords them. Finally,
we introduce a novel framework, CrowdWorkSheets, for dataset developers to
facilitate transparent documentation of key decision points at various stages
of the data annotation pipeline: task formulation, selection of annotators,
platform and infrastructure choices, dataset analysis and evaluation, and
dataset release and maintenance.
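
To make the five stages concrete, here is a minimal sketch of how a dataset developer might record such decision points in a structured, machine-readable form. This is an illustrative assumption, not the actual CrowdWorkSheets template: the class name, field names, and example values below are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one record per stage of the annotation pipeline
# named in the abstract. Field names and example values are assumptions,
# not the framework's actual documentation questions.

@dataclass
class CrowdWorkSheet:
    task_formulation: dict = field(default_factory=dict)        # e.g., instructions, label definitions
    annotator_selection: dict = field(default_factory=dict)     # e.g., recruitment criteria, demographics recorded
    platform_infrastructure: dict = field(default_factory=dict) # e.g., platform used, pay rates, quality controls
    analysis_evaluation: dict = field(default_factory=dict)     # e.g., agreement metrics, disagreement handling
    release_maintenance: dict = field(default_factory=dict)     # e.g., license, versioning, point of contact


# Example usage with placeholder values.
sheet = CrowdWorkSheet(
    task_formulation={"label_set": ["label_a", "label_b"], "instructions": "..."},
    annotator_selection={"recruitment": "crowdsourcing platform", "demographics_reported": True},
)
print(sheet.annotator_selection)
```

A structured record like this could accompany a dataset release so that downstream users can see who annotated the data and under what conditions, which is the kind of transparency the framework is meant to facilitate.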

Source: https://arxiv.org/abs/2206.08931
