Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches. (arXiv:2205.04460v1 [cs.LG])

This survey article assesses and compares existing critiques of current
fairness-enhancing technical interventions into machine learning (ML) that draw
from a range of non-computing disciplines, including philosophy, feminist
studies, critical race and ethnic studies, legal studies, anthropology, and
science and technology studies. It bridges epistemic divides to offer
an interdisciplinary understanding of the possibilities and limits of hegemonic
computational approaches to ML fairness for producing just outcomes for
society’s most marginalized. The article is organized according to nine major
themes of critique wherein these different fields intersect: 1) how “fairness”
in AI fairness research gets defined; 2) how problems for AI systems to address
get formulated; 3) the impacts of abstraction on how AI tools function and its
propensity to lead to technological solutionism; 4) how racial classification
operates within AI fairness research; 5) the use of AI fairness measures to
avoid regulation and engage in ethics washing; 6) an absence of participatory
design and democratic deliberation in AI fairness considerations; 7) data
collection practices that entrench “bias,” are non-consensual, and lack
transparency; 8) the predatory inclusion of marginalized groups into AI
systems; and 9) a lack of engagement with AI’s long-term social and ethical
outcomes. Drawing from these critiques, the article concludes by imagining
future ML fairness research directions that actively disrupt entrenched power
dynamics and structural injustices in society.
