How Biased is Your Feature?: Computing Fairness Influence Functions with Global Sensitivity Analysis. (arXiv:2206.00667v1 [cs.LG])

Fairness in machine learning has attracted significant attention due to the
widespread use of machine learning in high-stakes decision-making tasks.
Unless regulated with a fairness objective, machine learning classifiers may
exhibit bias against certain demographic populations in the data. Quantifying
and mitigating the bias induced by classifiers has therefore become a central
concern. In this paper, we aim to quantify the influence of individual
features on the bias of a classifier. To this end, we propose the framework of
Fairness Influence Functions (FIFs) and compute an FIF as a scaled difference
of conditional variances in the classifier's prediction. We also instantiate
an algorithm, FairXplainer, that applies variance decomposition over subsets
of features together with a local regressor to compute FIFs accurately while
also capturing intersectional effects among features. Our experimental
analysis validates that FairXplainer captures the influences of both
individual features and higher-order feature interactions, estimates bias more
accurately than existing local explanation methods, and detects the
increase/decrease in bias due to affirmative/punitive actions in the
classifier.
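To give a concrete flavor of the idea, the sketch below is a toy approximation, not the paper's FairXplainer algorithm: it estimates a first-order, Sobol-style variance contribution of each feature to the classifier's predictions within each sensitive group (via crude quantile binning rather than the paper's local regressor and higher-order decomposition), and takes a scaled difference of the two groups' contributions as a hypothetical FIF-style score. The dataset, model, and all function names here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: three features X, a binary sensitive attribute a, labels y.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
a = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.8 * a + 0.1 * rng.normal(size=n) > 0).astype(int)

clf = LogisticRegression().fit(np.column_stack([X, a]), y)

def first_order_variance_share(preds, feature, n_bins=10):
    """Estimate Var_x[E[preds | x]] / Var[preds] by quantile-binning the
    feature -- a crude first-order variance-decomposition (Sobol) index;
    the paper instead fits a local regressor."""
    edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(feature, edges[1:-1])  # bin index in 0..n_bins-1
    cond_means = np.array([preds[idx == b].mean() for b in range(n_bins)])
    weights = np.array([(idx == b).mean() for b in range(n_bins)])
    var_of_cond_mean = np.sum(weights * (cond_means - preds.mean()) ** 2)
    return var_of_cond_mean / preds.var()

# Positive-prediction probabilities, split by sensitive group.
probs = clf.predict_proba(np.column_stack([X, a]))[:, 1]
g0, g1 = (a == 0), (a == 1)

# Hypothetical FIF-style score per feature: the difference of the feature's
# variance contribution in the two groups, scaled by each group's variance.
for j in range(X.shape[1]):
    s0 = first_order_variance_share(probs[g0], X[g0, j])
    s1 = first_order_variance_share(probs[g1], X[g1, j])
    fif_j = s1 * probs[g1].var() - s0 * probs[g0].var()
    print(f"feature {j}: approx FIF = {fif_j:+.4f}")
```

In this toy setup, a feature with a large positive or negative score contributes unequally to the prediction variance across the two groups, which is the intuition behind attributing bias to features via global sensitivity analysis.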

Source: https://arxiv.org/abs/2206.00667
