Disparate Impact in Differential Privacy from Gradient Misalignment. (arXiv:2206.07737v1 [cs.LG])

As machine learning becomes more widespread throughout society, data privacy
and fairness must be carefully considered; both are crucial for deployment in
highly regulated industries. Unfortunately, applying privacy-enhancing
technologies can worsen unfair tendencies in models. In particular,
differentially private stochastic gradient descent (DPSGD), one of the most
widely used techniques for private model training, frequently intensifies
disparate impact on groups within the data. In this work we study the
fine-grained causes of unfairness in DPSGD and identify gradient misalignment
due to inequitable gradient clipping as the most significant source. This
observation leads us to a new method that reduces unfairness by preventing
gradient misalignment in DPSGD.
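To make the clipping mechanism concrete, below is a minimal numpy sketch of the standard DPSGD aggregation step (per-example clipping, summation, Gaussian noise) and a measurement of how clipping rotates the averaged update away from the true gradient direction. The group sizes, clipping norm, and noise multiplier are illustrative assumptions, not values from the paper, and the toy "majority/minority" split only stands in for the paper's observation that groups with larger gradients are clipped more heavily.

```python
import numpy as np

rng = np.random.default_rng(0)

def dpsgd_aggregate(per_example_grads, clip_norm=1.0, noise_multiplier=1.0):
    """One DPSGD aggregation step: clip each example's gradient to
    L2 norm <= clip_norm, sum, add Gaussian noise with standard
    deviation noise_multiplier * clip_norm, and average."""
    clipped = [
        g * min(1.0, clip_norm / np.linalg.norm(g))
        for g in per_example_grads
    ]
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy batch: a "majority" group with small gradients and a "minority"
# group with large gradients (e.g. harder examples). Clipping shrinks
# the large gradients disproportionately, rotating the averaged update
# away from the true (unclipped) mean direction -- the gradient
# misalignment the abstract points to as the main source of unfairness.
majority = rng.normal(0.0, 0.5, size=(90, 10))
minority = rng.normal(2.0, 0.5, size=(10, 10))
grads = np.concatenate([majority, minority])

true_mean = grads.mean(axis=0)
private_mean = dpsgd_aggregate(grads, clip_norm=1.0, noise_multiplier=0.1)

print("cosine(true, DPSGD):", cosine(true_mean, private_mean))
```

A cosine similarity noticeably below 1 in this toy setting illustrates the direction change introduced by inequitable clipping, independent of the noise term.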

Source: https://arxiv.org/abs/2206.07737
