Privately Learning Subspaces. (arXiv:2106.00001v1 [cs.CR])

Private data analysis suffers from a costly curse of dimensionality: the error of differentially private algorithms typically grows with the ambient dimension of the data. However, the data often has an underlying low-dimensional structure. For example, when optimizing via gradient descent, the gradients often lie in or near a low-dimensional subspace. If that low-dimensional structure can be identified, then we can avoid paying (in terms of privacy or accuracy) for the high ambient dimension.

We present differentially private algorithms that take input data sampled
from a low-dimensional linear subspace (possibly with a small amount of error)
and output that subspace (or an approximation to it). These algorithms can
serve as a pre-processing step for other procedures.
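
As a concrete illustration (not the paper's algorithm), the sketch below privately estimates such a subspace in the "analyze Gauss" style: add symmetric Gaussian noise to the empirical second-moment matrix and return its top-k eigenvectors. The function name private_subspace and the unit-norm assumption on the data points are ours, for illustration only.

    import numpy as np

    def private_subspace(X, k, epsilon, delta, seed=None):
        """Differentially private approximation of the top-k principal subspace.

        Assumes each row of X has L2 norm at most 1, so that adding or
        removing one data point changes X.T @ X by at most 1 in Frobenius
        norm (hence the L2 sensitivity of its entries is at most 1).
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        A = X.T @ X  # empirical second-moment matrix, d x d
        # Gaussian mechanism: noise scale for (epsilon, delta)-DP at sensitivity 1.
        sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        noise = rng.normal(scale=sigma, size=(d, d))
        # Mirror the upper triangle so the noise matrix is symmetric,
        # with each entry distributed N(0, sigma^2).
        noise = np.triu(noise) + np.triu(noise, 1).T
        # Eigendecomposition is post-processing, so it costs no extra privacy.
        eigvals, eigvecs = np.linalg.eigh(A + noise)  # eigenvalues ascending
        return eigvecs[:, -k:]  # orthonormal basis (d x k) of the estimated subspace

Because the eigendecomposition only post-processes the noisy matrix, the whole procedure inherits the (epsilon, delta)-guarantee of the Gaussian mechanism. The paper's actual algorithms and their accuracy guarantees differ; this sketch only conveys the flavor of privately recovering a subspace as a pre-processing step.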

Source: https://arxiv.org/abs/2106.00001
