Set-Membership Inference Attacks using Data Watermarking. (arXiv:2307.15067v1 [cs.CV])

In this work, we propose a set-membership inference attack for generative
models using deep image watermarking techniques. In particular, we demonstrate
how conditional sampling from a generative model can reveal the watermark that
was injected into parts of the training data. Our empirical results show
that the proposed watermarking technique is a principled approach for detecting
the non-consensual use of image data in training generative models.
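
The abstract only sketches the idea, so the following is a minimal toy illustration, not the authors' method: it stands in for the paper's deep watermarking with a fixed additive pattern, and for the generative model's conditional samples with noisy copies of the training images. All names and parameters here (embed, detect_score, alpha) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical watermark: a fixed low-amplitude pattern added to a subset of
# the training images (a stand-in for a learned deep image watermark).
IMG_SHAPE = (32, 32)
watermark = rng.normal(0, 1, IMG_SHAPE)
watermark /= np.linalg.norm(watermark)

def embed(image, alpha=0.05):
    """Add the watermark pattern to an image at strength alpha."""
    return image + alpha * watermark

def detect_score(image):
    """Correlation of an image with the watermark pattern."""
    return float(np.sum(image * watermark))

# Toy 'training set': half of the images carry the watermark.
clean = rng.uniform(0, 1, (100, *IMG_SHAPE))
marked = np.array([embed(x) for x in clean[:50]])

# Stand-in for samples drawn from a generative model: noisy copies of the
# training images, since training a real model is out of scope here.
samples_marked_model = marked + rng.normal(0, 0.01, marked.shape)
samples_clean_model = clean[50:] + rng.normal(0, 0.01, clean[50:].shape)

# Set-membership decision: compare mean detection scores; a model trained on
# watermarked data yields a clearly higher score than one trained without it.
score_marked = np.mean([detect_score(s) for s in samples_marked_model])
score_clean = np.mean([detect_score(s) for s in samples_clean_model])
print(f"marked-model score: {score_marked:.4f}, clean-model score: {score_clean:.4f}")
```

In practice the detection score would be fed into a statistical test with a calibrated threshold; the fixed additive pattern above is only a placeholder for the deep watermarking used in the paper.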

Source: https://arxiv.org/abs/2307.15067
