Fundamental Tradeoffs in Distributionally Adversarial Training. (arXiv:2101.06309v1 [cs.LG])

Adversarial training is among the most effective techniques to improve the
robustness of models against adversarial perturbations. However, the full
effect of this approach on models is not well understood. For example, while
adversarial training can reduce the adversarial risk (prediction error against
an adversary), it sometimes increases standard risk (generalization error when
there is no adversary). Moreover, such behavior is affected by various
elements of the learning problem, including the size and quality of training
data, specific forms of adversarial perturbations in the input, model
overparameterization, and the adversary’s power, among others. In this paper, we
focus on a distribution-perturbing adversary framework, wherein the
adversary can change the test distribution within a neighborhood of the
training data distribution. The neighborhood is defined via the Wasserstein
distance between distributions, and its radius is a measure
of the adversary’s manipulative power. We study the tradeoff between standard risk
and adversarial risk and derive the Pareto-optimal tradeoff, achievable over
specific classes of models, in the infinite-data limit with the feature dimension
kept fixed. We consider three learning settings: 1) Regression with the class
of linear models; 2) Binary classification under the Gaussian mixtures data
model, with the class of linear classifiers; 3) Regression with the class of
random features models (which can be equivalently represented as a two-layer
neural network with random first-layer weights). We show that a tradeoff
between standard and adversarial risk is manifested in all three settings. We
further characterize the Pareto-optimal tradeoff curves and discuss how a
variety of factors, such as feature correlation, the adversary’s power, or the
width of the two-layer neural network, affect this tradeoff.
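
To make the setup concrete, here is a short sketch in assumed notation (not quoted from the paper): P denotes the data distribution, \ell the loss, \theta the model parameters, W the Wasserstein distance between distributions, and \varepsilon the adversary’s budget (the radius of the neighborhood). The standard risk and the adversarial risk are then

    \mathrm{SR}(\theta) = \mathbb{E}_{(x,y)\sim P}\big[\ell(\theta; x, y)\big],
    \qquad
    \mathrm{AR}(\theta) = \sup_{Q:\, W(Q, P) \le \varepsilon} \; \mathbb{E}_{(x,y)\sim Q}\big[\ell(\theta; x, y)\big],

and the Pareto-optimal tradeoff curve can be traced by scalarizing the two objectives,

    \theta^\star(\lambda) \in \arg\min_{\theta} \; \lambda\, \mathrm{SR}(\theta) + (1-\lambda)\, \mathrm{AR}(\theta),
    \qquad \lambda \in [0, 1].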
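
For the third setting, a minimal NumPy sketch of a random features model may help: a two-layer network whose first-layer weights are sampled at random and then frozen, so that only the second layer is fit. The ReLU activation, Gaussian weight scale, and ridge fit below are illustrative assumptions, not the paper’s exact specification.

    import numpy as np

    def random_features(X, W):
        # Fixed random first layer followed by a ReLU nonlinearity (assumed activation).
        return np.maximum(X @ W.T, 0.0)

    def fit_random_features(X, y, width=200, ridge=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # First-layer weights: sampled once at random, then frozen (never trained).
        W = rng.normal(scale=1.0 / np.sqrt(d), size=(width, d))
        Phi = random_features(X, W)
        # Second-layer weights: ridge regression in closed form.
        a = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(width), Phi.T @ y)
        return W, a

    # Example usage on synthetic regression data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 10))
    y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)
    W, a = fit_random_features(X, y)
    print("train MSE:", np.mean((random_features(X, W) @ a - y) ** 2))

The width parameter here corresponds to the number of hidden units of the two-layer network, one of the factors the paper discusses in relation to the tradeoff.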

Source: https://arxiv.org/abs/2101.06309
