Uniform Convergence, Adversarial Spheres and a Simple Remedy. (arXiv:2105.03491v1 [cs.LG])

Previous work has cast doubt on the general framework of uniform convergence
and its ability to explain generalization in neural networks. In particular, for a
specifically constructed dataset, it was observed that a neural network completely
misclassifies a projection of the training data (the adversarial set), rendering
any existing generalization bound based on uniform convergence vacuous. We
provide an extensive theoretical investigation of the previously studied data
setting through the lens of infinitely-wide models. We prove that the Neural
Tangent Kernel (NTK) also suffers from the same phenomenon and we uncover its
origin. We highlight the important role of the output bias and show
theoretically as well as empirically how a sensible choice completely mitigates
the problem. We identify sharp phase transitions in the accuracy on the
adversarial set and study its dependence on the training sample size. As a
result, we are able to characterize critical sample sizes beyond which the
effect disappears. Moreover, we study decompositions of a neural network into a
clean and a noisy part by considering its canonical decomposition into its
different eigenfunctions, and we show empirically that for a bias that is too
small, the adversarial phenomenon still persists.
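To make the setting concrete, here is a minimal sketch reconstructed from the abstract alone, not the authors' code: kernel regression with the standard two-layer ReLU NTK on concentric-spheres data, with an explicit output bias `b`, evaluated on an "adversarial set" built by radially projecting the training points onto the opposite sphere and flipping their labels. The dimension, radii, sample size, ridge parameter, and bias values are illustrative assumptions, and the exact kernel, architecture, and bias choice analyzed in the paper may differ.

```python
# Hypothetical sketch of the abstract's setting (assumed constants throughout):
# NTK regression on concentric-spheres data with an explicit output bias b,
# evaluated on an adversarial set obtained by radially projecting the training
# points onto the opposite sphere and flipping their labels.
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 500                 # input dimension and training sample size (assumed)
r_inner, r_outer = 1.0, 1.3    # sphere radii (assumed)

def sample_spheres(n):
    """Half the points on the inner sphere (label -1), half on the outer (label +1)."""
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    y = np.where(np.arange(n) % 2 == 0, -1.0, 1.0)
    radii = np.where(y < 0, r_inner, r_outer)
    return x * radii[:, None], y

def ntk(X1, X2):
    """Two-layer ReLU NTK: Theta(x, x') = |x| |x'| (kappa1(u) + u * kappa0(u))."""
    n1 = np.linalg.norm(X1, axis=1)
    n2 = np.linalg.norm(X2, axis=1)
    u = np.clip((X1 @ X2.T) / np.outer(n1, n2), -1.0, 1.0)
    k0 = (np.pi - np.arccos(u)) / np.pi
    k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u ** 2)) / np.pi
    return np.outer(n1, n2) * (k1 + u * k0)

X, y = sample_spheres(n)
# Adversarial set: project each training point onto the other sphere, flip its label.
X_adv = X / np.linalg.norm(X, axis=1, keepdims=True) * np.where(y < 0, r_outer, r_inner)[:, None]
y_adv = -y

K = ntk(X, X)
ridge = 1e-6 * n  # small ridge for numerical stability (assumed)

for b in [0.0, -0.5, 0.5]:  # output bias values to compare (assumed)
    alpha = np.linalg.solve(K + ridge * np.eye(n), y - b)
    f_adv = ntk(X_adv, X) @ alpha + b
    acc = np.mean(np.sign(f_adv) == y_adv)
    print(f"bias b = {b:+.2f}: accuracy on adversarial set = {acc:.2f}")
```

With this scaffold one can sweep the training sample size `n` and the bias `b` to probe the sharp phase transitions in adversarial-set accuracy described above; it is only an illustration of the setup, not a reproduction of the paper's results.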

Source: https://arxiv.org/abs/2105.03491
