Generalization Error Analysis of Neural Networks with Gradient-Based Regularization (arXiv:2107.02797v1 [cs.LG])

We study gradient-based regularization methods for neural networks, focusing mainly on two penalties: total variation regularization and Tikhonov regularization. Applying these methods is equivalent to using neural networks to solve certain partial differential equations, which in practical applications are mostly high-dimensional. In this work, we introduce a general framework for analyzing the generalization error of regularized networks. The error estimate relies on two assumptions, one on the approximation error and one on the quadrature error. Moreover, we conduct experiments on image classification tasks to show that gradient-based methods can significantly improve both the generalization ability and the adversarial robustness of neural networks. A graphical extension of the gradient-based methods is also considered in the experiments.
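Concretely, both penalties amount to augmenting the data loss with the norm of the network's input gradient, i.e. minimizing L_data(f) + lambda * E[ ||∇_x f(x)||^p ], with p = 1 giving a total-variation-style penalty and p = 2 a Tikhonov-style penalty. The sketch below (PyTorch) shows one common way to estimate such a penalty over a mini-batch. It is illustrative only, not the paper's implementation: the helper names `gradient_penalty` and `regularized_loss`, the weight `lam`, and the use of the summed outputs as a cheap surrogate for the full input Jacobian are all our own assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_penalty(model, x, p=2):
    """Monte Carlo estimate of a gradient-based regularizer over a batch:
    mean ||grad_x f(x)||_2   for p=1 (total-variation-style penalty),
    mean ||grad_x f(x)||_2^2 for p=2 (Tikhonov-style penalty).
    For vector-valued outputs we differentiate the summed outputs, a
    common cheap surrogate for the full input Jacobian."""
    x = x.detach().clone().requires_grad_(True)
    out = model(x)
    # create_graph=True so the penalty itself can be backpropagated
    # through when the total loss calls .backward().
    (grads,) = torch.autograd.grad(out.sum(), x, create_graph=True)
    norms = grads.flatten(start_dim=1).norm(dim=1)  # per-sample L2 norm
    return norms.pow(p).mean()

def regularized_loss(model, x, y, lam=0.1, p=2):
    """Data loss plus lam times the gradient penalty."""
    data_loss = F.cross_entropy(model(x), y)
    return data_loss + lam * gradient_penalty(model, x, p)
```

In a standard training loop, `regularized_loss(model, x, y, lam, p).backward()` would replace the plain data-loss backward pass; `p` selects between the total-variation-style (p=1) and Tikhonov-style (p=2) penalty.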

Source: https://arxiv.org/abs/2107.02797
