Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time. (arXiv:2107.10868v1 [cs.LG])

In this paper we prove that Local (S)GD (or FedAvg) can optimize two-layer
neural networks with the Rectified Linear Unit (ReLU) activation function in
polynomial time. Despite the established convergence theory of Local SGD for
optimizing general smooth functions in communication-efficient distributed
optimization, its convergence on non-smooth ReLU networks still eludes full
theoretical understanding. The key property used in many analyses of Local SGD
on smooth functions is gradient Lipschitzness, which ensures that the gradients
at the local models do not drift far from the gradient at the averaged model.
However, this desirable property does not hold in networks with the non-smooth
ReLU activation function. We show that, even though a ReLU network does not
admit the gradient Lipschitzness property, the difference between the gradients
at the local models and at the averaged model does not change too much under
the dynamics of Local SGD. We validate our theoretical results via extensive
experiments. This work is the first to show the convergence of Local SGD on
non-smooth functions, and it sheds light on the optimization theory of
federated training of deep neural networks.
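For readers unfamiliar with the algorithm, here is a minimal NumPy sketch of the Local SGD / FedAvg pattern the abstract refers to, applied to a two-layer ReLU network with squared loss: each worker runs several SGD steps on its own data shard starting from the shared model, and the models are then averaged at each communication round. The width, learning rate, fixed ±1 output layer, and data-sharding scheme below are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def forward(W, a, X):
    # Two-layer ReLU network: f(x) = a^T relu(W x) / sqrt(m)
    m = W.shape[0]
    return relu(X @ W.T) @ a / np.sqrt(m)

def grad_W(W, a, X, y):
    # Gradient of 0.5 * ||f(X) - y||^2 with respect to the first-layer weights W
    m = W.shape[0]
    pre = X @ W.T                       # (n, m) pre-activations
    err = forward(W, a, X) - y          # (n,)  residuals
    act_grad = (pre > 0).astype(float)  # ReLU subgradient
    # dL/dw_r = sum_i err_i * a_r * 1{w_r . x_i > 0} * x_i / sqrt(m)
    return ((err[:, None] * act_grad) * a / np.sqrt(m)).T @ X

def local_sgd(X, y, n_workers=4, width=256, local_steps=10, rounds=50,
              lr=0.1, batch=16):
    n, d = X.shape
    # Random first layer; fixed random +-1 output layer (a common NTK-style setup)
    W_global = rng.normal(size=(width, d))
    a = rng.choice([-1.0, 1.0], size=width)
    shards = np.array_split(rng.permutation(n), n_workers)  # one data shard per worker
    for _ in range(rounds):
        local_models = []
        for idx in shards:
            W = W_global.copy()  # each worker starts from the shared model
            for _ in range(local_steps):
                b = rng.choice(idx, size=min(batch, len(idx)), replace=False)
                W -= lr * grad_W(W, a, X[b], y[b]) / len(b)
            local_models.append(W)
        # Communication round: average the local models (FedAvg)
        W_global = np.mean(local_models, axis=0)
    return W_global, a

# Toy usage on synthetic data
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 0])
W, a = local_sgd(X, y)
print("train MSE:", np.mean((forward(W, a, X) - y) ** 2))
```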

Source: https://arxiv.org/abs/2107.10868
