A Stochastic Newton Algorithm for Distributed Convex Optimization. (arXiv:2110.02954v1 [math.OC])

We propose and analyze a stochastic Newton algorithm for homogeneous
distributed stochastic convex optimization, where each machine can calculate
stochastic gradients of the same population objective, as well as stochastic
Hessian-vector products (products of an independent unbiased estimator of the
Hessian of the population objective with arbitrary vectors), with many such
stochastic computations performed between rounds of communication. We prove
convergence guarantees for quasi-self-concordant objectives (e.g., logistic
regression) and give supporting empirical evidence, showing that our method can
reduce both the number and the frequency of required communication rounds
compared to existing methods without hurting performance.
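As a rough illustration only (a sketch, not the paper's actual algorithm), the JAX snippet below shows how a single local stochastic Newton step can be built from the two primitives the abstract names: a stochastic gradient on one minibatch, and stochastic Hessian-vector products on an independent minibatch, used inside conjugate gradient to approximate the Newton direction without ever forming the Hessian. The logistic-regression objective, minibatch sizes, CG iteration count, and the damping factor 0.5 are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def logistic_loss(w, X, y):
    # Average logistic loss on a minibatch; labels y are in {-1, +1}.
    # Logistic regression is the quasi-self-concordant example in the abstract.
    return jnp.mean(jax.nn.softplus(-y * (X @ w)))

def hvp(w, X, y, v):
    # Hessian-vector product via forward-over-reverse differentiation;
    # the d x d Hessian is never formed explicitly.
    return jax.jvp(jax.grad(lambda u: logistic_loss(u, X, y)), (w,), (v,))[1]

def newton_direction(w, g, X_h, y_h, num_cg_steps=10):
    # Approximately solve H d = g by conjugate gradient, querying only
    # stochastic Hessian-vector products on an independent minibatch.
    d = jnp.zeros_like(w)
    r = g - hvp(w, X_h, y_h, d)
    p = r
    rs = r @ r
    for _ in range(num_cg_steps):
        Hp = hvp(w, X_h, y_h, p)
        alpha = rs / (p @ Hp)
        d = d + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

# One local step on synthetic data: stochastic gradient on one minibatch,
# stochastic HVPs on an independent minibatch, then a damped Newton update.
k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)
d_dim = 5
w = jnp.zeros(d_dim)
X_g = jax.random.normal(k1, (32, d_dim))          # gradient minibatch
y_g = jnp.sign(jax.random.normal(k2, (32,)))
X_h = jax.random.normal(k3, (32, d_dim))          # independent HVP minibatch
y_h = jnp.sign(jax.random.normal(k4, (32,)))

g = jax.grad(logistic_loss)(w, X_g, y_g)
w = w - 0.5 * newton_direction(w, g, X_h, y_h)    # hypothetical damping of 0.5
```

In a distributed setting, many such local steps would run on each machine between communication rounds, with the iterates averaged at each round; the exact schedule and averaging rule are what the paper's analysis pins down.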

Source: https://arxiv.org/abs/2110.02954

