Training Saturation in Layerwise Quantum Approximate Optimisation. (arXiv:2106.13814v1 [quant-ph])

Quantum Approximate Optimisation (QAOA) is the most studied gate-based
variational quantum algorithm today. We train QAOA one layer at a time to
maximize overlap with an $n$-qubit target state. In doing so, we discovered
that such training always saturates, a phenomenon we call \textit{training
saturation}, at some depth $p^*$: beyond this depth, the overlap cannot be
improved by adding subsequent layers. We formulate necessary conditions for
saturation. Numerically, we find that layerwise QAOA reaches its maximum
overlap at depth $p^* = n$. The addition of coherent dephasing errors during
training removes saturation, restoring robustness to layerwise training. This
study sheds new light on the performance limitations and prospects of QAOA.
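
The layerwise procedure described in the abstract is easy to sketch
numerically. Below is a minimal, hypothetical Python sketch (not the paper's
code): it uses a nearest-neighbour ZZ ring as an illustrative cost
Hamiltonian and its ground-state subspace as the target state, then trains
one layer at a time, optimising only the newest angles $(\gamma_p, \beta_p)$
with all earlier angles frozen, so any overlap plateau can be observed
directly.

```python
import numpy as np
from scipy.optimize import minimize

n = 3           # number of qubits (small, for dense simulation)
dim = 2 ** n

# Diagonal of Z acting on qubit i, built as an elementwise Kronecker product.
z = np.array([1.0, -1.0])
def z_on(i):
    out = np.ones(1)
    for j in range(n):
        out = np.kron(out, z if j == i else np.ones(2))
    return out

# Illustrative cost Hamiltonian (assumption, not the paper's exact setup):
# nearest-neighbour ZZ couplings on a ring; diagonal in the computational basis.
C = sum(z_on(i) * z_on((i + 1) % n) for i in range(n))

# Transverse-field mixer layer exp(-i * beta * sum_i X_i) = tensor product
# of single-qubit rotations cos(beta) I - i sin(beta) X.
def rx_layer(beta):
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    U = np.eye(1)
    for _ in range(n):
        U = np.kron(U, rx)
    return U

plus = np.ones(dim) / np.sqrt(dim)          # |+>^n initial state

# Hypothetical target: uniform superposition over minimisers of C.
target = (C == C.min()).astype(complex)
target /= np.linalg.norm(target)

def state(params):
    psi = plus.astype(complex)
    for gamma, beta in params.reshape(-1, 2):
        psi = np.exp(-1j * gamma * C) * psi   # diagonal cost phase layer
        psi = rx_layer(beta) @ psi            # mixer layer
    return psi

def overlap(params):
    return abs(np.vdot(target, state(params))) ** 2

# Layerwise training: append one layer at a time; earlier angles stay frozen.
frozen = np.empty(0)
for p in range(1, n + 2):
    res = minimize(lambda x: -overlap(np.concatenate([frozen, x])),
                   x0=np.array([0.1, 0.1]), method="Nelder-Mead")
    frozen = np.concatenate([frozen, res.x])
    print(f"depth p={p}: overlap = {overlap(frozen):.6f}")
```

In this toy setting the printed overlaps typically rise and then plateau,
mirroring the saturation behaviour the paper reports around $p^* = n$; the
Hamiltonian, target state, and optimiser here are illustrative choices only.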

Source: https://arxiv.org/abs/2106.13814
