Integer Factorisation, Fermat & Machine Learning on a Classical Computer. (arXiv:2308.12290v1 [cs.LG])
In this paper we describe a deep learning–based probabilistic algorithm for
integer factorisation. We use Lawrence’s extension of Fermat’s factorisation
algorithm to reduce the integer factorisation problem to a binary
classification problem. To address the classification problem, we exploit the
ease with which large pseudo-random primes can be generated to synthesise a
training corpus of whatever size is needed. We introduce the algorithm,
summarise some experiments, analyse where those experiments fall short, and
close with a call to others to reproduce and verify the results, and to
investigate whether this approach can be improved to the point where it
becomes a practical, scalable factorisation algorithm.
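
For readers unfamiliar with the classical building block, here is a minimal
sketch of Fermat's factorisation method, on which Lawrence's extension builds.
(Roughly, the extension runs the same search on a multiple kN so that factors
far apart can still be found; the abstract does not spell out those details,
so the sketch below covers only the base method, with names of our own choosing.)

    import math

    def fermat_factor(n):
        """Fermat's method: find x, y with x*x - n == y*y, which gives the
        factorisation n = (x - y) * (x + y). It is fast when n is odd and
        its two factors are close together."""
        assert n > 1 and n % 2 == 1, "Fermat's method assumes an odd number"
        x = math.isqrt(n)
        if x * x < n:          # start the search at ceil(sqrt(n))
            x += 1
        while True:
            y2 = x * x - n
            y = math.isqrt(y2)
            if y * y == y2:    # x^2 - n is a perfect square
                return x - y, x + y
            x += 1

    print(fermat_factor(5959))  # -> (59, 101)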
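
And a hedged illustration of the synthetic data generation the abstract
alludes to: because large pseudo-random primes are cheap to produce, labelled
semiprimes can be minted on demand. Which features and binary labels the paper
actually derives from each example is not stated in the abstract, so the
function below is a hypothetical corpus generator, nothing more.

    from sympy import randprime

    def make_example(bits):
        """Mint one labelled training example: a semiprime n = p * q built
        from two pseudo-random primes of roughly `bits` bits each. The
        ground-truth factors come for free at generation time, so the
        corpus can be made as large as needed."""
        lo, hi = 1 << (bits - 1), 1 << bits
        p, q = randprime(lo, hi), randprime(lo, hi)
        return p * q, (min(p, q), max(p, q))

    corpus = [make_example(32) for _ in range(10_000)]  # 10k labelled semiprimes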
Source: https://arxiv.org/abs/2308.12290