A Cautionary Tale of Decorrelating Theory Uncertainties (arXiv:2109.08159v1 [hep-ph])

A variety of techniques have been proposed to train machine learning
classifiers that are independent of a given feature. While such decorrelation
can be an essential technique for enabling background estimation, it may also
be useful for reducing uncertainties. We carefully examine theory
uncertainties, which typically do not have a statistical origin. We provide
explicit examples of two-point (fragmentation modeling) and continuous
(higher-order corrections) uncertainties where decorrelation significantly
reduces the apparent uncertainty while the actual uncertainty remains much
larger. These results suggest that caution is warranted when using
decorrelation for these types of uncertainties, as long as no complete
decomposition into statistically meaningful components is available.
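
To make concrete what "decorrelating a classifier from a feature" means here, below is a minimal PyTorch sketch of one common strategy: a distance-correlation (DisCo-style) penalty that discourages the classifier output from depending on a protected feature. Everything in it is an illustrative assumption, not the paper's setup: the toy data, the network, the penalty weight `lam`, and the choice of protected feature `m` are all made up for the example.

```python
import torch
import torch.nn as nn

def distance_corr(u, v, eps=1e-12):
    """Sample (squared) distance correlation between two 1-D tensors.

    Differentiable, and approaches zero when u and v are statistically
    independent, so it can serve as a decorrelation penalty.
    """
    a = torch.abs(u.unsqueeze(0) - u.unsqueeze(1))
    b = torch.abs(v.unsqueeze(0) - v.unsqueeze(1))
    # Double-center the pairwise-distance matrices.
    A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
    B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()
    dcov2 = (A * B).mean()
    return dcov2 / torch.sqrt((A * A).mean() * (B * B).mean() + eps)

# --- Toy data (assumed for illustration, not the paper's samples) ---
torch.manual_seed(0)
n = 2048
m = torch.rand(n, 1)                        # protected feature (e.g. a jet observable)
y = (torch.rand(n, 1) < 0.5).float()        # signal/background labels
x = y + 0.5 * m + 0.3 * torch.randn(n, 1)   # input feature, correlated with m

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 10.0                                  # decorrelation strength (assumed value)

for step in range(500):
    opt.zero_grad()
    logits = model(x)
    score = torch.sigmoid(logits)
    # Penalize dependence of the background scores on the protected feature m.
    bkg = (y == 0).squeeze()
    loss = bce(logits, y) + lam * distance_corr(score[bkg].squeeze(),
                                                m[bkg].squeeze())
    loss.backward()
    opt.step()
```

The paper's warning applies to exactly this kind of construction: if `m` were instead a label for a theory variation (say, which fragmentation model generated the event), the penalty can drive the *apparent* model-to-model spread of the classifier output to zero without actually removing the underlying theory uncertainty.
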

Source: https://arxiv.org/abs/2109.08159
