Enhancing Trustworthiness in ML-Based Network Intrusion Detection with Uncertainty Quantification. (arXiv:2310.10655v1 [cs.CR])

The evolution of the Internet and its related communication technologies has
consistently increased the risk of cyber-attacks. In this context, a crucial
role is played by Intrusion Detection Systems (IDSs), which are security
devices designed to identify and mitigate attacks on modern networks. In the
last decade, data-driven approaches based on Machine Learning (ML) have gained
more and more popularity for executing the classification tasks required by
IDSs. However, typical ML models adopted for this purpose do not properly take
into account the uncertainty associated with their own predictions. This poses
significant challenges, as they tend to produce misleadingly high
classification scores for both misclassified inputs and inputs belonging to
unknown classes (e.g. novel attacks), limiting the trustworthiness of existing
ML-based solutions. In this paper we argue that ML-based IDSs should always
provide accurate uncertainty quantification to avoid overconfident predictions.
In fact, an uncertainty-aware classification would be beneficial to enhance
closed-set classification performance, would make it possible to efficiently
carry out Active Learning, and would help recognize inputs of unknown classes
as truly unknowns (i.e., not belonging to any known class), unlocking open-set
classification capabilities and Out-of-Distribution (OoD) detection. To verify
this claim, we compare various ML-based methods for uncertainty quantification and for
OoD detection, either specifically designed for or tailored to the domain of
network intrusion detection, showing how a proper estimation of the model
uncertainty can be exploited to significantly enhance the trustworthiness of
ML-based IDSs. Our results also confirm that conventional ML-based approaches
to network intrusion detection (e.g. based on traditional feed-forward Neural
Networks) may not be appropriate and should be adopted with caution.
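The overconfidence problem described above can be illustrated with a minimal NumPy sketch. The logits and the noise model below are hypothetical (not taken from the paper): a single deterministic softmax classifier assigns near-certain probability to one class even for an out-of-distribution input, whereas averaging several stochastic forward passes (in the spirit of Monte Carlo Dropout, one common uncertainty-quantification technique) yields a flatter predictive distribution whose entropy exposes the uncertainty.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(p):
    # Shannon entropy of a predictive distribution (in nats);
    # higher entropy = more uncertain prediction
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

rng = np.random.default_rng(0)

# Hypothetical logits from a single deterministic classifier for an
# out-of-distribution flow: it confidently picks one class anyway.
single_logits = np.array([8.0, 0.5, 0.3])
p_single = softmax(single_logits)

# Monte Carlo sampling (e.g. MC Dropout): T stochastic forward passes,
# simulated here as Gaussian perturbations of the logits. For an OoD
# input the passes disagree, so the averaged distribution is flatter.
T = 50
mc_logits = single_logits + rng.normal(scale=4.0, size=(T, 3))
p_mc = softmax(mc_logits).mean(axis=0)

print("single-pass entropy:", predictive_entropy(p_single))  # near zero
print("MC-averaged entropy:", predictive_entropy(p_mc))      # noticeably higher
```

Thresholding such an entropy (or a related score) is one simple way an IDS could flag inputs as "unknown" rather than forcing them into a known attack class.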

Source: https://arxiv.org/abs/2310.10655
