The Role of Explainability in Assuring Safety of Machine Learning in Healthcare (arXiv:2109.00520v1 [cs.LG])

Established approaches to assuring safety-critical systems and software are
difficult to apply to systems employing machine learning (ML). In many cases,
ML is used on ill-defined problems, e.g. optimising sepsis treatment, where
there is no clear, pre-defined specification against which to assess validity.
This problem is exacerbated by the “opaque” nature of ML where the learnt model
is not amenable to human scrutiny. Explainable AI methods have been proposed to
tackle this issue by producing human-interpretable representations of ML models
which can help users to gain confidence and build trust in the ML system.
However, little work has explicitly investigated the role of
explainability in safety assurance in the context of ML development. This
paper identifies ways in which explainable AI methods can contribute to safety
assurance of ML-based systems. It then uses a concrete ML-based clinical
decision support system, concerning weaning of patients from mechanical
ventilation, to demonstrate how explainable AI methods can be employed to
produce evidence to support safety assurance. The results are also represented
in a safety argument to show where, and in what way, explainable AI methods can
contribute to a safety case. Overall, we conclude that explainable AI methods
have a valuable role in safety assurance of ML-based systems in healthcare but
that they are not sufficient in themselves to assure safety.
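
As an illustration of the kind of evidence such explainability methods can produce, the sketch below is not taken from the paper: it assumes a hypothetical gradient-boosted classifier predicting readiness to wean from mechanical ventilation, trained on synthetic data, and uses SHAP feature attributions to surface which inputs drove each recommendation. The feature names, model choice, and data are all illustrative assumptions, not the system described in the paper.

    # Illustrative sketch only: the paper does not specify which explainability
    # method or model is used. We assume a hypothetical classifier for
    # ventilation-weaning readiness and inspect per-case SHAP attributions as
    # one possible form of assurance evidence.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical clinical features (names are illustrative, not from the paper).
    feature_names = ["resp_rate", "tidal_volume", "fio2", "peep", "spo2"]

    # Synthetic stand-in data; a real system would use recorded patient data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, len(feature_names)))
    y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)

    # Per-prediction attributions: which features drove each weaning recommendation?
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:5])

    for i, row in enumerate(shap_values):
        top = sorted(zip(feature_names, row), key=lambda t: abs(t[1]), reverse=True)
        print(f"case {i}: dominant factors -> {top[:2]}")

Evidence of this kind (e.g. checking that attributions are dominated by clinically plausible features) could feed into a safety argument, but, as the abstract notes, it would not by itself be sufficient to assure safety.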

Source: https://arxiv.org/abs/2109.00520
