Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods. (arXiv:2111.02398v1 [eess.IV])

Artificial Intelligence has emerged as a useful aid in numerous clinical
applications for diagnosis and treatment decisions. Deep neural networks have
shown performance comparable to or better than that of clinicians in many
tasks, owing to the rapid increase in available data and computational power.
To conform to the principles of trustworthy AI, an AI system must be
transparent, robust, and fair, and must ensure accountability. Current deep
neural solutions are referred to as black boxes because the specifics of their
decision-making process are not well understood. There is therefore a need to
ensure the interpretability of deep neural networks before they can be
incorporated into routine clinical workflows. In this narrative review, we used
systematic keyword searches and domain expertise to identify nine types of
interpretability methods that have been applied to understand deep learning
models for medical image analysis, grouped by the kind of explanation they
generate and their technical similarities. Furthermore, we report the progress
made towards evaluating the explanations produced by the various
interpretability methods. Finally, we discuss limitations, provide guidelines
for using interpretability methods, and outline future directions for the
interpretability of deep neural networks in medical image analysis.
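As context for the kind of methods such a review covers, the sketch below shows a vanilla gradient saliency map, one widely used family of post-hoc explanations for image classifiers. It is not taken from the paper; the PyTorch/torchvision usage, the placeholder ResNet-18, and the random input tensor are illustrative assumptions standing in for a trained medical-imaging model and a preprocessed scan.

```python
# Minimal gradient-saliency sketch (illustrative only, not the paper's method).
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder; a trained medical-imaging model would be used
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed scan

logits = model(image)
target_class = logits.argmax(dim=1).item()

# Backpropagate the score of the predicted class to the input pixels.
logits[0, target_class].backward()

# The saliency map is the maximum absolute gradient across colour channels:
# pixels with large values most influenced the prediction.
saliency = image.grad.abs().max(dim=1)[0].squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```

Saliency maps like this are only one of the method types the review categorizes; others generate textual, example-based, or concept-level explanations.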

Source: https://arxiv.org/abs/2111.02398
