Cognitive Explainers of Graph Neural Networks Based on Medical Concepts. (arXiv:2201.07798v1 [cs.LG])

Although deep neural networks (DNNs) have achieved state-of-the-art
performance in many fields, unexpected errors are often found in these
networks, which is dangerous for tasks requiring high reliability and
security. The non-transparency and lack of explainability of CNNs still limit
their application in many fields, such as medical care and finance.
Although current studies have been committed to visualizing the decision
process of DNNs, most of these methods focus on low-level features and do not
take the prior knowledge of medicine into account. In this work, we propose an
interpretable framework based on key medical concepts, enabling the CNN to
explain itself from the perspective of doctors’ cognition. We propose an
interpretable automatic recognition framework for the ultrasonic standard
plane, which uses a concept-based graph convolutional neural network to
construct the relationships between key medical concepts and obtain an
interpretation consistent with a doctor’s cognition.
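
To make the concept-based graph idea concrete, here is a minimal sketch (not the authors' code) of a graph convolution over detected medical-concept nodes, followed by a graph-level classifier for the plane label. The concept features, adjacency construction, layer sizes, and class names are all illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: a concept-level GCN that aggregates features of
# detected medical concepts (e.g. anatomical structures in an ultrasound
# plane) and classifies the standard plane. All dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptGCNLayer(nn.Module):
    """One graph-convolution layer over concept nodes: H' = ReLU(A_hat H W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalisation with self-loops: A_hat = D^-1/2 (A + I) D^-1/2
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return F.relu(self.linear(a_norm @ h))


class ConceptPlaneClassifier(nn.Module):
    """Two GCN layers over concept nodes, then mean-pool to plane logits."""

    def __init__(self, concept_dim: int, hidden_dim: int, num_planes: int):
        super().__init__()
        self.gcn1 = ConceptGCNLayer(concept_dim, hidden_dim)
        self.gcn2 = ConceptGCNLayer(hidden_dim, hidden_dim)
        self.cls = nn.Linear(hidden_dim, num_planes)

    def forward(self, concept_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.gcn1(concept_feats, adj)
        h = self.gcn2(h, adj)
        return self.cls(h.mean(dim=0))  # graph-level logits over plane classes


# Usage with made-up numbers: 5 detected concepts with 64-d features each,
# and an adjacency matrix encoding which concepts are related.
feats = torch.randn(5, 64)
adj = torch.tensor([[0., 1, 1, 0, 0],
                    [1, 0, 1, 0, 0],
                    [1, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 0]])
logits = ConceptPlaneClassifier(64, 32, num_planes=6)(feats, adj)
```

Because the classifier operates on named concepts rather than raw pixels, its intermediate node representations can in principle be attributed back to individual medical concepts, which is the kind of doctor-aligned interpretation the abstract describes.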

Source: https://arxiv.org/abs/2201.07798
