Analysis of Explainable Artificial Intelligence Methods on Medical Image Classification. (arXiv:2212.10565v1 [eess.IV])

The use of deep learning in computer vision tasks such as image
classification has led to a rapid increase in the performance of such systems.
This substantial gain in capability has driven the adoption of artificial
intelligence in many critical tasks. In the medical domain, medical image
classification systems are being adopted due to their high accuracy and near
parity with human physicians on many tasks. However, these artificial
intelligence systems are extremely complex and are often regarded as black
boxes, because it is difficult to interpret exactly what led to a model's
predictions. When such systems are used to assist high-stakes decision-making,
it is essential to be able to understand, verify, and justify the conclusions
the model reaches. Techniques for gaining insight into these black-box models
belong to the field of explainable artificial intelligence (XAI). In this
paper, we evaluated three different XAI methods across two convolutional
neural network models trained to classify lung cancer from histopathological
images. We visualized the outputs and analyzed the performance of these
methods in order to better understand how to apply explainable artificial
intelligence in the medical domain.
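
The abstract does not name the three XAI methods or the two CNN architectures
evaluated. As an illustrative sketch only, the snippet below shows how one
widely used attribution method, Grad-CAM, can be applied to a CNN image
classifier in PyTorch. The ResNet-18 backbone, the choice of layer4 as the
target layer, and the random input tensor are placeholder assumptions for the
sake of a runnable example, not details taken from the paper.

    # Minimal Grad-CAM sketch for a CNN classifier (assumptions: ResNet-18,
    # layer4 as target layer, random input standing in for a preprocessed
    # histopathology patch).
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=None)  # placeholder; a trained model in practice
    model.eval()

    activations, gradients = {}, {}

    def save_activation(module, inputs, output):
        activations["feat"] = output.detach()

    def save_gradient(module, grad_input, grad_output):
        gradients["feat"] = grad_output[0].detach()

    # Hook the last convolutional block to capture its feature maps and the
    # gradients flowing back into them.
    model.layer4.register_forward_hook(save_activation)
    model.layer4.register_full_backward_hook(save_gradient)

    x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
    logits = model(x)
    class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()  # backpropagate the predicted class score

    # Grad-CAM: weight each feature map by its spatially averaged gradient,
    # sum the weighted maps, and keep only positive evidence for the class.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((weights * activations["feat"]).sum(dim=1))     # (1, H, W)
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
    # `cam` can now be overlaid on the input image as a saliency heat map.

The resulting heat map highlights the image regions that contributed most to
the predicted class, which is the kind of visualization such comparative XAI
studies examine across methods and models.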

Source: https://arxiv.org/abs/2212.10565
