Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling. (arXiv:2302.03033v1 [eess.IV])
Explainable AI is concerned with developing mechanisms that enable
interaction between decision systems and humans by making the decisions
of the former understandable. This is particularly important in
sensitive contexts such as the medical domain. We present a use-case
study on skin lesion diagnosis, illustrating how a practitioner can be
provided with explanations of the decisions of a state-of-the-art deep
neural network classifier trained to characterize skin lesions from
examples. Our framework couples the trained classifier with an
explanation module that operates on it. The module offers the
practitioner exemplars and counterexemplars for the classification
diagnosis, thus allowing the physician to interact with the automatic
diagnosis system. The exemplars are generated via an adversarial
autoencoder. We
illustrate the behavior of the system on representative examples.
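
As a rough sketch of how such exemplars and counterexemplars could be
produced, the hypothetical Python below (not the authors' code; the
`encoder`, `decoder`, and `classifier` callables and all parameter
values are assumptions) perturbs the query image's latent code under a
trained adversarial autoencoder, decodes the perturbed codes, and
splits the resulting images by whether the classifier keeps or flips
the original label:

    import numpy as np

    def exemplars_and_counterexemplars(image, encoder, decoder, classifier,
                                       num_samples=100, scale=0.3, seed=0):
        """Split latent-space neighbors of `image` into exemplars
        (classified like the query) and counterexemplars (classified
        differently). `encoder`/`decoder` are assumed to be the two
        halves of a trained adversarial autoencoder; `classifier` is
        the black-box model being explained. All three are assumed to
        be callables operating on batches."""
        rng = np.random.default_rng(seed)
        target = classifier(image[None]).argmax(axis=1)[0]  # label to explain

        z = encoder(image[None])[0]                 # latent code of the query
        noise = rng.normal(0.0, scale, size=(num_samples, z.shape[0]))
        variants = decoder(z[None] + noise)         # decode perturbed codes

        labels = classifier(variants).argmax(axis=1)
        return (variants[labels == target],         # exemplars
                variants[labels != target])         # counterexemplars

Sampling in the autoencoder's latent space, rather than in pixel space,
is what keeps the generated variants looking like plausible skin lesion
images; this is the usual motivation for using an adversarial
autoencoder as the generator.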
Source: https://arxiv.org/abs/2302.03033