A Quantitative Comparison between Shannon and Tsallis-Havrda-Charvat Entropies Applied to Cancer Outcome Prediction. (arXiv:2203.11943v1 [eess.IV])

In this paper, we propose a quantitative comparison of loss functions based on the parameterized Tsallis-Havrda-Charvat entropy and the classical Shannon entropy for training a deep network on the small datasets usually encountered in medical applications. Shannon cross-entropy is widely used as a loss function for most neural networks applied to image segmentation, classification, and detection; Shannon entropy is the particular case of Tsallis-Havrda-Charvat entropy obtained for $\alpha = 1$. In this work, we compare these two entropies through a medical application: predicting recurrence in patients with head and neck or lung cancer after treatment. Based on both CT images and patient information, a multitask deep neural network is proposed that performs a recurrence prediction task, using cross-entropy as the loss function, together with an image reconstruction task. Tsallis-Havrda-Charvat cross-entropy is parameterized by $\alpha$, and the influence of this parameter on the final prediction results is studied. The experiments are conducted on two datasets comprising 580 patients in total, of whom 434 suffered from head and neck cancer and 146 from lung cancer. The results show that Tsallis-Havrda-Charvat entropy can achieve better prediction accuracy than Shannon entropy for some values of $\alpha$.
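For reference, the Tsallis-Havrda-Charvat entropy of a discrete distribution $p = (p_1, \dots, p_n)$ is

$$ H_\alpha(p) = \frac{1}{\alpha - 1}\left(1 - \sum_{i=1}^{n} p_i^{\alpha}\right), \qquad \alpha > 0,\ \alpha \neq 1, $$

and the limit $\alpha \to 1$ recovers the Shannon entropy $H(p) = -\sum_i p_i \ln p_i$, since $p_i^{\alpha - 1} = e^{(\alpha - 1)\ln p_i} \approx 1 + (\alpha - 1)\ln p_i$ as $\alpha \to 1$.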
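As an illustration, here is a minimal sketch of such a parameterized loss in PyTorch, assuming one-hot targets, softmax probabilities, and the cross-entropy form $\frac{1}{\alpha-1}\sum_i y_i \left(1 - p_i^{\alpha-1}\right)$; the exact formulation used in the paper may differ, and the function name `thc_cross_entropy` is ours.

```python
import torch

def thc_cross_entropy(probs: torch.Tensor, targets: torch.Tensor,
                      alpha: float = 1.5, eps: float = 1e-8) -> torch.Tensor:
    """Tsallis-Havrda-Charvat cross-entropy (sketch, not the paper's code).

    probs:   (batch, classes) softmax probabilities
    targets: (batch, classes) one-hot labels
    alpha:   entropic index; alpha -> 1 recovers Shannon cross-entropy
    """
    probs = probs.clamp_min(eps)  # avoid log(0) / 0^negative
    if abs(alpha - 1.0) < 1e-6:
        # Shannon limit: -sum_i y_i * log(p_i)
        loss = -(targets * probs.log()).sum(dim=1)
    else:
        # (1 / (alpha - 1)) * sum_i y_i * (1 - p_i^(alpha - 1))
        loss = (targets * (1.0 - probs.pow(alpha - 1.0))).sum(dim=1) / (alpha - 1.0)
    return loss.mean()

# Toy usage: a batch of 4 samples over 3 classes
probs = torch.softmax(torch.randn(4, 3), dim=1)
targets = torch.nn.functional.one_hot(torch.tensor([0, 2, 1, 0]), 3).float()
print(thc_cross_entropy(probs, targets, alpha=1.5))
```

With this form, $\alpha$ becomes a tunable hyperparameter of the loss, which is what the paper varies to study its influence on prediction accuracy.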

Source: https://arxiv.org/abs/2203.11943
