Interpretability-Aware Model Training to Improve Robustness against Out-of-Distribution Magnetic Resonance Images in Alzheimer’s Disease Classification. (arXiv:2111.08701v1 [eess.IV])

Owing to its excellent soft-tissue contrast and high resolution, structural
magnetic resonance imaging (MRI) is widely applied in neurology, making it a
valuable data source for image-based machine learning (ML) and deep learning
applications. The physical nature of MRI acquisition and reconstruction,
however, causes variations in image intensity, resolution, and signal-to-noise
ratio. Since ML models are sensitive to such variations, their performance on
out-of-distribution data, which is unavoidable once a healthcare ML application
is deployed, typically drops below acceptable levels. We propose
an interpretability-aware adversarial training regime to improve robustness
against out-of-distribution samples originating from different MRI hardware.
The approach is applied to 1.5T and 3T MRIs obtained from the Alzheimer’s
Disease Neuroimaging Initiative database. We present preliminary results
showing promising performance on out-of-distribution samples.
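The abstract does not spell out the training procedure, so as a rough illustration of the general idea behind adversarial training for robustness, here is a minimal NumPy sketch: a toy logistic-regression classifier trained on a mix of clean inputs and FGSM-style gradient-sign perturbations. All names, the toy data, and the choice of FGSM are illustrative assumptions, not the paper's actual interpretability-aware method.

```python
import numpy as np

# Illustrative sketch only: plain FGSM-style adversarial training on a toy
# logistic-regression model, standing in for the (unspecified) regime in the
# paper. The synthetic features below are NOT MRI data.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Perturb inputs in the sign direction of the loss gradient w.r.t. x."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w  # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, steps=200):
    """Gradient descent on a 50/50 mix of clean and adversarial examples."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(steps):
        x_adv = fgsm_perturb(x, y, w, b, eps)
        x_mix = np.vstack([x, x_adv])
        y_mix = np.concatenate([y, y])
        p = sigmoid(x_mix @ w + b)
        w -= lr * (x_mix.T @ (p - y_mix)) / len(y_mix)
        b -= lr * float(np.mean(p - y_mix))
    return w, b

# Toy two-class data: class-dependent mean shift plus Gaussian noise.
y = rng.integers(0, 2, 200).astype(float)
x = rng.normal(size=(200, 5)) + (2 * y - 1)[:, None]

w, b = adversarial_train(x, y)
acc = float(np.mean((sigmoid(x @ w + b) > 0.5) == y))
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The key ingredient is that each update step sees freshly generated adversarial examples, so the model is continually trained against its own current worst-case perturbations; the paper additionally incorporates interpretability signals into this loop.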
