Exploring the Lottery Ticket Hypothesis with Explainability Methods: Insights into Sparse Network Performance. (arXiv:2307.13698v1 [cs.CV])
Discovering a high-performing sparse network within a massive neural network
is advantageous for deploying it on devices with limited storage, such as
mobile phones. Additionally, model explainability is essential for fostering
trust in AI. The Lottery Ticket Hypothesis (LTH) holds that a deep network
contains a subnetwork whose performance is comparable or superior to that of
the original model. However, little study has been conducted on the success
or failure of LTH in terms of explainability. In this work, we examine why
the performance of the pruned networks gradually increases or decreases as
weights are removed. Using Grad-CAM and Post-hoc Concept Bottleneck Models
(PCBMs), we investigate the explainability of pruned networks in terms of
pixels and high-level concepts, respectively (illustrative sketches of the
pruning loop and both methods follow the abstract). We perform
extensive experiments across vision and medical imaging datasets. As more
weights are pruned, the performance of the network degrades. The concepts
and pixels identified by the pruned networks are inconsistent with those of
the original network, a possible reason for the drop in performance.
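
To make the pruning procedure concrete, here is a minimal sketch of
iterative magnitude pruning with weight rewinding in the spirit of LTH. The
two-layer model, pruning fraction, and round count are illustrative
assumptions, not the paper's actual setup, and the training step is elided.

```python
import copy
import torch
import torch.nn as nn

def magnitude_masks(model, masks, prune_frac=0.2):
    """Prune the smallest-magnitude surviving weights in each weight tensor."""
    new_masks = {}
    for name, param in model.named_parameters():
        if "weight" not in name:
            continue
        mask = masks.get(name, torch.ones_like(param))
        alive = param[mask.bool()].abs()          # only still-surviving weights
        k = max(1, int(prune_frac * alive.numel()))
        threshold = torch.kthvalue(alive, k).values
        new_masks[name] = mask * (param.abs() > threshold).float()
    return new_masks

def apply_masks(model, masks):
    """Zero out pruned weights in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding
masks = {}
for round_idx in range(5):
    # ... train `model` to convergence here (omitted); a full loop would also
    # zero the gradients of pruned weights after each backward pass ...
    masks = magnitude_masks(model, masks, prune_frac=0.2)
    model.load_state_dict(init_state)  # rewind surviving weights to their init
    apply_masks(model, masks)          # the masked subnetwork is the "ticket"
```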
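
Grad-CAM attributes a prediction to input regions via gradients of the class
score with respect to a convolutional feature map. Below is a self-contained
sketch (manual hooks rather than a library) of the kind of pixel-level
comparison between a dense network and its pruned counterpart described
above; the ResNet-18 backbone, random input, and cosine-similarity metric
are assumptions for illustration, not the paper's exact protocol.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def grad_cam(model, x, target_layer, class_idx=None):
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1)
    model.zero_grad()
    logits[torch.arange(x.size(0)), class_idx].sum().backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted activations
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

model_dense = resnet18(num_classes=10).eval()
model_pruned = resnet18(num_classes=10).eval()  # stand-in for a masked ticket
x = torch.randn(1, 3, 224, 224)

cam_dense = grad_cam(model_dense, x, model_dense.layer4)
cam_pruned = grad_cam(model_pruned, x, model_pruned.layer4)
# One way to quantify (in)consistency: cosine similarity of the heatmaps.
sim = F.cosine_similarity(cam_dense.flatten(1), cam_pruned.flatten(1))
print(f"heatmap similarity: {sim.item():.3f}")
```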
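
PCBMs explain a frozen backbone by projecting its embeddings onto learned
concept directions and fitting an interpretable linear classifier on the
resulting concept scores. The sketch below uses random arrays as stand-ins
for real embeddings and concept-labeled examples; the dimensions and the
sklearn probes are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d, n_concepts, n_samples, n_classes = 512, 8, 1000, 10

# 1) Fit one linear probe per concept on (positive, negative) embedding
#    sets; its normalized weight vector is the concept activation vector.
cavs = []
for _ in range(n_concepts):
    pos = rng.normal(0.5, 1.0, size=(100, d))   # embeddings with the concept
    neg = rng.normal(-0.5, 1.0, size=(100, d))  # embeddings without it
    probe = LinearSVC(C=0.1).fit(np.vstack([pos, neg]), [1] * 100 + [0] * 100)
    w = probe.coef_[0]
    cavs.append(w / np.linalg.norm(w))
cavs = np.stack(cavs)                            # (n_concepts, d)

# 2) Concept scores are projections of task embeddings onto the CAVs; an
#    interpretable linear classifier is then fit on those scores.
emb = rng.normal(size=(n_samples, d))            # stand-in backbone embeddings
labels = rng.integers(0, n_classes, size=n_samples)
scores = emb @ cavs.T                            # (n_samples, n_concepts)
clf = LogisticRegression(max_iter=1000).fit(scores, labels)
# Per-class concept weights indicate which concepts drive each prediction.
print(clf.coef_.shape)                           # (n_classes, n_concepts)
```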
Source: https://arxiv.org/abs/2307.13698