Towards Better Model Understanding with Path-Sufficient Explanations. (arXiv:2109.06181v1 [cs.LG])

Feature-based local attribution methods are among the most prevalent in the
explainable artificial intelligence (XAI) literature. Going beyond standard
correlation, methods have recently been proposed that highlight what should be
minimally sufficient to justify the classification of an input (viz. pertinent
positives). While minimal sufficiency is an attractive property, the resulting
explanations are often too sparse for a human to understand and evaluate the
local behavior of the model, making it difficult to judge its overall quality.
To overcome these limitations, we propose a novel method called Path-Sufficient
Explanations Method (PSEM) that outputs a sequence of sufficient explanations
for a given input of strictly decreasing size (or value), from the original
input to a minimally sufficient explanation, which can be thought of as tracing
the local boundary of the model in a smooth manner, thus providing better
intuition about the local model behavior for the specific input. We validate
these claims, both qualitatively and quantitatively, with experiments that show
the benefit of PSEM across all three modalities (image, tabular and text). A
user study further demonstrates the strength of the method in communicating the
local behavior, where (many) users are able to correctly determine the
prediction made by the model.
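To make the idea of a "path" of sufficient explanations concrete, the sketch below is a simplified, hypothetical illustration for tabular data, not the paper's PSEM optimization: it greedily masks one feature at a time (replacing it with a baseline value) as long as the model still predicts the original class with sufficient confidence, recording each intermediate masked input. The names path_sufficient_explanations, predict_fn, baseline, and threshold are assumptions for illustration only.

    # Hypothetical sketch: a greedy path of sufficient explanations for one tabular input.
    # This is NOT the paper's PSEM method; predict_fn, baseline, and threshold are assumptions.
    import numpy as np

    def path_sufficient_explanations(x, predict_fn, baseline, threshold=0.5):
        """Return a list of (kept_feature_indices, masked_input) pairs of strictly
        decreasing size, each still classified like the original input with
        predicted-class probability >= threshold."""
        probs = predict_fn(x.reshape(1, -1))[0]
        target = int(np.argmax(probs))              # class whose prediction we keep justifying
        kept = set(range(x.size))                   # start from the full input
        path = [(sorted(kept), x.copy())]

        while len(kept) > 1:
            best = None
            for i in kept:                          # try dropping each remaining feature
                trial = x.copy()
                dropped = list((set(range(x.size)) - kept) | {i})
                trial[dropped] = baseline[dropped]  # mask dropped features with baseline values
                p = predict_fn(trial.reshape(1, -1))[0]
                if int(np.argmax(p)) == target and p[target] >= threshold:
                    if best is None or p[target] > best[1]:
                        best = (i, p[target], trial)
            if best is None:                        # no feature can be removed and stay sufficient
                break
            kept.remove(best[0])
            path.append((sorted(kept), best[2]))    # next, strictly smaller sufficient explanation
        return path

With a scikit-learn-style classifier, predict_fn could be clf.predict_proba and baseline could be, for example, the training-set feature means; the final element of the returned path then plays the role of a (not necessarily minimal) pertinent-positive-style explanation, while the intermediate elements trace how the explanation shrinks.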

Source: https://arxiv.org/abs/2109.06181
