Taking a Step Back with KCal: Multi-Class Kernel-Based Calibration for Deep Neural Networks. (arXiv:2202.07679v1 [stat.ML])

Deep neural network (DNN) classifiers are often overconfident, producing miscalibrated class probabilities. Most existing calibration methods either lack theoretical guarantees of producing calibrated outputs or reduce classification accuracy in the process. This paper proposes a new kernel-based calibration method called KCal. Unlike other calibration procedures, KCal does not operate directly on the logits or softmax outputs of the DNN. Instead, it uses the penultimate-layer latent embedding to train a metric space in a supervised manner. In effect, KCal amounts to a supervised dimensionality reduction of the neural network embedding, and it generates predictions via kernel density estimation on a holdout calibration set. We first analyze KCal theoretically, showing that it enjoys a provable asymptotic calibration guarantee. Then, through extensive experiments, we confirm that KCal consistently outperforms existing calibration methods in terms of both classification accuracy and (confidence- and class-wise) calibration error.
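The prediction step described in the abstract can be pictured as a weighted class-frequency (Nadaraya-Watson) estimate over the calibration set. Below is a minimal sketch, not the authors' implementation: the projection matrix `W` stands in for the supervised metric KCal learns (the training of `W` is not shown), and the function name `kcal_predict`, the choice of a Gaussian kernel, and the `bandwidth` parameter are illustrative assumptions.

```python
import numpy as np

def kcal_predict(z_test, z_cal, y_cal, W, num_classes, bandwidth=1.0):
    """Hypothetical sketch of a KCal-style prediction.

    z_test: (d,) penultimate-layer embedding of the test point.
    z_cal:  (n_cal, d) embeddings of the holdout calibration set.
    y_cal:  (n_cal,) integer class labels of the calibration set.
    W:      (d, d_proj) placeholder for the learned supervised projection.
    """
    y_cal = np.asarray(y_cal)

    # Project embeddings into the learned, lower-dimensional metric space.
    u_test = z_test @ W          # shape: (d_proj,)
    u_cal = z_cal @ W            # shape: (n_cal, d_proj)

    # Gaussian (RBF) kernel weight between the test point and each
    # calibration point; the bandwidth is a tunable hyperparameter.
    sq_dists = np.sum((u_cal - u_test) ** 2, axis=1)
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2))

    # Nadaraya-Watson estimate: kernel-weighted frequency of each class,
    # normalized so the output is a probability vector.
    probs = np.array([weights[y_cal == k].sum() for k in range(num_classes)])
    return probs / probs.sum()
```

In the paper, the projection is trained with supervision and the bandwidth is selected using the calibration data; both are fixed placeholders in this sketch.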

Source: https://arxiv.org/abs/2202.07679
