Adaptive Block Floating-Point for Analog Deep Learning Hardware. (arXiv:2205.06287v1 [cs.LG])

Analog mixed-signal (AMS) devices promise faster, more energy-efficient deep
neural network (DNN) inference than their digital counterparts. However, recent
studies show that DNNs on AMS devices with fixed-point numbers can incur an
accuracy penalty because of precision loss. To mitigate this penalty, we
present a novel AMS-compatible adaptive block floating-point (ABFP) number
representation. We also introduce amplification (or gain) as a method for
increasing the accuracy of the number representation without increasing the bit
precision of the output. We evaluate the effectiveness of ABFP on the DNNs in
the MLPerf datacenter inference benchmark, achieving less than 1% loss in
accuracy compared to FLOAT32. We also propose a novel method of finetuning for
AMS devices, Differential Noise Finetuning (DNF), which samples device noise to
speed up finetuning compared to conventional Quantization-Aware Training.
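The paper's exact ABFP grouping and gain mechanism are defined in the full text; as background, a minimal sketch of plain block floating-point quantization (the idea ABFP builds on) is shown below. The helper name `bfp_quantize` and its parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bfp_quantize(x, block_size=16, mantissa_bits=8):
    """Sketch of block floating-point quantization (assumed, illustrative):
    each block of values shares one exponent, taken from the block's largest
    magnitude, and individual values keep only fixed-width mantissas."""
    out = np.empty_like(x, dtype=np.float64)
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        max_abs = np.max(np.abs(block))
        if max_abs == 0:
            out[start:start + block_size] = 0.0
            continue
        # Shared exponent: smallest power of two >= the block's max magnitude.
        shared_exp = np.ceil(np.log2(max_abs))
        # One quantization step for the whole block, set by the shared
        # exponent and the mantissa width (one bit reserved for sign).
        scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
        # Round each value's mantissa to the shared scale.
        out[start:start + block_size] = np.round(block / scale) * scale
    return out
```

Because all values in a block share one exponent, the quantization error per value is bounded by half the block's scale; values much smaller than the block maximum lose the most precision, which is the penalty ABFP's adaptivity and gain are meant to reduce.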
