Publication | Closed Access
AdaBits: Neural Network Quantization With Adaptive Bit-Widths
Citations: 125
References: 45
Year: 2020
Venue: CVPR 2020
Keywords: Convolutional Neural Network, Engineering, Machine Learning, Computer Architecture, Adaptive Bit-widths, Data Science, Sparse Neural Network, Embedded Machine Learning, Adaptive Configurations, Neural Network Quantization, Computer Engineering, Computer Science, Deep Learning, Neural Architecture Search, Quantization (Signal Processing), Model Compression, Deep Neural Networks, Edge Computing, Speech Processing
Deep neural networks with adaptive configurations have gained increasing attention because such models can be deployed instantly and flexibly on platforms with different resource budgets. In this paper, we investigate a novel way to achieve this goal by enabling adaptive bit-widths for the weights and activations of a model. We first examine the benefits and challenges of training quantized models with adaptive bit-widths, and then experiment with several approaches, including direct adaptation, progressive training, and joint training. We find that joint training produces performance on the adaptive model comparable to that of individually trained models. We also propose a new technique named Switchable Clipping Level (S-CL) to further improve quantized models at the lowest bit-width. Applying our proposed techniques to a range of models, including MobileNet V1/V2 and ResNet50, we demonstrate that the bit-width of weights and activations is a new option for adaptively executable deep neural networks, offering a distinct opportunity for an improved accuracy-efficiency trade-off as well as instant adaptation to platform constraints in real-world applications.
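The core ideas in the abstract can be illustrated with a small sketch: a uniform quantizer whose bit-width is switchable at run time, paired with a separate clipping level per bit-width in the spirit of S-CL. This is a generic uniform quantizer written for illustration, not the paper's exact scheme; the function name `quantize` and the clipping values in `switchable_clip` are assumptions.

```python
import numpy as np

def quantize(x, bits, clip):
    """Uniformly quantize x to `bits` bits within [-clip, clip].

    A generic symmetric uniform quantizer used only for illustration;
    the paper's exact quantization scheme may differ.
    """
    levels = 2 ** bits - 1              # number of quantization steps
    x = np.clip(x, -clip, clip)         # clipping level bounds the range
    scale = (2.0 * clip) / levels       # width of one quantization step
    return np.round((x + clip) / scale) * scale - clip

# Switchable Clipping Level (S-CL) idea: keep one clipping value per
# supported bit-width instead of sharing a single one. The numeric
# values below are illustrative, not taken from the paper.
switchable_clip = {8: 6.0, 6: 4.0, 4: 2.0, 2: 1.0}

w = np.array([-3.2, -0.7, 0.1, 0.9, 2.5])
for bits, clip in switchable_clip.items():
    wq = quantize(w, bits, clip)
    # The same stored weights are re-quantized at each bit-width, so a
    # single adaptive model can execute at any supported precision.
```

In joint training, the losses from all bit-widths would be combined in each step so that one set of weights serves every precision; the per-bit clipping levels let the lowest bit-width use a tighter range than the higher ones.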