Concepedia

Publication | Closed Access

Deep Learning with Limited Numerical Precision

Citations: 771
References: 25
Year: 2015

TLDR

Deep neural networks can be trained using only 16-bit wide fixed-point number representation with little to no degradation in classification accuracy, provided stochastic rounding is used; the rounding scheme plays a crucial role in low-precision training, and an energy-efficient hardware accelerator implementing low-precision fixed-point arithmetic with stochastic rounding is also demonstrated.

Abstract

Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.
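
To illustrate the stochastic rounding scheme the abstract refers to, the sketch below rounds values onto a 16-bit fixed-point grid. This is a minimal NumPy sketch rather than the paper's implementation: the function name stochastic_round and the 8-bit integer / 8-bit fraction split are illustrative assumptions, not taken from the paper.

import numpy as np

def stochastic_round(x, il=8, fl=8, rng=None):
    """Round x onto an <il, fl> fixed-point grid using stochastic rounding.

    eps = 2**-fl is the grid spacing; x is rounded up with probability equal
    to its fractional distance to the next grid point, so the rounding is
    unbiased in expectation (E[round(x)] = x, before saturation).
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 2.0 ** -fl
    floor_x = np.floor(x / eps) * eps              # largest multiple of eps <= x
    p_up = (x - floor_x) / eps                     # probability of rounding up
    rounded = floor_x + eps * (rng.random(np.shape(x)) < p_up)
    # Saturate to the representable range of the <il, fl> format.
    lo, hi = -(2.0 ** (il - 1)), 2.0 ** (il - 1) - eps
    return np.clip(rounded, lo, hi)

# Averaging many stochastic roundings of 0.30 on a 2**-8 grid recovers ~0.30,
# whereas round-to-nearest would always return the same grid point.
samples = stochastic_round(np.full(10_000, 0.30))
print(samples.mean())

Unlike round-to-nearest, the expected value of the stochastically rounded output equals the input (up to saturation), which is the property that lets small gradient updates survive quantization during low-precision training.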
