Publication | Open Access
Fixed-point feedforward deep neural network design using weights +1, 0, and −1
Citations: 259
References: 18
Year: 2014
Venue: Unknown
Topics: Artificial Intelligence, Convolutional Neural Network, Engineering, Machine Learning, Recurrent Neural Network, Speech Recognition, Pattern Recognition, Sparse Neural Network, Ternary Weights, Embedded Machine Learning, Character Recognition, Computer Engineering, Computer Science, Deep Learning, Neural Architecture Search, Model Compression, Direct Quantization, Deep Neural Networks, Speech Processing
Feedforward deep neural networks that employ multiple hidden layers show high performance in many applications, but they demand complex hardware for implementation. The hardware complexity can be lowered considerably by minimizing the word-length of weights and signals, but direct quantization for fixed-point network design does not yield good results. We optimize the fixed-point design by employing backpropagation-based retraining. The designed fixed-point networks with ternary weights (+1, 0, and -1) and 3-bit signals show only negligible performance loss when compared to the floating-point counterparts. The backpropagation for retraining uses quantized weights and fixed-point signals to compute the output, but utilizes high-precision values for adapting the networks. Character recognition and phoneme recognition examples are presented.
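The retraining idea described in the abstract can be illustrated with a short sketch: the forward pass runs with ternary weights and quantized signals, while gradient updates are applied to a high-precision copy of the weights, treating the quantizers as identity in the backward pass. This is a minimal, hypothetical illustration on a toy one-layer regression task; the function names, the ternarization threshold, and the uniform signal quantizer are assumptions for exposition, not the paper's exact formulation (the paper tunes quantization steps per layer).

```python
import numpy as np

def ternarize(w, delta=0.7):
    """Quantize weights to {-1, 0, +1}. The threshold (a fixed fraction of
    the mean absolute weight) is an illustrative choice, not the paper's."""
    t = delta * np.mean(np.abs(w))
    q = np.zeros_like(w)
    q[w > t] = 1.0
    q[w < -t] = -1.0
    return q

def quantize_signal(x, bits=3, scale=1.0):
    """Uniform fixed-point quantization of signals (activations)."""
    levels = 2 ** (bits - 1)
    step = scale / levels
    return np.clip(np.round(x / step), -levels, levels - 1) * step

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, (8, 4))   # high-precision "master" weights
x = rng.normal(0.0, 1.0, (16, 8))  # mini-batch of inputs
y = rng.normal(0.0, 1.0, (16, 4))  # toy regression targets

lr = 0.01
for step in range(200):
    Wq = ternarize(W)                # forward pass uses ternary weights...
    a = np.tanh(x @ Wq)
    h = quantize_signal(a)           # ...and 3-bit fixed-point signals
    err = h - y
    # Backward pass treats both quantizers as identity: gradients are
    # computed through the quantized forward values, but the update is
    # applied to the high-precision master weights.
    grad = x.T @ (err * (1.0 - a ** 2)) / len(x)
    W -= lr * grad
```

After training, only `ternarize(W)` and the 3-bit signal path are needed at inference time, which is what makes the resulting network cheap to implement in fixed-point hardware.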