Publication | Closed Access
A 5.1pJ/Neuron 127.3µs/Inference RNN-based Speech Recognition Processor using 16 Computing-in-Memory SRAM Macros in 65nm CMOS
Citations: 97
References: 0
Year: 2019
Venue: Unknown
Engineering, Machine Learning, Neural Networks (Machine Learning), Computer Architecture, Neurochip, Social Sciences, Speech Recognition, Output-Weight Dual Stationary, Early Batch-Normalization, Computing Systems, Robust Speech Recognition, Neuromorphic Engineering, CIM-Aware Weight Adaptation, Real-Time Language, Computer Engineering, Neural Networks (Computational Neuroscience), Computer Science, Hardware Acceleration, Speech Processing, Speech Input, Brain-Like Computing, Computing-in-Memory SRAM Macros, In-Memory Computing
This work presents a 65nm CMOS speech recognition processor, named Thinker-IM, which employs 16 computing-in-memory SRAM (SRAM-CIM) macros for binarized recurrent neural network (RNN) computation. Its major contributions are: 1) a novel digital-CIM mixed architecture that runs an output-weight dual stationary (OWDS) dataflow, reducing memory accesses by 85.7%; 2) multi-bit XNOR SRAM-CIM macros and a corresponding CIM-aware weight adaptation that reduce energy consumption by 9.9% on average; 3) predictive early batch-normalization (BN) and binarization units (PBUs) that reduce computations in the RNN by up to 28.3%. Measured results show a processing speed of 127.3µs/inference and over 90.2% accuracy, with a neural energy efficiency of 5.1pJ/neuron, 2.8× better than the state-of-the-art.
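For readers unfamiliar with binarized networks, the arithmetic that XNOR SRAM-CIM macros accelerate can be sketched in software. This is a conceptual illustration of the standard XNOR-popcount formulation of a binary dot product, not the paper's circuit or code; the function names and the folding of batch-normalization into a per-neuron threshold are assumptions for the sketch.

```python
# Conceptual sketch of binarized-RNN arithmetic (not the paper's
# implementation). In a binarized network, activations and weights
# are constrained to {-1, +1}; encoding them as bits {0, 1} turns a
# multiply into an XNOR and an accumulate into a popcount, which is
# what an XNOR SRAM-CIM macro evaluates inside the memory array.

def binarize(values):
    # Map real values to the bit encoding: +1 -> 1, -1 -> 0.
    return [1 if v >= 0 else 0 for v in values]

def xnor_popcount_dot(a_bits, w_bits):
    # XNOR of two bits is 1 exactly when the underlying signs agree.
    # If `matches` signs agree out of n, the {-1,+1} dot product is
    # matches - (n - matches) = 2 * matches - n.
    n = len(a_bits)
    matches = sum(1 for a, w in zip(a_bits, w_bits) if a == w)
    return 2 * matches - n

def binary_neuron(activations, weights, threshold=0.0):
    # Batch-normalization of a binarized neuron reduces to comparing
    # the pre-activation sum against a per-neuron threshold, after
    # which the output is re-binarized (the role played by the
    # paper's BN/binarization units, here greatly simplified).
    s = xnor_popcount_dot(binarize(activations), binarize(weights))
    return 1 if s >= threshold else 0
```

The energy argument follows from this form: once weights and activations are single bits, the dominant cost is moving them, which motivates keeping weights stationary inside the SRAM-CIM macros rather than shuttling them to a separate datapath.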