Publication | Closed Access
A Novel Voltage-Accumulation Vector-Matrix Multiplication Architecture Using Resistor-shunted Floating Gate Flash Memory Device for Low-power and High-density Neural Network Applications
Citations: 22 · References: 2 · Year: 2018 · Venue: Unknown
Topics: Electrical Engineering, Engineering, Hardware Acceleration, Neural Network, Flash Memory, Computer Architecture, Computer Engineering, Voltage Summation Concept, Neural Network Applications, Memory Device, Computer Science, Neuromorphic Engineering, Brain-like Computing, Deep Learning, Microelectronics, In-memory Computing
We propose a novel processing-in-memory (PIM) architecture based on the voltage summation concept to accelerate vector-matrix multiplication for neural network (NN) applications. The core device is formed by adding a buried shunt resistor to a floating gate Flash memory device. The NN string is constructed in the same way as a NAND Flash string, by connecting the core devices in series. In perceptron operation, the weighting factors are stored in the floating gate devices, and the sum-of-products is readily obtained by summing the voltage drops of the cells in each NN string. The energy consumption for 128 multiply-and-sum operations within a string can be as low as 0.2 pJ. Finally, because the weight values are stored in non-volatile memory, there is no need to move data between memory and compute units, which greatly improves performance and energy efficiency for neural network applications.
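The voltage-summation idea in the abstract can be sketched numerically: with a constant read current through a series string, each cell contributes a voltage drop proportional to its effective resistance, so the total string voltage realizes a sum of products. The sketch below is an illustration only, not the paper's circuit model; the current, resistance unit, and the linear mapping from input and weight to per-cell resistance are all assumed values.

```python
import numpy as np

# Assumed constants for illustration (not from the paper):
I = 1e-6       # read current through the string, 1 uA
R_UNIT = 1e4   # resistance step per unit weight, 10 kOhm

def string_output(x, w):
    """Sum-of-products via voltage accumulation along one NN string.

    Each cell's effective resistance is modeled as R_UNIT * x_i * w_i,
    so the total voltage drop I * sum(R_i) is proportional to x . w.
    """
    R = R_UNIT * np.asarray(x) * np.asarray(w)  # per-cell resistance
    return I * np.sum(R)                        # total voltage drop

x = np.array([1, 0, 1, 1])   # binary input activations
w = np.array([3, 2, 1, 4])   # stored weight levels
v = string_output(x, w)      # proportional to x @ w = 8
```

Because every cell in a string carries the same current, the multiply-and-sum comes for free from Kirchhoff's voltage law; no charge needs to be moved to a separate arithmetic unit, which is the source of the energy savings the abstract claims.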