Publication | Closed Access
An Energy-Efficient Computing-in-Memory NN Processor With Set-Associate Blockwise Sparsity and Ping-Pong Weight Update
17 Citations · 37 References · 2023
Computing-in-memory (CIM) chips have demonstrated potentially high energy efficiency for low-power neural network (NN) processors. Even with energy-efficient CIM macros, existing system-level CIM chips still lack deep exploration of sparsity and large models, which limits system energy efficiency. This work presents a CIM NN processor with fuller support for sparsity and a higher utilization rate. Three key innovations are proposed. First, a set-associate blockwise sparsity strategy is designed, which simultaneously saves execution time, power, and storage space. Second, a ping-pong weight update mechanism is proposed for a higher utilization rate, enabling simultaneous execution of CIM and weight-write operations. Third, an efficient CIM macro with adaptive analog-to-digital converter (ADC) precision is implemented for better sparsity utilization and a better performance-accuracy trade-off. The fabricated 65-nm chip achieves 9.5-TOPS/W system energy efficiency at 4-bit precision, a 6.25× measured improvement over a state-of-the-art CIM chip. In addition, this work maintains high CIM execution accuracy on the ImageNet dataset.
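The paper itself is closed access, so the exact pruning rule is not visible here; the sketch below only illustrates the general idea behind a "set-associate blockwise" sparsity pattern by analogy with set-associative cache mapping: weight columns are partitioned into blocks, blocks are grouped into fixed sets, and only the strongest blocks in each set are retained. All function and parameter names (`set_associative_block_prune`, `block`, `ways`, `keep`) are hypothetical, not from the paper.

```python
import numpy as np

def set_associative_block_prune(weights, block=4, ways=4, keep=1):
    """Illustrative sketch (not the paper's algorithm): group weight
    columns into blocks of `block` columns, form sets of `ways` blocks,
    and keep only the `keep` blocks with the largest L1 norm per set."""
    rows, cols = weights.shape
    assert cols % (block * ways) == 0, "columns must tile into full sets"
    out = np.zeros_like(weights)
    for start in range(0, cols, block * ways):
        # L1 norm of each block in this set
        norms = [np.abs(weights[:, start + i * block : start + (i + 1) * block]).sum()
                 for i in range(ways)]
        # retain the `keep` strongest blocks, zero the rest
        for i in np.argsort(norms)[-keep:]:
            s = start + i * block
            out[:, s:s + block] = weights[:, s:s + block]
    return out

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16))
sparse_w = set_associative_block_prune(w, block=2, ways=4, keep=1)
```

A structured pattern like this is what lets hardware skip whole blocks at once, saving execution time, power, and (by storing only surviving blocks plus short set indices) storage space.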