Publication | Closed Access

An Efficient Racetrack Memory-Based Processing-in-Memory Architecture for Convolutional Neural Networks

Citations: 16
References: 18
Year: 2017

Abstract

As a promising architectural paradigm for applications that demand high I/O bandwidth, Processing-in-Memory (PIM) computing techniques have been adopted in designing Convolutional Neural Networks (CNNs). However, due to the notorious memory wall problem, PIM based on existing memory devices still cannot handle complex CNN applications under the constraints of memory bandwidth and processing latency. To mitigate this problem, this paper proposes an efficient PIM architecture based on skyrmion and domain-wall racetrack memories, which can further exploit the potential of PIM architectures in terms of processing latency and energy efficiency. By adopting full adders and multipliers built from skyrmion and domain-wall nanowires, the proposed PIM architecture can accommodate complex CNNs at different scales. Experimental results show that, compared with both traditional and state-of-the-art PIM architectures, the proposed PIM architecture drastically improves the processing latency and energy efficiency of CNNs.
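The abstract's key observation is that a convolution layer reduces to multiply-accumulate (MAC) operations, which are exactly the primitives the paper implements with skyrmion/domain-wall multipliers and full adders. As an illustrative sketch (not code from the paper), the following shows how a naive 2D convolution decomposes into MACs, with a counter making the per-output-pixel MAC cost explicit:

```python
def conv2d(image, kernel):
    """Naive valid-mode 2D convolution.

    Returns the output feature map and the number of multiply-accumulate
    (MAC) operations performed -- the workload an in-memory multiplier
    and adder array would execute in a PIM design.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    macs = 0
    for r in range(ih - kh + 1):
        for c in range(iw - kw + 1):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]  # one MAC
                    macs += 1
            out[r][c] = acc
    return out, macs

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, 1]]
result, macs = conv2d(image, kernel)
# Each output pixel costs kh*kw MACs; here 2x2 outputs * 4 MACs = 16.
```

Performing these MACs inside the racetrack memory arrays, rather than shuttling operands to a separate processor, is what lets the proposed architecture sidestep the memory-bandwidth bottleneck the abstract describes.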
