Publication | Closed Access
Reducing memory access latency with asymmetric DRAM bank organizations
Citations: 113 · References: 52 · Year: 2013 · Venue: Unknown
Keywords: Hardware Security, Random Access Latency, Engineering, Last-level Caches, Shared Memory, Edge Computing, High-performance Architecture, In-memory Database, Computer Engineering, Computer Architecture, Transactional Memory, Computer Science, Average Access Latency, Parallel Computing, Memory Architecture, Memory Access Latency, Multi-channel Memory Architecture
DRAM has been the de facto standard for main memory, and advances in process technology have led to a rapid increase in its capacity and bandwidth. In contrast, its random access latency has remained relatively stagnant at around 100 CPU clock cycles. Modern computer systems rely on caches and other latency tolerance techniques to lower the average access latency. However, not all applications have ample parallelism or locality to help hide or reduce that latency. Moreover, applications' demands for memory space continue to grow, while the capacity gap between last-level caches and main memory is unlikely to shrink. Consequently, reducing main-memory latency is important for application performance. Unfortunately, previous proposals have not adequately addressed this problem: they have either focused only on improving bandwidth and capacity, or have reduced latency only at the cost of significant area overhead.
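The abstract's point about caches lowering the average access latency can be illustrated with the classic average memory access time (AMAT) formula, AMAT = hit_time + miss_rate × miss_penalty. This is a generic textbook sketch, not the paper's model; the specific cycle counts and miss rate below are assumed for illustration (only the ~100-cycle DRAM latency comes from the abstract).

```python
def amat(hit_time_cycles: float, miss_rate: float, miss_penalty_cycles: float) -> float:
    """Average access latency seen by the CPU, in clock cycles:
    hits pay the cache hit time; misses additionally pay the DRAM penalty."""
    return hit_time_cycles + miss_rate * miss_penalty_cycles

# Assumed example: a 4-cycle last-level cache hit, a 5% miss rate, and a
# ~100-cycle DRAM random access latency (the figure cited in the abstract).
latency = amat(hit_time_cycles=4, miss_rate=0.05, miss_penalty_cycles=100)
print(latency)  # 9.0 cycles on average
```

Note how the DRAM miss penalty dominates: even at a 5% miss rate, more than half of the average latency in this example comes from main memory, which is why shaving DRAM latency (rather than only bandwidth or capacity) matters for applications without enough locality to keep the miss rate low.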