Publication | Closed Access
Implementing cooperative prefetching and caching in a globally-managed memory system
Citations: 82
References: 34
Year: 1998
Venue: unknown
Topics: Cluster Computing, Engineering, Computer Architecture, Parallel Storage, Memory Model (Programming), Shared Memory, High-performance Architecture, Parallel Computing, Caching System, Network-wide Global Resources, Cooperative Prefetching, Web Cache, Computer Engineering, Caching, Computer Science, Memory Architecture, Edge Computing, Cloud Computing, Parallel Programming, System Software
This paper presents cooperative prefetching and caching: the use of network-wide global resources (memories, CPUs, and disks) to support prefetching and caching in the presence of hints of future demands. Cooperative prefetching and caching effectively unites disk-latency reduction techniques from three lines of research: prefetching algorithms, cluster-wide memory management, and parallel I/O. When used together, these techniques greatly increase the power of prefetching relative to a conventional (non-global-memory) system. We have designed and implemented PGMS, a cooperative prefetching and caching system, under the Digital Unix operating system running on a 1.28 Gb/sec Myrinet-connected cluster of DEC Alpha workstations. Our measurements and analysis show that by using available global resources, cooperative prefetching can obtain significant speedups for I/O-bound programs. For example, for a graphics rendering application, our system achieves a speedup of 4.9 over a non-prefetching version of the same program, and a 3.1-fold improvement over that program using local-disk prefetching alone.
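The core idea in the abstract can be sketched in code: a hint-driven prefetcher stages blocks from disk into idle peer memory ("global" cache) ahead of demand, so later reads are satisfied from the network rather than the disk. The sketch below is illustrative only, not the PGMS implementation; the class names, cache sizes, and relative latency constants are assumptions for the example.

```python
from collections import OrderedDict

# Assumed relative access latencies: local memory << remote (peer)
# memory << disk. Real values depend on the cluster and interconnect.
LOCAL_COST, GLOBAL_COST, DISK_COST = 1, 10, 100

class CooperativeCache:
    """Two-level cooperative cache with hint-driven prefetching (sketch)."""

    def __init__(self, local_slots, global_slots):
        self.local = OrderedDict()    # LRU cache in local memory
        self.global_ = OrderedDict()  # LRU cache in idle peer memory
        self.local_slots = local_slots
        self.global_slots = global_slots
        self.cost = 0  # accumulated access cost, in latency units

    def _put(self, cache, slots, block):
        cache[block] = True
        cache.move_to_end(block)
        if len(cache) > slots:
            cache.popitem(last=False)  # evict least-recently-used block

    def prefetch(self, hints, depth):
        # Stage the next `depth` hinted blocks from disk into global
        # memory. In a system like PGMS this disk latency is overlapped
        # with computation rather than paid on the demand path.
        for block in hints[:depth]:
            if block not in self.local and block not in self.global_:
                self.cost += DISK_COST
                self._put(self.global_, self.global_slots, block)

    def read(self, block):
        if block in self.local:
            self.cost += LOCAL_COST          # hit in local memory
        elif block in self.global_:
            self.cost += GLOBAL_COST         # hit in a peer's memory
            del self.global_[block]
        else:
            self.cost += DISK_COST           # miss everywhere: demand disk read
        self._put(self.local, self.local_slots, block)
```

With accurate hints, every demand read in the hinted window hits global memory at `GLOBAL_COST` instead of paying `DISK_COST`, which is the latency gap that the reported 3.1- to 4.9-fold speedups exploit.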