Publication | Closed Access
Rethinking erasure codes for cloud file systems: minimizing I/O for recovery and degraded reads
306 Citations | 41 References | Year: 2012 | Unknown Venue
Keywords: Distributed File System, Storage Performance, Engineering, Storage Management, Computer Architecture, Data Deduplication, Cloud File Systems, Storage Systems, Data Science, In-storage Computing, Popular Erasure Codes, Parallel Computing, Coding Theory, Storage Overhead, Erasure Codes, Data Management, Computer Engineering, Computer Science, Data Security, Cryptography, Cloud Computing, File System, System Software
To reduce storage overhead, cloud file systems are transitioning from replication to erasure codes. This transition has revealed new dimensions on which to evaluate the performance of coding schemes: the amount of data read during recovery and during degraded reads. We present an algorithm that finds the optimal number of codeword symbols needed for recovery for any XOR-based erasure code and produces recovery schedules that read a minimum amount of data. We differentiate popular erasure codes on this criterion and demonstrate that the differences improve I/O performance in practice for the large block sizes used in cloud file systems. Several cloud systems [15, 10] have adopted Reed-Solomon (RS) codes because of their generality and their ability to tolerate larger numbers of failures. We define a new class of rotated Reed-Solomon codes that perform degraded reads more efficiently than all known codes, but otherwise inherit the reliability and performance properties of Reed-Solomon codes.
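The core idea in the abstract can be illustrated with a toy sketch: in an XOR-based code, every stored symbol is an XOR of some data symbols, so a lost symbol can be rebuilt from any set of survivors whose combined XOR equals it, and minimizing recovery I/O means finding the smallest such set. The brute-force search below is only a hypothetical illustration of this notion (the paper's actual algorithm enumerates equivalent decoding equations far more cleverly); the code layout and the `minimal_recovery` helper are invented for this example, not taken from the paper.

```python
from itertools import combinations

def minimal_recovery(symbols, failed):
    """Find the smallest set of surviving symbols whose XOR reconstructs
    the failed one.  Each symbol is a bitmask over the underlying data
    symbols (i.e., which data symbols it is the XOR of)."""
    survivors = [(name, m) for name, m in symbols.items() if name != failed]
    target = symbols[failed]
    # Try all subsets in increasing size, so the first hit is minimal I/O.
    for k in range(1, len(survivors) + 1):
        for combo in combinations(survivors, k):
            acc = 0
            for _, m in combo:
                acc ^= m
            if acc == target:
                return [name for name, _ in combo]
    return None  # not recoverable from the survivors

# Toy XOR-based code: four data symbols d0..d3 and three parity symbols.
D = [1 << i for i in range(4)]
code = {
    "d0": D[0], "d1": D[1], "d2": D[2], "d3": D[3],
    "p0": D[0] ^ D[1],   # p0 = d0 XOR d1
    "p1": D[2] ^ D[3],   # p1 = d2 XOR d3
    "p2": D[0] ^ D[2],   # p2 = d0 XOR d2
}

# Recovering d0 needs only 2 reads (d1 and p0), not all data symbols.
print(minimal_recovery(code, "d0"))
```

The point of the exercise mirrors the abstract's claim: different codes (different parity layouts) admit recovery schedules of very different sizes, and with the large block sizes used in cloud file systems, reading fewer symbols translates directly into less recovery I/O.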