Efficient replica maintenance for distributed storage systems

Citations: 288 · References: 25 · Year: 2006

TLDR

Distributed storage systems that aggregate disks across many Internet nodes face costly replication, because transient failures can trigger unnecessary copying. An efficient maintenance algorithm must prioritize durability over availability, create new copies faster than permanent disk failures destroy data, tolerate bursts of failures, and reuse replicas that recover from temporary outages. The paper proposes the Carbonite replication algorithm to keep data durable at low cost. In a simulation storing 1 TB over a 365-day PlanetLab trace, Carbonite kept all data durable while using 44% more network traffic than a hypothetical system that reacts only to permanent failures; Total Recall and DHash required nearly twice the traffic of that hypothetical system.
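
The "create copies faster than failures destroy them" requirement can be made concrete with a simple birth-death reading of an object's replica count. This is a back-of-the-envelope framing; the symbols below are illustrative rather than the paper's exact notation.

```latex
% Illustrative durability condition, assuming a birth-death model of an
% object's replica count:
%   \mu       : average rate at which new replicas are created
%   \lambda_f : average rate of permanent (disk) failure per replica
\theta = \frac{\mu}{\lambda_f}, \qquad \text{durability requires } \theta > 1
```

When theta is at or below 1, failures outpace repair and objects are eventually lost; the further theta exceeds 1, the more headroom the system has to absorb bursts of simultaneous failures.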

Abstract

This paper considers replication strategies for storage systems that aggregate the disks of many nodes spread over the Internet. Maintaining replication in such systems can be prohibitively expensive, since every transient network or host failure could potentially lead to copying a server's worth of data over the Internet to maintain replication levels. The following insights in designing an efficient replication algorithm emerge from the paper's analysis. First, durability can be provided separately from availability; the former is less expensive to ensure and a more useful goal for many wide-area applications. Second, the focus of a durability algorithm must be to create new copies of data objects faster than permanent disk failures destroy the objects; careful choice of policies for what nodes should hold what data can decrease repair time. Third, increasing the number of replicas of each data object does not help a system tolerate a higher disk failure probability, but does help tolerate bursts of failures. Finally, ensuring that the system makes use of replicas that recover after temporary failure is critical to efficiency. Based on these insights, the paper proposes the Carbonite replication algorithm for keeping data durable at a low cost. A simulation of Carbonite storing 1 TB of data over a 365-day trace of PlanetLab activity shows that Carbonite is able to keep all data durable and uses 44% more network traffic than a hypothetical system that only responds to permanent failures. In comparison, Total Recall and DHash require almost a factor of two more network traffic than this hypothetical system.
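
A minimal sketch may help illustrate the reintegration insight: repair only when the number of reachable replicas drops below a target, but remember every node that ever held a copy, so that nodes returning from transient failures count again instead of triggering redundant copies. The names here (target_rl, is_alive, pick_new_node, create_replica) are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of Carbonite-style maintenance for a single object.
# All callback names and the target_rl parameter are assumptions.

class ReplicaSet:
    """Tracks every node that has ever held a replica of one object."""

    def __init__(self, object_id, target_rl):
        self.object_id = object_id
        self.target_rl = target_rl   # minimum number of reachable replicas
        self.holders = set()         # every node that EVER received a copy

    def reachable(self, is_alive):
        # Count replicas on currently reachable nodes, including nodes
        # that came back from a transient failure; reusing those copies
        # is the efficiency win the abstract describes.
        return {n for n in self.holders if is_alive(n)}

    def maintain(self, is_alive, pick_new_node, create_replica):
        # Repair only when reachable replicas fall below the target; a
        # transient failure that later recovers costs no repair traffic.
        live = self.reachable(is_alive)
        while len(live) < self.target_rl:
            node = pick_new_node(self.holders)   # choose a fresh node
            create_replica(self.object_id, node)
            self.holders.add(node)   # remembered even if it later goes offline
            live.add(node)
```

Because holders is never pruned on a transient failure, a returning node simply rejoins the reachable set, and any copies made during its outage are reused rather than recreated.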
