
Implicit effect of decoding time on fault tolerance in erasure coded cloud storage systems

Safaei, B.; Sharif University of Technology

  1. Type of Document: Article
  2. DOI: 10.1109/ICSEC.2016.7859937
  3. Abstract: The International Data Corporation (IDC) has estimated that the total amount of digital data stored worldwide will reach 40,000 exabytes by the end of 2020. The idea of accessing this volume of data anywhere, at any time, using commodity hardware led to the introduction of cloud storage. The high rate and variety of failures in the equipment used in cloud storage systems has made fault tolerance one of the top challenges in these systems. The Hadoop Distributed File System (HDFS) provides cloud storage with reliability through replication, but the storage overhead of replication is high, so replication is increasingly being replaced with erasure codes. Despite the significant body of research on overcoming various challenges in erasure codes (including decoding time), the fault tolerance of these codes has received little attention in previous works. This motivated the present paper to focus on the fault tolerance of erasure codes. To this end, a model of the decoding procedure is proposed to identify parameters such as block size, number of blocks, stripe size, stripe length, and number of stripes, which explicitly affect decoding time and implicitly affect fault tolerance. Minimum code distance is the concept that establishes the implicit relation between decoding time and fault tolerance (a toy illustration of this relation follows the record below). The identified parameters have an identical impact on decoding and encoding time in Maximum Distance Separable (MDS) erasure codes. Since the encoding time and the bandwidth consumed by simultaneous decoding and encoding operations can affect decoding time, the influence of the identified parameters on encoding time and bandwidth consumption has also been evaluated. Evaluations were carried out by embedding erasure codes in Hadoop via the HDFS-RAID module. The results show that using small blocks, small stripe lengths, and fewer stripes can implicitly strengthen the fault tolerance of erasure coded cloud storage systems by reducing decoding time. © 2016 IEEE
  4. Keywords: Cloud storage system ; Decoding ; Encoding ; Erasure code ; Fault tolerance ; Hadoop ; Bandwidth ; Codes (symbols) ; Digital storage ; Encoding (symbols) ; File organization ; Forward error correction ; Bandwidth consumption ; Cloud storage systems ; Erasure codes ; Hadoop distributed file system (HDFS) ; Identified parameter ; Maximum distance separable erasure codes ; Simultaneous decoding ; Fault tolerance
  5. Source: 20th International Computer Science and Engineering Conference: Smart Ubiquitous Computing and Knowledge, ICSEC 2016, 14 December 2016 through 17 December 2016 ; 2017 ; 9781509044207 (ISBN)
  6. URL: https://ieeexplore.ieee.org/document/7859937
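
The abstract's central claim, that stripe length and block count drive decoding time while minimum code distance ties decoding to fault tolerance, can be made concrete with a small sketch. The Python toy below uses a single-parity (k+1, k) code, an MDS code with d_min = n - k + 1 = 2; this choice of code and the parameter names (k, block_size) are illustrative assumptions, not the codes or the model evaluated in the paper. It shows that repairing one lost block requires reading every other block in the stripe, so a larger stripe length means more data read per decode.

# Minimal sketch (not the paper's model): a single-parity (k+1, k) MDS code,
# illustrating why stripe length drives reconstruction cost. Parameter names
# (k, block_size) are illustrative assumptions, not taken from the paper.
import os

def encode(data_blocks):
    """Return the XOR parity block of k equal-sized data blocks."""
    parity = bytes(len(data_blocks[0]))
    for blk in data_blocks:
        parity = bytes(a ^ b for a, b in zip(parity, blk))
    return parity

def decode_missing(surviving_blocks):
    """Recover the single missing block. For a (k+1, k) single-parity code,
    d_min = n - k + 1 = 2, so one erasure is repairable, but the decoder
    must read all k surviving blocks (the whole stripe)."""
    missing = bytes(len(surviving_blocks[0]))
    for blk in surviving_blocks:
        missing = bytes(a ^ b for a, b in zip(missing, blk))
    return missing

if __name__ == "__main__":
    k, block_size = 4, 8          # stripe length and block size (toy values)
    data = [os.urandom(block_size) for _ in range(k)]
    parity = encode(data)
    # Lose data block 2; decoding reads the k remaining blocks of the stripe.
    survivors = data[:2] + data[3:] + [parity]
    assert decode_missing(survivors) == data[2]
    print("recovered block 2 by reading", len(survivors), "blocks")

In this toy, doubling the stripe length k doubles the number of blocks read (and XORed) per repair, which is the kind of explicit decoding-time dependence the paper models before linking it, through minimum code distance, to fault tolerance.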