{{Merge to|RAID|discuss=Talk:RAID#Merger proposal|date=January 2014}}
The '''standard RAID levels''' are a basic set of [[RAID]] configurations that employ the techniques of [[data striping|striping]], [[Disk mirroring|mirroring]], or [[Parity bit#RAID|parity]] to create large reliable data stores from general-purpose computer [[hard disk drive]]s. The most common types today are RAID 0 (striping), RAID 1 and variants (mirroring), RAID 5 (distributed parity) and RAID 6 (dual parity). RAID levels and their associated data formats are standardized by the [[Storage Networking Industry Association]] in the Common RAID Disk Drive Format (DDF) standard.<ref>{{Cite web|title=Common raid Disk Data Format (DDF)|url=http://www.snia.org/tech_activities/standards/curr_standards/ddf/|publisher=Storage Networking Industry Association|work=SNIA.org|accessdate=2013-04-23}}</ref>

== RAID 0 ==

[[File:RAID 0.svg|thumb|150px|Diagram of a RAID 0 setup]]

A '''RAID 0''' (also known as a ''stripe set'' or ''striped volume'') splits data evenly across two or more disks ([[data striping|striped]]), without [[Parity bit|parity]] information. RAID 0 was not one of the original RAID levels and provides no [[data redundancy]]. It is normally used to increase performance, although it can also be used as a way to create a large logical disk out of two or more physical ones.
A RAID 0 array can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 100 GB disk is striped together with a 350 GB disk, the size of the array will be 200 GB (100 GB × 2):

:<math>\begin{align} \mathrm{Size} & = 2 \cdot \min \left( 100\,\mathrm{GB}, 350\,\mathrm{GB} \right) \\
& = 2 \cdot 100\,\mathrm{GB} \\
& = 200\,\mathrm{GB} \end{align}</math>
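The capacity rule can be expressed as a short Python sketch (the function name <code>raid0_capacity</code> is illustrative, not part of any standard):

<syntaxhighlight lang="python">
def raid0_capacity(disk_sizes_gb):
    """Usable RAID 0 capacity: every member contributes only as much
    space as the smallest disk in the set provides."""
    return len(disk_sizes_gb) * min(disk_sizes_gb)

print(raid0_capacity([100, 350]))  # 200 (GB), matching the example above
</syntaxhighlight>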
The diagram shows how the data is distributed into A''x'' stripes across the disks. Accessing the stripes in the order A1, A2, A3, ... provides the illusion of a larger and faster drive. Once the stripe size is defined at creation time, it must be maintained at all times.
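The address arithmetic behind this striping is simple division and remainder, as the following illustrative sketch shows (the function name is hypothetical):

<syntaxhighlight lang="python">
def raid0_locate(logical_stripe, num_disks):
    """Map logical stripe k (A1 = 0, A2 = 1, ...) to the member disk
    that holds it and the stripe index within that disk."""
    return logical_stripe % num_disks, logical_stripe // num_disks

# With two disks, stripes A1..A4 alternate between disk 0 and disk 1.
for k in range(4):
    disk, row = raid0_locate(k, 2)
    print(f"A{k + 1} -> disk {disk}, row {row}")
</syntaxhighlight>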
=== Performance ===

RAID 0 is also used in some computer gaming systems where performance is desired and data integrity is not very important. However, real-world tests with computer games have shown that RAID 0 performance gains are minimal, although some desktop applications will benefit.<ref>{{Cite web|url=http://www.anandtech.com/storage/showdoc.aspx?i=2101 |title=Western Digital's Raptors in RAID-0: Are two drives better than one? |date=July 1, 2004 |publisher=AnandTech |accessdate=2007-11-24}}</ref><ref>{{Cite web|url=http://www.anandtech.com/storage/showdoc.aspx?i=2974 |title=Hitachi Deskstar 7K1000: Two Terabyte RAID Redux |date=April 23, 2007 |publisher=AnandTech |accessdate=2007-11-24}}</ref> Another article examined these claims and concluded: "Striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance."<ref>{{Cite web|url=http://tweakers.net/reviews/515/1/raid-0-hype-or-blessing-pagina-1.html |title=RAID 0: Hype or blessing? |date=August 7, 2004 |publisher=Tweakers.net |accessdate=2008-07-23}}</ref>
== RAID 1 ==

[[File:RAID 1.svg|thumb|150px|Diagram of a RAID 1 setup]]

A '''RAID 1''' consists of an exact copy (or ''mirror'') of a set of data on two or more disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as the smallest member disk. A classic RAID 1 mirrored pair contains two disks, which increases reliability [[Geometric growth|geometrically]] over a single disk.
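The mirroring principle itself is small enough to sketch in Python (a toy model, not any particular controller's implementation): a write must reach every member, while a read can be served by any one of them.

<syntaxhighlight lang="python">
class Raid1Mirror:
    """Toy RAID 1: every write is duplicated; any member can serve a read."""

    def __init__(self, num_disks=2, num_blocks=8):
        self.disks = [[None] * num_blocks for _ in range(num_disks)]

    def write(self, block, data):
        for disk in self.disks:                   # a write hits all members
            disk[block] = data

    def read(self, block, preferred_disk=0):
        return self.disks[preferred_disk][block]  # any surviving member works

mirror = Raid1Mirror()
mirror.write(3, b"payload")
assert mirror.read(3, preferred_disk=1) == b"payload"
</syntaxhighlight>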
{{clear}}

== RAID 2 ==

[[File:RAID2 arch.svg|thumb|350px|RAID Level 2]]

A '''RAID 2''' stripes data at the [[bit]] (rather than block) level, and uses a [[Hamming code]] for [[error correction]]. The disks are synchronized by the controller to spin at the same angular orientation (they reach Index at the same time), so it generally cannot service multiple requests simultaneously. Extremely high data transfer rates are possible. This is the only original level of RAID that is not currently used.<ref name="vadala">{{Cite book| title = Managing RAID on Linux. O'Reilly Series | author = Derek Vadala | edition = illustrated | publisher = [[O'Reilly Media|O'Reilly]] | year = 2003 | isbn = 9781565927308 | page = 6 | url = http://books.google.com/?id=RM4tahggCVcC&pg=PA6&dq=raid+2+implementation#v=onepage&q=raid%202%20implementation }}</ref><ref name="marcus">{{Cite book| title = Blueprints for high availability | author = Evan Marcus, Hal Stern | edition = 2, illustrated | publisher = [[John Wiley and Sons]] | year = 2003 | isbn = 9780471430261 | page = 167 | url = http://books.google.com/?id=D_jYqFoJVEAC&pg=RA2-PA167&dq=raid+2+implementation#v=onepage&q=raid%202%20implementation }}</ref>
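As an illustration of the coding (not of any historical controller), a Hamming(7,4) code protects each 4-bit chunk with 3 parity bits; in a RAID 2 array, each of the seven resulting bits would be written to its own synchronized disk:

<syntaxhighlight lang="python">
def hamming74_encode(d):
    """Encode 4 data bits into 7 bits using the standard Hamming(7,4)
    layout; conceptually, each output bit goes to one member disk."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

print(hamming74_encode([1, 0, 1, 1]))  # seven bits, one per disk
</syntaxhighlight>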
All hard disks eventually implemented Hamming code error correction internally, which made RAID 2's own error correction redundant and unnecessarily complex. The level quickly fell out of use and is now obsolete; there are no commercial applications of RAID 2.<ref name="vadala" /><ref name="marcus" />

{{clear}}
== RAID 3 ==

[[File:RAID 3.svg|thumb|300px|Diagram of a RAID 3 setup of 6-byte blocks and two [[Parity bit|parity]] bytes; two blocks of data are shown in different colors.]]

A '''RAID 3''' uses [[byte]]-level striping with a dedicated [[Parity bit|parity]] disk. RAID 3 is very rare in practice. One of the characteristics of RAID 3 is that it generally cannot service multiple requests simultaneously. This happens because any single block of data will, by definition, be spread across all members of the set and will reside in the same physical location on each disk. So, any [[Input/output|I/O]] operation requires activity on every disk and usually requires synchronized spindles.
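A sketch of the byte-level distribution (illustrative only): consecutive bytes round-robin across the data disks, and each stripe's parity byte goes to the dedicated parity disk, which is why reading a single block touches every member.

<syntaxhighlight lang="python">
from functools import reduce

def raid3_distribute(block, num_data_disks):
    """Spread a block byte-by-byte over the data disks and compute one
    parity byte per stripe for the dedicated parity disk."""
    disks = [block[i::num_data_disks] for i in range(num_data_disks)]
    parity = bytes(reduce(lambda a, b: a ^ b, stripe)
                   for stripe in zip(*disks))
    return disks, parity

disks, parity = raid3_distribute(b"ABCDEF", 3)
# disks == [b'AD', b'BE', b'CF']: reading the block needs all three disks.
</syntaxhighlight>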
These characteristics make RAID 3 suitable for applications that demand the highest transfer rates in long sequential reads and writes, for example [[uncompressed video]] editing. Applications that make small reads and writes from random disk locations will get the worst performance out of this level.<ref name="marcus" />

The requirement that all disks spin synchronously, a.k.a. in [[Lockstep (computing)|lockstep]], added design considerations to a level that did not give significant advantages over other RAID levels, so it quickly fell out of use and is now obsolete.<ref name="vadala" /> Both RAID 3 and RAID 4 were quickly replaced by RAID 5.<ref name="meyers">{{Cite book| title = Mike Meyers' A+ Guide to Managing and Troubleshooting PCs | author = Michael Meyers, Scott Jernigan | edition = illustrated | publisher = [[McGraw-Hill Professional]] | year = 2003 |isbn = 9780072231465 | page = 321 | url = http://books.google.com/?id=9vfQKUT_BjgC&pg=PT348&dq=raid+2+implementation#v=onepage&q=raid%202%20implementation }}</ref> RAID 3 was usually implemented in hardware, and the performance issues were addressed by using large disk caches.<ref name="marcus" />
{{clear}}

== RAID 4 ==

[[File:RAID 4.svg|thumb|300px|Diagram of a RAID 4 setup with a dedicated [[Parity bit|parity]] disk, with each color representing the group of blocks in the respective [[Parity bit|parity]] block (a stripe)]]

A '''RAID 4''' uses [[Block size (data storage and transmission)|block]]-level striping with a dedicated [[Parity bit|parity]] disk.
In the example on the right, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

RAID 4 is very uncommon, but one enterprise-level company that has previously used it is [[NetApp]]. The performance problems of parity RAID writes were solved with their proprietary [[Write Anywhere File Layout]] (WAFL), an approach to writing data to disk locations that minimizes the conventional parity RAID write penalty. By storing system metadata (inodes, block maps, and inode maps) in the same way application data is stored, WAFL is able to write file system metadata blocks anywhere on the disk. This approach in turn allows multiple writes to be "gathered" and scheduled to the same RAID stripe, eliminating the traditional read-modify-write penalty prevalent in parity-based RAID schemes.<ref>{{Closed access}}{{Cite web|title=NetApp DNA|url=http://partners.netapp.com/go/techontap/matl/NetApp_DNA.html}}{{Password-protected}}</ref>
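The read-modify-write penalty mentioned above can be made concrete with a small sketch (variable names are illustrative): updating a single data block requires reading the old data and the old parity first, because the new parity is the old parity XORed with both the old and the new data.

<syntaxhighlight lang="python">
def updated_parity(old_parity, old_data, new_data):
    """Small-write parity update: P' = P xor D_old xor D_new.
    The two reads (old data, old parity) before the two writes are
    the classic read-modify-write penalty that WAFL works around."""
    return bytes(p ^ od ^ nd
                 for p, od, nd in zip(old_parity, old_data, new_data))

# Sanity check on one stripe of two data disks plus a parity disk:
d0, d1 = b"\x0f", b"\xf0"
parity = bytes(a ^ b for a, b in zip(d0, d1))           # 0xff
new_d0 = b"\x55"
assert updated_parity(parity, d0, new_d0) == \
    bytes(a ^ b for a, b in zip(new_d0, d1))            # 0xa5
</syntaxhighlight>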
{{clear}}

== RAID 5 ==

[[File:RAID 5.svg|thumb|300px|Diagram of a RAID 5 setup with distributed [[Parity bit|parity]], with each color representing the group of blocks in the respective [[Parity bit|parity]] block (a stripe). The diagram shows the left-asymmetric algorithm]]

A '''RAID 5''' comprises block-level striping with distributed parity. Unlike in RAID 4, parity information is distributed among the drives. It requires that all drives but one be present to operate. Upon failure of a single drive, the lost data can be reconstructed from the distributed parity, so that no data is lost. RAID 5 requires at least three disks.<ref name="Patterson_1994">{{Cite journal |url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.3889 |first1=Peter |last1=Chen |first2=Edward |last2=Lee |first3=Garth |last3=Gibson |first4=Randy |last4=Katz |first5=David |last5=Patterson |title=RAID: High-Performance, Reliable Secondary Storage |work=ACM Computing Surveys |volume=26 |pages=145–185 |year=1994}}</ref>
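Because the parity block is a plain XOR across the stripe, a lost block can be rebuilt by XORing everything that survives. A minimal sketch, independent of any particular parity-placement algorithm:

<syntaxhighlight lang="python">
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

stripe = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # data blocks on three disks
parity = xor_blocks(stripe)                        # written to a fourth disk

# Disk 1 fails: its block is the XOR of the parity and the survivors.
recovered = xor_blocks([parity, stripe[0], stripe[2]])
assert recovered == stripe[1]
</syntaxhighlight>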
{{clear}}

== RAID 6 ==

[[File:RAID 6.svg|thumb|300px|Diagram of a RAID 6 setup, which is identical to RAID 5 other than the addition of a second [[Parity bit|parity]] block]]

'''RAID 6''' extends RAID 5 by adding an additional [[Parity bit|parity]] block; thus it uses [[Block (data storage)|block]]-level striping with two [[Parity bit|parity]] blocks distributed across all member disks.

=== Performance (speed) ===

RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with [[Parity bit|parity]] calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture – in software, in firmware, or by using firmware and specialized [[ASIC]]s for intensive [[Parity bit|parity]] calculations. It can be as fast as a RAID 5 system with one fewer drive (same number of data drives).<ref name="Faith">{{Cite journal|author=Rickard E. Faith|title=A Comparison of Software RAID Types|url=http://alephnull.com/benchmarks/sata2009/raidtype.html|date=13 May 2009}}</ref>
=== Implementation ===

According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is: "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several methods, including dual check data computations ([[Parity bit|parity]] and [[Reed-Solomon error correction|Reed-Solomon]]), orthogonal dual [[Parity bit|parity]] check data and diagonal [[Parity bit|parity]], have been used to implement RAID Level 6."<ref name="SNIA-def">{{Cite web|url=http://www.snia.org/education/dictionary/r/ |title=Dictionary R |publisher=Storage Networking Industry Association |accessdate=2007-11-24}}</ref>
==== Computing parity ====

Two different ''syndromes'' need to be computed in order to allow the loss of any two drives. One of them, '''P''', can be the simple XOR of the data across the stripes, as with RAID 5. A second, independent syndrome is more complicated and requires the assistance of [[Field (mathematics)|field theory]].
To deal with this, the [[Finite field|Galois field]] <math>GF(m)</math> is introduced with <math>m=2^k</math>, where <math>GF(m) \cong F_2[x]/(p(x))</math> for a suitable [[irreducible polynomial]] <math>p(x)</math> of degree <math>k</math>. A chunk of data can be written as <math>d_{k-1}d_{k-2}...d_0</math> in base 2, where each <math>d_i</math> is either 0 or 1. This is chosen to correspond with the element <math>d_{k-1}x^{k-1} + d_{k-2}x^{k-2} + ... + d_1x + d_0</math> in the Galois field. Let <math>D_0,...,D_{n-1} \in GF(m)</math> correspond to the stripes of data across hard drives encoded as field elements in this manner (in practice they would probably be broken into byte-sized chunks). If <math>g</math> is some [[Field (mathematics)#Some first theorems|generator]] of the field and <math>\oplus</math> denotes addition in the field while concatenation denotes multiplication, then <math>\mathbf{P}</math> and <math>\mathbf{Q}</math> may be computed as follows (<math>n</math> denotes the number of data disks):

:<math>\mathbf{P} = \bigoplus_i{D_i} = \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \mathbf{D}_2 \;\oplus\; \dots \;\oplus\; \mathbf{D}_{n-1}</math>

:<math>\mathbf{Q} = \bigoplus_i{g^iD_i} = g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; g^2\mathbf{D}_2 \;\oplus\; \dots \;\oplus\; g^{n-1}\mathbf{D}_{n-1}</math>
''For a computer scientist, a good way to think about this is that <math>\oplus</math> is a bitwise XOR operator and <math>g^i</math> is the action of a [[linear feedback shift register]] on a chunk of data.'' Thus, in the formulas above,<ref>{{Cite web|url=http://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf|last=Anvin|first=H. Peter|title=The mathematics of RAID-6|date=21 May 2009|accessdate=November 4, 2009}}</ref> the calculation of '''P''' is just the XOR of each stripe. This is because addition in any [[Characteristic (algebra)|characteristic two]] finite field reduces to the XOR operation. The computation of '''Q''' is the XOR of a shifted version of each stripe.
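Both syndromes can be sketched in a few lines of Python over <math>GF(2^8)</math>, taking <math>g = x</math> (0x02) and the irreducible polynomial <math>x^8 + x^4 + x^3 + x^2 + 1</math> (0x11D) used in the RAID-6 paper cited above; any other irreducible polynomial of degree 8 would work equally well. This is an illustration, not a tuned implementation:

<syntaxhighlight lang="python">
def gf_mul2(a):
    """Multiply by g = x in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1:
    a left shift, with a reduction whenever the shift overflows 8 bits."""
    a <<= 1
    return (a ^ 0x11D) if a & 0x100 else a

def raid6_syndromes(data):
    """P is the plain XOR of the stripes; Q folds in one multiplication
    by g per step, so stripe i ends up weighted by g**i (Horner's rule)."""
    p = q = 0
    for d in reversed(data):   # q = ((... * g ^ D_1) * g) ^ D_0
        p ^= d
        q = gf_mul2(q) ^ d
    return p, q

p, q = raid6_syndromes([0x11, 0x22, 0x33])  # D_0, D_1, D_2 on 3 data disks
assert (p, q) == (0x00, 0x99)
</syntaxhighlight>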
Mathematically, the ''generator'' is an element of the field such that <math>g^i</math> is different for each nonnegative <math>i</math> satisfying <math>i < n</math>.

If one data drive is lost, the data can be recomputed from '''P''' just like with RAID 5. If two data drives are lost, or if a data drive and the drive containing '''P''' are lost, the data can be recovered from '''P''' and '''Q''', or from just '''Q''', respectively, using a more complex process; the details can be worked out using field theory. Suppose that <math>D_i</math> and <math>D_j</math> are the lost values with <math>i \neq j</math>. Using the other values of <math>D</math>, constants <math>A</math> and <math>B</math> may be found so that <math>D_i \oplus D_j = A</math> and <math>g^iD_i \oplus g^jD_j = B</math>:
:<math>A = \bigoplus_{\ell:\;\ell\not=i\;\mathrm{and}\;\ell\not=j}{D_\ell} = \mathbf{P} \;\oplus\; \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \dots \;\oplus\; \mathbf{D}_{i-1} \;\oplus\; \mathbf{D}_{i+1} \;\oplus\; \dots \;\oplus\; \mathbf{D}_{j-1} \;\oplus\; \mathbf{D}_{j+1} \;\oplus\; \dots \;\oplus\; \mathbf{D}_{n-1}</math>

:<math>B = \bigoplus_{\ell:\;\ell\not=i\;\mathrm{and}\;\ell\not=j}{g^{\ell}D_\ell} = \mathbf{Q} \;\oplus\; g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; \dots \;\oplus\; g^{i-1}\mathbf{D}_{i-1} \;\oplus\; g^{i+1}\mathbf{D}_{i+1} \;\oplus\; \dots \;\oplus\; g^{j-1}\mathbf{D}_{j-1} \;\oplus\; g^{j+1}\mathbf{D}_{j+1} \;\oplus\; \dots \;\oplus\; g^{n-1}\mathbf{D}_{n-1}</math>
Multiplying both sides of the equation for <math>B</math> by <math>g^{-i}</math> and adding it to the equation for <math>A</math> yields <math>(g^{j-i}\oplus1)D_j = g^{-i}B\oplus A</math> and thus a solution for <math>D_j</math>, which may then be used to compute <math>D_i = A \oplus D_j</math>.
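Continuing the previous sketch (this reuses <code>gf_mul2</code> and <code>raid6_syndromes</code> from above), the recovery of two lost data bytes follows the derivation directly: compute <math>A</math> and <math>B</math> from the survivors, then solve for <math>D_j</math> and <math>D_i</math>. Division is done here by brute-force search over the field, which is enough for a demonstration; real implementations use log/antilog tables.

<syntaxhighlight lang="python">
def gf_mul(a, b):
    """General multiply in GF(2^8), built from repeated gf_mul2."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a, b = gf_mul2(a), b >> 1
    return result

def gf_div(a, b):
    """Brute-force division in GF(2^8); fine for a demonstration."""
    return next(c for c in range(256) if gf_mul(b, c) == a)

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def recover_two(data, i, j, p, q):
    """Rebuild lost bytes D_i and D_j (i < j) from P, Q and the survivors."""
    a, b = p, q                      # A = P xor survivors; B = Q xor g^l*D_l
    for l, d in enumerate(data):
        if l not in (i, j):
            a ^= d
            b ^= gf_mul(gf_pow(2, l), d)
    # From A = D_i xor D_j and B = g^i D_i xor g^j D_j:
    #   D_j = (g^-i B xor A) / (g^(j-i) xor 1),  D_i = A xor D_j
    gi, gj = gf_pow(2, i), gf_pow(2, j)
    dj = gf_div(gf_div(b, gi) ^ a, gf_div(gj, gi) ^ 1)
    return a ^ dj, dj

data = [0x11, 0x22, 0x33]
p, q = raid6_syndromes(data)
assert recover_two(data, 0, 2, p, q) == (0x11, 0x33)  # disks 0 and 2 rebuilt
</syntaxhighlight>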
The computation of '''Q''' is CPU-intensive compared to the simplicity of '''P'''. Thus, a RAID 6 implemented in software will have a more significant effect on system performance, and a hardware solution will be more complex.
== Non-standard RAID levels and non-RAID drive architectures ==

{{Main|Non-standard RAID levels|Non-RAID drive architectures}}

Alternatives to the above designs include [[nested RAID levels]], [[non-standard RAID levels]], and [[non-RAID drive architectures]]. Non-RAID drive architectures are referred to by similar acronyms, notably [[non-RAID drive architectures#SLED|SLED]], [[JBOD|Just a Bunch of Disks]], [[Spanned volume|SPAN/BIG]], and [[Massive array of idle disks|MAID]].

== References ==

{{Reflist|30em}}

== External links ==

* [http://www.icc-usa.com/raid-calculator.asp RAID Calculator for Standard RAID Levels and Other RAID Tools]
* [http://www-1.ibm.com/support/docview.wss?uid=swg21149421 IBM summary on RAID levels]
* [http://www.dtidata.com/resourcecenter/2008/05/08/raid-configuration-parity-check/ RAID 5 parity explanation and checking tool]
* [http://support.dell.com/support/topics/global.aspx/support/entvideos/raid?c=us&l=en&s=gen Animations and details on RAID levels 0, 1, and 5]
* [http://blog.open-e.com/how-does-raid-5-work/ The Open-E Blog. "How does RAID 5 work? The Shortest and Easiest explanation ever!"]

{{DEFAULTSORT:Standard Raid Levels}}
[[Category:RAID]]