
RAID Groups and Types



The above slide illustrates the concept of creating a RAID Group and the RAID
types supported by the Clariion.
RAID Groups
The concept of a RAID Group on a Clariion is to group together a number of the
Clariion's disks into one larger pool. Let's say that we need a 1 TB LUN, and the disks
we have are 200 GB in size. We would have to group together five (5) disks to get to the
1 TB size needed for the LUN. This ignores parity overhead and the difference between a
drive's raw and usable capacity, but it gives a basic idea of what we mean by a RAID Group.
RAID Groups also let you configure the Clariion so that you know which LUNs,
applications, etc. live on which set of disks in the back of the Clariion. For instance, you
wouldn't want an Oracle database LUN on the same RAID Group (disks) as a SQL database
running on the same Clariion. RAID Groups let you create one group of disks for the
Oracle database, and another group of a different set of disks for the SQL database.
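As a rough sketch (this is illustrative arithmetic, not Clariion code, and it ignores raw-vs-formatted capacity), the usable capacity of a RAID Group at each of the RAID types covered below can be computed like this:

```python
def raid_group_capacity(disk_gb, disk_count, raid_type):
    """Rough usable capacity of a RAID Group, ignoring raw-vs-usable overhead."""
    if raid_type == "0":
        return disk_gb * disk_count           # every disk holds data
    if raid_type == "1":
        return disk_gb * disk_count // 2      # half the disks are mirrors
    if raid_type in ("3", "5"):
        return disk_gb * (disk_count - 1)     # one disk's worth of parity
    if raid_type == "6":
        return disk_gb * (disk_count - 2)     # two disks' worth of parity
    raise ValueError("unknown RAID type: " + raid_type)

# Five 200 GB disks striped with no parity give the 1 TB from the example:
print(raid_group_capacity(200, 5, "0"))   # 1000 GB
# The same five disks as RAID 5 leave only four disks' worth of data:
print(raid_group_capacity(200, 5, "5"))   # 800 GB
```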
RAID Types
Above are the supported RAID types of the Clariion.

RAID 0: Striping Data with NO Data Protection. The Clariion's cache writes the data
out to disk in blocks (chunks) that we will discuss later. For RAID 0, the Clariion
writes/stripes the data across all of the disks in the RAID Group. This is fantastic for
performance, but if one of the disks in the RAID 0 Group fails, the data will be lost
because there is no protection of that data (i.e. no mirroring or parity).
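A minimal sketch of that striping idea (a toy model, not the Clariion's actual block placement) distributes blocks round-robin across the group's disks:

```python
def stripe(blocks, disk_count):
    """Distribute data blocks round-robin across the disks of a RAID 0 group."""
    disks = [[] for _ in range(disk_count)]
    for i, block in enumerate(blocks):
        disks[i % disk_count].append(block)
    return disks

disks = stripe(["b0", "b1", "b2", "b3", "b4", "b5"], 3)
print(disks)   # [['b0', 'b3'], ['b1', 'b4'], ['b2', 'b5']]
# Lose any one disk and a third of the blocks are simply gone -- no parity,
# no mirror, nothing to rebuild from.
```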
RAID 1: Mirroring. The Clariion writes the data out to the first disk in the RAID Group,
and writes an exact copy to another disk in that RAID 1 Group. This is great in terms of data
protection: if you were to lose the data disk, the mirror would hold an exact copy,
allowing the user to keep accessing the data.
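The mirror-pair behavior can be sketched in a few lines (again a toy model, assuming a simple primary/mirror read path):

```python
def mirror_write(block, pair):
    """Write the same block to both members of a RAID 1 pair."""
    pair[0].append(block)
    pair[1].append(block)

def mirror_read(pair, index, failed=None):
    """Read from the primary; fall back to the mirror if the primary failed."""
    source = pair[1] if failed == 0 else pair[0]
    return source[index]

pair = ([], [])
mirror_write("data", pair)
print(mirror_read(pair, 0, failed=0))   # 'data' -- served from the surviving mirror
```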
RAID 1_0: Mirroring and Striping Data. This is the best of both worlds if set up
properly. This type of RAID Group allows the Clariion to stripe data and mirror it
onto other disks. However, the illustration of RAID 1_0 above is not the best way of
configuring this type of RAID Group. The next slide goes into detail as to why this isn't
the best method of configuring RAID 1_0.
RAID 3: Striping Data with a Dedicated Parity Drive. This type of RAID Group allows
the Clariion to stripe data across the first disks in the RAID Group, and dedicate the
last disk in the RAID Group to parity for the data stripe. In the event of a single drive
failure in this RAID Group, the failed disk can be rebuilt from the remaining disks in the
RAID Group.
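The rebuild works because parity is the XOR of the data blocks in a stripe: XOR the survivors with the parity and the missing block falls out. A self-contained sketch (illustrative only):

```python
from functools import reduce

def xor_parity(blocks):
    """Byte-wise XOR across the blocks of a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data disks plus one dedicated parity disk (RAID 3 layout):
data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_parity(data)

# Disk 1 fails; rebuild it by XORing the surviving data blocks with the parity:
rebuilt = xor_parity([data[0], data[2], parity])
print(rebuilt == data[1])   # True
```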
RAID 5: Striping Data with Distributed Parity. RAID 5 allows the Clariion to
distribute the parity information used to rebuild a failed disk across all the disks that
make up the RAID Group. As with RAID 3, in the event of a single drive failure in this
RAID Group, the failed disk can be rebuilt from the remaining disks in the RAID Group.
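The difference from RAID 3 is only where the parity lives: instead of a dedicated drive, the parity block rotates to a different disk on each stripe. A sketch of one common rotation scheme (the exact rotation a Clariion uses is not stated here, so treat this as a generic illustration):

```python
def raid5_layout(stripe_count, disk_count):
    """Show which disk holds parity ('P') on each stripe, rotating per stripe."""
    rows = []
    for stripe in range(stripe_count):
        parity_disk = (disk_count - 1 - stripe) % disk_count
        rows.append(["P" if d == parity_disk else "D" for d in range(disk_count)])
    return rows

for row in raid5_layout(4, 4):
    print(row)
# ['D', 'D', 'D', 'P']
# ['D', 'D', 'P', 'D']
# ['D', 'P', 'D', 'D']
# ['P', 'D', 'D', 'D']
```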
RAID 6: Striping Data with Double Parity. This is new to the Clariion world starting
in FLARE code 26 of Navisphere. The simplest explanation of RAID 6 is that the
RAID Group uses striping, as in RAID 5, with double the parity. This allows a RAID 6
Group to survive two drive failures in the RAID Group while maintaining access to
the LUNs.
HOT SPARE: A Dedicated Single Disk that Acts as a Failed Disk. A Hot Spare is
created as a single-disk RAID Group, and is bound/created as a HOT SPARE in Navisphere.
The purpose of this disk is to stand in for a failed disk in the event of a drive failure.
Once a disk is set as a HOT SPARE, it is always a HOT SPARE, even after the failed disk
is replaced. In the slide above, we list the steps of a HOT SPARE taking over in the event
of a disk failure in the Clariion.
1. A disk fails. A disk in a RAID Group somewhere in the back of the Clariion fails.
2. The Hot Spare is invoked. A Clariion dedicated HOT SPARE acts as the failed disk in
Navisphere. It assumes the identity of the failed disk's Bus_Enclosure_Disk address.
3. Data is rebuilt completely onto the Hot Spare from the other disks in the RAID Group.
The Clariion recalculates and rebuilds the failed disk onto the Hot Spare from the
other disks in the RAID Group, whether by copying from the MIRRORed copy of the disk,
or through parity and data calculations in a RAID 3 or RAID 5 Group.
4. The disk is replaced. At some point during this process, the failed drive is replaced.
5. Data is copied back to the new disk. The data is then copied from the Hot Spare back
to the replacement disk. This takes place automatically, but does not begin until the
failed disk has been completely rebuilt onto the Hot Spare.
6. The Hot Spare goes back to being a Hot Spare. Once the data is written from the Hot
Spare back to the replacement disk, the Hot Spare returns to standby, waiting for another
disk failure.
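The steps above can be sketched as a small state machine (a toy model; the disk address `0_1_4` is hypothetical). The one ordering rule the flow enforces is the one step 5 calls out: copy-back cannot start until the rebuild is complete.

```python
class HotSpare:
    """Toy model of the hot-spare flow; copy-back must wait for the rebuild."""
    def __init__(self):
        self.state = "standby"
        self.address = None

    def invoke(self, failed_address):
        assert self.state == "standby"
        self.address = failed_address      # assumes the failed disk's identity
        self.state = "rebuilding"

    def rebuild_complete(self):
        assert self.state == "rebuilding"
        self.state = "rebuilt"

    def copy_back(self):
        # Automatic, but only starts once the rebuild has finished.
        assert self.state == "rebuilt", "copy-back must wait for rebuild"
        self.address = None
        self.state = "standby"             # back to waiting for the next failure

spare = HotSpare()
spare.invoke("0_1_4")      # step 2: assumes Bus_Enclosure_Disk 0_1_4
spare.rebuild_complete()   # step 3 done
spare.copy_back()          # steps 5-6
print(spare.state)         # standby
```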
Hot Spares are size and drive type specific.
Size. The Hot Spare must be at least the same size as the largest disk in the Clariion. A
Hot Spare will replace a drive of the same or smaller size. The Clariion does not allow
multiple smaller Hot Spares to jointly replace a failed disk.
Drive Type. If your Clariion has a mixture of drive types, such as Fibre and S.ATA
disks, you will need Hot Spares of each of those drive types. A Fibre Hot Spare will not
replace a failed S.ATA disk, and vice versa.
Hot Spares are not assigned to any particular RAID Group. They are used by the Clariion
in the event of a failure of any disk of that drive type. The recommendation is one (1)
Hot Spare for every thirty (30) disks.
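Those two eligibility rules can be written as a simple filter (a sketch; the dictionary fields are invented for illustration):

```python
def eligible_spares(failed_disk, spares):
    """Hot spares that can cover a failed disk: same drive type, and at
    least as large -- a single spare only, never several smaller ones."""
    return [s for s in spares
            if s["type"] == failed_disk["type"]
            and s["size_gb"] >= failed_disk["size_gb"]]

spares = [{"type": "fibre", "size_gb": 300},
          {"type": "sata",  "size_gb": 500}]
failed = {"type": "fibre", "size_gb": 146}
print(eligible_spares(failed, spares))   # only the 300 GB Fibre spare qualifies
```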
There are multiple ways to create a RAID Group. One is via the Navisphere GUI, and the
other is through the Command Line Interface (CLI). Later slides list the commands used
to create a RAID Group.

source: http://clariionblogs.blogspot.com/2008/01/raid-groups-and-types.html
