
MetaLUNs

The purpose of a MetaLUN is to let a Clariion grow the size of a LUN on the fly. Let's say that a host is running out of space on a LUN. From Navisphere, we can expand that LUN by adding more LUNs to the LUN the host already has access to. To the host, we are not adding more LUNs; all the host sees is that its existing LUN has grown in size. We will explain later how to make the new space available to the host.
There are two types of MetaLUNs, Concatenated and Striped. Each has its advantages and disadvantages, but the end result, whichever you use, is that you are growing (expanding) a LUN.
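Expansion is normally driven from the Navisphere GUI, but the same operation is also exposed through the Navisphere Secure CLI. The Python sketch below wraps that CLI call; the Storage Processor address, the LUN numbers, and the exact option spellings (-base, -lus, -type) are assumptions to verify against the CLI reference for your FLARE release, not a definitive recipe.

# Sketch: expanding base LUN 6 into a MetaLUN from the Navisphere Secure CLI.
# SP address, LUN numbers, and option spellings are assumptions -- check the
# Navisphere CLI reference for your FLARE release before relying on them.
import subprocess

SP = "10.0.0.1"                          # hypothetical Storage Processor address

def naviseccli(*args):
    """Run a Navisphere Secure CLI command against the SP and return its output."""
    cmd = ["naviseccli", "-h", SP, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Expand base LUN 6 with component LUNs 23 and 73; "-type S" requests a striped
# expansion, "-type C" a concatenated one.
print(naviseccli("metalun", "-expand", "-base", "6", "-lus", "23", "73", "-type", "S"))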
A Concatenated MetaLUN is advantageous because it allows a LUN to be grown quickly, and the space is made available to the host rather quickly as well. The other advantage is that the component LUNs added to the LUN assigned to the host can be of a different RAID type and a different size.
The host writes to cache on the Storage Processor, and the Storage Processor then flushes that data out to disk. With a Concatenated MetaLUN, the Clariion writes to only one component LUN at a time. In the example, the Clariion writes to LUN 6 first. Once the Clariion fills LUN 6 with data, it begins writing to the next LUN in the MetaLUN, which is LUN 23. The Clariion continues writing to LUN 23 until it is full, then writes to LUN 73. Because of this writing process there is no performance gain; the Clariion is still writing to only one LUN at a time.
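To make the layout concrete, here is a minimal sketch, not Clariion code, of how a concatenated MetaLUN can be thought of as mapping a logical block address onto its component LUNs: each component's capacity is simply appended after the previous one, so a component only receives data once the ones before it are full. The component sizes are made up for illustration.

# Illustrative sketch only -- component LUN sizes are hypothetical.
def concat_map(lba, components):
    """Map a MetaLUN logical block address to (component LUN, offset within it).

    components: list of (lun_id, size_in_blocks) in concatenation order.
    """
    offset = lba
    for lun_id, size in components:
        if offset < size:
            return lun_id, offset        # the address falls inside this component
        offset -= size                   # otherwise skip past this component
    raise ValueError("LBA is beyond the MetaLUN's capacity")

# Example: LUN 6, then LUN 23, then LUN 73, one million blocks each (made up).
meta = [(6, 1_000_000), (23, 1_000_000), (73, 1_000_000)]
print(concat_map(1_500_000, meta))       # -> (23, 500000): LUN 6 filled up first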
A Striped MetaLUN is advantageous because, if set up properly, it can enhance performance as well as protection. Let's look first at how the MetaLUN is set up and written to, and how performance can be gained. With a Striped MetaLUN, the Clariion writes to all of the LUNs that make up the MetaLUN, not just one at a time. The advantage of this is more spindles/disks. The Clariion stripes the data across all of the LUNs in the MetaLUN, and if those LUNs are on different RAID Groups on different buses, this allows the application to be striped across fifteen (15) disks and, in the example above, three back-end buses of the Clariion. The workload of the application is spread out across the back end of the Clariion, thereby possibly increasing speed. As illustrated above, the first data stripe (Data Stripe 1) that the Clariion writes out to disk goes across the five disks of RAID Group 5, where LUN 6 lives. The next stripe of data (Data Stripe 2) is striped across the five disks that make up RAID Group 10, where LUN 23 lives. Finally, the third stripe of data (Data Stripe 3) is striped across the five disks that make up RAID Group 20, where LUN 73 lives. The Clariion then starts the process all over again with LUN 6, then LUN 23, then LUN 73. This gives the application 15 disks, and three buses, to be spread across.
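As a counterpart to the concatenated sketch above, here is a minimal illustration, again not Clariion code, of how a striped MetaLUN rotates stripe elements across its component LUNs in round-robin order. The stripe element size is an assumption chosen only for the example.

# Illustrative sketch only -- the stripe element size is an assumption.
STRIPE_ELEMENT_BLOCKS = 128              # hypothetical stripe element size

def stripe_map(lba, component_luns, element_blocks=STRIPE_ELEMENT_BLOCKS):
    """Map a MetaLUN logical block address to (component LUN, offset within it)."""
    element = lba // element_blocks                      # which stripe element
    lun_index = element % len(component_luns)            # round-robin across LUNs
    stripe_row = element // len(component_luns)          # completed full rows
    offset = stripe_row * element_blocks + (lba % element_blocks)
    return component_luns[lun_index], offset

# Data Stripe 1 lands on LUN 6, Stripe 2 on LUN 23, Stripe 3 on LUN 73, repeat.
for lba in (0, 128, 256, 384):
    print(stripe_map(lba, [6, 23, 73]))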
As for data protection, this configuration is similar to building a 15-disk RAID Group, without that group's weaknesses. The problem with a 15-disk RAID Group is that if one disk were to fail, it would take a considerable amount of time to rebuild the failed disk from the other 14 disks. Also, if two disks were to fail in that RAID Group and it was RAID 5, data would be lost. In the drawing above, each of the component LUNs is on a different RAID Group. That means we could lose a disk in RAID Group 5, RAID Group 10, and RAID Group 20 at the same time and still have access to the data. The other advantage of this configuration is that rebuilds occur within each individual RAID Group. Rebuilding from four disks is going to be much faster than rebuilding from the 14 disks of a fifteen-disk RAID Group.
The disadvantage of using a Striped MetaLUN is that it takes time to create. When a component LUN is added to the MetaLUN, the Clariion must restripe the data across the existing LUN(s) and the new LUN. This takes time and consumes Clariion resources, and there may be a performance impact while a Striped MetaLUN is re-striping the data. Also, the space is not available to the host until the MetaLUN has completed re-striping.

LUN Migration

LUN Migration has been available in Navisphere as of FLARE code Release 16. A LUN Migration is a move of a LUN within a Clariion from one location to another. It is a two-step process: first a block-by-block copy of the Source LUN to its new location, the Destination LUN, and then, once the copy is complete, a move of the Source LUN to its new place in the Clariion.
The Process of the Migration.
Again, this type of LUN Migration is an internal move of a LUN, unlike a SAN Copy, where a data migration occurs between a Clariion and another storage device. In the illustration above, we are moving Exchange off of the Vault drives onto RAID Group 10 on another enclosure in the Clariion. We will first discuss the process of the Migration, and then the rules of the Migration.
1. Create a Destination LUN. This is going to be the Source LUN's new location on the Clariion's disks. The Destination LUN can be on a different RAID Group, a different bus, or a different enclosure. The reason for a LUN Migration might be that we want to offload a LUN from a busy RAID Group for performance reasons, or that we want to move a LUN from Fibre Channel drives to ATA drives. This we will discuss in the Rules portion.
2. Start the Migration from the Source LUN. From the LUN in Navisphere, we simply right-click and select Migrate. Navisphere gives us a window that displays the current information about the Source LUN and a selection window for the Destination LUN. Once we select the Destination LUN and click Apply, the migration begins. The migration is actually a two-step process: a copy first, then a move. Once the migration begins, it is a block-for-block copy from the Source LUN (original location) to the Destination LUN (new location). This is important to know because the Source LUN does not have to be offline while this process is running. The host continues to read and write to the Source LUN, which writes to cache, and cache then writes out to the disks. Because it is a copy, any new write to the Source LUN is also written to the Destination LUN. At any time during this process you may cancel the migration, either because the wrong LUN was selected or to wait until a later time. A priority level is also available to speed up or slow down the process (the sketch after these steps shows the equivalent command-line form).
3. Migration completes. When the migration completes, the Source LUN then moves to its new location in the Clariion. Again, there is nothing that needs to be done from the host, as it is still the same LUN it was to begin with, just in a new place on the Clariion. The host doesn't even know that the LUN is on a Clariion; it thinks the LUN is a local disk. The Destination LUN ID that you assigned when creating that LUN will disappear; to the Clariion, that LUN never existed. The Source LUN occupies the space of the Destination LUN, taking with it the same LUN ID, SP ownership, and host connectivity. The only things that may or may not change, based on your selection of the Destination, are the RAID type, RAID Group, size of the LUN, or drive type. The original space that the Source LUN once occupied will show as free space in Navisphere on the Clariion. If you were to look at the RAID Group where the Source LUN used to live, under the Partitions tab, you would see the space the original LUN occupied listed as free. The Source LUN is still in the same Storage Group, assigned to the host as it was before.
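For reference, the same migration can also be driven from the Navisphere Secure CLI. The Python sketch below wraps those CLI calls; the Storage Processor address, the LUN numbers, and the exact option spellings (-source, -dest, -rate) are assumptions to verify against the CLI reference for your FLARE release.

# Sketch: starting and monitoring a LUN Migration through naviseccli.
# SP address, LUN numbers, and option spellings are assumptions -- check the
# Navisphere CLI reference for your FLARE release before relying on them.
import subprocess

SP = "10.0.0.1"                          # hypothetical Storage Processor address

def naviseccli(*args):
    """Run a Navisphere Secure CLI command against the SP and return its output."""
    cmd = ["naviseccli", "-h", SP, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Start migrating Source LUN 6 onto Destination LUN 42 at a low priority; the
# priority (rate) can be raised or lowered to speed up or slow down the copy.
naviseccli("migrate", "-start", "-source", "6", "-dest", "42", "-rate", "low")

# Check progress; once the migration completes, the Destination LUN ID disappears
# and LUN 6 simply occupies the new location.
print(naviseccli("migrate", "-list"))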
Migration Rules
The rules of a Migration as illustrated above are as follows.
The Destination LUN can be:
1. Equal in size or larger. You can migrate a LUN to a LUN with exactly the same block count, or to a LUN that is larger in size, so long as the host has the ability to see the additional space once the migration has completed. Windows would need a rescan or reboot of the disks to see the additional space, and then Diskpart to extend the volume on the host. A host that doesn't have the ability to extend a volume would need volume manager software to grow the filesystem.
2. The same or a different drive type. A Destination LUN can be on the same type of drives as the source, or on a different type of drive. For instance, you can migrate a LUN from Fibre Channel drives to ATA drives when the Source LUN no longer needs the faster drives. This is a LUN-to-LUN copy/move, so drive types will not stop a migration from happening, although slower drives may slow the process down.
3. The same or a different RAID type. Again, because it is a LUN-to-LUN copy, RAID types don't matter. You can move a LUN from RAID 1_0 to RAID 5 and reclaim some of the space on the RAID 1_0 disks, or you may find that RAID 1_0 better suits your needs for performance and redundancy than RAID 5.
4. A regular LUN or a MetaLUN. The Destination LUN only has to be equal in size or larger, so whether it is a regular LUN on a five-disk RAID 5 group or a Striped MetaLUN spread across multiple enclosures, buses, and RAID Groups for performance is completely up to you.
However, the Destination LUN cannot be:
1. Smaller in size. There is no way on a Clariion to shrink a LUN to allow a user to reclaim
space that is not being used.

2. A SnapView, MirrorView, or SAN Copy LUN. Because these LUNs are being used by the Clariion to replicate data for local recoveries, to replicate data to another Clariion for Disaster Recovery, or to move data to or from another storage device, they are not available as a Destination LUN.
3. In a Storage Group. If a LUN is in a Storage Group, it is believed to belong to a Host.
Therefore, the Clariion will not let you write over a LUN that potentially belongs to another host.
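The eligibility rules above can be summed up in a short check. This is purely a conceptual sketch of the rules as stated here; the LUN fields are invented for illustration, and Navisphere enforces all of these checks for you.

# Conceptual sketch of the Destination LUN rules. The fields on this Lun class
# are invented for illustration; Navisphere itself enforces these checks.
from dataclasses import dataclass

@dataclass
class Lun:
    block_count: int
    in_storage_group: bool = False
    used_by_replication: bool = False    # SnapView, MirrorView, or SAN Copy

def valid_destination(source: Lun, dest: Lun) -> bool:
    """Return True if dest is an eligible Destination LUN for migrating source."""
    if dest.block_count < source.block_count:
        return False                     # cannot migrate to a smaller LUN
    if dest.used_by_replication:
        return False                     # replication LUNs are not eligible
    if dest.in_storage_group:
        return False                     # the LUN may already belong to a host
    return True                          # drive type and RAID type do not matter

print(valid_destination(Lun(2_097_152), Lun(4_194_304)))   # True: equal or larger
print(valid_destination(Lun(2_097_152), Lun(1_048_576)))   # False: smaller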
