

Lesson 3 Managing Devices Within the VxVM Architecture


Welcome to Lesson 3, Managing Devices Within the VxVM Architecture.
Lesson topics and objectives
In this lesson, we'll talk about managing components within the VxVM architecture and describe those components, discovering disk devices, and managing multiple paths to disk devices, including the DMP technology.
Managing components in the VxVM architecture
So let's start by taking a look at the architecture of a Volume Manager system.
VxVM architecture
Volume Manager is really not an application at all. It's a collection of scripts, daemons and device drivers working together to really
route I/O to the hardware that's underneath it. And that's shown by this diagram here. So if we start from the top, we see the user
areas up top where it says user applications, and we see the configuration utilities like vxdiskadm, Veritas Enterprise Administrator (the VEA), and the CLI; this might also include the Storage Foundation Manager that you might be using to connect to the system as a managed host. We also see the daemons running in the user process space, such as vxconfigd, which is the master daemon in Volume Manager, as well as vxrelocd, vxcached and vxconfigbackupd. Those daemons take commands from the interfaces, so when you run a command from the command line, the VEA, or vxdiskadm, the command is sent directly to vxconfigd, which gets information from those other daemons to the left of vxconfigd in terms of making decisions as to what to do with that storage. So once vxconfigd has made
a decision as to what it wants to do, it sends the information into the kernel. Some of the information is logged in the kernel log
especially if you have, for example a mirrored volume and plexes being written in that mirrored volume. The kernel log records some
of that information, and then that information eventually goes down to the VxVM config driver, which is really the combination of vxio and another driver called vxspec; those are both device drivers in Volume Manager. The vxio driver controls the kernel copy of each
of the imported disk groups' configuration databases. So in addition to having the database written in the disk private regions for that
disk group, Volume Manager keeps a kernel-based copy of all the imported active configuration databases just in case system or disks
go down, which is why sometimes if you run the vxdisk list or vxprint command, you'll still see disks that may have failed and the corresponding LUNs from which they were broken apart. That information is shared with some of the other kernel modules or
device drivers such as vxdmp and if you're using I/O fencing especially with cluster server you will see another driver called vxfen in
that area too. These are all drivers that are loaded at boot time when the system boots. Those drivers in addition to the config driver
communicate with the operating system device drivers and they send and receive I/O through their block devices and character
devices, which are the /dev/dsk and /dev/rdsk paths, to communicate and route I/O to and from the disks. So that's one of the reasons why we said earlier in the lessons that you should never remove the /dev/dsk and /dev/rdsk device paths; you should leave them there because they are an integral part of that I/O pathing on its way to the disk.
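As a quick sanity check, here is a hedged sketch of how you might confirm on a Solaris host that vxconfigd is up and the VxVM drivers are loaded; the module names shown are the usual ones, but treat the output as illustrative:

    # Check that vxconfigd is running and enabled
    vxdctl mode

    # List the loaded VxVM kernel modules (expect vxio, vxspec, vxdmp)
    modinfo | grep vx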
Dynamic multipathing (DMP)
Now as we also discussed in earlier lessons when you create a volume in Volume Manager, Volume Manager adds its own device
paths in addition to what's already there. So the Volume Manager device paths will always have /dev/vx and then something. Here
are some examples of the Volume Manager device paths if we are using dmp. These are yet another set of device paths that are
added to the operating system when you install Volume Manager on a system and now you are using either active/passive or
active/active dynamic multipathing. So here we have /dev/vx/rdmp for the raw device path and /dev/vx/dmp for the block device path.
If you're going to use dmp in that system, you need to have these paths in place at all times to all the devices underneath them so
you can use multipathing and take full advantage of the performance and failover benefits of DMP. Also, if a path becomes disabled, you will still have a pair of these DMP paths to be able to send I/O down the previously secondary path that has now been promoted to a primary path. If you have two primary paths, or are using active/active DMP, then you should have paths to both
controllers and the same set of disks and each will be transferring I/O at the same time, which can potentially even double your
performance and throughput to that same set of disks. This holds regardless of how you have your SAN configured, whether you have Fibre Channel switches or direct connections to more than one HBA. If you have multiple HBAs connected to the same enclosure and the same set of disks, and that array supports DMP, we will be able to transfer I/O based on the policies that DMP uses and the array supports. Not all arrays support active/active DMP; some only support active/passive or active/passive
concurrent DMP. That is determined by the array vendor and the release notes of the firmware inside the array.
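To make this concrete, here is a hedged sketch of checking that the DMP device nodes exist and listing the paths behind one of them; the DMP node name is a made-up placeholder:

    # List the DMP block and raw device nodes that VxVM created
    ls /dev/vx/dmp /dev/vx/rdmp

    # Show every path behind a given DMP node (name is hypothetical)
    vxdmpadm getsubpaths dmpnodename=emc0_0042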
Why use DMP?
So, why do we use DMP? To make a long story short, there are two reasons: one is performance benefits, meaning additional throughput, and the other is failover or fault tolerance, so that in case one HBA goes down, we still have another HBA that can
communicate I/O across those DMP paths. Most enterprise class arrays provide enterprise class performance by supporting
active/active DMP and some of the many policies that DMP offers. If you need more information about this,
you can go to the FTP site that's indicated on your slide and download that PDF file and that will give you information about what
arrays are supported with the hardware compatibility list for Storage Foundation 5.1 and the modes of DMP that are supported. So
we can do different types of DMP. We can do path restoration. We can do load-balancing. If we are adding HBAs and adding more
arrays we scale very well to larger configurations and we can integrate those new HBAs and paths into our DMP environment. We
also can manage DMP paths and policies through Storage Foundation Manager.
VxVM configuration database
Let's talk a little bit about what's inside the private region of a disk. One of the things that's inside the private region of an active disk
and an imported disk group is the VxVM configuration database. You saw earlier in the architecture slide that there is a copy of the
configuration database for each imported disk group in the kernel of the system that's running those disk groups. Well, there's also a
disk-based copy of that same database on the disk itself. Now Volume Manager for redundancy purposes always maintains a default
of at least five activated copies of the configuration database for the disk group that's imported. So each disk group has by default
five activated copies on the disks that are in the disk group. If you have fewer than five disks in the disk group, then all the disks will be
maintained as active copies. That active copy on those disks is going to be synchronized with the kernel active copy whenever
changes get made to the disk group configuration database. What types of records change? Records like volume records, subdisk records, plex records and logical disk records. So, anything that changes a record gets changed in the
database and the entire database gets updated, not individual record edits. So it's different than say, an Oracle or Sybase or SAP type
of database where you can update individual records and then you don't have to touch the rest of them. This particular database is
small enough so we update the entire database when any change gets made. So the change goes from the kernel to all the activated
database copies on the disks in that disk group. If you have more than five disks in your disk group, chances are you are going to have disabled configuration database copies. Those disks and copies will not participate in the changes that are happening in the kernel. They will remain static and not take on those changes. That means if you ever
have to replace a disk in the disk group and you are replacing a disk that has an active copy of the database and you bring in a
disabled disk, that disabled disk is going to have to get an activated copy of the updated disk group database image. For that you can
run a vxdctl enable command to make that happen. If you decide that you need a different number of activated copies in the disk group, you can change that.
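As a small illustration, here is a hedged sketch of the refresh step just described after swapping in a replacement disk; nothing here is destructive:

    # Rescan devices and refresh the disk group configuration state so a
    # newly added disk can receive an activated copy of the database
    vxdctl enable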
Displaying VxVM configuration database information
The best way to see how many active copies are being set up by default or if somebody has actually changed it to a different value is
to run this command on your slide vxdg list and the disk group name. Now, this example happens to be in Solaris, but the output is
very similar to any other operating system supporting Storage Foundation 5.1. If you look at the copies line, you'll see
nconfig=default and nlog=default. The default indication means that the number of configuration database copies that are active in the disk group is 5, which is the default, and nlog refers to another thing in the private region called the kernel log, which is also set to the default. The kernel log also has active copies of itself on the disks in the disk group. So a disk will typically contain
an activated database copy and an activated kernel log copy together. So, in this case, nobody has changed those values. They are
still set to the default which is typically 5. If you look underneath that, you will see the config line. Now when we talked about layered
volumes in an earlier lesson and we compared layered volumes to nonlayered volumes, we said that one of the disadvantages of
layered volumes was that they require a lot more Volume Manager objects to be created and stored in the configuration database. I'm
going to go a little step further with that explanation. So if you look at the permanent length, which is permlen, it's set to 48144; that's the amount of space available to store records in the configuration database. If you look at the one right next to it, that says free=48138. The difference between the permlen and the free is very, very small, and that means we don't have too many records in that disk group. We probably only have one volume or maybe two volumes, and possibly one of the volumes is mirrored, or has a few subdisks inside it. If your free space gets down to near 0, it's going to be a very slow process when you want to do any Volume Manager configuration change in that disk group, for example mirroring a volume or converting one layout type to another. That's going to take a long time because there's not a lot of free space to create and manage records. If you ever get to 0 in the free= designation you will run out of space in
your disk group. The disk group will disable itself, then you have to find a way to actually delete Volume Manager objects before you
can reimport that disk group. Now the space that's included in the permlen variable and the free variable is governed by how large
the private region is. We said in one of the early lessons to this training that by default every private region is created as 32 MB in
size on a UNIX-based system. Only in rare cases would you want that to be a lot larger, but if a disk group on that system is going to have thousands of objects or even more, you might want to consider creating a private region with a larger size. That may mean you have to go back and reinitialize the disk, because you cannot change the size of the private region on the fly without having to back up and restore data that may be resident on that disk. This is not an uptime operation. If you want to change the size of the private region after there are already volumes on it, you have to remove those volumes or evacuate them to another disk with vxevac, and then go back and reinitialize the disk with a few options to change or set up a
new private region size. Now if you look further down that output, you'll see that we have seven disks inside that disk group
accountingdg. You will see that five of the disks have a clean online status for the configuration database. That refers to an activated
copy of the configuration database. That means all the changes happening in the kernel version of that database are happening on all
five of those online disks at the same time. Then you have two disks which are disabled. They are not taking on the changes. Volume
Manager typically selects the right disk on its own and tries to be as redundant as it possibly can within that disk group when it
determines which disks are going to get the online copies and which disks will get the disabled copies. I have a diagram that I'd like
to show you to give you a better understanding of how the private region works and what it looks like, and to take a little closer look into the
configuration database itself. When you create or initialize a disk or even encapsulate a disk in Volume Manager, private and public
regions are created. So here we are just looking at the private region by itself. This is a blow up of the entire private region on any
disk that's initialized. The default is 32 MB, so we'll assume that we accept the default. The first thing you see in there is the VTOC, the volume table of contents, which is kind of like the superblock of a file system. This maps out the disk in partition tables on the disk
and all the space in the disk. Then you have the Volume Manager header. Now to see what's inside the header is actually very
simple: in Volume Manager you are going to run a vxdisk list and the disk name. The disk name could be the disk media name or the disk access name. That's going to do a header dump which gives you things like the disk name, whether the disk is part of a disk
group or not. If it is, what the disk group name is and it gives you a lot of good information about what's on the disk. In addition to
that information, at the bottom of the vxdisk list output, the last thing you'll see on that output is the disk names and whether they
have an online enabled status or a disabled status for the configuration database for that disk group. That's the same results that
you'll have when you run the vxdg list command, except this time it will be with a little bit more detail, because in this output of vxdisk list disk, it will show you the sector addresses and the offset and size of portions of the configuration database. It will actually show
you where the database is on those disks. So all of that information is held in the header and there is also a header copy at sector
240 by default of the private region on that disk. So you have two headers; if something happens to a block underneath the first header and you have a bad block and a failing disk, you can hopefully still read the header copy where that information is stored. The bulk of the private region is taken up by what we call the configuration database. That's
what we've been talking about with respect to the disk group. To see what the configuration database has inside it, that's when you
do your vxdg list and the disk group name, and it'll give you information about some of the records inside the disk group. Now there
are additional commands, diagnostic commands, that you can do to dump the entire configuration database and actually look at all
the objects inside the database: the volumes, plexes, subdisks and disks, the tags and flags that were set on those objects, and the
state diagram information that is associated with those objects. That is a command that we typically only want to use when we are
dealing with our Symantec Tech Support Department; if you have a problem or a question, you call Tech Support or contact them.
That's one thing that they typically run in tandem with some of these other commands. But I want you to know about it, just in case they ever ask you to run the command, so that you know what it does: it's a deeper view of the configuration database on your private
region on your disk. It does not change anything whatsoever on the disk, it just reports and queries and comes back with
information. We also have another section called the kernel log, which is a companion to the configuration database. And we
identified the kernel log in the slide when we talked about the nlog=default. So if you have an activated configuration database, you
almost always have in addition to that an activated kernel log copy. The log is responsible for logging changes to the objects in that
database before they're actually written to the database, much the same way the intent log of a Veritas file system journals metadata changes before the allocation units where the data lives get the change. The kernel log also contains information about the
objects and it also contains information about the last plex written in a mirrored volume for example and state information about the
plexes themselves. Earlier, we ran a vxprint-ht command in some of the lessons in some of the slides and we showed you states of
the plexes and volumes and kernels states of plexes and volumes. This is where some of those states are held on the actual disk, so
if the system goes down and comes back, Volume Manager's vxconfigd can read the private region on the disk, read the kernel log, pick up the state of the disk or volume or object, and continue with the volume recovery operation. So this diagram
hopefully helps you understand the private region in a little bit more detail. Obviously, if you have any additional questions or you
want a deeper look, you can always contact our Technical Support Department at support.symantec.com.
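Pulling those commands together, here is a hedged sketch of inspecting a disk group's configuration database copies; the disk group and disk names match the examples in this lesson, and the exact output layout varies by version:

    # Show nconfig=, nlog=, permlen=, free=, and the per-disk config state
    vxdg list accountingdg

    # The same online/disabled state also appears at the bottom of a header
    # dump for an individual disk (disk name is a placeholder)
    vxdisk list disk01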
Displaying disk header information
Now on this slide, I have some updated information about the header in Volume Manager. This is the output of the actual vxdisk list disk command. So here are some of the things I was just talking about: the device, the device tag, the type, and the hostid. And when I say hostid, please don't confuse this hostid field with the output of the Solaris hostid command; it is actually the equivalent of the Solaris hostname command. So the system or host name that currently owns this disk is called cassius. The disk group that it looks
like it belongs to is named newdg, you can see that on the line that starts with group. You see the ID numbers which are unique
numbers for that disk in that disk group and they also have the host name of the system that gave birth to the disk group and gave
birth to its disk and it happens to be the same system cassius. If this disk were part of a disk group that has since been failed over
from one host to another, you might see a difference in the ID host name versus the host ID host name. That would indicate that the
disk is now owned by a different host than the one that gave birth to it. You also see some flags and information indicating the
format of the disk, whether it's a cds, sliced or simple disk, and the private region offset, the sector where the private region starts. The public region slice or partition happens to be slice number two (it must be a Solaris box), and the private slice or partition also happens to be number two. As we said in the CDS lessons from before, CDS disks will always have the public and private region inside a single partition, and that is the case here because they are both in partition number two. Underneath that, you'll see your
public block and character DMP device paths and then you'll see some other information related to the public and private regions and
their length in sectors.
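For reference, a hedged sketch of pulling up this header information; the device name is a placeholder for whatever your system shows:

    # Dump the disk header: hostid, disk group, format, region offsets,
    # and (at the bottom) the DMP multipathing information
    vxdisk list c1t0d0s2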
Displaying disk header information (continued)
On the next slide, we see the continuation of this output. The update line shows the last time this particular database was updated, as a time value in seconds, along with an ongoing unique sequence number. And then we have the configuration database length and the kernel log length in
sectors. And then after that, under defined regions we can see the offset and the size of different components of the configuration
database as well as the kernel log. So that tells us exactly where in the private region the kernel log and configuration database exist.
And that may be different from one disk to the other determined by the geometry of the disk. At the bottom of this information, we
see information specific to DMP, the multipathing information. This particular disk is using what looks to be active/passive DMP, and you can tell that because there are two paths on different controllers: one path's state is enabled and its type is primary, and the other path's state is disabled and its type is secondary. If this were an active/active DMP designation, then you'd see both paths enabled.
VxVM disk types and formats
Also in vxdisk list, you can see information about whether this is a CDS disk, a sliced disk, a simple disk, an HP disk, or none, which means there is no public or private region on the disk yet; it is still just a physical disk.
VxVM configuration daemon
Now let's talk about the configuration daemon, vxconfigd. vxconfigd is the daemon in Volume Manager that takes in the commands
that you give it, either through the VEA, vxdiskadm, or the command line; translates those commands into calls and information the drivers can understand; and sends that information to the device drivers, which then execute the I/O by way of the SCSI drivers. If
vxconfigd is not running, volume I/O will not be affected, but if you want to make any changes to the configuration database or the objects, or do anything within those volumes and objects, those changes are not possible until you restart vxconfigd.
VxVM configuration daemon (continued)
If vxconfigd needs to recover a volume or pick up an updated state of one of the objects in Volume Manager, it does so by reading
the kernel log, typically from the configuration database and kernel log copies that are in the kernel of the system. If for some reason those
aren't available in the kernel, then it can read one of the disks that has an activated copy of those things. Now vxconfigd is by default
started up at boot time, at single-user level, in boot mode. At some point after that, vxconfigd is going to run a vxdctl enable
command or vxdisk scandisks command to go out and scan all the disks and pick up any information about the states, the objects,
changes in the objects and any updated configuration database images or kernel log images that it needs.
Managing the VxVM configuration daemon
If you need to do maintenance on the system in single-user mode, it is possible to disable the daemon temporarily. Now
one of the reasons you might want to do this is to test out some things outside of Volume Manager when you don't want Volume
Manager synchronization or other operations getting in the way of what you're testing. So these commands can help you manipulate
the vxconfigd daemon. Notice there is also a way to stop the daemon or even send a kill -9 signal to the daemon if it can't be stopped
normally. This is not something you want to do on a running system unless you really need to, because if you do this and you are in
the middle of some Volume Manager operation that Volume Manager operation is going to be, let's just say uncleanly cancelled and
forced out and then you may have to synchronize the volume completely when you come back and start the daemon again. Most of
the time you are going to be doing commands like vxdctl mode to display what status the daemon is in, you're going to be doing
vxdctl license to see what your license enables in terms of product features on that system, or vxdctl support to determine what level of support and what version of the product you have.
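Here is a hedged sketch of the management commands just described; the syntax follows the vxdctl usage covered in this lesson, but double-check the man page on your release:

    # Display what state the daemon is in
    vxdctl mode

    # Temporarily disable, then re-enable, vxconfigd
    vxdctl disable
    vxdctl enable

    # Stop the daemon cleanly; -k sends a kill -9 if it can't be stopped normally
    vxdctl stop
    vxdctl -k stop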
The volboot file
There is also a file that you need to know about called the volboot file. The volboot file has a couple of purposes. One is to hold the
values of defaultdg and bootdg that we discussed in an earlier lesson. volboot should never be edited
because it's checksummed, and if an edit causes the checksum to differ from what it should be, Volume Manager is not going to start; it's going to stop vxconfigd and put it in disabled mode. So if you want to
display the contents of volboot, you run a command called vxdctl list. If you need to change something in volboot, you need to do
commands like vxdctl defaultdg or vxdctl hostid if you're changing to a new host, or you can do things like vxdctl init, which creates
a brand new volboot file if for some reason somebody did edit it by hand and it was corrupted and you need to make a new one.
Those are not commands you are going to typically do unless something goes wrong. So usually you're going to be doing things like
changing your default disk group name or changing the hostid of that system, and all that can be done through
the vxdctl commands.
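A hedged sketch of those volboot operations; the names mydg and newhost are placeholders:

    # Display the contents of the volboot file
    vxdctl list

    # Change the default disk group recorded in volboot
    vxdctl defaultdg mydg

    # Change the hostid recorded in volboot
    vxdctl hostid newhost

    # Re-create volboot if it was hand-edited and corrupted
    vxdctl init newhost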
Discovering disk devices
Let's dive into discovering disk devices and explore how that works.
Device Discovery Layer (DDL)
This is at a much lower level than applications and volumes. We have to work in tandem with the array or enclosure that's presenting the disks and devices to us. The array needs to be supported by Volume Manager 5.1 or Storage Foundation 5.1, and it has to
provide some kind of a library with a vendor ID and other information specific to that array so that we know how to access the disk
and what to expect in the way of enclosure based naming, dynamic multipathing and thin provisioning and reclamation. So that when
we want to run commands like vxdiskconfig, vxdisk scandisks or vxdctl enable, we get what we expect to get from those disks in that
array. The device discovery layer is part of a process of locating the specific array that the library points us to. And whenever we
attach a new array or add more disks inside the current array, DDL is supposed to inform the vxconfigd daemon about these disks and
about the changes in the array. So the library, the ASL, the Array Support Library, and our DDL, Device Discovery Layer work in
tandem to get you, the administrator, this information.
DMP architecture
And here we have some more information about those components, the DDL, the array support library and another thing called the
array policy module, which is a dynamically loadable kernel module that you can add. That allows array-specific procedures such as
thin reclamation and thin provisioning.
Displaying information about ASLs and APMs
We have a command called vxddladm which allows you to dive a little bit deeper into what DDL is seeing on that array and inside the
array. We also have a command called vxcheckasl and then you give it a library name and that will display the ASLs that have
claimed a device and it'll also show you the devices that the ASL has found as a result of the library compatibility with DDL. There is a
chance you will have an outdated ASL if you have had an array attached to your system for a long, long time and you've upgraded
Storage Foundation, but the new ASL hasn't been applied. Remember that the ASLs typically come from the hardware vendor and the
hardware libraries. So you may need to go to the hardware vendor's website or contact the hardware vendor or refer to its release
notes if you think you need an updated ASL because that will have information about the latest and greatest version of those ASLs.
But generally speaking when you upgrade Storage Foundation to a new version in that same archive or on that same DVD, we will
have updated versions of the array vendor's ASLs and APMs that have been provided to us as a part of that support matrix.
So if the array is supported under that version chances are we have an updated ASL and array module already for you.
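A hedged sketch of the inspection commands mentioned here; both are documented operations, but verify the option spelling on your release:

    # List the arrays and ASLs that DDL currently supports and has claimed
    vxddladm listsupport all

    # List the array policy modules (APMs) that are installed
    vxdmpadm listapm all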
Updating ASLs / APMs
If you need to upgrade or update the ASL or APM manually, these are the commands that will help you do that. There is a new change in Storage Foundation 5.1 with respect to this: there is now a single package for all the ASLs and APMs that are supported with the new version, and that package name is on your slide. It's VRTSaslapm; before there were separate packages, and now it's all bundled in one package. So it's easier and more convenient to update everything at once if you need to do that. Don't
forget to run a vxdctl enable or one of those other scan commands after you've attached the new array and after you've applied the
new ASL and/or APM to make sure that the most updated information is sent to the Volume Manager kernel in the system about all
the arrays and devices.
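For instance, on a Solaris host the sequence might look like this hedged sketch; the package-install step varies by operating system:

    # Install or update the combined ASL/APM package (Solaris example)
    pkgadd -d . VRTSaslapm

    # Rescan so Volume Manager picks up the newly claimed devices
    vxdctl enable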
Partial device discovery
One of the things we mentioned earlier in fact about the vxdctl enable command is that it scans all the devices on your SAN which
can take a really long time especially for large organizations. Chances are if you've only changed just a few things with respect to one
array or one controller in that array, you don't have to scan all the other disks at the same time; you just want to scan the disks and devices that have changed. So this is the command that allows you to do that: vxdisk scandisks. You can put in new, or fabric if you are only looking at the SAN fabric. You can put in an enclosure or controller, you can specify the controllers, or you can
actually give it a hardware path to an actual hardware controller. This is a subform of vxdctl enable. It does the same thing as vxdctl
enable, but only does it on specific devices and hardware.
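A few hedged examples of the syntax; the controller names are placeholders, and the vxdisk man page lists the exact forms for your release:

    # Discover only newly added devices
    vxdisk scandisks new

    # Discover only fabric-attached devices
    vxdisk scandisks fabric

    # Limit the scan to specific controllers
    vxdisk scandisks ctlr=c1,c2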
Displaying extended device attributes
If you do a vxdisk -p list (-p stands for properties), you can get information about the disk itself, the device, or the array or enclosure itself: the disk ID, vendor ID, SCSI version, serial number, worldwide number, and array port. All this information is typically found in DDL, but if you want to see what DDL found, you can do the vxddladm command, and you can also do this -p list for vxdisk, which also gives you information about the DMP devices.
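A brief hedged sketch; the device name is a placeholder:

    # Show extended, DDL-discovered attributes for one device
    vxdisk -p list emc0_0042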
Managing multiple paths to disk devices
So let's talk about DMP, managing multiple paths to disk devices.
Types of multiported arrays
DMP has been around in the product for many, many years and many versions. There are two general categories of DMP, and each one
of these is governed by what the array and array firmware attached to it supports. They are active/active and active/passive. With
active/active DMP, you have two benefits. One is failover and fault tolerance from one HBA to the other. The second is load balancing
and performance because you are using two active paths at the same time to transfer I/O to the same set of disks. With
active/passive DMP, you still have the benefit of fault tolerance and availability, but you do not have the benefit of load-balancing and
performance because you only have one active path at a time. In Storage Foundation 5.1, one of the updates to DMP is that if you have active/passive DMP, one path is secondary, and your primary path fails, the secondary will be promoted to primary automatically by the restore daemon and then you continue to transfer I/O. In the past, you may have had to manually promote that
path to active or primary before you actually started to use it for the I/O. But now, what happens is you can set the restore daemon
parameters to automatically do that for you. So that's a nice ease of management change in the product.
Setting I/O policies and path attributes
To manipulate DMP nodes, devices, paths and controllers, you use the vxdmpadm command and there are many different
parameters, options and switches and policies that you can set to do what you need to do with DMP. Here we have an example of
setting attributes on the enclosure and the path respectively and changing the I/O policy or the path type. So you can look at the I/O
policies in that first example which include adaptive, adaptive minimum queue, balanced, minimum queue, priority, round robin, and
single active. And by the way, single active is only effective for active/passive DMP. All those other policies are available for active/active, and some are available for something called active/passive concurrent DMP, which is a simulation form of active/active
DMP. At the bottom example, you also have path attributes to a disk array or enclosure. So if you want to change one path from
primary to secondary manually or from secondary to primary manually, you can use the path type switch with vxdmpadm set
attribute and do it manually on your own. Or you can set a preferred path or no preferred path; there are a lot of different things you can do to make sure your DMP is transferring I/O at the highest possible performance level while still remaining fault tolerant in case a path dies.
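Two hedged examples along the lines of the slide; the enclosure and path names are placeholders:

    # Change the I/O policy for an enclosure to round-robin
    vxdmpadm setattr enclosure emc0 iopolicy=round-robin

    # Manually change a path's type from secondary to primary
    vxdmpadm setattr path c2t1d0s2 pathtype=primary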
Displaying I/O statistics for paths
You can also get statistics information and read and write operation information from DMP. Very similar to the vxstat command we
discussed in earlier lessons, we also have the vxdmpadm iostat command. This one additionally will show you things like CPU
usage and per CPU memory that the regular vxstat command may not be able to show you. But it's very similar in its output because
it also shows you operations, reads and writes, bytes read and written and the average time in milliseconds for each read or write.
That's very similar to the vxstat. The purpose of using this one is to see which path is performing better if you have active/active or
you can also see which path is primary or secondary in the case of active/passive or active/passive concurrent.
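A hedged sketch of the typical gather/show/stop cycle; the DMP node name is a placeholder:

    # Start gathering DMP I/O statistics
    vxdmpadm iostat start

    # Show per-path operations, bytes, and average read/write times
    vxdmpadm iostat show dmpnodename=emc0_0042

    # Stop gathering statistics
    vxdmpadm iostat stop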
Preventing DMP for a device
Now not all arrays can support DMP; some arrays use other vendors' multipathing technology as well, which may or may not be compatible or supported with our dynamic multipathing. If that's the case and you cannot run our multipathing simultaneously with the
vendors' multipathing, you might have to suppress or even prevent our multipathing from running on that system. And if you need to
do that, the best way to set that up is to run vxdiskadm and select one of these options on your slide here. If you want to do this,
you're relying on the hardware vendors' DMP or multipathing device drivers and technology to show you the true view of what your
disk farm looks like. So if you do that and then you see duplicate paths for each one of your disks, which really may not be the case (you only have one disk, but it does have two paths to it), there may be a problem with your hardware vendor's multipathing. So you want to make sure you have the latest and greatest firmware update, and the latest and greatest release notes and version of that vendor's multipathing, to be able to work with our DMP if you need it to work with it. Or if you are not using ours, then make sure you're getting the right information from the vendor.
Preventing DMP for a device (continued)
So this continues that example of suppressing all paths from a controller or preventing multipathing altogether for all disks on a
controller with VxVM. And again this is really only necessary if the array doesn't support our DMP or doesn't support interoperability
of our DMP with the vendors' DMP or multipathing device.
Disabling I/O to a controller
If you need to disable I/O to a controller in DMP manually, you can use these commands on your slide here. You can also do it
through the VEA, but I would recommend doing this through the command line, because you have a lot more switches and parameters available to you if you use vxdmpadm from the command line.
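A hedged sketch; the controller name is a placeholder:

    # Disable all I/O through one controller, for maintenance for example
    vxdmpadm disable ctlr=c2

    # Re-enable it afterwards
    vxdmpadm enable ctlr=c2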
Controlling the restore daemon
Earlier in the lessons, we talked about something called the restore daemon, which is a daemon that detects failed DMP paths to devices and allows for probing those paths and moving them between enabled and disabled. You can also change the frequency of how often each path is probed, but in earlier versions of Storage Foundation a path would typically only be probed if there was an I/O failure detected going down that path, and the I/O failure told the driver and daemon that you had a failed path. So one of the new features in the product is that it allows for more specific checking of specific
paths. It also allows for something called subpath failover grouping. So instead of probing the entire set of paths, controllers, arrays
and everything, which may cause a lot of contention, you can be more specific about which paths, controllers, enclosures, and arrays
are actually probed and may be marked as suspect if one of the devices on those paths fails. These are some of the commands that
you can use to tune the DMP restore daemon: you can start or stop it, and you can set different policies for that daemon to either only check disabled paths or check all paths, for example. So we use subpath failover grouping more extensively now in Storage
Foundation 5.1 and it involves more proactive probing of that daemon down the paths where there might be a failure even though
I/O isn't actually going down those paths. So for example, if you lose one or two devices on a particular DMP path, but the other devices on that path have not failed, they may fail soon; if the HBA fails, chances are more devices on that path will be affected. If that's the case, what subpath failover grouping can do is fail over that entire group from one path to the other path and activate that path automatically, so that you will experience virtually no downtime on the failed path. That involves
some proactive or more proactive probing of the paths that haven't yet failed. Now sometimes that can cause more contention on the
devices that are still available and the paths that are still available. So to mitigate that problem, we also have a new feature called
low impact path probing which allows you to control the amount of proactive probing on those paths that involves those devices. So
you can not only do more proactive probing in the new versions of the product, you can also control where that proactive probing
happens.
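A hedged sketch of tuning the restore daemon; the policy and interval values are illustrative, and the vxdmpadm man page lists the full set:

    # Show the current restore daemon status
    vxdmpadm stat restored

    # Restart the restore daemon, checking only disabled paths every 300 seconds
    vxdmpadm stop restore
    vxdmpadm start restore policy=check_disabled interval=300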
Lesson summary
And that completes Lesson 3, thank you very much.