
Dual VIO server

rmtcpip -all   <--remove the existing TCP/IP configuration from the VIO server before creating the SEA

Creating SEA

mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto ctl_chan=ent5

mktcpip -hostname testvio1 -inetaddr 172.24.145.81 -interface en5 -start -netmask 255.255.255.0 -gateway 172.24.145.1

Checking

lstcpip -interfaces
netstat -state -num
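
If SEA failover is configured with ha_mode=auto as above, it is also worth verifying the SEA attributes
and failover state. A minimal check, assuming the new SEA device was created as ent6 (the device name
on your system may differ):

lsdev -dev ent6 -attr ha_mode
lsdev -dev ent6 -attr ctl_chan
entstat -all ent6 | grep -i state
entstat -all ent6 | grep -i priority

The entstat output of the SEA should contain a High Availability section whose State field (PRIMARY or
BACKUP) shows which VIO server is currently bridging the traffic.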

lsnports -> a fabric value of 1 means the HBA port is connected to a SAN switch that supports the NPIV
feature.

If the value is 0, the port does not support the NPIV feature.

If there is no SAN connectivity, there will not be any connectivity for the virtual Fibre Channel clients.

Mapping adapter to FC port

lsdev -vpd | grep vfchost

vfchost0 U9117.MMA.6583005-V1-C15 Virtual FC Server adapter

vfcmap -vadapter vfchost0 -fcp fcs0

mapping the virtual fibre channel adapter vfchost0 to HBA port fcs0

lsmap -vadapter vfchost0 -npiv -> check the status of mapping

status of the port -> NOT_LOGGED_IN (the client configuration is not yet completed, hence it
cannot log in to the fabric)
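
Once the client LPAR has been configured and activated, and the SAN zoning is in place, the same
check should report the port as logged in, for example (the grep is only used to shorten the output):

lsmap -vadapter vfchost0 -npiv | grep Status
Status:LOGGED_IN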

Multipathing with dual VIO config

on VIO server:

# lsdev -dev <hdisk> -attr

# lsdev -dev <fc_adapter> -attr

# chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm   <--a reboot is needed for these to take effect
    fc_err_recov=fast_fail   <--in case of a link event, IO will fail immediately
    dyntrk=yes               <--allows the VIO server to tolerate cabling changes in the SAN
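
To confirm on the VIO server that the attributes have taken effect (after the reboot required by the
-perm change), you can query them back:

# lsdev -dev fscsi0 -attr fc_err_recov
# lsdev -dev fscsi0 -attr dyntrk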

# chdev -dev hdisk3 -attr reserve_policy=no_reserve   <--each disk must be set to no_reserve
    reserve_policy=no_reserve   <--if this is configured, both VIO servers can present the same disk to the client

on VIO client:
# chdev -l vscsi0 -a vscsi_path_to=30 -a vscsi_err_recov=fast_fail -P   <--the path timeout checks the
health of the VIOS and detects if a VIO server adapter is not responding
    vscsi_path_to=30            <--disabled (0) by default; each client adapter must be configured,
                                   the minimum value is 30
    vscsi_err_recov=fast_fail   <--failover will happen immediately rather than being delayed

# chdev -l hdisk0 -a queue_depth=20 -P   <--it must match the queue depth value used for the
physical disk on the VIO server
    queue_depth                 <--determines how many requests will be queued on the disk

# chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P   <--the health check updates
the path state automatically (otherwise a failed path must be set back manually)
    hcheck_interval=60          <--how often the health check runs; each disk must be configured
                                   (hcheck_interval=0 means it is disabled)
    hcheck_mode=nonactive       <--the health check is performed on nonactive paths (paths with
                                   no active IO)
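
For example, to confirm the queue depths really match on both sides (the hdisk names below are
only examples and will differ on your system):

on VIO server: # lsdev -dev hdisk3 -attr queue_depth
on VIO client: # lsattr -El hdisk0 -a queue_depth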

Never set the hcheck_interval lower than the read/write timeout value of the underlying
physical disk on the Virtual I/O Server. Otherwise, an error detected by the Fibre Channel
adapter causes new healthcheck requests to be sent before the running requests time out.

The minimum recommended value for the hcheck_interval attribute is 60 for both Virtual I/O
and non Virtual I/O configurations.
In the event of adapter or path issues, setting the hcheck_interval too low can cause severe
performance degradation or possibly cause I/O hangs.
It is best not to configure more than 4 to 8 paths per LUN (to avoid generating too much health-check
I/O), and to set the hcheck_interval to 60 in the client partition and on the Virtual I/O Server.
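
A quick way to check how many paths a given client disk actually has (and their state) before tuning,
using the example disk from the tests below:

# lspath -l hdisk1            <--lists each path and its state
# lspath -l hdisk1 | wc -l    <--counts the number of paths for that LUN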

----------------------------

TESTING PATH PRIORITIES:

By default all paths are defined with priority 1, meaning that traffic will go through the first path.
If you want to control which path is used, the 'path priority' attribute has to be updated.
The priority of the vscsi0 path remains at 1, so it is the primary path.
The priority of the vscsi1 path will be changed to 2, so it will have a lower priority.
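
If you are not sure which client adapter connects to which VIO server, check the client adapter's slot
number and compare it with the virtual adapter configuration on the HMC; a minimal check on the
client (the output format may vary by AIX level):

# lscfg -l vscsi0
# lscfg -l vscsi1

The Cxx part of the location code is the client slot number, which the HMC profile maps to a specific
server adapter on VIOS1 or VIOS2.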

PREPARATION ON CLIENT:

# lsattr -El hdisk1 | grep hcheck


hcheck_cmd       test_unit_rdy   <--hcheck is configured, so the path should come back
                                    automatically from the failed state
hcheck_interval  60
hcheck_mode      nonactive

# chpath -l hdisk1 -p vscsi1 -a priority=2   <--I changed priority=2 on vscsi1 (by default both
paths are priority=1)

# lspath -AHE -l hdisk1 -p vscsi0


priority 1 Priority True

# lspath -AHE -l hdisk1 -p vscsi1


priority 2 Priority True

So, the configuration looks like this:


VIOS1 -> vscsi0 -> priority 1
VIOS2 -> vscsi1 -> priority 2

TEST 1:

1. ON VIOS2: # lsmap -all <--checking disk mapping on VIOS2


VTD testdisk
Status Available
LUN 0x8200000000000000
Backing device hdiskpower1
...

2. ON VIOS2: # rmdev -dev testdisk <--removing disk mapping from VIOS2


3. ON CLIENT: # lspath
Enabled hdisk1 vscsi0
Failed  hdisk1 vscsi1    <--it will show a failed path on vscsi1 (this path is coming from VIOS2)

4. ON CLIENT: # errpt   <--the error report will show "PATH HAS FAILED"


IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
DE3B8540 0324120813 P H hdisk1 PATH HAS FAILED

5. ON VIOS2: # mkvdev -vdev hdiskpower1 -vadapter vhost0 -dev testdisk   <--configure back the disk
mapping from VIOS2

6. ON CLIENT: # lspath   <--in 30 seconds the path will come back automatically
Enabled hdisk1 vscsi0
Enabled hdisk1 vscsi1    <--because of the health check, the path came back automatically
                            (no manual action was needed)

7. ON CLIENT: # errpt   <--the error report will show the path has been recovered
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
F31FFAC3 0324121213 I H hdisk1 PATH HAS RECOVERED

TEST 2:

I did the same on VIOS1 (rmdev of the disk mapping), which has path priority 1 (IO is going there by default).

ON CLIENT: # lspath
Failed hdisk1 vscsi0
Enabled hdisk1 vscsi1

ON CLIENT: # errpt   <--an additional disk operation error will be in errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
DCB47997 0324121513 T H hdisk1 DISK OPERATION ERROR
DE3B8540 0324121513 P H hdisk1 PATH HAS FAILED
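
If health checking had not been configured on the disk, the failed path would not recover on its own
once the mapping is restored; it would have to be re-enabled manually, for example:

# chpath -l hdisk1 -p vscsi0 -s enable    <--re-enable the path behind the vscsi0 adapter
# lspath -l hdisk1                        <--verify that the path shows Enabled again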

----------------------------

How to change a VSCSI adapter on client:

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi2 <--we want to change vscsi2 to vscsi1

On VIO client:
1. # rmpath -p vscsi2 -d <--remove paths from vscsi2 adapter
2. # rmdev -dl vscsi2 <--remove adapter

On VIO server:
3. # lsmap -all <--check assignment and vhost device
4. # rmdev -dev vhost0 -recursive <--remove assignment and vhost
device

On HMC:
5. Remove the deleted adapter from the client (from the profile too)
6. Remove the deleted adapter from the VIOS (from the profile too)
7. Create the new adapter on the client (in the profile too)   <--then run cfgmgr on the client
8. Create the new adapter on the VIOS (in the profile too)     <--then run cfgdev on the VIO server

On VIO server:
9. # mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev rootvg_hdisk0 <--create new
assignment

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1 <--vscsi1 is there (cfgmgr may be needed)

----------------------------

Assigning and moving DVD-RAM between LPARs

1. lsdev -type optical <--check if the VIOS owns the optical device (you should see something
like: cd0 Available SATA DVD-RAM Drive)
2. lsmap -all <--to see if cd0 is already mapped and which vhost to use for
assignment (lsmap -all | grep cd0)
3. mkvdev -vdev cd0 -vadapter vhost0 <--it will create vtoptX as a virtual target device
(check with lsmap -all )

4. cfgmgr (on client lpar) <--bring up the cd0 device on the client (before moving the cd0 device
later, rmdev the device on the client first)

5. rmdev -dev vtopt0 -recursive <--to move cd0 to another client, remove assignment
from vhost0
6. mkvdev -vdev cd0 -vadapter vhost1 <--create new assignment to vhost1

7. cfgmgr (on other client lpar) <--bring up cd0 device on other client
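
Putting the move steps together (5 to 7, plus the rmdev noted in step 4), moving the DVD from one
client LPAR to another typically looks like this (vtopt0, vhost1 are the example device names from above):

on old client:  # rmdev -dl cd0                      <--release the device on the client that currently owns it
on VIO server:  rmdev -dev vtopt0 -recursive
on VIO server:  mkvdev -vdev cd0 -vadapter vhost1
on new client:  # cfgmgr                             <--cd0 comes up on the other client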

(Because the VIO server adapter is configured with the "Any client partition can connect" option, these
adapter pairs are not suited for client disks.)

Mapping Virtual Fibre Channel Adapters on VIO Servers

A new LPAR was created and new virtual fibre channel adapters were presented to both VIO
servers using DLPAR. Now it's time to map the newly created virtual fibre channel adapters to
a physical fibre channel adapter.

But which vfchost device should be mapped? What checks need to be done?

I'll step you through the process of mapping an NPIV virtual fibre channel adapter to a physical
adapter on the VIO server.

Check Your Virtual Adapter (vfchost)

In this example, I have performed a DLPAR of a virtual fibre channel adapter with ID 38 on
both VIO servers. We now need to identify the vfchost devices presented to the VIO servers
in order to be able to map them later. Use the lsdev command with the -slots flag in the VIO
restricted shell.

On VIO server 1:

vios1 $ lsdev -slots|grep C38

U9117.MMD.0321EA4-V2-C38 Virtual I/O Slot vfchost17

On VIO server 2:

vios2 $ lsdev -slots|grep C38

U9117.MMD.0321EA4-V3-C38 Virtual I/O Slot vfchost17

Now we have identified the virtual fibre channel adapter on both VIO servers. The virtual
fibre channel adapter device configured on both VIO servers is vfchost17.

TIP: If you configure and DLPAR your virtual adapters in the same sequence on both VIO
servers from the very first one, your virtual fibre channel device names should be the same on both
VIO servers, as in the example above. Make this a practice, as it makes VIO server
administration simpler and easier.

Identify the Physical Fibre Channel Adapter

Next we need to identify which physical fibre channel adapter we want to map to. The
command to use for this is lsnports.
On VIO server 1:

vios1 $ lsnports

name physloc fabric tports aports swwpns awwpns

fcs0 U2C4E.001.DBAA235-P2-C2-T1 1 64 59 2048 2007

fcs1 U2C4E.001.DBAA235-P2-C2-T2 1 64 60 2048 2014

fcs2 U2C4E.001.DBAA211-P2-C6-T1 1 64 60 2048 2014

fcs3 U2C4E.001.DBAA211-P2-C6-T2 1 64 61 2048 2021

On VIO server 2:

vios2 $ lsnports

name physloc fabric tports aports swwpns awwpns

fcs0 U2C4E.001.DBAA235-P2-C6-T1 1 64 59 2048 2007

fcs1 U2C4E.001.DBAA235-P2-C6-T2 1 64 60 2048 2014

fcs2 U2C4E.001.DBAA211-P2-C2-T1 1 64 60 2048 2014

fcs3 U2C4E.001.DBAA211-P2-C2-T2 1 64 61 2048 2021

The above command lists all the fibre channel adapters and their available NPIV capable
ports. The first column shows the name of the fibre channel adapter. The second column
shows the physical location of the fibre channel adapter. For path redundancy in this
environment, all adapters that have physical location containing P2-C2 are cabled to SAN
fabric A and all that have physical location of P2-C6 are cabled to SAN fabric B.

Now, another value that we are interested in is aports, which shows the number of
available NPIV capable ports. In the example above, the adapter fcs3 has the most
available NPIV ports on both VIO servers. Therefore, I will use that one and map it to my
virtual fibre channel adapter device.

Notice that fcs3 on vios1 has a physical location containing P2-C6. Therefore, fcs3 on
vios1 is cabled to SAN fabric B. On the other hand, on vios2, fcs3 has a physical location
containing P2-C2. Therefore, fcs3 on vios2 is cabled to SAN fabric A. You may be required to
provide this information along with the NPIV WWPNs to the storage administrator to perform the
zone configuration and SAN LUN mappings.
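
The client WWPNs are generated by the hypervisor and can be read from the client partition profile on
the HMC; once the client LPAR is up and running, they can also be confirmed from the client itself. A
minimal check on an AIX client (the adapter name is an example):

# lscfg -vpl fcs0 | grep "Network Address"

The value shown as Network Address is the WWPN the storage administrator needs for zoning.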

Mapping the Virtual Fibre Channel Adapter


Now we can map the virtual fibre channel adapter to fcs3. We can use the vfcmap command
to accomplish this.

On VIO server1:

vfcmap -vadapter vfchost17 -fcp fcs3

On VIO server2:

vfcmap -vadapter vfchost17 -fcp fcs3

Checking the Mappings

You can now check the mappings by using the lsmap command from the VIO server restricted
shell.

On VIO server 1:

vios1 $ lsmap -vadapter vfchost17 -npiv

Name Physloc ClntID ClntName ClntOS

------------- ---------------------------------- ------ -------------- -------

vfchost17 U9117.MMD.0321EA4-V2-C38 38 lpar1 AIX

Status:NOT_LOGGED_IN

FC name:fcs3 FC loc code:U2C4E.001.DBAA235-P2-C2-T1

Ports logged in:0

Flags:a<NOT_LOGGED>

VFC client name: VFC client DRC:

On VIO server 2:

vios2 $ lsmap -vadapter vfchost17 -npiv

Name Physloc ClntID ClntName ClntOS

------------- ---------------------------------- ------ -------------- -------

vfchost17 U9117.MMD.0321EA4-V3-C38 38 lpar1 AIX


Status:NOT_LOGGED_IN

FC name:fcs3 FC loc code:U2C4E.001.DBAA235-P2-C6-T1

Ports logged in:

Flags:a<NOT_LOGGED>

VFC client name: VFC client DRC:

Since this is a new LPAR, it is not activated yet. Therefore, the status is NOT_LOGGED_IN.
However, the lsmap command output above is a good check to confirm that vfchost17 is
mapped to fibre channel adapter fcs3 on both the VIO servers.

POWER NPIV - Quick & Dirty

Over the last few months working with IBM PowerVC I've had to get used to using NPIV, not
my first choice of virtual storage adapter due to its large memory footprint (130MB per adapter
compared to 1.5MB for vSCSI). But given its ease of use for PowerVC, allowing my colleagues
to deploy IBM Power servers without me, it more than makes up for this. When I initially
started using NPIV it took a little while to get the adapters set up, and a number of documents that
I followed made some assumptions. This meant that I was looking at more than one guide, so
in the end I collated the information together and made my own. Now that it's become second
nature, I figured I'd best get this information blogged up, not just for everyone else but because,
with my change in jobs, I can see myself forgetting how to do this and I might need it again in the
future.

Some limitations to take note of:

NPIV is only supported on 8Gb fibre adapters. The fibre switch needs to support NPIV, but
does not need to be 8Gb (the 8Gb adapter can negotiate down to 2Gb and 4Gb). In my case this
means none of my POWER6 servers will work unless the cards are replaced.
There is a maximum of 64 NPIV adapters per physical port (see the lsnports command).

Virtual Machines (LPARs) and VIO Server Setup:

I'm not going to tell you how to set up a virtual machine here, there are plenty of guides on how
to do that. I would expect that now you are looking at NPIV you're more than familiar with this
part, so from the virtual server profile these are the key items to take note of, that is the client and
server IDs:

[image: virtual fibre adapter configuration in the virtual machine (client) profile]

[image: virtual fibre adapter created on the VIO server via DLPAR]
The first image is taken from the virtual machine profile (client), looking at the configuration of
the created virtual fibre adapter; the client adapter ID here needs to match the information in
the second image, which is taken from the DLPAR-created virtual fibre adapter on the VIO
Server. You need to set this correctly, as the default populated information is not always
correct, and if it doesn't match your devices won't function. As mentioned, this was created
on the VIO Server from the DLPAR menu, as I'm building a temporary virtual machine:

[image: the client's two virtual fibre channel adapters and their client/server IDs]

As you can see, my Linux client 'iic-sles-nkd-50G' has 2 adapters mapped for redundancy. Those
client/server IDs need to match up correctly on both adapters; I'll take note of them as they
are needed later.

Server ID 9 - Client ID 3

Server ID 10 - Client ID 4
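
If you want to double-check these client/server slot pairings without going through the GUI, the HMC
command line can also list the virtual fibre channel adapter configuration. A sketch, assuming hscroot
access and a managed system named sp006 (both are examples; the exact output fields vary by HMC level):

hscroot@hmc:~> lshwres -r virtualio --rsubtype fc --level lpar -m sp006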

Along with this you need to take note of the WWPN numbers for each of the client fibre
adapters. The first WWPN is the number you will be using, as the second is used for virtual
machine mobility, so in our case:
c0507603caee0058
c0507603caee005a

Next we need to map those to the physical fibre cards we are using, so log on to the VIOS
and check the status of the adapters:

[sp006vs01:padmin] /padmin $ lsnports


name physloc fabric tports aports swwpns awwpns
fcs0 U78AA.001.#######-P1-C4-T1 1 64 63 2048 2041
fcs1 U78AA.001.#######-P1-C4-T2 0 64 64 2048 2048
fcs2 U78AA.001.#######-P1-C5-T1 1 64 63 2048 2041
fcs3 U78AA.001.#######-P1-C5-T2 0 64 64 2048 2048
fcs4 U78AA.001.#######-P1-C1-C4-T1 1 64 64 2048 2046
fcs5 U78AA.001.#######-P1-C1-C4-T2 0 64 64 2048 2048
fcs6 U78AA.001.#######-P1-C1-C3-T1 1 64 64 2048 2046
fcs7 U78AA.001.#######-P1-C1-C3-T2 0 64 64 2048 2048

On this system only every other port is patched into the switch; this is shown by the value of
'fabric' equalling 1, which means the port is available for NPIV configuration. A value of 0 can mean
that the port is not connected, or that the adapter or the switch it is connected to does not support NPIV.

So let's look at the adapters we created above to confirm what we need to map over to our
fcs# devices:

[sp006vs01:padmin] /padmin $ lsdev -vpd | grep vfchost


vfchost3 U8205.E6B.#######-V2-C10 Virtual FC Server Adapter
vfchost2 U8205.E6B.#######-V2-C9 Virtual FC Server Adapter
vfchost1 U8205.E6B.#######-V2-C8 Virtual FC Server Adapter
vfchost0 U8205.E6B.#######-V2-C7 Virtual FC Server Adapter

These vfchost adapters then need to be mapped to the fibre ports, so this is the status of the
mapped NPIV devices before:

[sp006vs01:padmin] /padmin $ lsmap -all -npiv


-<cut>-
Name Physloc ClntID ClntName ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost2 U8205.E6B.#######-V2-C9 5

Status:NOT_LOGGED_IN
FC name: FC loc code:
Ports logged in:0
Flags:1<NOT_MAPPED,NOT_CONNECTED>
VFC client name: VFC client DRC:

Name Physloc ClntID ClntName ClntOS


------------- ---------------------------------- ------ -------------- -------
vfchost3 U8205.E6B.#######-V2-C10 5

Status:NOT_LOGGED_IN
FC name: FC loc code:
Ports logged in:0
Flags:1<NOT_MAPPED,NOT_CONNECTED>
VFC client name: VFC client DRC:

Then you map them to the physical ports as follows:

[sp006vs01:padmin] /padmin $ vfcmap -vadapter vfchost2 -fcp fcs4


[sp006vs01:padmin] /padmin $ vfcmap -vadapter vfchost3 -fcp fcs6

Which now shows up as follows:

[sp006vs01:padmin] /padmin $ lsmap -all -npiv


-<cut>-
Name Physloc ClntID ClntName ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost2 U8205.E6B.#######-V2-C9 5

Status:NOT_LOGGED_IN
FC name:fcs4 FC loc code:U78AA.001.#######-P1-C1-C4-T1
Ports logged in:0
Flags:4<NOT_LOGGED>
VFC client name: VFC client DRC:

Name Physloc ClntID ClntName ClntOS


------------- ---------------------------------- ------ -------------- -------
vfchost3 U8205.E6B.#######-V2-C10 5

Status:NOT_LOGGED_IN
FC name:fcs6 FC loc code:U78AA.001.#######-P1-C1-C3-T1
Ports logged in:0
Flags:4<NOT_LOGGED>
VFC client name: VFC client DRC:

Now, as you can see, the devices are mapped from the vfchost to the fcs#, but there is the
message NOT_LOGGED_IN; this is due to our server not being up or not being zoned in on our SAN.
Once you have zoned in the device and booted it, you can look at adding the storage you
need using the WWPNs that were mentioned earlier. If it's all set up correctly then it will look
something like this:

[sp006vs01:padmin] /padmin $ lsmap -all -npiv


-<cut>-
Name Physloc ClntID ClntName ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost2 U8205.E6B.#######-V2-C9 5 iic-sles11-nkd Linux

Status:LOGGED_IN
FC name:fcs4 FC loc code:U78AA.001.#######-P1-C1-C4-T1
Ports logged in:5
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:host1 VFC client DRC:U8205.E6B.#######-V5-C3

Name Physloc ClntID ClntName ClntOS


------------- ---------------------------------- ------ -------------- -------
vfchost3 U8205.E6B.#######-V2-C10 5 iic-sles11-nkd Linux

Status:LOGGED_IN
FC name:fcs6 FC loc code:U78AA.001.#######-P1-C1-C3-T1
Ports logged in:5
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:host2 VFC client DRC:U8205.E6B.#######-V5-C4

You can also display individual adapter details:

[sp006vs01:padmin] /padmin $ lsmap -npiv -vadapter vfchost2


Name Physloc ClntID ClntName ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost2 U8205.E6B.#######-V2-C9 5 iic-sles11-nkd Linux

Status:LOGGED_IN
FC name:fcs4 FC loc code:U78AA.001.#######-P1-C1-C4-T1
Ports logged in:5
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:host1 VFC client DRC:U8205.E6B.#######-V5-C3
