Removing TCP/IP configuration:
rmtcpip -all

Creating SEA:
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto ctl_chan=ent5

Checking:
lstcpip -interfaces
netstat -state -num
lsnports -> a fabric value of 1 means the HBA is connected to a SAN switch supporting NPIV
features.
status of the port -> NOT_LOGGED_IN (client configuration is not yet completed, hence the
port cannot log in to the fabric)
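As a quick sanity check, you can filter the lsnports output for ports whose fabric value is 1. The sample output below is made up for illustration (the adapter names and counts are assumptions); on a real VIOS you would pipe the live `lsnports` output into the same awk filter.

```shell
# Hypothetical lsnports output captured to a file for illustration only;
# on a real VIOS you would run:  lsnports | awk 'NR > 1 && $3 == 1 { print $1 }'
cat > lsnports.out <<'EOF'
name   physloc                     fabric tports aports swwpns awwpns
fcs0   U78AA.001.XXXXXXX-P1-C1-T1  1      64     62     2048   2046
fcs1   U78AA.001.XXXXXXX-P1-C1-T2  0      64     64     2048   2048
EOF

# Keep only ports with fabric == 1 (cabled to an NPIV-capable switch)
awk 'NR > 1 && $3 == 1 { print $1 }' lsnports.out
# -> fcs0
```

Ports reported with fabric 0 (like fcs1 above) cannot be used for NPIV mappings until the cabling/switch issue is resolved.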
on VIO server:
# chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm <--reboot is needed for these
    fc_err_recov=fast_fail <--in case of a link event, I/O will fail immediately
    dyntrk=yes <--allows the VIO server to tolerate cabling changes in the SAN
on VIO client:
# chdev -l vscsi0 -a vscsi_path_to=30 -a vscsi_err_recov=fast_fail -P <--path timeout checks
the health of the VIOS and detects if the VIO Server adapter isn't responding
    vscsi_path_to=30 <--disabled by default (0); each client adapter must be
configured; the minimum is 30
    vscsi_err_recov=fast_fail <--failover will happen immediately rather than
delayed
# chdev -l hdisk0 -a queue_depth=20 -P <--it must match the queue depth value used
for the physical disk on the VIO Server
    queue_depth <--determines how many requests will be queued on
the disk
Never set the hcheck_interval lower than the read/write timeout value of the underlying
physical disk on the Virtual I/O Server. Otherwise, an error detected by the Fibre Channel
adapter causes new health-check requests to be sent before the running requests time out.
The minimum recommended value for the hcheck_interval attribute is 60, for both Virtual I/O
and non-Virtual I/O configurations.
In the event of adapter or path issues, setting the hcheck_interval too low can cause severe
performance degradation or possibly cause I/O hangs.
It is best not to configure more than 4 to 8 paths per LUN (to avoid too much health-check I/O),
and to set the hcheck_interval to 60 both in the client partition and on the Virtual I/O Server.
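To verify the paths-per-LUN recommendation, you can count the paths each disk has from `lspath` output. The sample output below is hypothetical; on a real client you would pipe `lspath` straight into the awk one-liner.

```shell
# Hypothetical 'lspath' output; on a real client run:  lspath | awk '{ n[$2]++ } ...'
cat > lspath.out <<'EOF'
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
Enabled hdisk1 vscsi0
Enabled hdisk1 vscsi1
EOF

# Count paths per disk and flag any LUN that exceeds the 8-path recommendation
awk '{ n[$2]++ } END { for (d in n) print d, n[d], (n[d] > 8 ? "TOO MANY" : "ok") }' lspath.out
```

Here each disk has 2 paths, well within the 4-to-8 guideline.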
----------------------------
By default all paths are defined with priority 1, meaning that traffic will go through the first
path.
If you want to control which path is used, the 'path priority' attribute has to be updated.
The priority of the vscsi0 path remains 1, so it is the primary path.
The priority of the vscsi1 path will be changed to 2, so it becomes the lower-priority path.
PREPARATION ON CLIENT:
# chpath -l hdisk1 -p vscsi1 -a priority=2 <--I changed priority to 2 on vscsi1 (by default
both paths are priority=1)
TEST 1:
TEST 2:
I did the same on VIOS1 (rmdev... of the disk which has path priority 1, where I/O goes by default):
ON CLIENT: # lspath
Failed hdisk1 vscsi0
Enabled hdisk1 vscsi1
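A quick way to spot the failed path during such a test is to filter the `lspath` output for anything not in the Enabled state. The sample file below reproduces the output shown above; on a real client you would pipe `lspath` directly into the awk filter.

```shell
# lspath output as seen during the test (the vscsi0 path failed after VIOS1 went down)
cat > lspath_test.out <<'EOF'
Failed  hdisk1 vscsi0
Enabled hdisk1 vscsi1
EOF

# Report any path that is not Enabled
awk '$1 != "Enabled" { print "path", $3, "on", $2, "is", $1 }' lspath_test.out
# -> path vscsi0 on hdisk1 is Failed
```

Once the VIOS is back, the path usually recovers on its own (health check permitting), or can be re-enabled with chpath.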
----------------------------
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi2 <--we want to change vscsi2 to vscsi1
On VIO client:
1. # rmpath -p vscsi2 -d <--remove paths from vscsi2 adapter
2. # rmdev -dl vscsi2 <--remove adapter
On VIO server:
3. # lsmap -all <--check assignment and vhost device
4. # rmdev -dev vhost0 -recursive <--remove assignment and vhost
device
On HMC:
5. Remove the deleted adapter from the client (from the profile too)
6. Remove the deleted adapter from the VIOS (from the profile too)
7. Create a new adapter on the client (in the profile too) <--cfgmgr on client
8. Create a new adapter on the VIOS (in the profile too) <--cfgdev on VIO server
On VIO server:
9. # mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev rootvg_hdisk0 <--create the new
assignment
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1 <--vscsi1 is there (cfgmgr may be needed)
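The nine steps above can be collected into a single script. The sketch below is a dry run: a `run` wrapper only echoes each command, since the device names (vscsi2, vhost0, hdiskpower0) come from this example and the HMC steps (5-8) cannot be scripted from the VIOS. Substitute your own device names and remove the echo wrapper to run it for real.

```shell
run() { echo "+ $*"; }   # dry run: print each command instead of executing it

# On the VIO client
run rmpath -p vscsi2 -d            # 1. remove the paths from the vscsi2 adapter
run rmdev -dl vscsi2               # 2. remove the adapter itself

# On the VIO server
run lsmap -all                     # 3. check the assignment and vhost device
run rmdev -dev vhost0 -recursive   # 4. remove the assignment and the vhost device

# 5.-8. On the HMC: remove both deleted adapters (profiles included),
#       create the new adapters, then cfgmgr on the client / cfgdev on the VIOS

# On the VIO server, recreate the assignment
run mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev rootvg_hdisk0   # 9.
```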
----------------------------
1. lsdev -type optical <--check if the VIOS owns an optical device (you should see something
like: cd0 Available SATA DVD-RAM Drive)
2. lsmap -all <--see if cd0 is already mapped and which vhost to use for the
assignment (lsmap -all | grep cd0)
3. mkvdev -vdev cd0 -vadapter vhost0 <--creates vtoptX as a virtual target device
(check with lsmap -all)
4. cfgmgr (on the client LPAR) <--brings up the cd0 device on the client (before moving the
cd0 device, rmdev the device on the client first)
5. rmdev -dev vtopt0 -recursive <--to move cd0 to another client, remove the assignment
from vhost0
6. mkvdev -vdev cd0 -vadapter vhost1 <--create the new assignment to vhost1
7. cfgmgr (on the other client LPAR) <--brings up the cd0 device on the other client
(Because the VIO server adapter is configured with the "Any client partition can connect"
option, these adapter pairs are not suited for client disks.)
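The optical-device workflow above can likewise be sketched as a dry-run script. The `run` wrapper only echoes the commands, because cd0, vhost0 and vhost1 are the example's device names and the cfgmgr steps run on the client LPARs, not the VIOS.

```shell
run() { echo "+ $*"; }   # dry run: print each VIOS command instead of executing it

run lsdev -type optical                # 1. check that the VIOS owns the optical device
run lsmap -all                         # 2. see if cd0 is already mapped
run mkvdev -vdev cd0 -vadapter vhost0  # 3. create vtoptX for the first client
# 4. cfgmgr on the first client LPAR brings up cd0
run rmdev -dev vtopt0 -recursive       # 5. unmap cd0 before moving it (rmdev cd0 on the client first)
run mkvdev -vdev cd0 -vadapter vhost1  # 6. map cd0 to the second client
# 7. cfgmgr on the second client LPAR brings up cd0
```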
Mapping Virtual Fibre Channel Adapters on VIO Servers
A new LPAR was created and new virtual fibre channel adapters were presented to both VIO
servers using DLPAR. Now it's time to map the newly created virtual fibre channel adapters to
a physical fibre channel adapter.
But which vfchost device should be mapped? What checks need to be done?
I'll step you through the process of mapping an NPIV virtual fibre channel adapter to a physical
adapter on the VIO server.
In this example, I have performed a DLPAR of a virtual fibre channel adapter with the ID 38 on
both VIO servers. We now need to identify the vfchost device presented to each VIO server
in order to be able to map them later. Use the lsdev command with the slots flag in the VIO
restricted shell.
On VIO server 1:
On VIO server 2:
Now we have identified the virtual fibre channel adapters on both VIO servers. The virtual
fibre channel adapter configured on both VIO servers is vfchost17.
TIP: If you configure and DLPAR your virtual adapters in the same sequence on both VIO
servers from the very first one, your virtual fibre channel devices should have the same names
on both VIO servers, as in the example above. Make this a practice, as it makes VIO server
administration simpler and easier.
Next we need to identify which physical fibre channel adapter we want to map to. The
command to use for this is lsnports.
On VIO server 1:
vios1 $ lsnports
On VIO server 2:
vios2 $ lsnports
The above command lists all the fibre channel adapters and their available NPIV capable
ports. The first column shows the name of the fibre channel adapter. The second column
shows the physical location of the fibre channel adapter. For path redundancy in this
environment, all adapters that have physical location containing P2-C2 are cabled to SAN
fabric A and all that have physical location of P2-C6 are cabled to SAN fabric B.
Another value we are interested in is aports, which shows the number of available NPIV-
capable ports. In the example above, the adapter fcs3 has the most available NPIV ports on
both VIO servers. Therefore, I will use it and map it to my virtual fibre channel adapter device.
Notice that fcs3 on vios1 has a physical location containing P2-C6; therefore, fcs3 on
vios1 is cabled to SAN fabric B. On vios2, on the other hand, fcs3 has a physical location
containing P2-C2; therefore, fcs3 on vios2 is cabled to SAN fabric A. You may be required to
provide this information, along with the NPIV WWNs, to the storage administrator to perform the
zone configuration and SAN LUN mappings.
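The "most available ports" selection can be automated with a small awk pass over the lsnports output. The sample output below is hypothetical (adapter names and counts are assumptions matching the two-fabric layout described above); on a real VIOS you would pipe the live `lsnports` output into the same one-liner.

```shell
# Hypothetical lsnports output for vios1; real values come from running 'lsnports'
cat > lsnports_vios1.out <<'EOF'
name   physloc                      fabric tports aports swwpns awwpns
fcs1   U78C0.001.XXXXXXX-P2-C2-T2   1      64     10     2048   1940
fcs3   U78C0.001.XXXXXXX-P2-C6-T2   1      64     60     2048   2040
EOF

# Pick the NPIV-capable port (fabric == 1) with the most available ports (aports, column 5)
awk 'NR > 1 && $3 == 1 && $5 > max { max = $5; best = $1 } END { print best }' lsnports_vios1.out
# -> fcs3
```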
On VIO server 1:
On VIO server 2:
You can now check the mappings by using the lsmap command from the VIO server restricted
shell.
On VIO server 1:
Status:NOT_LOGGED_IN
Flags:a<NOT_LOGGED>
On VIO server 2:
Flags:a<NOT_LOGGED>
Since this is a new LPAR, it is not activated yet. Therefore, the status is NOT_LOGGED_IN.
However, the lsmap command output above is a good check to confirm that vfchost17 is
mapped to fibre channel adapter fcs3 on both the VIO servers.
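That confirmation check can be scripted by grepping the lsmap output for the backing device. The sample below is a hypothetical fragment of `lsmap -npiv -vadapter vfchost17` output (location codes are placeholders); on a real VIOS you would run the live command instead.

```shell
# Hypothetical 'lsmap -npiv -vadapter vfchost17' output fragment
cat > lsmap_npiv.out <<'EOF'
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost17     U8231.E2B.XXXXXXX-V1-C38               38

Status:NOT_LOGGED_IN
FC name:fcs3                    FC loc code:U78AA.001.XXXXXXX-P2-C6-T1
EOF

# Confirm vfchost17 is backed by fcs3
grep -q '^FC name:fcs3' lsmap_npiv.out && echo "vfchost17 -> fcs3 OK"
# -> vfchost17 -> fcs3 OK
```

Run the same check on both VIO servers; once the client LPAR is activated and zoned, the Status line changes to LOGGED_IN.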
Over the last few months working with IBM PowerVC I've had to get used to using NPIV, not
my first choice of virtual storage adapter due to its large memory footprint (130 MB per adapter,
compared to vSCSI's 1.5 MB). But given its ease of use with PowerVC, allowing my colleagues
to deploy IBM Power servers without me, it more than makes up for this. When I initially
started using NPIV it took a little while to get the adapters set up; a number of documents that
I followed made some assumptions. This meant that I was looking at more than one guide, so
in the end I collated the information and made my own. Now that it's become second
nature, I figured I'd best get this information blogged up, not just for everyone else, but because
with my change in jobs I can see myself forgetting how to do this, and I might need it again in
the future.
NPIV is only supported on 8 Gb fibre adapters. The fibre switch needs to support NPIV, but
does not need to be 8 Gb (the 8 Gb adapter can negotiate down to 2 and 4 Gb). In my case this
means none of my POWER6 servers will work unless the cards are replaced.
There is a maximum of 64 NPIV adapters per physical adapter (see the lsnports command).
I'm not going to tell you how to set up a virtual machine here; there are plenty of guides on how
to do that. I would expect that, now you are looking at NPIV, you're more than familiar with this
part, so from the virtual server profile these are the key items to take note of, namely the client
and server IDs:
image
image
The first image is taken from the virtual machine profile (client), looking at the configuration of
the created virtual fibre adapter; the client adapter ID here needs to match the information in
the second image. That one is taken from the DLPAR-created virtual fibre adapter on the VIO
Server. You need to set this correctly, as the default populated information is not always
correct, and if it doesn't match, your devices won't function. As mentioned, this was created
on the VIO Server from the DLPAR menu, as I'm building a temporary virtual machine:
image
As you can see, my Linux client 'iic-sles-nkd-50G' has 2 adapters mapped for redundancy.
Those client/server IDs need to match correctly on both adapters; I'll take note of them as they
are needed later.
Server ID 9 - Client ID 3
Server ID 10 - Client ID 4
Along with this you need to take note of the WWPN for each of the client fibre
adapters. The first WWPN is the number you will be using, as the second is used for virtual
machine mobility, so in our case:
c0507603caee0058
c0507603caee005a
Next we need to map those to the physical fibre cards we are using, so log onto the VIOS
and check the status of the adapters:
On this system only every other card is patched into the switch; this is shown by the value of
'fabric' equaling 1, which means the port is available for NPIV configuration. A fabric value of 0
can mean the port is not cabled, that the adapter does not support NPIV, or that the switch it is
connected to does not.
So let's look at the adapters we created above to confirm what we need to map over to our
fcs# devices:
These vfchost adapters then need to be mapped back to the fibre ports, so this is the status of
the mapped NPIV devices before:
Status:NOT_LOGGED_IN
FC name: FC loc code:
Ports logged in:0
Flags:1<NOT_MAPPED,NOT_CONNECTED>
VFC client name: VFC client DRC:
Status:NOT_LOGGED_IN
FC name: FC loc code:
Ports logged in:0
Flags:1<NOT_MAPPED,NOT_CONNECTED>
VFC client name: VFC client DRC:
Status:NOT_LOGGED_IN
FC name:fcs4 FC loc code:U78AA.001.#######-P1-C1-C4-T1
Ports logged in:0
Flags:4<NOT_LOGGED>
VFC client name: VFC client DRC:
Status:NOT_LOGGED_IN
FC name:fcs6 FC loc code:U78AA.001.#######-P1-C1-C3-T1
Ports logged in:0
Flags:4<NOT_LOGGED>
VFC client name: VFC client DRC:
Now, as you can see, the devices are mapped from the vfchost to the fcs#, but there is the
message NOT_LOGGED_IN; this is because our server is not up or zoned in on our SAN yet.
Once you have zoned in the device and booted it, you can look at adding the storage you
need using the WWPNs that were mentioned earlier. If it's all set up correctly, it will look
something like this:
Status:LOGGED_IN
FC name:fcs4 FC loc code:U78AA.001.#######-P1-C1-C4-T1
Ports logged in:5
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:host1 VFC client DRC:U8205.E6B.#######-V5-C3
Status:LOGGED_IN
FC name:fcs6 FC loc code:U78AA.001.#######-P1-C1-C3-T1
Ports logged in:5
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:host2 VFC client DRC:U8205.E6B.#######-V5-C4
Status:LOGGED_IN
FC name:fcs4 FC loc code:U78AA.001.#######-P1-C1-C4-T1
Ports logged in:5
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:host1 VFC client DRC:U8205.E6B.#######-V5-C3