
Technical white paper

HP Virtual Connect with iSCSI


Cookbook (4th Edition)

Table of contents

Purpose
Introduction
System requirements
  Converged Network Adapters Support
  Virtual Connect Modules Support
  Firmware and Software Support
  iSCSI Storage Target Support
Networking recommendations
  Network considerations
  Keep it simple and short
  Flow Control recommendations
  Jumbo Frames (optional recommendation)
  iSCSI multipathing solutions
Virtual Connect network scenarios
  Scenario 1: iSCSI network physically separated
  Scenario 2: iSCSI network logically separated
  Scenario 3: Direct-attached iSCSI Storage System
Accelerated iSCSI
  Creating a Virtual Connect profile with Accelerated iSCSI
  Accelerated iSCSI with Microsoft Windows Server
  Accelerated iSCSI with VMware vSphere Server
  For more information about iSCSI
Boot from iSCSI
  Boot from iSCSI: Creating a Virtual Connect profile
  Boot from iSCSI: Installing Microsoft Windows Server 2012
  Boot from iSCSI: Installing Microsoft Windows Server 2008
  Boot from iSCSI: Installing VMware vSphere 5
  Boot from iSCSI: Installing VMware vSphere 4.1
  Boot from iSCSI: Installing Red Hat Enterprise Linux 5 Update 4
  Boot from iSCSI: Installing Suse Linux Enterprise Server 11
Troubleshooting
  Emulex iSCSI Initiator BIOS Utility
  Emulex OneCommand Manager (OCM)
  Problems found with OneCommand Manager
  Problems found during iSCSI Boot
  PXE booting problems
  iSCSI boot install problems with Windows Server
  VCEM issues with Accelerated iSCSI Boot
  iSCSI issues with HP StoreVirtual 4000 Storage
Appendix 1 - iSCSI Boot Parameters
  Mandatory iSCSI Boot Parameters entries
  Optional iSCSI Boot Parameters entries
Appendix 2 - Dynamic configuration of the iSCSI Boot Parameters
  Windows 2008 DHCP server configuration
  Linux DHCP server configuration
  Format of DHCP option 43 for the Emulex FlexFabric Adapters
  Examples
Appendix 3 - How to monitor an iSCSI Network?
  Monitoring Disk Throughput on the iSCSI Storage System
  Monitoring Network and Disk Throughput on the iSCSI Host
  Analyzing Network information from the Virtual Connect interface
  Analyzing Virtual Connect Network performance
  Wireshark
Acronyms and abbreviations
Support and Other Resources
  Contacting HP
  Related documentation

Purpose
The purpose of this Virtual Connect iSCSI Cookbook is to give Virtual Connect users a better understanding of the concepts and steps required when using iSCSI with Virtual Connect components. This document helps answer typical iSCSI questions: What network considerations apply when building an iSCSI network? Which components does HP support? How can I troubleshoot my iSCSI environment? In addition, this document describes some typical iSCSI scenarios to give the reader valid examples of how HP Virtual Connect with iSCSI could be deployed in their environments. Tips and troubleshooting information for iSCSI boot and install are also provided. Detailed information regarding Emulex requirements is subject to change, and readers should always refer to the providers' documentation.

Introduction
The iSCSI standard implements the SCSI protocol over a TCP/IP network. While iSCSI can run over any TCP/IP network, the most common implementation is over 1 and 10 Gigabit Ethernet (GbE). The iSCSI protocol transports block-level storage requests over TCP connections. Using iSCSI, systems can connect to remote storage and use it as a physical disk, although the remote storage provider or target may actually be providing virtualized physical disks. iSCSI serves the same purpose as Fibre Channel in building SANs, but avoids the cost, complexity, and compatibility issues associated with Fibre Channel SANs. Because iSCSI is a TCP/IP implementation, it is ideal for new field deployments where no FC SAN infrastructure exists. An iSCSI SAN is typically comprised of software or hardware initiators on the host connected to an isolated Ethernet network, and some number of storage resources (targets). While the target is usually a hard drive enclosure or another computer, it can also be any other storage device that supports the iSCSI protocol, such as a tape drive. The iSCSI stack at both ends of the path encapsulates SCSI block commands into Ethernet packets for transmission over IP networks. An interesting iSCSI technology is iSCSI boot, which allows servers to boot from an operating system image located on a remote iSCSI target. iSCSI boot enables organizations to purchase less expensive diskless servers, to provide rapid disaster recovery, to use storage more efficiently, and so on. Another useful feature is Accelerated iSCSI, which can be enabled with the HP FlexFabric adapters; it offloads the iSCSI function to the Converged Network Adapter rather than taxing the CPU of the server.

System requirements
When using HP Virtual Connect technology, iSCSI Boot and Accelerated iSCSI are only supported with the following components:

Converged Network Adapters Support


- Integrated NC553i Dual Port FlexFabric 10Gb Adapter (Intel-based BladeSystem G7 servers)
- Integrated NC551i Dual Port FlexFabric 10Gb Adapter (AMD-based BladeSystem G7 servers)
- HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
- HP NC553m Dual Port FlexFabric 10Gb Converged Network Adapter
- HP FlexFabric 10Gb 2-port 554FLB Adapter
- HP FlexFabric 10Gb 2-port 554M Adapter
NOTE: iSCSI Boot is also available on Virtual Connect with the QLogic QMH4062 1GbE iSCSI Adapter, but with some restrictions. The QMH4062 iSCSI settings cannot be managed by a Virtual Connect profile, but they can be set manually through the QLogic BIOS (CTRL+Q during Power-On Self-Test). The constraint to remember is that during a Virtual Connect profile move, the iSCSI boot settings will not be saved and reconfigured on the target server.

Virtual Connect Modules Support


- HP Virtual Connect FlexFabric 10Gb/24-port Module
- HP Virtual Connect Flex-10 10Gb Ethernet Module
- HP Virtual Connect Flex-10/10D Module

NOTE: 10Gb KR-based Ethernet switches (such as the ProCurve 6120XG or Cisco 3120G) can also be used for Accelerated iSCSI boot, but this is not covered in this document.

Virtual Connect with iSCSI Summary support

When using HP Virtual Connect technology, iSCSI Boot and Accelerated iSCSI are only supported with the following combinations of devices:

- BladeSystem Gen8 with FlexFabric 10Gb 2-port 554FLB or FlexFabric 10Gb 2-port 554M Adapter:
  - Virtual Connect FlexFabric
  - Virtual Connect Flex-10 or Flex-10/10D (minimum VC 3.10 and above)
- BladeSystem G7 with NC551i/NC553i Integrated CNA or NC551m/NC553m FlexFabric Adapter:
  - Virtual Connect FlexFabric
  - Virtual Connect Flex-10 or Flex-10/10D (minimum VC 3.10 and above)
- BladeSystem G6 (latest System BIOS) with NC551m/NC553m 10Gb 2-port FlexFabric Adapter:
  - Virtual Connect FlexFabric
  - Virtual Connect Flex-10 or Flex-10/10D (minimum VC 3.10 and above)

NOTE: HP BladeSystem c-Class Integrity Server Blades do not support Accelerated iSCSI and iSCSI boot with Virtual Connect.

Firmware and Software Support


HP recommends using the latest Service Pack for ProLiant; for more information, see http://h18004.www1.hp.com/products/servers/service_packs/en/index.html

Requirements for Accelerated iSCSI and iSCSI boot:

- VCM 3.10 (or above) is required for Accelerated iSCSI and iSCSI boot support.
- OneCommand OS tool (recommended).
- be2iSCSI driver (FlexFabric Adapters driver).
- be2iSCSI Driver Update Disk for iSCSI boot installs.
- iSCSI target.
- DHCP server (optional).
NOTE: If the VC firmware is downgraded to a version older than 3.10, the iSCSI boot parameter configuration is not supported and all iSCSI boot parameters are cleared.

iSCSI Storage Target Support


Any storage system with iSCSI host ports is supported (e.g. the HP 3PAR StoreServ, HP EVA 6000 Storage, HP StoreVirtual 4000 Storage, HP MSA P2000 Storage, etc.).

Networking recommendations
Network considerations
When constructing an iSCSI SAN with HP Virtual Connect, some network considerations must be taken into account. Don't think of the iSCSI network as just another LAN flavor: IP storage needs the same sort of design thinking applied to FC infrastructure, particularly when critical infrastructure servers can boot from a remote iSCSI data source. Network performance is one of the major factors contributing to the performance of the entire iSCSI environment. If the network environment is properly configured, the iSCSI components provide adequate throughput and low enough latency for iSCSI initiators and targets. But if the network is congested and links, switches, or routers are saturated, iSCSI performance suffers and might not be adequate for some environments. Here are some important tips and tricks to think about:

Keep it simple and short


With iSCSI, it is possible to route packets between different networks and subnetworks, but keep in mind that every route and hop a packet must traverse adds network latency and dramatically affects performance between the iSCSI initiator and the iSCSI target. A network switch also adds latency to the delivery time of an iSCSI packet, so we recommend keeping the distance short and avoiding any router or network switch in between the connection. Extra hops simply cost performance, reduce IOPS, and increase the chances of storage traffic competing with other data traffic on congested inter-switch links. To avoid bottlenecks, inter-switch links should be sized properly, using stacking cables, 10-Gigabit Ethernet uplinks, link aggregation, or port trunking. Networking considerations include:

- Minimizing switch hops
- Maximizing the bandwidth on the inter-switch links, if present
- Using 10-Gigabit Ethernet uplinks
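A quick way to sanity-check these recommendations from a Linux iSCSI host is to count the hops and measure the latency to the target. This is only a sketch: 192.168.5.20 is a placeholder storage target address, and the commands assume standard Linux networking tools.

```shell
# Count the hops between the iSCSI initiator and target; on a
# well-designed iSCSI network this should ideally be a single hop.
# 192.168.5.20 is a placeholder storage target address.
traceroute -n 192.168.5.20

# Measure round-trip latency; consistently high or variable times
# on a local 10GbE segment point to congestion or extra routing.
ping -c 10 192.168.5.20
```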

Flow Control recommendations


Storage vendors usually have iSCSI SAN design recommendations, and enabling Flow Control is one of the most important ones. Ethernet Flow Control is a mechanism to manage the traffic flow between two directly connected devices; it uses pause frames to notify the link partner to stop sending traffic when congestion occurs. It helps resolve in an efficient manner any imbalance in network traffic between sending and receiving devices. Enabling Flow Control is highly recommended by iSCSI storage vendors. It must be enabled globally across all switches, server adapter ports, and NIC ports on the storage node.

Enabling Flow Control on iSCSI Storage Systems: Flow Control can usually be enabled on all iSCSI Storage Systems. For more specific information about how to enable Flow Control, see the Storage System manufacturer's documentation. On an HP StoreVirtual 4000 Storage, Flow Control can be set from the TCP/IP settings page within the CMC console:

Enabling Flow Control on Network Switches: Flow Control should be enabled on every switch interface connected to the Storage device. See the switch manufacturer's documentation for more information about Flow Control. NOTE: On ProCurve switches, if the port mode is set to "auto" and flow control is enabled on the HP StoreVirtual 4000 Storage port, the switch port will auto-negotiate flow control with the Storage device NIC.

Enabling Flow Control on Virtual Connect: Flow Control is enabled by default on all downlink ports. To enable Flow Control on all VC ports, including uplink ports, enter:

-> set advanced-networking FlowControl=on

NOTE: Be careful, this command can result in data traffic disruption!

Enabling Flow Control on iSCSI Hosts: By default, Flow Control is enabled on every network interface when Accelerated iSCSI is enabled. For Software iSCSI, it might be necessary to enable Flow Control at the NIC/iSCSI initiator level.
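On a Linux host using the software initiator, the NIC's pause-frame settings can typically be inspected and changed with ethtool. This is a sketch: eth1 is a placeholder interface name, and some drivers only honor pause settings when autonegotiated with the link partner.

```shell
# Show the current flow-control (pause frame) settings of the NIC
# carrying software iSCSI traffic (eth1 is a placeholder name).
ethtool -a eth1

# Enable receive and transmit flow control on that interface.
ethtool -A eth1 rx on tx on
```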

Jumbo Frames (optional recommendation)


Jumbo frames (MTU >= 9000 bytes) are also frequently recommended by iSCSI storage vendors, as they can significantly increase iSCSI performance. Jumbo frames have many benefits, particularly for iSCSI traffic: they reduce fragmentation overhead, which translates directly into lower CPU utilization, and they give more aggressive TCP dynamics, leading to greater throughput and better response to certain types of loss. Jumbo frames must be correctly configured end-to-end on the network, from the storage to the Ethernet switches and up to the server ports. NOTE: In some environments, using jumbo frames can cause more problems than it solves. This is frequently due to misconfigured MTU sizes, but also because some devices support different maximum MTU sizes. So if you are unsure whether your routers and other devices support larger frame sizes, keep the frame size at the default setting.

Enabling Jumbo Frames on iSCSI Storage Systems: Jumbo frames can usually be used with all iSCSI Storage Systems and are, most of the time, enabled by setting the MTU size on an interface. The frame size on the storage system should correspond to the frame size on the iSCSI hosts (Windows and Linux application servers). For more specific information about how to enable jumbo frames, see the Storage System manufacturer's documentation. On HP StoreVirtual 4000 Storage, jumbo frames can be set from the TCP/IP settings page within the CMC console:

NOTE: On the storage system, make sure to set an MTU size above 8342 bytes. Any storage system configured with a frame size below 8342 bytes will result in an MTU negotiation failure with the CNA, causing the traffic to run at the default Ethernet standard frame size (i.e., 1518 bytes). If 9000 bytes is needed for any specific reason, then Software iSCSI must be configured instead of Accelerated iSCSI.


Enabling Jumbo Frames on Network Switches: Jumbo frames must be enabled across all ports of the iSCSI dedicated VLAN or hardware infrastructure (always end-to-end). See the switch manufacturer's documentation. NOTE: Not all switches support both Jumbo Frames and Flow Control; if you have to pick between the two, choose Flow Control.

Enabling Jumbo Frames on Virtual Connect: Jumbo frames are enabled by default on Virtual Connect; there is no configuration required.

Enabling Jumbo Frames on iSCSI Hosts: There are two different procedures for enabling jumbo frames on servers: one for Accelerated iSCSI (uses a dedicated HBA port for iSCSI traffic) and one for Software iSCSI (uses a port of an existing NIC for iSCSI traffic).

With Accelerated iSCSI: Jumbo frames are enabled by default on the FlexFabric 10Gb NC551 and NC553 CNAs with Accelerated iSCSI; there is no configuration required. The MTU size is auto-negotiated during the TCP connection with the iSCSI target. For optimal performance, the maximum MTU size supported in iSCSI Accelerated mode is limited to 8342 bytes and cannot be modified.

Checking MTU size under Windows: To see the MTU size that has been auto-negotiated under Windows, install the Emulex OneCommand Manager (OCM) utility, launch it, select the iSCSI target, and click Sessions.


The TCPMSS value used for this connection, displayed in the Connection Negotiated Login Properties section, indirectly shows the MTU that has been negotiated:

When TCPMSS displays 1436, the negotiated MTU size is 1514. When TCPMSS displays 8260, the negotiated MTU size is 8342.

Checking MTU size under VMware ESX: As under MS Windows, the MTU is automatically configured and the user has no control over this setting. Note that with VMware, there is currently no way to view the configured MTU/MSS.


With Software iSCSI: Jumbo frames must be enabled under the Operating System on each adapter/vSwitch running iSCSI:

Under MS Windows Server 2012:


1. In Control Panel, select View network status and tasks in the Network and Internet section.
2. In the View your active networks section, click the network adapter used for iSCSI and click Properties.
3. Click Configure.
4. Click the Advanced tab.
5. Select Jumbo Mtu and change the MTU value.


Under MS Windows Server 2008:


1. Right-click Network in the Start Menu and click Properties.
2. Select the network adapter used for iSCSI and click Properties.
3. Click Configure.
4. Click the Advanced tab.
5. Select Packet Size and change the MTU value.
6. Click OK to apply the changes.
7. To see the MTU value configured under Windows, go to OCM and select the adapter used for iSCSI; the MTU is displayed under the Current MTU field in the Port information tab:

For more information, refer to the OS vendor's documentation.
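As an alternative to OCM, the effective IP MTU of each Windows interface can also be listed from an elevated command prompt or PowerShell session. Note this shows the IP-layer MTU, which reflects the adapter's jumbo setting only after the driver property has been applied; this is a generic check, not an Emulex-specific one.

```shell
# From an elevated Windows prompt, list the MTU currently in effect
# on each interface; the adapter used for iSCSI should report the
# jumbo value configured in the steps above.
netsh interface ipv4 show subinterfaces
```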


Under ESX:

Run this command to set the MTU for the vSwitch:

esxcfg-vswitch -m 9000 vSwitch<#>

To check the MTU configuration, enter:

esxcfg-vswitch -l vSwitch<#>
Switch Name    Num Ports    Used Ports    Configured Ports    MTU     Uplinks
vSwitch2       128          3             128                 9000    vmnic4,vmnic5

For more information, see iSCSI and Jumbo Frames configuration on ESX/ESXi (KB: 1007654). http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007654
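Per the VMware KB referenced above, setting the vSwitch MTU alone is not sufficient for software iSCSI on ESX/ESXi 4.x: the VMkernel port carrying the iSCSI traffic must also be created with a matching MTU. The sketch below assumes a port group named iSCSI-PG and placeholder addresses; adjust both to your environment.

```shell
# Raise the vSwitch MTU first (as shown above).
esxcfg-vswitch -m 9000 vSwitch2

# Create the VMkernel interface used for iSCSI with a matching
# MTU; "iSCSI-PG" and the addresses are placeholder values.
esxcfg-vmknic -a -i 192.168.5.14 -n 255.255.255.0 -m 9000 iSCSI-PG

# Verify that the vmknic MTU column shows 9000.
esxcfg-vmknic -l
```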

Testing Jumbo Frames: The Jumbo frames configuration can be tested by using the PING command frequently available on iSCSI Storage Systems:

Test ping from the Storage System to the iSCSI host's interface using 8300 bytes for the MTU:

The Ping result should appear similar to:


PING 192.168.5.14 (192.168.5.14) from 192.168.5.20 : 8300(8328) bytes of data. 8308 bytes from 192.168.5.14: icmp_seq=5 ttl=64 time=47.7 ms
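The same check can be run in the opposite direction from a Linux iSCSI host, with the don't-fragment bit set so that any device in the path with a smaller MTU makes the ping fail instead of silently fragmenting. For the 8342-byte MTU negotiated by Accelerated iSCSI, the maximum ICMP payload is the MTU minus 20 bytes of IP header and 8 bytes of ICMP header; 192.168.5.20 below is a placeholder storage address.

```shell
# Maximum ICMP payload for an 8342-byte MTU:
# 8342 - 20 (IP header) - 8 (ICMP header) = 8314 bytes.
PAYLOAD=$((8342 - 20 - 8))
echo "$PAYLOAD"   # 8314

# Ping the storage interface with fragmentation disallowed; this
# succeeds only if every device in the path accepts the jumbo MTU.
# 192.168.5.20 is a placeholder address, so the command is shown
# with a guard for illustration.
ping -M do -s "$PAYLOAD" -c 4 -W 1 192.168.5.20 || true
```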


iSCSI multipathing solutions


The use of multipathing solutions is highly recommended for load balancing and failover, to improve iSCSI performance and availability. Multipathing solutions use redundant physical path components (adapters, cables, and switches) to create logical "paths" between the server and the storage device. In the event that one or more of these components fails, causing the path to fail, multipathing logic uses an alternate path for I/O so that applications can still access their data. For the Operating System, this means an intelligent path manager called Multipath I/O (also known as MPIO) is needed to log in multiple sessions and to fail over, if needed, among multiple iSCSI Host Bus Adapters (HBAs). MPIO is a key component in building a highly available, fault-tolerant iSCSI SAN solution. MPIO technologies provide the following:

- I/O path redundancy for fault tolerance
- I/O path failover for high availability
- I/O load balancing for optimal performance
For Microsoft Windows, storage vendors usually provide a vendor-specific DSM (Device Specific Module) to optimize multipathing using the Microsoft MPIO framework. This vendor-specific DSM must be installed under the Operating System; consult your storage provider's web site for more information.
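As a generic sketch of the Windows side: once the MPIO feature is installed, iSCSI-attached disks can be claimed for the Microsoft in-box DSM from an elevated prompt with mpclaim. When your storage vendor provides its own DSM, install and use that instead, per the vendor's instructions.

```shell
# Claim all iSCSI bus-type devices for the Microsoft MPIO DSM.
# "MSFT2005iSCSIBusType_0x9" is the standard identifier for
# iSCSI-attached devices; -r reboots the host to finish claiming.
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

# After the reboot, list the MPIO-managed disks and their paths.
mpclaim -s -d
```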


Virtual Connect network scenarios


For security and performance purposes, it is recommended that the iSCSI network be separated either logically (using different VLANs) or physically (using different physical switches) from the ordinary data network. Isolating the iSCSI traffic helps improve response times and reliability and prevents bottlenecks and congestion; it also helps address the TCP/IP overhead and flow control issues inherent in an Ethernet network. Another recommendation to maximize availability and optimal performance is to use a redundant iSCSI network path from Virtual Connect (and therefore from the server) to the storage system. This enables a failover mechanism among multiple iSCSI HBAs in case of path failure. The use of Multipath I/O software running under the OS (Windows, Linux, and VMware) is required to provide an automatic means of persisting I/O without disconnection. For a more complete step-by-step typical scenario configuration, please refer to the Virtual Connect FlexFabric Cookbook: http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02616817/c02616817.pdf


Scenario 1: iSCSI network physically separated


In this first scenario, the iSCSI network is physically separated from the ordinary data network using a different switch infrastructure.

PROS: This scenario is the best for performance and latency, and is the recommended scenario. It maximizes bandwidth availability: the iSCSI traffic does not have to compete for bandwidth, as there is a dedicated infrastructure for the storage traffic.

CONS: This scenario uses more switches, more VC uplinks, and thus more cabling. The solution cost is increased.

Figure 1: Virtual Connect scenario using a separated iSCSI network

[Figure 1 diagram: two production LAN switches and a separate IP storage network with a dual-ported iSCSI storage device; vNet-iSCSI-1 and vNet-iSCSI-2 are both active, one per HP VC Flex-10 module in the rear of the c7000 enclosure, while 802.3ad LAG / 802.1Q trunks carry the production networks (Prod-vNet-1/Prod-vNet-2).]

Figure 2: Virtual Connect logical view of an iSCSI ESX host

[Figure 2 diagram: an ESX host with two FlexFabric LOMs; the FlexNIC functions carry the Console, VMotion, and VM guest VLANs through Prod-vNet-1 (Enc0:Bay1:X1,X2) and Prod-vNet-2 (Enc0:Bay2:X1,X2), while a FlexiSCSI function on each LOM connects to vNet-iSCSI-1 (Enc0:Bay1:X5) and vNet-iSCSI-2 (Enc0:Bay2:X5) for the iSCSI network.]

Figure 3: Virtual Connect logical view of an iSCSI Windows host


[Figure 3 diagram: a Windows host with teamed management FlexNICs on Prod-vNet-1/Prod-vNet-2 and one FlexiSCSI function per LOM (iSCSI1/iSCSI2) connected to vNet-iSCSI-1 (Enc0:Bay1:X5) and vNet-iSCSI-2 (Enc0:Bay2:X5), each vNet reaching one port of the iSCSI device.]

Defining two iSCSI network vNets

Create a vNet and name it vNet-iSCSI-1:

- On the Virtual Connect Manager screen, click Define, Ethernet Network to create a vNet
- Enter the Network Name of vNet-iSCSI-1
- Select Smart Link, but do NOT select any of the other options (i.e., Private Networks, etc.)
- Select Add Port, then add one port from Bay 1
- For Connection Mode, use Failover
- Select Apply


Create a vNet and name it vNet-iSCSI-2:

- On the Virtual Connect Manager screen, click Define, Ethernet Network to create a vNet
- Enter the Network Name of vNet-iSCSI-2
- Select Smart Link, but do NOT select any of the other options (i.e., Private Networks, etc.)
- Select Add Port, then add one port from Bay 2
- For Connection Mode, use Failover
- Select Apply

NOTE: By creating TWO vNets, we have provided a redundant path to the network. As each uplink originates from a different VC module and vNet, both uplinks will be active. This configuration provides the ability to lose an uplink cable, a network switch, or (depending on how the iSCSI ports are configured at the server, with an iSCSI Software Initiator supporting failover) even a VC module.

Smart Link: In this configuration Smart Link SHOULD be enabled. Smart Link is used to turn off downlink ports within Virtual Connect if ALL available uplinks to a vNet or SUS are down. In this scenario, if an upstream switch or all cables to a vNet were to fail, VC would turn off the downlink ports connected to that vNet, which would then force the iSCSI Software Initiator to fail over to the alternate NIC.

Connection Mode: Failover should be enabled here, as only a single external uplink port is used for this network. With multiple uplink ports, the connection mode Auto can be used to enable the uplinks to attempt to form aggregation groups using the IEEE 802.3ad Link Aggregation Control Protocol. Aggregation groups require multiple ports from a single VC-Enet module to be connected to a single external switch that supports automatic formation of LACP aggregation groups, or to multiple external switches that utilize distributed link aggregation.
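The same two vNets can also be created from the Virtual Connect CLI. The sketch below mirrors the GUI steps above and assumes uplink port X5 as used in the scenario diagrams; adjust names and ports to your environment.

```shell
# Create the first iSCSI vNet with Smart Link and failover mode,
# then give it one uplink from the module in bay 1.
add network vNet-iSCSI-1 SmartLink=Enabled ConnectionMode=Failover
add uplinkport enc0:1:X5 Network=vNet-iSCSI-1

# Repeat for the second vNet, uplinked from the module in bay 2.
add network vNet-iSCSI-2 SmartLink=Enabled ConnectionMode=Failover
add uplinkport enc0:2:X5 Network=vNet-iSCSI-2
```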


Scenario 2: iSCSI network logically separated


In this second scenario, we use the same switch infrastructure, but the iSCSI network is logically separated from the ordinary data network through the use of 802.1Q VLAN trunking. Each Virtual Connect module is connected with more than one cable to the LAN switches to increase the network bandwidth and to provide better redundancy.

PROS: This scenario uses fewer VC uplinks and thus less cabling. The solution cost is reduced.

CONS: In this scenario, the performance of iSCSI relies on the datacenter network performance. If the datacenter network is congested and saturated, iSCSI performance suffers and might not be adequate for some environments.

Figure 4: Virtual Connect scenario using a logically separated iSCSI network

[Figure 4 diagram: both LAN switches carry the production and iSCSI VLANs over 802.3ad LAG / 802.1Q trunks to UplinkSet_1 and UplinkSet_2 (both active), one per HP VC Flex-10 module in the rear of the c7000 enclosure; the iSCSI storage device connects to the same switch infrastructure.]

Figure 5: Virtual Connect logical view of an iSCSI ESX host

[Figure 5 diagram: an ESX host whose FlexNICs map VLANs 101 through 105 from UplinkSet_1 (Enc0:Bay1:X1,X2) and UplinkSet_2 (Enc0:Bay2:X1,X2) over 802.1Q trunks; the iSCSI network is VLAN 105 (iSCSI_1/iSCSI_2), delivered to the FlexiSCSI function of each LOM, and the two VC FlexFabric modules are joined by an internal stacking link.]

Defining a first Shared Uplink Set (VLAN-trunk-1)

Create a SUS named UplinkSet_1:

- On the Virtual Connect Home page, select Define, Shared Uplink Set
- Insert the Uplink Set Name as UplinkSet_1
- Select Add Port, then add two ports from Bay 1
- Add networks as follows (to add a network, right-click on the grey bar under the Associated Networks (VLAN) header, then select ADD):
  - VLAN_101-1 = VLAN ID 101 = CONSOLE
  - VLAN_102-1 = VLAN ID 102 = VMOTION
  - VLAN_103-1 = VLAN ID 103 = first VM guest VLAN
  - VLAN_104-1 = VLAN ID 104 = second VM guest VLAN (more VM guest VLANs can be defined here)
  - iSCSI_1 = VLAN ID 105
- Enable Smart Link on ALL networks
- Leave Connection Mode as Auto (this will create an LACP port channel if the upstream switch is properly configured)
- Optionally, if one of the VLANs is configured as Default/untagged, on that VLAN only, set Native to Enabled
- Click Apply


Defining a second Shared Uplink Set (VLAN-trunk-2)

Create a SUS named UplinkSet_2:

- On the Virtual Connect Home page, select Define, Shared Uplink Set
- Insert the Uplink Set Name as UplinkSet_2
- Select Add Port, then add two ports from Bay 2
- Add networks as follows (to add a network, right-click on the grey bar under the Associated Networks (VLAN) header, then select ADD):
  - VLAN_101-2 = VLAN ID 101 = CONSOLE
  - VLAN_102-2 = VLAN ID 102 = VMOTION
  - VLAN_103-2 = VLAN ID 103 = first VM guest VLAN
  - VLAN_104-2 = VLAN ID 104 = second VM guest VLAN (more VM guest VLANs can be defined here)
  - iSCSI_2 = VLAN ID 105
- Enable Smart Link on ALL networks
- Leave Connection Mode as Auto (this will create an LACP port channel if the upstream switch is properly configured)
- Optionally, if one of the VLANs is configured as Default/untagged, on that VLAN only, set Native to Enabled
- Click Apply
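Both Shared Uplink Sets can likewise be scripted through the Virtual Connect CLI. This is a sketch of the first one, assuming the port and VLAN assignments listed above; the second SUS follows the same pattern with Bay 2 and the -2 network names.

```shell
# Create the Shared Uplink Set and attach two uplinks from bay 1.
add uplinkset UplinkSet_1
add uplinkport enc0:1:X1 UplinkSet=UplinkSet_1
add uplinkport enc0:1:X2 UplinkSet=UplinkSet_1

# Define the tagged networks carried by this SUS.
add network VLAN_101-1 UplinkSet=UplinkSet_1 VLanID=101 SmartLink=Enabled
add network VLAN_102-1 UplinkSet=UplinkSet_1 VLanID=102 SmartLink=Enabled
add network VLAN_103-1 UplinkSet=UplinkSet_1 VLanID=103 SmartLink=Enabled
add network VLAN_104-1 UplinkSet=UplinkSet_1 VLanID=104 SmartLink=Enabled
add network iSCSI_1 UplinkSet=UplinkSet_1 VLanID=105 SmartLink=Enabled
```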


Scenario 3: Direct-attached iSCSI Storage System


In this third scenario, an iSCSI device is directly connected to the Virtual Connect domain without any switch infrastructure. This scenario uses more VC uplinks than the second scenario, but no additional or dedicated switches are required, as in scenario 1, just to connect an iSCSI disk storage enclosure. This reduces the entire solution cost and complexity.

PROS: Cost is greatly reduced as no additional switch is required.

CONS: There are several limitations.

Direct-attached Limitations

There are three important limitations that administrators must be aware of:

- When an iSCSI storage device is directly connected to a VC domain, this iSCSI device is ONLY accessible to the servers belonging to this Virtual Connect domain.
- iSCSI Storage Systems that share the same ports for both iSCSI host connectivity traffic and LAN management (also known as in-band management) can ONLY be managed from the Virtual Connect domain.
- The only network interface bond supported on the iSCSI Storage System is Active-Passive. A VC Active/Standby iSCSI vNet configuration is not supported.
iSCSI Storage Systems can be divided into two categories: those with out-of-band management (separate ports are used for management and host traffic) and those with in-band management (the same ports are used for management and host traffic). The direct-attached scenario is therefore divided into sub-scenarios, according to the type of management you may use:

- Scenario 3-A: Direct-attached iSCSI device with out-of-band management (using separate ports for management and host traffic).
- Scenario 3-B: Direct-attached iSCSI device with in-band management (using the same ports for management and host traffic).
- Scenario 3-C: Low-cost direct-attached iSCSI device with a c3000 enclosure.


Scenario 3-A: Direct-attached iSCSI device with out-of-band management (using separate ports for management and host traffic)

Figure 6: Virtual Connect direct-attached iSCSI target scenario with out-of-band management

[Figure 6: the iSCSI storage device (Port 1 Active, Port 2 Passive) is attached directly to the two VC modules of a c7000 enclosure through the Active/Active vNet-iSCSI-1 and vNet-iSCSI-2, while its dedicated management port connects to the management network; the VC uplinks to the production LAN switches use 802.3ad LAG / 802.1Q trunks.]
In this scenario A, the iSCSI target is connected directly to the VC Domain using two Active/Active vNets (blue and red in the above diagram) to provide iSCSI LAN access to the servers located inside the enclosure (only). The iSCSI storage device uses an out-of-band management network; this means that dedicated port(s), separate from the iSCSI host traffic, are available for management/configuration. Therefore the iSCSI device can be managed from anywhere on the network. Both software and hardware iSCSI can be implemented on the iSCSI hosts without limitation.


NOTE: The direct-attached scenario supports only one iSCSI device per VC Active/Active network. To support more direct-attached iSCSI devices, it is mandatory to create an Active/Active VC network for each iSCSI device.
[Diagram: two direct-attached iSCSI targets on the same VC domain, each connected through its own pair of Active/Active VC networks (vNet1-iSCSI-1/vNet1-iSCSI-2 and vNet2-iSCSI-1/vNet2-iSCSI-2).]
This limitation of one device per VC Active/Active network implies that you cannot directly attach multiple HP StoreVirtual 4000 Storage nodes to the same VC domain: in a multi-node environment, StoreVirtual nodes need to communicate with each other, which is not possible as VC does not switch traffic between different VC networks.

[Diagram: an unsupported configuration with two HP StoreVirtual (LeftHand P4000 series) nodes directly attached to the same c7000 VC domain.]


Scenario 3-B: Direct-attached iSCSI device with in-band management (using the same ports for management and host traffic)

Figure 7: Virtual Connect direct-attached iSCSI target scenario with in-band management

[Figure 7: the same direct attachment through the Active/Active vNet-iSCSI-1 and vNet-iSCSI-2, but the storage device has no dedicated management port; the iSCSI host network also carries the management traffic, so the Storage Management Console resides inside the enclosure.]

In this scenario B, the iSCSI device is again connected directly to the VC Domain using two Active/Active vNets (blue and red in the above diagram), but here the iSCSI device does not use a dedicated interface for management, which means that the same ports are used for both the management and the iSCSI host traffic (known as an in-band management device). Due to the Virtual Connect technology, this implies that you can only manage and configure the iSCSI storage device from the Virtual Connect Domain (i.e. only from a server located inside the enclosure and connected to the iSCSI vNet). This means that you need a dedicated server for the storage system management, and that all SNMP trap notifications for hardware diagnostics, events and alarms will be sent only to this management console. Both software and hardware iSCSI can be implemented here without limitation on the iSCSI hosts. For more information about Accelerated iSCSI, please refer to the Accelerated iSCSI section.


Scenario 3-C: Low cost Direct-attached iSCSI device with or without out-of-band management

In this scenario, the iSCSI Storage device is directly connected to a c3000 enclosure with only one Virtual Connect module, installed in interconnect Bay 1. By design, interconnect Bay 1 in c3000 enclosures always maps to all embedded NICs/CNAs, so all embedded server ports are connected through the same interconnect module. This scenario can be proposed to customers looking for cost reduction rather than high availability.

PROS: Cost is reduced even further as only one VC module is required.

CONS: Risk of network failure. In addition to the direct-attached limitations described before, this scenario does not offer a fully redundant design: the VC module is a single point of failure; if it fails, the entire network communication will stop and all blades will go offline.

Figure 8: Virtual Connect direct-attached iSCSI target scenario using a single VC module

[Figure 8: the iSCSI storage device (Port 1 Active, Port 2 Passive) is attached through vNet-iSCSI-1 and vNet-iSCSI-2 to a single VC Flex-10/10D module in Bay 1 of a c3000 enclosure; an optional out-of-band management port connects to the management network.]


Virtual Connect Network configuration

Scenario 3-A and Scenario 3-B

The Virtual Connect network configuration is the same for Scenario 3-A and Scenario 3-B. You must define two Active/Active iSCSI Virtual Connect networks (vNets), vNet-iSCSI-1 and vNet-iSCSI-2, as in Scenario 1.

These two vNets must be used exclusively for the direct attachment of the iSCSI storage device. Each vNet can only support one VC uplink. Additional vNets are required for the non-iSCSI traffic.

Scenario 3-C

You must define two Active/Active iSCSI Virtual Connect networks (vNets), vNet-iSCSI-1 and vNet-iSCSI-2, as in Scenario 1. The only difference is that uplink ports from the same VC module in Bay 1 are used for both vNets.

These two vNets must be used exclusively for the direct attachment of the iSCSI storage device. Each vNet can only support one VC uplink, configured with one VC uplink port from the module in Bay 1. Additional vNets are required for the non-iSCSI traffic.
Note: The direct-attached scenario is not supported with a VC Active/Standby iSCSI vNet configuration. A standby VC uplink is always seen as active by an upstream device; having no way to detect the standby state of the VC uplink, a storage system would incorrectly send traffic to the standby port, causing communication issues.


Connecting the Direct-Attached iSCSI SAN to the VC Domain

Connect the VC uplink ports of the two vNets to the network interfaces of the iSCSI device.

Figure 9: Out-of-band management configuration with an HP StoreVirtual 4000 Storage Solution (Scenario 3-A)

[Figure 9: the HP StoreVirtual 4000 node is attached to the two HP VC FlexFabric 10Gb/24-Port modules through vNet-iSCSI-1 and vNet-iSCSI-2 (both Active), while its dedicated management port connects to the management network; Prod-vNet-1 and Prod-vNet-2 carry the non-iSCSI traffic of the BL460c iSCSI host blades.]

Figure 10: In-band management configuration with an HP StoreVirtual 4000 Storage Solution (Scenario 3-B)

[Figure 10: the same direct attachment through vNet-iSCSI-1 and vNet-iSCSI-2, but without a dedicated management port on the storage node: management traffic shares the iSCSI host ports.]


Preparing the network settings of the storage system

The following describes a basic network configuration of an HP StoreVirtual 4000 Storage Solution:

- Open a remote console on the StoreVirtual using the StoreVirtual iLO, or connect a keyboard and monitor.
- Type start, and press Enter at the log in prompt.
- Press Enter to log in, or type the username and password if already configured. When the session is connected to the storage system, the Configuration Interface window opens.
- On the Configuration Interface main menu, tab to Network TCP/IP Settings, and press Enter.
- Tab to select the first network interface and press Enter.
- Enter the host name, and tab to the next section to configure the network settings.
- Enter a private IP address such as 192.168.5.20 / 255.255.255.0, with 0.0.0.0 for the Gateway address:

- Tab to OK, and press Enter to complete the network configuration.
- Press Enter on the confirmation window. A window opens listing the assigned IP address, which will be used later on to configure and manage the P4300.

Configuring the Storage Management Server

Out-of-band management iSCSI device (Scenario 3-A and 3-C)

An iSCSI device with out-of-band management does not require any particular configuration, as the management console can reside anywhere on the network.

In-band management iSCSI device (Scenario 3-B and 3-C)

An iSCSI device with in-band management requires a little more attention, particularly when network interface bonding is enabled on the iSCSI Storage System. The following configuration can be used for an in-band management iSCSI Storage System:

- Create a Virtual Connect server profile for the Management Storage server:
  o Assign NIC1 and NIC2 to the management network.
  o Assign NIC3 and NIC4 to the iSCSI direct-attached VC networks vNet-iSCSI-1 and vNet-iSCSI-2.


Figure 11: Example of a VC profile used by the Storage Management Server

- Start the server and install Windows Server 2003 or 2008.
- At the end of the installation, assign a private IP address to NIC3 (e.g. 192.168.5.150) and to NIC4 (e.g. 192.168.5.151).
NOTE: Make sure to use the same IP subnet here as the one set in the Storage System!

NOTE: Two IP addresses are used in this configuration to ensure that the management server always stays connected to the iSCSI device, regardless of the iSCSI device bonding configuration (no bonding or Active-Passive bond enabled). Despite the use of the Active/Active iSCSI vNet configuration, NIC teaming cannot be used on the management server, otherwise the network connection to the iSCSI device will fail. This is due to the direct-attached design and the use of NIC bonding on the iSCSI device. There is no such limitation with Scenario 3-A.

- Install the Storage Management software (e.g. HP StoreVirtual Management Software, the software used to configure and manage HP StoreVirtual Storage).


- Open the Centralized Management Console (CMC) and locate the storage system using the Find function; enter the address assigned previously on the first interface of the P4300 (e.g. 192.168.5.20).

Select the Auto Discover by Broadcast check box. When you have found the storage system, it appears in the Available systems pool in the navigation window.

Configuring the network interface bonds on the Storage System

When configuring an iSCSI SAN directly attached to VC, one point is worth considering: network interface bonding. Network interface bonding is generally used on iSCSI SAN devices to provide high availability, fault tolerance, load balancing and/or bandwidth aggregation. Depending on your storage system hardware, you can generally bond NICs using one of the following methods:

- Active-Passive
- Link Aggregation (802.3ad)
- Adaptive Load Balancing


In the direct-attached scenario, when a bond is configured, multiple interfaces are used, but due to the lack of Ethernet switches between the VC domain and the iSCSI SAN device, NIC bonding support is very limited: only the Active/Passive bonding method is supported.

NOTE: Link Aggregation (802.3ad) is not supported because you cannot create an aggregation group across different VC modules. Adaptive Load Balancing is not supported either, as it requires the two interfaces to be connected to the same network, which is not the case here since we have two vNets for iSCSI (Active/Active scenario).

To configure Active/Passive bonding on an HP StoreVirtual 4000 Storage Solution:

From CMC, select the storage system, and open the tree below it.


Select TCP/IP Network category, and click the TCP/IP tab.

- Select the two interfaces from the list for which you want to configure the bonding, right-click, and select New Bond.
- Select the Active Passive bond type.

- Click OK.
- After a warning message, click OK to rediscover the storage system; the IP address is normally preserved.
- After a few seconds, the new bond setting shows the Active-Passive configuration:

The Storage System is now ready to be used with fault tolerance enabled.


Configuring the iSCSI Host

For the iSCSI Host configuration, hardware and/or software iSCSI connections can be used.

Multipathing and path checking

With the direct-attached scenario and its Active-Passive iSCSI SAN NIC bonding, it is recommended to check the second path to validate the entire configuration. After the iSCSI volume has been discovered through the first path, log in to the iSCSI Storage System and trigger a NIC bonding failover; this activates the second interface, giving you the ability to validate the second iSCSI path.


Accelerated iSCSI
Traditional software-based iSCSI initiators generate significant processing overhead on the server CPU. An accelerated iSCSI capable card, also known as hardware iSCSI, offloads the TCP/IP operations from the server processor, freeing up CPU cycles for the main applications. The main benefits are:

- Processing work offloaded to the NIC, freeing CPU cores for data-intensive workloads.
- Increased server and IP storage application performance.
- Increased iSCSI performance.
Software-based iSCSI initiators are of course supported by all Virtual Connect models (1/10Gb, 1/10Gb-F, Flex-10 and FlexFabric), but 10Gb accelerated iSCSI (a hardware-based iSCSI initiator) is today only provided by Converged Network Adapters (i.e. NC551i/m or NC553i/m), and only with Virtual Connect Flex-10 and Virtual Connect FlexFabric. The QLogic QMH4062 1GbE iSCSI Adapter also supports accelerated iSCSI, but it is not supported by VC Flex-10 and VC FlexFabric.

NOTE: Under the OS, Accelerated iSCSI differs from Software iSCSI in that it presents an HBA type of interface rather than a network interface card (NIC). Consequently, additional drivers, software and settings are sometimes required.

NOTE: The selection between Software iSCSI and Accelerated iSCSI is done in the Virtual Connect profile:
o Example of a server profile with Software iSCSI enabled:


Example of a server profile with Accelerated iSCSI enabled:

Several steps are required to connect a server to an iSCSI target using accelerated iSCSI:

- Accelerated iSCSI must be enabled on the server using Virtual Connect.
- A hardware iSCSI initiator must be installed and configured under the OS.
- Multipath I/O software must be used to manage the redundant iSCSI connections.
This document will provide the steps to enable Accelerated iSCSI on a BladeSystem server under Microsoft Windows Server 2003/2008 and VMware vSphere Server.


Creating a Virtual Connect profile with Accelerated iSCSI


The first step is the same for both MS Windows and VMware vSphere servers; we have to create a Virtual Connect server profile that enables Accelerated iSCSI:

- Open Virtual Connect Manager.
- From the Define menu, select Server Profile to create a new VC profile.

Note that VCM assigns FCoE connections by default!

NOTE: A server with one FlexFabric adapter can be configured with a unique personality: either all Ethernet, Ethernet/iSCSI, or Ethernet/FCoE. Therefore it is not possible to enable both FCoE and iSCSI connections at the same time. Only a server with multiple FlexFabric adapters can be configured with both iSCSI and FCoE connections.

- If you have a single FlexFabric adapter, delete the two FCoE connections; otherwise skip to the next bullet.

In the iSCSI HBA Connections section, click Add


In the Network Name column, click on Select a network

Select your iSCSI dedicated VC network and click OK

- In the Port Speed column, you can adjust the speed settings (Auto / Preferred / Custom).
- In the Boot Setting column, leave DISABLED.

NOTE: Boot Setting disabled means Accelerated iSCSI is enabled but iSCSI boot is unavailable. In this mode, the adapter offloads not only TCP/IP protocol processing but also iSCSI protocol processing from the OS.

NOTE: The multiple networks feature (i.e. when using 802.1Q VLAN tagging) is not supported for iSCSI HBA connections.


Optionally create a second iSCSI Connection for multipathing configuration

NOTE: Allowing more than one iSCSI application server to connect to a volume concurrently without cluster-aware applications or without an iSCSI initiator with Multipath I/O software could result in data corruption.

Configure the additional VC Ethernet Network connections that may be needed on the other FlexNIC:

When done, you can assign the profile to a server with an Ethernet adapter that supports Accelerated iSCSI:

- Click Apply to save the profile.
- The server can now be powered on (using either the OA, the iLO, or the Power button).


Accelerated iSCSI with Microsoft Windows Server


After the creation of a VC profile with Accelerated iSCSI enabled, we need to proceed with the following steps under Microsoft Windows Server:

- Installation of the Emulex hardware iSCSI initiator (i.e. OneCommand Manager).
- IP configuration of the iSCSI HBA ports.
- Installation of the Microsoft iSCSI initiator.
- Installation of Microsoft MPIO.
- Installation of the Device Specific Module (DSM):
  o Using the Microsoft DSM that comes with Microsoft MPIO.
  o Using the DSM for MPIO provided by the Storage System vendor, for better performance and latency.
- Connection to the iSCSI target using the iSCSI initiator.

Installing Emulex OneCommand Manager

OneCommand Manager is the Emulex utility to manage the NC55x 10Gb FlexFabric Converged Network Adapters. Among other things, it provides comprehensive control of the iSCSI network, including discovery, reporting and settings. For accelerated iSCSI under MS Windows Server, the Emulex OneCommand Manager is mandatory, as it provides the only way to configure the IP settings of the iSCSI HBA ports required to connect the iSCSI volume. For more information about OneCommand Manager, refer to:

The User Manual of the Emulex OneCommand Manager:

http://bizsupport1.austin.hp.com/bc/docs/support/supportmanual/c02018556/c02018556.pdf

The OneCommand Manager section further below in this document.


NOTE: Do not confuse hardware and software iSCSI initiators. Accelerated iSCSI always uses specific ports (i.e. host bus adapters) and requires a utility from the HBA vendor (i.e. Emulex OneCommand Manager, Emulex be2iscsi drivers).

[Illustration: storage iSCSI adapter ports (hardware initiator) vs. standard network interface ports (software initiator).]

By contrast, software initiators use the standard server NIC ports and are usually included in the operating system (i.e. the Microsoft iSCSI initiator).


OneCommand Manager can be downloaded from the Emulex web site: http://www.emulex.com/downloads/emulex.html

Select the Windows version:

Select the Management tab


Select the Enterprise Kit:

NOTE: The HP OneCommand Enterprise Kit contains a graphical User Interface (GUI) and a Command Line Interface (CLI). HbaCmd.exe (the CLI) is located by default in C:\Program Files\Emulex\Util\OCManager
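For scripted environments, the adapter inventory is also available through the CLI. A minimal sketch, assuming the default installation path above; ListHBAs is the OneCommand CLI command that enumerates the discovered adapter ports, and the other HbaCmd commands take the MAC address it reports as the port identifier:

```shell
# Run the OneCommand CLI from its default installation directory
# (PowerShell call operator used because the path contains spaces).
& "C:\Program Files\Emulex\Util\OCManager\HbaCmd.exe" ListHBAs
```

The output lists each discovered port with its type and MAC address, which can then be passed to other HbaCmd commands.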

Launch the OneCommand utility:


o For Windows 2012: press the Windows logo key to open the Start screen and click on OCManager


For Windows 2008: from the start menu, select All Programs / Emulex / OCManager

OneCommand Manager shows the iSCSI ports detected on your server:

Configuring the IP addresses of the iSCSI ports

In order to log in to the iSCSI target, an IP address must be assigned to each iSCSI HBA port:

From the OneCommand Manager interface, select the first iSCSI port, and click Modify.

Enter a static IP address or check DHCP Enabled


This IP address must be in the same subnet as the one configured in the iSCSI Storage System.

Note: Since the multiple networks feature is not supported for iSCSI HBA connections (see Creating a Virtual Connect profile with Accelerated iSCSI), the VLAN Enabled option must remain unchecked for all scenarios presented in this cookbook (i.e. scenarios 1, 2 and 3). The VLAN Enabled option can be required when Virtual Connect is tunneling the iSCSI traffic (i.e. when the iSCSI network under VCM has the Enable VLAN Tunneling checkbox selected). The use of a tunnel network for the iSCSI traffic is not presented in this cookbook because the most common scenarios use the iSCSI vNet in VLAN mapping mode (i.e. not in VLAN tunneling mode).

- Click OK.
- Select the second iSCSI port, and click Modify.
- Enter a second static IP address or enable DHCP.

Click OK.


Installation of the Microsoft iSCSI initiator

The Microsoft iSCSI Software Initiator enables the connection of a Windows host to an external iSCSI storage array using Ethernet NICs. For more information about the Microsoft initiator, see http://technet.microsoft.com/en-us/library/ee338476%28WS.10%29.aspx

NOTE: The Microsoft iSCSI Initiator utility is a software initiator (using Ethernet NICs), but it can also be used to manage the accelerated iSCSI connectivity (using the iSCSI HBAs).

For Windows 2012: The Microsoft iSCSI initiator comes installed by default. To launch the Microsoft iSCSI initiator:

- Go to Server Manager.
- Then select Tools / iSCSI Initiator.

The first time the iSCSI Initiator is launched, the iSCSI service must be started; click Yes.


For Windows 2008: The Microsoft iSCSI initiator comes installed with both the full Windows Server 2008 and the Server Core installations.

For Windows 2003: Download and install the latest version of the Microsoft iSCSI Initiator software. You must select the Microsoft MPIO Multipathing Support for iSCSI option when you install the Microsoft iSCSI initiator.

Installation of Microsoft MPIO

MPIO solutions are needed to logically manage the redundant iSCSI connections and ensure that the iSCSI connection is available at all times. MPIO provides fault tolerance against single points of failure in hardware components, but can also provide load balancing of I/O traffic, thereby improving system and application performance.

For Windows 2012: MPIO is an optional component in all versions of Windows Server 2012, so it must be installed.

- Go to Server Manager.
- Then select Manage.
- Then Add Roles and Features.
- In the Features section, select Multipath I/O.

Click Next and Install


For more information, see Microsoft Multipath I/O (MPIO) Users Guide for Windows Server 2012 at http://www.microsoft.com/en-us/download/details.aspx?id=30450
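As an alternative to the GUI steps above, the same installation can be scripted. A minimal sketch, assuming an elevated PowerShell session on Windows Server 2012:

```shell
# Install the Multipath I/O feature (equivalent to the Add Roles and
# Features steps above).
Install-WindowsFeature -Name Multipath-IO

# Then let the Microsoft DSM automatically claim iSCSI-attached devices
# (equivalent to "Add support for iSCSI devices" in the MPIO applet).
Enable-MSDSMAutomaticClaim -BusType iSCSI
```

As with the GUI procedure, a reboot may be requested before the claim policy takes effect.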


For Windows 2008: MPIO is an optional component in all versions of Windows 2008 Server so it must be installed.

- Go to Server Manager.
- Then select Features.
- Then Add Features.
- Select Multipath I/O.

For Windows 2003: MPIO is installed during the Microsoft iSCSI initiator installation. You must select the Microsoft MPIO Multipathing Support for iSCSI option when installing the Microsoft iSCSI Initiator.

Installation of the Device Specific Module (DSM)

MPIO requires the installation of Device Specific Modules (DSMs) in order to support multiple data paths to a storage device. These modules are a server-side plug-in to the Microsoft MPIO framework. A native DSM, provided by Microsoft, comes by default with the Microsoft MPIO software, but storage providers can develop their own DSMs containing the hardware-specific information needed to optimize connectivity with their storage arrays. Vendor-specific DSMs usually provide better write performance, lower latency, etc., while the Microsoft DSM offers more basic functions.

Installation of the Microsoft DSM

The Microsoft DSM comes with the Microsoft iSCSI Initiator for Windows Server 2003 and is an optional component for Windows Server 2008 that is not installed by default.

For Windows 2012: See the Microsoft Multipath I/O (MPIO) Users Guide for Windows Server 2012 at http://www.microsoft.com/en-us/download/details.aspx?id=30450


For Windows 2008: To use the Microsoft DSM with all versions of Windows Server 2008, you must install MPIO and enable it. Additional details for some steps are in the white paper available from http://www.microsoft.com/mpio

- Connect to a volume using the iSCSI initiator.
- In the MPIO Control Panel applet, click the Discover Multi-Paths tab, select the Add support for iSCSI devices check box, and click Add.

The check box does not become active unless you are logged in to an iSCSI session.

The system asks for a reboot for the policy to take effect.


After the reboot, the MPIO-ed Devices tab shows the addition of MSFT2005iSCSIBusType_0x9.

Connect to a volume using the iSCSI Initiator and select the Enable multi-path check box.


When you connect to the volume, set your MPIO load balancing.

For Windows 2003: The Microsoft DSM is installed along with the iSCSI Initiator.


Installation of the Storage vendor DSM

Download the specific DSM for your iSCSI storage array from your storage vendor's website.
NOTE: The HP StoreVirtual 4000 Storage DSM for MPIO software is not supported with Accelerated iSCSI HBA under Windows Server 2003 and Windows Server 2008. To get more information about the HP StoreVirtual 4000 Storage DSM, see http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03041928/c03041928.pdf

- Install the DSM for MPIO. A reboot might be required.
- Once the DSM for MPIO is installed, there are no additional settings; all iSCSI volume connections made to an iSCSI Storage System will attempt to connect with the storage vendor's DSM for MPIO.


Connecting volumes with MPIO

Open the Microsoft iSCSI Initiator:
o For Windows 2012, open the Server Manager console, go to Tools, then select iSCSI Initiator.

o For Windows Server 2008, open the Control Panel then enter iSCSI in the search tab


On the Discovery tab, select Discover Portal

Enter the IP address of the iSCSI Storage System, then click OK.

NOTE: iSCSI SAN vendors usually recommend using a Virtual IP address; refer to the iSCSI SAN vendor documentation.

On the Targets tab, you should discover a new iSCSI target if a LUN has been correctly presented to the server.

Select the volume to log on to and click Connect


Select the Enable multi-path check box.

NOTE: Do not select the Enable Multi-path checkbox if your iSCSI Storage System is not supporting load balanced iSCSI access.

Click OK to finish.
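On Windows Server 2012, the same discovery and logon can also be done with the built-in iSCSI PowerShell cmdlets. A minimal sketch, where the portal IP and target IQN are placeholders to replace with your own values:

```shell
# Register the discovery portal (Discovery tab / Discover Portal).
New-IscsiTargetPortal -TargetPortalAddress "192.168.5.20"

# List the targets discovered through that portal (Targets tab).
Get-IscsiTarget

# Log on with multipath enabled and make the connection persistent
# across reboots (Connect with "Enable multi-path" checked).
Connect-IscsiTarget -NodeAddress "iqn.2003-10.com.example:volume1" `
    -IsMultipathEnabled $true -IsPersistent $true
```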


Multipath Verification

To verify the DSM for MPIO operation, from the Targets tab, select the volume (now connected) and click Properties:

You should see here the multiple sessions of this target, as the DSM for MPIO automatically builds a data path to each storage node in the storage cluster.

If after a few minutes only one session is available (you can click Refresh several times), it may be necessary to configure the multipathing manually.


Manual Multipath configuration

Click on Add session

Select the Enable multi-path check box.

- Click Advanced to open the Advanced Settings window.
- Configure the Advanced Settings as follows:
  o For Local adapter, select the first Emulex OneConnect OCe11100, iSCSI Storport...
  o For Source IP, select the IP address of the iSCSI HBA to connect to the volume.


For Target portal, select the IP of the iSCSI Storage System containing the volume.

- Click OK to close the Advanced Settings dialog.
- Click OK to finish logging on.
- Repeat the same steps, this time using the second Emulex adapter:
  o For Source IP, select the IP address of the iSCSI HBA to connect to the volume.
  o For Target portal, select the IP of the iSCSI Storage System containing the volume.

Click OK to close the Advanced Settings dialog.


- Click OK again to finish logging on.
- You should now see the second session using the second path:

NOTE: It is recommended to test the failover to validate the multipath configuration.


Using the Microsoft iSCSI Software Initiator in conjunction with Accelerated iSCSI support

Besides the Accelerated iSCSI attached disk, it is also possible to use the software iSCSI initiator to connect additional volumes using one of the NIC adapter(s). This section describes the different configuration steps required.

Virtual Connect Profile Configuration

From the Windows server's VC profile, make sure that at least one server NIC is connected to an iSCSI network or to a network where an iSCSI device resides:

Software iSCSI

Accelerated iSCSI


Enabling jumbo frames on the NIC adapters under Windows 2012

Windows PowerShell can be used:

Set-NetAdapterAdvancedProperty -Name "<Adapter name>" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

OR

Go to the Adapter Properties

Select Configure

Click on the Advanced tab


Then Jumbo Packet

Select the MTU size supported by the network infrastructure and the iSCSI Storage System
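The resulting value can be checked from PowerShell as well. A minimal sketch, where the adapter name is a placeholder:

```shell
# Read back the Jumbo Packet keyword; DisplayValue should show the
# MTU size selected above (e.g. 9014 bytes).
Get-NetAdapterAdvancedProperty -Name "<Adapter name>" -RegistryKeyword "*JumboPacket"
```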
Enabling jumbo frames on the NIC adapters under Windows 2008/2003

- From the HP Network Configuration Utility, select the port connected to the iSCSI-1 VC network.
- Click Properties.
- Select the Advanced Settings tab, then click on Jumbo Packet.


Select the MTU size supported by the network infrastructure and the iSCSI Storage System

- Click OK to close.
- Do the same on the second adapter, connected to the iSCSI-2 VC network.


Installation of the iSCSI Storage DSM for MPIO

A vendor-specific DSM for the iSCSI storage array can be installed for better performance. If not available, the Microsoft DSM can be used.

Microsoft iSCSI Software configuration with multiple NICs

- Open the Microsoft iSCSI initiator.
- Select the Discovery tab.
- Click on Discover Portal.

Enter the IP address of the iSCSI target system connected to one of the server NIC networks (e.g. iSCSI-1 or iSCSI-2).

Click OK


Select the Target tab


A new discovered iSCSI target should appear in the list if a LUN has been correctly presented to the server.

- Select the new target (inactive) and click Properties.
- Click Add session.
- Select Enable multi-path, then click Advanced.


Configure the Advanced Settings as follows:


o For Local adapter, select Microsoft iSCSI Initiator

For Source IP, select the IP address of the first NIC connected to iSCSI-1 VC Network

For Target portal, select the IP of the iSCSI Storage System containing the volume.

- Click OK to close the Advanced Settings dialog.
- Click OK to finish logging on.


Repeat the same steps, but this time using the second NIC adapter, connected to the iSCSI-2 VC network, and the corresponding IP address.

You should see now several sessions using the first and second paths:

Click OK to close the Properties window

Two iSCSI targets are now available via the Microsoft iSCSI initiator: one using the accelerated iSCSI ports (iSCSI HBAs) and the other using the NIC adapters with software iSCSI.


Accelerated iSCSI with VMware vSphere Server


After the creation of a VC profile with Accelerated iSCSI enabled, proceed with the following steps under vSphere:

- Installation of the Emulex iSCSI drivers
- Installation of the Emulex OneCommand Manager (optional)
- IP configuration of the iSCSI HBA ports
- Connection to the iSCSI volumes

Installing the Emulex iSCSI drivers

Make sure that the latest Emulex iSCSI (be2iscsi) drivers for VMware ESX have been installed. To obtain these drivers, visit the hp.com link below corresponding to the Emulex card you have in the server:

HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5033634&taskId=135&prodTypeId=329290&prodSeriesId=5033632&lang=en&cc=us

HP NC553i 10Gb 2-port FlexFabric Server Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=4324854&taskId=135&prodTypeId=3709945&prodSeriesId=4296125&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554FLB Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215362&taskId=135&prodTypeId=3709945&prodSeriesId=5215333&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554M Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215390&taskId=135&prodTypeId=3709945&prodSeriesId=5215389&lang=en&cc=us

Then select your VMware version and download the HP Emulex 10GbE iSCSI Driver.

NOTE: Hardware and software iSCSI can be managed by vSphere without any additional utility. Accelerated iSCSI uses a specific port (i.e. a vmhba) and requires the installation of the HBA drivers included in the be2iscsi package.

If the CNA is properly configured, you can view the vmhba in the list of initiators available for configuration:

- Log in to the vSphere Client and select your host from the inventory panel
- Select the Configuration tab and click Storage Adapters in the Hardware panel
- The Emulex OneConnect vmhbas appear in the list of storage adapters


Installing Emulex OneCommand Manager

OneCommand Manager (OCM) is the Emulex utility to manage the NC55x 10Gb FlexFabric Converged Network Adapters. OCM is not required to connect a VMware server to an iSCSI volume, but it delivers many advanced features for management, configuration, status monitoring and online maintenance.

OneCommand Manager (OCM) for VMware can be divided into two installations:

- OneCommand Manager for VMware vCenter
- OCM application agent in VMware ESX Server

OneCommand Manager for VMware vCenter

For more information, see the OneCommand Manager for VMware vCenter User Manual (in particular "Installing and Uninstalling OCM for VMware vCenter Components", page 7):
http://www-dl.emulex.com/support/vmware/vcenter/110/vcenter_user_manual.pdf

The OneCommand Manager for VMware vCenter can be downloaded from:
http://www.emulex.com/downloads/emulex/vmware/vsphere-41/management.html
http://www.emulex.com/downloads/emulex/vmware/vsphere-50/management.html
http://www.emulex.com/downloads/emulex/vmware/vsphere-51/management.html

OCM application agent in VMware ESX Server

For more information, see the OneCommand Manager Application User Manual:
http://www-dl.emulex.com/support/elx/r32/b16/docs/apps/ocm_gui_manual_elx.pdf

Installation in a VMware ESX Server

To install the OneCommand Manager Application agent in VMware ESX Server:

- Log into the ESX Server COS
- Download the Application Kit (OCM core application) from:
  http://www.emulex.com/downloads/emulex/vmware/vsphere-41/management.html
  http://www.emulex.com/downloads/emulex/vmware/vsphere-50/management.html
  http://www.emulex.com/downloads/emulex/vmware/vsphere-51/management.html
- Copy the elxocmcore-esx<NN>-<version>.<arch>.rpm file to a directory on the install machine
- cd to the directory to which you copied the rpm file
- Install the rpm. Type:
  rpm -Uvh elxocmcore-esx<NN>-<version>.<arch>.rpm


Where NN is 40 for an ESX 4.0 system or 41 for an ESX 4.1 system.
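As a sketch of how the package name is composed from its three variable parts (the version and architecture strings below are placeholders, not a real release):

```shell
# Build the OCM core rpm file name and its install command.
esx_ver="41"            # 40 = ESX 4.0, 41 = ESX 4.1
ocm_ver="5.1.42.8"      # placeholder package version
arch="x86_64"           # placeholder architecture
rpm_file="elxocmcore-esx${esx_ver}-${ocm_ver}.${arch}.rpm"
echo "rpm -Uvh ${rpm_file}"
```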

The rpm contents are installed in /usr/sbin/ocmanager; the OneCommand Manager application Command Line Interface is also located in this directory. OneCommand Manager uses the standard CIM interfaces to manage the adapters, but with the NC55x adapters it is necessary to install the latest patches and updates from VMware (using the vCenter Update Manager) or to manually import the CIM provider supplied by Emulex, as described below:

- Download the CIM Provider from the same links provided above
- Connect the vSphere client to a vCenter Server system on which Update Manager is registered
- In the navigation bar, select Home > Solutions and Applications > Update Manager
- Select the Patch Repository tab to display all patches
- Select Import Patches
- Click Browse and select the applicable elx-esx-x.x.x-emulex-cim-provider-<version>-offline_bundle<release>.zip file
- Then update the host with this new imported patch


For more information about the ESX patch management activities, refer to the VMware website.

Installation in a VMware ESXi Server

To install the Emulex CIM Provider in a VMware ESXi hypervisor environment, use the esxupdate command line utility and perform the following steps:

- Enable Secure Shell (SSH) on the VMware hypervisor host
- Transfer the Emulex CIM Provider vib file to the VMware hypervisor host
- Log into the VMware hypervisor host and execute the following command:
  esxupdate -b file://<Emulex Provider vib filepath> --nosigcheck --maintenancemode update

At the end of the installation, you should get the following information using the Emulex OneCommand tab:


NOTE: Only OneCommand Manager for VMware vCenter 1.1.0 or later can discover and view iSCSI port information.


Configuring the IP addresses of the iSCSI ports

In order to log into the iSCSI target, an IP address must be assigned to each vmhba iSCSI port:

- From the vSphere client, select the host and click the Configuration tab, then Storage Adapters
- Select the first vmhba and click Properties
- Then click Configure
- Enter a static IP address

NOTE: This IP address must be in the same subnet as the one configured in the iSCSI Storage System.

- Click OK to save the changes
- Repeat the same configuration for vmhba2
- Click OK


Configuring the iSCSI Volumes

Configuring the iSCSI target discovery address

- Select vmhba1 and click Properties
- Select the Dynamic Discovery tab
- Click Add
- Enter the IP address of the iSCSI Storage System, then click OK

NOTE: iSCSI SAN vendors usually recommend using a Virtual IP address; please refer to your iSCSI Storage System documentation.


- Click Close, then accept the rescan message

Newly discovered iSCSI targets should appear in the list if a LUN has been correctly presented to the server.

- Repeat the same configuration on vmhba2

At the end, the iSCSI target should be discovered on both ports.


Adding the iSCSI Datastore

- From the Hardware panel, click Storage
- Click Add Storage
- Select the Disk/LUN storage type and click Next
- Select the freshly discovered iSCSI device to use for your datastore and click Next
- Click Next
- Enter a name for the Accelerated iSCSI datastore
- Adjust the file system values if needed, then click Next
- Click Finish

Multipath configuration checking

To check the multipathing configuration:

- Select Storage from the Hardware panel
- From the Datastores view, select the iSCSI datastore
- Make sure the number of paths displayed is more than one
- You can click Properties, then Manage Paths


Details about the multipath settings are displayed:


Using the VMware iSCSI Software Initiator in conjunction with Accelerated iSCSI support

The use of an iSCSI Software initiator in conjunction with the Emulex iSCSI hardware initiator is possible under VMware with Virtual Connect. This section describes the different configuration steps required.

Virtual Connect Profile Configuration

From the ESX server's VC profile, make sure that at least one server NIC is connected to an iSCSI network or to a network where an iSCSI device is available:

Software iSCSI

Accelerated iSCSI

vSphere iSCSI Software configuration with multiple NICs

- Open the vSphere Client, go to the Configuration tab, then Networking
- Click Add Networking
- Select VMkernel and click Next
- Select Create a virtual switch
- Select the two vmnics connected to the iSCSI networks (e.g. iSCSI-1 and iSCSI-2)
- Click Next
- Enter a network label (e.g. iSCSI-1), then click Next
- Specify the IP settings (Static or DHCP), then click Next
- Click Finish


- Select the vSwitch1 just created and click Properties
- Under the Ports tab, click Add
- Select VMkernel and click Next
- Enter a network label (e.g. iSCSI-2), then click Next
- Specify the IP settings (Static or DHCP), then click Next
- Click Finish

Mapping each iSCSI port to just one active NIC

- Select iSCSI-1 on the Ports tab, then click Edit
- Click the NIC Teaming tab and select Override vSwitch failover order
- Move the second vmnic to the Unused Adapters
- Repeat the same steps for the second iSCSI port (iSCSI-2), but this time using the first vmnic as the Unused Adapter
- Review the configuration, then click Close

Note that the iSCSI-1 VMkernel interface network is linked to the vmk1 port name and iSCSI-2 is linked to vmk2.


Binding iSCSI Ports to the iSCSI Adapters

From the vSphere CLI, enter:

# esxcli swiscsi nic add -n vmk1 -d vmhba32
# esxcli swiscsi nic add -n vmk2 -d vmhba32
# esxcli swiscsi nic list -d vmhba32

Enable Jumbo Frames on the vSwitch

For an MTU of 9000, enter on the vSphere CLI:

# esxcfg-vswitch -m 9000 vSwitch1
# esxcfg-vswitch -l


Enable Jumbo Frames on the VMkernel interface

For an MTU of 9000, enter on the vSphere CLI:

# esxcfg-vmknic -m 9000 iSCSI-1
# esxcfg-vmknic -m 9000 iSCSI-2
# esxcli swiscsi nic list -d vmhba32
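After raising the MTU on both the vSwitch and the VMkernel interfaces, you can verify that jumbo frames pass end to end with vmkping. This is a minimal sketch assuming a 9000-byte MTU; the vmkping command is printed rather than executed, so it can be reviewed before running it on the ESX console against your own storage IP:

```shell
# Largest non-fragmented ICMP payload for a 9000-byte MTU:
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
mtu=9000
payload=$((mtu - 28))
# -d sets the "don't fragment" bit; a successful reply means the
# vSwitch, VC network, upstream switches and storage all pass MTU 9000.
echo "vmkping -d -s ${payload} <iSCSI-storage-IP>"
```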

Configuring the iSCSI Software target discovery address

- Log in to the vSphere Client and select your server from the inventory panel
- Click the Configuration tab, then click Storage Adapters in the Hardware panel


- Select the iSCSI Software Adapter, then click Properties
- Click Configure, then check the Enabled option
- Enter the same iSCSI name as the one already configured on the Accelerated iSCSI HBAs:

NOTE: The host assigns a default iSCSI name to the initiator that must be changed in order to follow the iSCSI RFC-3720, where it is stated to use the same IQN for all connections for a given host and NOT associate different ones to different adapters.
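The RFC-3720 naming convention mentioned above (one IQN per host, shared by all of its initiator ports) can be sketched as follows; the date and domain below are placeholders you would replace with your own registration date and DNS domain:

```shell
# Compose an iqn.<yyyy-mm>.<reversed-domain>:<host> initiator name.
host="esx01"                  # placeholder hostname
domain="example.com"          # placeholder DNS domain
reversed=$(echo "$domain" | awk -F. '{for(i=NF;i>0;i--) printf "%s%s",$i,(i>1?".":"")}')
iqn="iqn.2013-01.${reversed}:${host}"
echo "$iqn"
# Use this same string for the software initiator and for every
# Accelerated iSCSI HBA port of the host.
```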

- Click OK
- Go to the Dynamic Discovery tab and click Add
- Enter the iSCSI Storage System IP address, then click OK, then Close
- Accept the rescan of the freshly configured iSCSI Software HBA


Newly discovered iSCSI targets should appear in the list if a LUN has been correctly presented to the server.

- Click the Paths tab and make sure that two paths are discovered

Adding the iSCSI Software Datastore

- From the Hardware panel, click Storage
- Click Add Storage
- Select the Disk/LUN storage type and click Next
- Select the freshly discovered iSCSI device to use for your datastore and click Next
- Click Next
- Enter a name for the software iSCSI datastore


- Adjust the file system values if needed, then click Next
- Click Finish

Two datastores are now available: one uses the iSCSI Accelerated ports (iSCSI HBAs) and the other uses iSCSI software traffic flowing through the NIC adapters.

For more information about iSCSI

The following links may be useful for further information regarding iSCSI:

Microsoft Windows 2008 R2
See the HP LeftHand SAN iSCSI Initiator for Microsoft Windows Server 2008:
http://h10032.www1.hp.com/ctg/Manual/c01750839.pdf

VMware vSphere Server
ESXi 5.1 vSphere Storage Guide:
http://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-511-storage-guide.pdf
ESXi 5.0 vSphere Storage Guide:
http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage-guide.pdf

Citrix XenServer
See http://docs.vmd.citrix.com and
http://forum.synology.com/wiki/index.php/How_to_use_iSCSI_Targets_on_Citrix_XenServer

Linux Red Hat Enterprise
See the Red Hat Storage Administration Guide:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/Storage_Administration_Guide/Red_Hat_Enterprise_Linux-6-Storage_Administration_Guide-en-US.pdf

Linux SUSE Linux Enterprise Server 11
Detailed documentation can be found at http://www.novell.com/documentation/sles11/


Boot from iSCSI

The iSCSI Boot feature allows a server to boot from a remote disk (known as the iSCSI target) on the network without having to directly attach a boot disk. The main benefits are:

- Centralized boot process
- Cheaper (diskless) servers
- Less server power consumption - servers can be denser and run cooler without internal storage
- Boot-from-SAN-like benefits at attractive costs
- Easier server replacement - you can replace servers in minutes; the new server points to the old boot location
- Easier backup processes - the system boot images in the SAN can be backed up as part of the overall SAN backup procedures

NOTE: Under Virtual Connect management, Accelerated iSCSI automatically takes place when iSCSI boot is enabled.

Boot from iSCSI: Creating a Virtual Connect profile

The following steps provide an overview of the procedure to create a Boot from iSCSI Virtual Connect profile:

- Open Virtual Connect Manager
- From the Define menu, select Server Profile in order to create a new VC profile

Note: VCM assigns FCoE connections by default!

NOTE: A server with one FlexFabric adapter can be configured with a unique personality: either all Ethernet, or Ethernet/iSCSI, or Ethernet/FCoE. Therefore it's not possible in this case to enable both FCoE and iSCSI connections at the same time, so the two existing FCoE connections must be deleted. On the other hand, a server with multiple FlexFabric Adapters can be configured with both iSCSI and FCoE connections mapped to different adapters.

- If you have a unique FlexFabric Adapter, delete the two FCoE connections by clicking Delete; otherwise jump to the next bullet:


- In the iSCSI HBA Connections section, click Add
- In the Network Name column, click Select a network
- Select your iSCSI dedicated VC network and click OK

NOTE: The multiple network feature (i.e. when using 802.1Q VLAN tagging) is not supported for iSCSI connections.


- In the Port Speed column, you can adjust the speed settings (Auto / Preferred / Custom)
- In the Boot Setting column, select Primary

NOTE:
- Disabled: Only Accelerated iSCSI is available. Boot is unavailable.
- Primary: Enables you to set up a fault-tolerant boot path and displays the screen for Flex-10 iSCSI connections. Accelerated iSCSI is enabled.
- USE-BIOS: Indicates that boot will be enabled or disabled using the server iSCSI BIOS utility.

A new window pops up automatically when Primary is selected:


Two choices are offered at this point:

- Enter all iSCSI boot parameters manually for the primary connection (see Appendix 1 for help).
- Use the iSCSI Boot Assistant (if previously configured; see below for more details).

NOTE: Boot Assistant supports only the HP StoreVirtual Storage solution today; it is the recommended method as it simplifies the procedure and avoids typing errors.

Boot Assistant prerequisites:

- Make sure to update your HP StoreVirtual 4000 Storage solution with the latest software version (HP StoreVirtual Storage is the new name for HP LeftHand Storage and HP P4000 SAN solutions).
- Servers and volumes must be created on the StoreVirtual storage.
- StoreVirtual volumes must be assigned to servers.
- The StoreVirtual user and password credentials must be configured in VCM:
  - Click the Storage Mgmt Credentials link on the left-hand side menu
  - Click Add and enter the required information


Steps with Boot Assistant:

- Click Use Boot Assistant
- Select the iSCSI target from the Management Targets drop-down list, then click Retrieve
- Select the Boot Volume
- After all entries have been correctly entered, either manually or by the Boot Assistant, save the iSCSI Boot parameters by clicking Apply

NOTE: Boot Assistant populates the iSCSI Boot Configuration and Authentication information. The user needs to fill in the Initiator Network Configuration, either by checking Use DHCP to retrieve the network configuration or by filling in the IP address and Netmask fields.


Now we can define the second iSCSI connection:

- In the iSCSI HBA Connections section, click Add

NOTE: Allowing more than one iSCSI application server to connect to a volume concurrently, without cluster-aware applications or without an iSCSI initiator with Multipath I/O software, could result in data corruption.

- In the Network Name column, select your second iSCSI dedicated VC network
- In the Port Speed column, you can adjust the speed settings (Auto / Preferred / Custom)
- In the Boot Setting column, select Secondary
- A new window pops up automatically when Secondary is selected; enter all iSCSI boot parameters for the secondary connection. The settings are the same as for the primary configuration (you can consult Appendix 1 for help), or you can use the iSCSI Boot Assistant (if previously configured).
- Save by clicking Apply.


- Additional VC Ethernet Network connections must be added to the VC profile in order to give the server other network access such as Service Console, VM Guest VLAN, VMotion, etc. This obviously depends on your server application.
- When done, you can assign your profile to a server bay
- Click Apply to save the profile


- The server can now be powered on (using either the OA, the iLO, or the Power button)

NOTE: On servers with a recent System BIOS, pressing any key is required to view the Option ROM boot details.

While the server starts up, a screen similar to this one should be displayed:

- Make sure the iSCSI disk is shown during the Emulex iSCSI scan


Validation of the iSCSI configuration:

- Initiator name
- iSCSI port 1 - make sure you get an IP address if DHCP is enabled
- iSCSI Disk Volume - make sure you see the drive information
- iSCSI port 2 - make sure you get an IP address if DHCP is enabled

If everything is correct, you can start with the OS deployment; otherwise consult the Troubleshooting section.


Boot from iSCSI: Installing Microsoft Windows Server 2012

An iSCSI boot Windows Server installation does not require any particular instructions, except that the iSCSI drivers must be provided right at the beginning of the Windows installation in order to discover the iSCSI drive presented to the server.

- Launch the Windows installation
- At the "Where do you want to install Windows?" step, the Windows installation does not detect any iSCSI drive
- It is therefore necessary to provide at this stage the iSCSI drivers of the Emulex card (NC55x 10Gb FlexFabric Adapter) in order to detect the iSCSI LUN target. To obtain these drivers, click the web link below corresponding to the Emulex card you have in the server:


HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5033634&taskId=135&prodTypeId=329290&prodSeriesId=5033632&lang=en&cc=us

HP NC553i 10Gb 2-port FlexFabric Server Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=4324854&taskId=135&prodTypeId=3709945&prodSeriesId=4296125&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554FLB Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215362&taskId=135&prodTypeId=3709945&prodSeriesId=5215333&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554M Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215390&taskId=135&prodTypeId=3709945&prodSeriesId=5215389&lang=en&cc=us

Then select Microsoft Windows Server 2012 and download the HP Emulex 10GbE iSCSI Driver.

- Using the iLO Virtual Drives menu, mount a virtual folder and select the iSCSI folder located where your HP Emulex drivers have been unpacked


- Then select Load driver
- Browse the just-mounted iLO folder and click OK


- Windows should detect the be2iscsi.inf file contained in the folder
- Click Next


The installation should finally detect the iSCSI Boot volume that has been presented to the server.

- Select the iSCSI Volume and click Next; you can then complete the Windows installation.

NOTE: Sometimes the same drive is detected several times by the Windows installation due to multipathing; this is because the server profile under VCM is configured with two iSCSI ports. In such a case, the Windows installation automatically turns the duplicated drives offline and leaves only one drive that can be selected.


Boot from iSCSI: Installing Microsoft Windows Server 2008

An iSCSI boot Windows Server installation does not require any particular instructions, except that the iSCSI drivers must be provided right at the beginning of the Windows installation in order to discover the iSCSI drive presented to the server.

- Launch the Windows installation
- At the "Where do you want to install Windows?" stage, the Windows installation does not detect any iSCSI drive
- It is therefore necessary to provide at this stage the iSCSI drivers of the Emulex card (NC55x 10Gb FlexFabric Adapter) in order to detect the iSCSI LUN target.


To obtain these drivers, click the web link below corresponding to the Emulex card you have in the server:

HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5033634&taskId=135&prodTypeId=329290&prodSeriesId=5033632&lang=en&cc=us

HP NC553i 10Gb 2-port FlexFabric Server Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=4324854&taskId=135&prodTypeId=3709945&prodSeriesId=4296125&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554FLB Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215362&taskId=135&prodTypeId=3709945&prodSeriesId=5215333&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554M Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215390&taskId=135&prodTypeId=3709945&prodSeriesId=5215389&lang=en&cc=us

Then select Microsoft Windows Server 2008 and download the HP Emulex 10GbE iSCSI Driver.

- Using the iLO Virtual Drives menu, mount a virtual folder and select the iSCSI folder located where your HP Emulex drivers have been unpacked


- Then select Load driver
- Browse the just-mounted iLO folder and click OK


- Windows should detect the be2iscsi.inf file contained in the folder
- Click Next


The installation should finally detect the iSCSI Boot volume that has been presented to the server.

- Select the iSCSI Volume and click Next; you can then complete the Windows installation.


NOTE: Sometimes the same drive is detected several times by the Windows installation due to multipathing; this is because the server profile under VCM is configured with two iSCSI ports. In such a case, the Windows installation automatically turns the duplicated drives offline and leaves only one drive that can be selected.


Boot from iSCSI: Installing VMware vSphere 5

Prerequisites

- Ensure that the LUN is presented to the ESX system as LUN 0. The host can also boot from LUN 255.
- Ensure that no other system has access to the configured LUN.

Manual ESXi installation method

The installation of an iSCSI Boot server with ESXi 5.0 is quite straightforward and does not require any specific steps:

- Download the installation ISO from the HP website
  https://h20392.www2.hp.com/portal/swdepot/displayProductsList.do?category=SVIRTUAL
  or from the VMware website
  https://www.vmware.com/tryvmware/?p=esxi
- Burn the installation ISO to a CD, or move the ISO image to a location accessible using the virtual media capabilities of iLO
- Boot the server and launch the HP VMware ESXi installation


- Press Enter
- Press F11 to accept the agreement


The iSCSI volume on which to install ESXi should be detected without difficulty by the installer:

- Once the disk is selected, you can carry on with the next steps


- At the end, you can reboot the server to start ESXi 5.0
- Once rebooted, it is necessary to install the latest Emulex drivers and firmware; see
  http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c03005737


Boot from iSCSI: Installing VMware vSphere 4.1

Prerequisites

- Ensure that the LUN is presented to the ESX system as LUN 0. The host can also boot from LUN 255.
- Ensure that no other system has access to the configured LUN.

Manual ESX installation method

Obtain the drivers

The VMware ESX 4.1 CD does not include the drivers for the FlexFabric 10Gb Converged Network Adapters (NC55x 10Gb FlexFabric Adapter), so during the install it's necessary to provide the iSCSI drivers (be2iscsi) and the NIC drivers (be2net). Visit the following links to download the latest FlexFabric Adapter drivers:

HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5033634&taskId=135&prodTypeId=329290&prodSeriesId=5033632&lang=en&cc=us

HP NC553i 10Gb 2-port FlexFabric Server Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=4324854&taskId=135&prodTypeId=3709945&prodSeriesId=4296125&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554FLB Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215362&taskId=135&prodTypeId=3709945&prodSeriesId=5215333&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554M Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215390&taskId=135&prodTypeId=3709945&prodSeriesId=5215389&lang=en&cc=us

Then select VMware ESX/ESXi 4.1 and download the following two Emulex 10GbE drivers.

- Start the VMware installation by inserting the ESX 4.1 DVD or ISO image through the iLO Virtual Media.


- At the Custom Drivers page, click Yes and Add to install first the Emulex NIC drivers (be2net)
- Then insert the just-downloaded Emulex ESX 4.1 NIC driver CD and select the only module to import
- The NIC driver should now be listed in the Custom Drivers window


- Click Add again to provide the Emulex iSCSI drivers (be2iscsi); insert the Emulex ESX 4.1 iSCSI driver CD and select the module to import
- You should now have the two drivers listed; click Next
- Click Yes at the Load Drivers warning


- Later on, make sure the Emulex Network Adapter is properly detected, then configure the network settings
- At the setup type window, select Advanced setup


If the iSCSI volume has been properly presented and the correct iSCSI driver has been loaded, the "Select a location to install ESX" window should show the iSCSI Boot volume presented to the server:

- Select the iSCSI volume and click Next, then follow the traditional installation procedures. Check VMware's documentation for more details.


View iSCSI Storage Adapter Information

Procedure:

- Using a vSphere client, select the host and click the Configuration tab, then in Hardware, select Storage Adapters.
- To view details for a specific iSCSI adapter, select the adapter from the Storage Adapters list.
- To list all storage devices the adapter can access, click Devices.
- To list all paths the adapter uses, click Paths.


How to configure/change/check iSCSI Multipathing on ESX 4.1

Generally there is nothing to change under ESX; the default multipathing settings used by VMware are correct. However, if changes are required, it's possible to modify the path selection policy and the preferred path.

Procedure:

- Select the host and click the Configuration tab, then in Hardware, select Storage.
- From the Datastores view, select the datastore and click Properties
- Click Manage Paths


Set the multipathing policy:

- Fixed (VMware) - This is the default policy for LUNs presented from an Active/Active storage array. It uses the designated preferred path flag, if it has been configured; otherwise, it uses the first working path discovered at system boot time. If the ESX host cannot use the preferred path, or it becomes unavailable, ESX selects an alternative available path. The ESX host automatically returns to the previously defined preferred path as soon as it becomes available again.
- VMW_PSP_FIXED_AP - Extends the Fixed functionality to active-passive and ALUA-mode arrays.
- Most Recently Used (VMware) - This is the default policy for LUNs presented from an Active/Passive storage array. It selects the first working path discovered at system boot time. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available. ESX does not return to the previous path if, or when, it becomes available again; it remains on the working path until that path, for any reason, fails.
- Round Robin (VMware) - Uses automatic path selection, rotating through all available paths and enabling the distribution of load across the configured paths. For Active/Passive storage arrays, only the paths to the active controller will be used in the Round Robin policy. For Active/Active storage arrays, all paths will be used in the Round Robin policy. Note: This policy is not currently supported for Logical Units that are part of a Microsoft Cluster Service (MSCS) virtual machine.
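The same policies can also be inspected and changed from the vSphere CLI on ESX 4.x. The sketch below only echoes the commands (a dry run), and the device identifier is a placeholder; substitute the naa identifier of your iSCSI LUN and verify the exact esxcli options against your release before executing:

```shell
# Dry run: print the commands to inspect and change the PSP of a LUN.
device="naa.600508b4000971fa0000a00000670000"        # placeholder naa id
list_cmd="esxcli nmp device list -d ${device}"        # show current policy
set_cmd="esxcli nmp device setpolicy -d ${device} --psp VMW_PSP_RR"
echo "$list_cmd"
echo "$set_cmd"
```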


- For the Fixed multipathing policy, right-click the path to designate as preferred and select Preferred.
- Click OK to save and exit the dialog box.
- Reboot your host for the change to take effect on the storage devices currently managed by your host.


Boot from iSCSI: Installing Red Hat Enterprise Linux 5 Update 4

Obtain the iSCSI drivers

At the time of writing, the Red Hat Enterprise Linux DVD does not include the drivers for the FlexFabric 10Gb Converged Network Adapters (NC55x 10Gb FlexFabric Adapter), so during the install it's necessary to provide the iSCSI driver disk (be2iscsi). To obtain these drivers, click the web link below corresponding to the Emulex card you have in the server:

HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5033634&taskId=135&prodTypeId=329290&prodSeriesId=5033632&lang=en&cc=us

HP NC553i 10Gb 2-port FlexFabric Server Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=4324854&taskId=135&prodTypeId=3709945&prodSeriesId=4296125&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554FLB Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215362&taskId=135&prodTypeId=3709945&prodSeriesId=5215333&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554M Adapter:
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215390&taskId=135&prodTypeId=3709945&prodSeriesId=5215389&lang=en&cc=us

Then select Red Hat Enterprise Linux Server and download the HP NC-Series ServerEngines Driver Update Disk for Red Hat Enterprise Linux 5 Update 4.


- Start the RHEL 5.4 installation by inserting the DVD or ISO image through the iLO Virtual Media.
- At the boot prompt:
  - If there is a single I/O path to the iSCSI device, enter: linux dd
    During the setup, this command prompts you to provide a driver disk for the FlexFabric CNA.
  - With multiple I/O paths to the iSCSI device, enter: linux dd mpath
    This command enables multipath and prompts you to provide a driver disk for the FlexFabric CNA.
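The choice between the two boot lines depends only on how many iSCSI paths the VC profile presents to the server; a trivial sketch of the decision:

```shell
# Pick the RHEL installer boot line based on the number of iSCSI paths.
paths=2   # e.g. two iSCSI HBA connections defined in the VC profile
if [ "$paths" -gt 1 ]; then
    bootline="linux dd mpath"   # multipath install + driver-disk prompt
else
    bootline="linux dd"         # single-path install + driver-disk prompt
fi
echo "$bootline"
```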


- When prompted "Do you have a driver disk?", click Yes
- Then insert the floppy disk image using the iLO Virtual Media and click OK.


- Notice the floppy being read; pressing <Alt> <F3> will show the be2iscsi driver being loaded.
- Press <Alt> <F1> to return to the main install screen.
- When prompted again with "Do you wish to load any more driver disks?", click No, then follow the traditional installation procedures. Check Red Hat's documentation for more details.
- At the "Select the drive to use for this installation" window, make sure only one iSCSI drive is proposed (with multipathing detected for a multipath iSCSI target installation).


Boot from iSCSI: Installing Suse Linux Enterprise Server 11


Obtain the iSCSI drivers
At the time of writing, the SUSE Linux Enterprise Server 11 CD does not include the drivers for the FlexFabric 10Gb Converged Network Adapters (NC55x 10Gb FlexFabric Adapter), so during the install it's necessary to provide the iSCSI driver disk (be2iscsi). To obtain the be2iscsi driver disk, refer to the following table:

HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5033634&taskId=135&prodTypeId=329290&prodSeriesId=5033632&lang=en&cc=us

HP NC553i 10Gb 2-port FlexFabric Server Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=4324854&taskId=135&prodTypeId=3709945&prodSeriesId=4296125&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554FLB Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215362&taskId=135&prodTypeId=3709945&prodSeriesId=5215333&lang=en&cc=us

HP FlexFabric 10Gb 2-port 554M Adapter
http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?lang=en&cc=us&prodNameId=5215390&taskId=135&prodTypeId=3709945&prodSeriesId=5215389&lang=en&cc=us

Then select SUSE LINUX Enterprise Server and download the HP ServerEngines Driver Update Disk for SUSE Linux Enterprise Server 11

Start the SUSE Linux Enterprise Server 11 CD1 installation by inserting the DVD or ISO image through iLO Virtual Media.


A menu will be displayed prompting for input.

Press F6 and select Yes to be prompted during the installation for the driver update disk.


Then click Installation


Then insert the floppy disk image using the iLO virtual Media and click OK.

After a while, you should get the following window with the newly mounted sda: drive listed; select sda: and click OK.


Notice the floppy is being read; pressing <Alt> <F3> will show the be2iscsi driver being loaded.

Press <Alt> <F1> to return to the main install screen, where you should now see the iSCSI drive(s).

Click Back and follow the traditional installation procedures as prompted. Check SUSE's documentation for more details.


If there are multiple I/O paths to the iSCSI device during the SUSE installation, you must enable multipath support: click Partitioning to open the Preparing Hard Disk page.

Click Custom Partitioning (for experts), then click Next.


Select the Hard Disks main icon, click the Configure button, select Configure Multipath, then click Yes when prompted to activate multipath.

This rescans the disks and shows the available multipath devices (such as /dev/mapper/3600a0b80000f4593000012ae4ab0ae65). Select one of the hard disks starting with /dev/mapper/, which should be detected twice if multipathing is used.

Click Accept to continue with the traditional installation procedures.


Troubleshooting
Emulex iSCSI Initiator BIOS Utility
The iSCSI Initiator BIOS Utility is an INT 13H option ROM resident utility that you can use to configure, manage, and troubleshoot your iSCSI configuration. NOTE: To get into the iSCSI utility, iSCSI must be enabled in the Virtual Connect profile!

With the Emulex iSCSISelect Utility you can:

Configure an iSCSI initiator on the network
Ping targets to determine connectivity with the iSCSI initiator
Discover and display iSCSI targets and corresponding LUNs
Initiate boot target discovery through Dynamic Host Configuration Protocol (DHCP)
Manually configure bootable iSCSI targets
View initiator properties
View connected target properties
NOTE: HP Virtual Connect profile takes precedence over the settings in this menu: If iSCSI SAN boot settings are made outside of Virtual Connect (using iSCSISelect or other configuration tools), Virtual Connect will restore the settings defined by the server profile after the server blade completes the next boot cycle. Only the USE-BIOS option in a VC Profile preserves the boot settings options set in the iSCSI Utility or through other configuration utilities.

To run the Emulex iSCSISelect Utility, press <Ctrl> + <S> when prompted:


NOTE: To reach the iSCSISelect Utility screen, it might be necessary to press any key during POST:

The iSCSI Initiator Configuration menu will appear after the BIOS initializes.

The iSCSI initiator name shown here is controlled by Virtual Connect.

A server with a VC profile without iSCSI boot parameters shows a factory default initiator name like iqn.1990-07.com.Emulex.xx-xx-xx-xx-xx, with the factory default MAC address of the iSCSI adapter shown in red.

A server with a VC profile with iSCSI Boot parameters shows the initiator name defined in the VC Profile.


Configuration checking

If you are facing connection problems with your iSCSI target, you can use the iSCSISelect Ping utility to troubleshoot your iSCSI configuration. If you don't see any iSCSI boot drive detected during POST (BIOS not installed is displayed during POST), you might need to check the iSCSI configuration written by VC.

From the iSCSI Initiator Configuration menu, tab to Controller Configuration menu, and press Enter. Select the first Controller and press Enter. Move to the iSCSI Target Configuration and press Enter.

Verify you have one iSCSI target; this target must correspond to your Virtual Connect profile configuration. Select the target and press Enter. You can verify the target name, the IP address, and the authentication method.


A screen without target information, as seen below, means that there is an issue somewhere:

Multiple reasons can lead to this issue:

Out-dated Emulex firmware.
Authentication information problem.
Virtual Connect profile not assigned (also make sure there is no issue reported by the VC Domain).
Network connectivity issue between the server and the iSCSI target.
iSCSI target problem: misconfiguration, wrong LUN masking, etc.
For further debugging, you can manually enter the target configuration to run a PING test:

Select Add New iSCSI target
Enter all the target information


Then select Ping and press Enter.

You must get the following successful result:

If ping is successful, then it's very likely that the authentication information is incorrect. If you still cannot connect to or ping your iSCSI target, refer to the following paragraphs.
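Outside of iSCSISelect, a similar basic connectivity test can be run from any host with network access to the storage, for example with a short TCP probe of the target portal. This is an illustrative sketch only; the function name and timeout are our choices, and a successful probe proves only IP reachability and an open portal port, not authentication or LUN presentation.

```python
import socket

def portal_reachable(ip: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Try a plain TCP connection to the iSCSI target portal.

    Returns True if the TCP handshake succeeds within the timeout.
    3260 is the default iSCSI portal port (RFC 3720).
    """
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `portal_reachable("192.168.1.19")` returning False points to a routing, VLAN, or vNet problem rather than to CHAP or LUN masking.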


Emulex OneCommand Manager (OCM)


The Emulex OneCommand Manager Application is a comprehensive management utility for Emulex converged network adapters (CNAs) and host bus adapters (HBAs) that provides a powerful, centralized adapter management suite, including discovery and reporting. It's an excellent tool for troubleshooting. OneCommand Manager contains a graphical user interface (GUI) and a command line interface (CLI). OneCommand Manager is available under:

Windows Server 2003/2008/2008 R2
Linux
VMware ESX Server


See http://www.emulex.com/downloads/emulex.html

Some of the interesting OneCommand Manager features available for iSCSI:

Run diagnostic tests on adapters
Reset / disable adapters
Manage an adapter's CEE settings
Discover iSCSI targets
Login to iSCSI targets from CNAs
View iSCSI target session information
Logout from iSCSI targets
Not all OneCommand Manager Application features are supported across all operating systems. Consult the Emulex OneCommand documentation.

The OneCommand Manager view below shows the iSCSI initiator with its NIC, iSCSI, and FCoE functions, the MAC addresses of the NICs, and the iSCSI target with LUN 0 reached from both Port 1 and Port 2.


LUN information

If you select the target, you can click on Sessions to get extensive diagnostics of the target session.

Diagnostics


Problems found with OneCommand Manager


Problem:
You have configured a VC server profile with two iSCSI ports, but OneCommand Manager or the iSCSI initiator shows only one iSCSI port connected to the iSCSI LUN.

Solution:
With some storage systems, such as HP StoreVirtual 4000 Storage, iSCSI load balancing is only configured properly when the Primary Target Address in the iSCSI boot parameters is set to the Virtual IP address (VIP) of the cluster, not the IP address of an individual HP StoreVirtual storage node.

Make sure you use the VIP of your storage cluster when using two iSCSI ports boot connections

Also verify with your Storage Administrator that all appropriate clusters have VIPs configured.


Problems found during iSCSI Boot


Problem:
You are using a DHCP server to provide the iSCSI boot parameters, but during POST (server Power-On Self-Test) the iSCSI disk is not displayed when the Emulex iSCSI initiator is executed, although the menu shows correct initiator IP addresses:

Solution 1:
You are not able to get the DHCP option 43 to work with the FlexFabric 10Gb Converged Network Adapters; for more information, see Appendix 2.

Solution 2:
Make sure the storage presentation (also known as LUN masking) is correctly configured on the storage array


Problem:
You get a warning message with IP address: 0.0.0.0

Solution:
There are different reasons for this issue:
o Your second iSCSI boot connection is not configured in the VC profile, in which case it's not a problem.
o You have a network connection issue; check the VC profile and the vNet corresponding to the iSCSI connection. Verify the status of the vNet and make sure it is properly configured with all ports in green.

Problem:
BIOS Not Installed appears without any drive information.

Solution:
Make sure the storage presentation (also known as LUN masking) is correct.
Make sure the Emulex CNA has the latest firmware.


This may also be a network or authentication information problem:

o Check iSCSISelect, in the Controller Properties page, to make sure Boot Support is Enabled.

o Check the network in iSCSISelect, in the Network Configuration page: make sure the Link Status field is Link Up and the DHCP field matches what is configured in VC.


Click on Ping

And enter the target IP to make sure the network connection is OK.


o If ping failed and DHCP is enabled, check the DHCP server setup.
o If ping failed and DHCP is disabled:

Select Configure Static IP Address

Then make sure the initiator IP, network mask, and gateway are set up correctly:

If ping is successful, make sure the authentication configuration in VC matches what is configured on the storage side.

Problem:
You cannot boot from iSCSI and your boot iSCSI LUN ID is above 8.

Solution:
Change the iSCSI boot LUN ID to a number lower than 9.


PXE booting problems

Problem:
Media test failure when PXE booting a G6 server with a CNA mezzanine and only FlexFabric modules in bays 3 and 4.

Solution:
Make sure all the LOM ports (i.e. ports 1 and 2) are PXE-disabled in the VC profile; otherwise you will see the PXE error media test failure, because the default VC setting is Use-Bios:


iSCSI boot install problems with Windows Server


Problem:
The installation loaded the iSCSI driver and proceeded, seeing the iSCSI target LUN; later, during the file copy phase, it complained it cannot find the be2iscsi.sys/cat/inf driver files, even though the same files were used to access the target. Skipping the files causes the Windows installation to fail.

Solution:
Make sure you get the Windows installation DVD that provides the choice of Custom and Express install; you have to choose Custom installation. Express installation requires the boot driver to be a Windows-signed driver (WinLogo); if it's not, the w2k3 installation just gives a very misleading generic message (file not found) and fails the installation.

Problem:
The drivers loaded successfully, the target LUN was seen, and the installation proceeded to the target LUN with no complaint of drivers not found, but a blue screen occurred.

Solution:
Monitor the network to the iSCSI storage using one of the many network monitoring tools available on the market. This used to be one of the causes of iSCSI boot install problems with Windows 2008. For more information, see Appendix 3: How to monitor an iSCSI Network.

Problem:
The Load driver step seems to hang; it should only take a few seconds.

Solution:
Check that the firmware version matches the driver version; the installation may hang if they do not match.

Problem:
During the Expanding files phase, the installation complained it could not locate some files and terminated.

Solution:
Monitor the network access to the iSCSI storage. Network can be a cause.


Problem:
The iSCSI boot parameters are set up on the VC side, but the iSCSISelect utility does not show them in the iSCSI Target Configuration for the specified controller.

Solution:
Make sure the controller Network Configuration is correct. Check the link status and make sure it is Link Up; check the Static IP Address setup: correct IP, mask, and gateway (if routing is required from initiator to target).

VCEM issues with Accelerated iSCSI Boot


Problem:
During the creation of an iSCSI boot Virtual Connect profile under VCEM 6.2, you get an unclear format error message.

Solution:
VCEM 6.2 only allows lowercase characters for the initiator and target information (this does not apply to VCM).

iSCSI issues with HP StoreVirtual 4000 Storage


Problem:
You are connected to an HP StoreVirtual 4000 Storage solution and, during heavy network utilization, the iSCSI initiator driver reports a device failure.

Solution:
Something is significantly slowing down the response time on the network, such that the iSCSI initiator session recovery can't occur within the default driver settings. In this case, it might be useful to increase the iSCSI initiator driver timeout parameter called ETO (Extended TimeOut), which is configurable via OneCommand Manager. By default, ETO is 90 seconds on all Windows operating systems. It can be set between 20 and 3600 seconds. You can also set it to 0, but the minimum value assumed by the driver is 20.
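The timeout rules just described can be sketched as follows. This is an illustrative sketch of the documented behavior only; the function name is ours, and how the driver treats out-of-range values should be verified against the Emulex documentation.

```python
def effective_eto(requested: int) -> int:
    """Return the ETO value the driver would effectively use.

    Per the description above: the configurable range is 20-3600 s;
    0 is accepted, but the driver assumes a 20 s minimum.
    """
    if not 0 <= requested <= 3600:
        raise ValueError("ETO must be 0 or between 20 and 3600 seconds")
    return max(requested, 20)
```

So a requested value of 0 still yields the 20 s floor, while the 90 s default passes through unchanged.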


To set the value:

o Launch OneCommand Manager.
o Click on the first iSCSI port and select the iSCSI target.
o In the ETO entry, enter the new timeout value and click Apply.
o Do the same on the second iSCSI port.


Appendix 1- iSCSI Boot Parameters


Mandatory iSCSI Boot Parameters entries
Some entries have to be filled in correctly in order to successfully boot from iSCSI.


iSCSI Initiator (iSCSI Boot Configuration) (also known as the iSCSI client or iSCSI host)

Name used for the iSCSI initiator on the booting system. The initiator name can be a maximum of 223 characters.

NOTE: Make sure the Initiator Name you set is unique. This Initiator Name must be given to the Storage Administrator for the storage presentation (also known as LUN masking). NOTE: Use the same IQN initiator name for all connections for a given host, and do NOT associate different ones with different adapters (ref. iSCSI RFC 3720).

Each iSCSI host is identified by a unique iSCSI qualified name (IQN). This name is similar to the World Wide Name (WWN) associated with Fibre Channel devices and is used as a way to universally identify the device. iSCSI qualified names take the form iqn.yyyy-mm.naming-authority:unique name, where:

yyyy-mm is the year and month when the naming authority was established.
naming-authority is usually the reverse syntax of the Internet domain name of the naming authority. For example, the iscsi.vmware.com naming authority could have the iSCSI qualified name form iqn.1998-01.com.vmware.iscsi. The name indicates that the vmware.com domain name was registered in January of 1998, and iscsi is a subdomain, maintained by vmware.com.
unique name is any name you want to use, for example, the name of your host.

IQN examples:
iqn.1991-05.com.microsoft:win-g19b6w8hsum
iqn.1990-07.com.Emulex.00:17:A4:77:04:02
iqn.1998-01.com.vmware.iscsi:name1
iqn.1998-01.com.vmware.iscsi:name2
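The naming format above can be sanity-checked with a small sketch. This is illustrative only, not a full RFC 3720 validator; the pattern and function name are our assumptions, kept deliberately loose so that all four sample IQNs above pass.

```python
import re

# Rough shape of an IQN per the description above:
# "iqn." + yyyy-mm + "." + reversed-domain naming authority,
# optionally followed by ":" and a unique name; max 223 characters.
IQN_PATTERN = re.compile(
    r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.\-]*(:.+)?$",
    re.IGNORECASE,
)

def looks_like_iqn(name: str) -> bool:
    """Sanity-check an initiator/target name against the documented format."""
    return len(name) <= 223 and IQN_PATTERN.match(name) is not None
```

A bare hostname such as server1 fails this check, which is a quick way to catch a mistyped Initiator Name before handing it to the Storage Administrator.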


iSCSI Target (iSCSI Boot Configuration)

The iSCSI target parameters can be set either statically (by disabling DHCP) or dynamically (by enabling DHCP). For simplification and human error prevention, it may be easier to use a DHCP server for the iSCSI boot parameters.

Configuring the target information manually

The Target Name

The Target Name is your iSCSI target from which the server boots. The target name can be a maximum of 223 characters.
o This target name is provided by the Storage Administrator.

The Boot LUN

The LUN of the target identifies the volume to be accessed. Valid values for standard LUNs are 0-255 decimal. Valid values for extended LUNs are 13- to 16-character hexadecimal values.
o This Boot LUN is provided by the Storage Administrator.

The Primary Target Address

The Primary Target Address is the primary IP address used by the iSCSI target. The TCP port associated with the primary target IP address is 3260 by default.
o This Primary Target Address and TCP port are also provided by the Storage Administrator.
o Depending on your storage solution, if you plan to configure a second iSCSI boot connection from the second iSCSI HBA, it may be required to enter here the Virtual IP address (VIP) of your storage cluster and not the IP address of one of the nodes (that's the case with HP StoreVirtual 4000 Storage solutions).

Note: A virtual IP (VIP) address is a highly available IP address which ensures that, if a storage node in a cluster becomes unavailable, servers can still access a volume through the other storage nodes in the cluster. The benefit of using a VIP here is that, in case of a storage node failure, the iSCSI traffic does not fail over to the second iSCSI port, thus reducing risk, latency, etc.


Configuring the target information dynamically

To use DHCP when configuring the iSCSI boot configuration, select the Use DHCP checkbox.

Note: In a dynamic configuration the initiator name, the target name, the LUN number and the IP address can be provided by the DHCP server. Selecting this option requires a DHCP server to be set up properly with iSCSI extensions to provide boot parameters to servers. Refer to Appendix 2.

Initiator Network Configuration

The VLAN ID
This is the VLAN number that the iSCSI initiator will use for all sent and received packets. Accepted values are 1 to 4094.

Note: Since the multiple networks feature is not supported for the iSCSI HBA, the VLAN ID field should be left blank for all scenarios presented in this cookbook (i.e. scenario 1, scenario 2, and scenario 3). The VLAN ID option can be required when Virtual Connect is tunneling the iSCSI traffic (i.e. when the iSCSI network under VCM has the Enable VLAN Tunneling checkbox selected). The use of a tunnel network for the iSCSI traffic is not presented in this cookbook because the most common scenarios use the iSCSI vNet in VLAN mapping mode (i.e. not using the VLAN tunneling mode).

The IP address/Netmask/Gateway
This is the network configuration of the iSCSI initiator; either a fixed IP address or DHCP can be set.

The Use-DHCP checkbox allows the iSCSI option ROM to retrieve the TCP/IP parameters from a DHCP server.


Optional iSCSI Boot Parameters entries


Some fields are optional and can be configured for enhancements.

Secondary iSCSI Target Address

The secondary iSCSI target address is used in the event the primary target IP network is down. If the primary target fails to boot, the secondary iSCSI target is used for booting instead.

The TCP port associated with the secondary target IP address is 3260 by default.

This Secondary Target Address and TCP port are provided by the Storage Administrator.
A secondary target here is usually not required for a server with a second CNA iSCSI port.

Security enhancement using authentication

For added network security, the Challenge Handshake Authentication Protocol (CHAP) can be enabled to authenticate initiators and targets. By using a challenge/response security mechanism, CHAP periodically verifies the identity of the initiator. This authentication method depends on a secret known only to the initiator and the target. Although the authentication can be one-way, you can negotiate CHAP in both directions (2-way CHAP) with the help of the same secret set for mutual authentication. You must make sure, however, that what you have configured on the target side matches the initiator side. Both One-Way (CHAP Mode) and Mutual authentication (CHAPM) are supported. To enable One-Way CHAP authentication: select CHAP, then enter the Target CHAP Username and Target Secret.

Note: The Target/Initiator CHAP Name and Target/Initiator Secret can be any name or sequence of numbers over 12 and less than 16 characters. However, the username and secret on the Target side must match the name on the Initiator side.


Note: CHAP authentication must also be enabled on the storage system side.

To enable Mutual CHAP authentication: select CHAPM, then enter the Target CHAP Username, Target Secret, Mutual Username (also known as Initiator CHAP Name), and Mutual Secret (also known as Initiator Secret).

Note: The Target/Initiator CHAP Name and Target/Initiator Secret can be any name or sequence of numbers over 12 and less than 16 characters. However, the name and secret on the Target side must match the name on the Initiator side. Note: If users enabled CHAP/CHAPM through DHCP Vendor Option 43, the username and secret will need to be entered through VC profile configuration.


Note: CHAPM authentication must also be enabled on the storage system side.


Appendix 2 - Dynamic configuration of the iSCSI Boot Parameters


The dynamic configuration of the iSCSI target parameters through a DHCP server is one of the options available in the iSCSI Boot configuration to automatically obtain all required parameters: initiator name, target name, its IP address, its ports, and the boot LUN number. During this dynamic configuration, DHCP clients (i.e. iSCSI initiators configured with DHCP enabled for the iSCSI boot configuration) are identified by the DHCP server by their vendor type. This mechanism uses the DHCP Vendor Option 43, which contains the vendor-specific information. If the DHCP option 43 is not defined as specified by the vendor, then the DHCP server has no way to identify the client and therefore will not provide the iSCSI target information. When this occurs, the iSCSI disk is not displayed when the Emulex iSCSI initiator is executed.

So it's necessary to configure the DHCP server properly with the mandatory fields provided by the vendor (Emulex) for the Emulex Dual Port FlexFabric 10Gb Converged Network Adapter. We will describe the configuration steps for both Windows and Linux DHCP servers.


Windows 2008 DHCP server configuration


The following is the set up instruction for a Windows 2008 DHCP server to enable DHCP option 43 as specified by Emulex for the NC551 and NC553 Dual Port FlexFabric 10Gb Converged Network Adapter.

Start DHCP Manager. In the console tree, click the applicable DHCP server branch. Right-click on IPV4 and then click Define Vendor Classes to create a new vendor class.

Click Add.


In the New Class dialog box, type a descriptive identifying name for the new option in the Display name box. You may
also add additional information to the Description box.

Type in the data to be used by the DHCP Server service under ASCII (32 characters max) like Emulex
To enter the ID, click the right side of the text box under ASCII.

This ID is the value to be filled in the VC iSCSI Boot Configuration DHCP Vendor ID field.


Click OK, and then click Close.

Right-click on IPV4 again and then click Set Predefined Options to set the vendor specific options.

In Option class, choose the vendor created above.


Click Add

Then provide:
o An option name, e.g. "Boot target"
o Data type = String or Encapsulated
o Code = 43
o Add any description.


Click OK

Click OK again. In DHCP Manager, double-click the appropriate DHCP scope. Right-click Scope Options and then click Configure Options.

Click the Advanced tab.


Now choose the vendor class we just entered above (e.g. Emulex)

Select the check box for 43 (created earlier) and enter the correct string.

(e.g. iscsi:192.168.1.19:::iqn.2003-10.com.lefthandnetworks:p4300:13392:esx5-1:"iqn.1991-05.com.vmware:esx5-1":::D)

For more information about the format of the DHCP Option 43 string, see below.

Click OK.
Note: Its not necessary to restart the DHCP service after making the modification.


After the configuration is complete on the DHCP server, you need to enable DHCP and use the same vendor ID within
the VC server profile in VCM:

Note: Be careful, the DHCP Vendor ID is case sensitive.

Note: If the Initiator Name is not set up in the DHCP Vendor option 43, it IS necessary to enter the initiator name at the same time as the Vendor ID:

Note: If the Initiator Name is set up in the DHCP Vendor option 43, it IS NOT necessary to enter the initiator name. If users enter the initiator name then DHCP option 43 set up overrides the value entered from VC.

Note: The precedence of initiator name is: DHCP option 43 then VC configuration then default initiator name.


o The Initiator Network Configuration is not configured through DHCP Vendor Option 43, user needs to fill in the IP address and Netmask fields:

o Or check Use DHCP to retrieve network configuration:

o The Authentication method needs to match what is configured in DHCP vendor option 43 (i.e. in the AuthenticationType field). Note: DHCP vendor option 43 does not configure the username and secret, so make sure to provide the Username/Secret here if authentication is enabled in DHCP:


Linux DHCP server configuration


The following is the setup instruction for a Linux DHCP server to enable DHCP option 43 as specified by Emulex for the Emulex Dual Port FlexFabric 10Gb Converged Network Adapter. For a Linux DHCP server, the vendor option 43 is defined in the /etc/dhcpd.conf file. The example below illustrates how to define the vendor option 43 for the FlexFabric Adapters in dhcpd.conf:

option vendor-class-identifier "Emulex";
option space HP;
option HP.root-path code 201 = text;
class "vendor-classes" {
    match option vendor-class-identifier;
}
subclass "vendor-classes" "Emulex" {
    vendor-option-space HP;
    option HP.root-path "iscsi:\"10.10.10.203\":\"3260\"::\"iqn.1986-03.com.hp:storage.p2000g3.1020108c44\":::::";
}

This is the vendor identifier that must be set as well in the iSCSI VC profile

The string format must be defined correctly as shown. For more information about the format of the DHCP Option 43 string, see below.

After the dhcpd.conf file has been modified and saved, restart the DHCP service:
[root@DHCP_server]# service dhcpd restart
The DHCP server configuration is now complete.


Format of DHCP option 43 for the Emulex FlexFabric Adapters


iscsi:<TargetIP>:<TargetTCPPort>:<LUN>:<TargetName>:<InitiatorName>:<HeaderDigest>:<DataDigest>:< AuthenticationType>

Strings shown in quotes are part of the syntax and are mandatory.
Fields enclosed in angular brackets (including the angular brackets) should be replaced with their corresponding values. Some of these fields are optional and may be skipped.
When specified, the value of each parameter should be enclosed in double quotes. See examples below.
All options are case insensitive.
Description of the parameters

TargetIP
Replace this parameter with a valid IPv4 address in dotted decimal notation. This is a mandatory field.

TargetTCPPort
Replace this parameter with a decimal number ranging from 1 to 65535 (inclusive). It is an optional field. The default TCP port 3260 is assumed, if not specified.

LUN
This parameter is a hexadecimal representation of the Logical Unit number of the boot device. It is an optional field. If not provided, LUN 0 is assumed to be the boot LUN. It is an eight-byte number which should be specified as a hexadecimal number consisting of 16 digits, with an appropriate number of 0s padded to the left, if required.

TargetName
Replace this parameter with a valid iSCSI target iqn name of up to 223 characters. This is a mandatory field.

InitiatorName
Replace this parameter with a valid iSCSI iqn name of up to 223 characters. This is an optional field. If not provided, the initiator name configured through VC will be used; if it is not configured in VC, then the default initiator name will be used.

HeaderDigest
This is an optional field. Replace this parameter with either E or D.
o E denotes header digest is enabled.
o D denotes that it is disabled.
If skipped, it is assumed that Header Digest is disabled.

DataDigest
This is an optional field. Replace this parameter with either E or D.
o E denotes data digest is enabled.
o D denotes that it is disabled.
If not provided, it is assumed that Data Digest is disabled by default.

AuthenticationType
This is an optional field. If applicable, replace this parameter with D, E or M.
o D denotes authentication is disabled.
o E denotes that One-way CHAP is enabled; the username and secret to be used for one-way CHAP must be specified by non-DHCP means.
o M denotes that Mutual CHAP is enabled; the username and secret required for mutual CHAP authentication must be specified by non-DHCP means.
If not specified, this field defaults to authentication-disabled.
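The extended-LUN encoding described for the LUN field can be illustrated with a one-line helper (the function name is ours):

```python
def extended_lun_field(lun: int) -> str:
    """Format a boot LUN as the 16-digit, left-zero-padded
    hexadecimal string described for the <LUN> field."""
    return format(lun, "016X")
```

For example, LUN 14 becomes 000000000000000E, which matches the LUN field used in the second example of the Examples section.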


Examples
Note: Emulex requires that all attributes are within double quotes (" ") and that any optional parameter not defined must include a colon (:) in the string. If the string is not properly formed, the option ROM ignores the DHCP server's offer.

iscsi:<TargetIP>:<TargetTCPPort>:<LUN>:<TargetName>:<InitiatorName>:<HeaderDigest>:<DataDigest>:<AuthenticationType>

Example with target and initiator name:
iscsi:192.168.1.19:::iqn.2003-10.com.lefthandnetworks:p4300:13392:esx5-1:"iqn.1991-05.com.vmware:esx5-1":::D

Target IP address: 192.168.1.19
Target TCP port: Not specified; use default from RFC 3720 (3260)
Target boot LUN: Not specified; assume LUN 0
Target iqn name: iqn.2003-10.com.lefthandnetworks:p4300:13392:esx5-1
Initiator name: iqn.1991-05.com.vmware:esx5-1
Header Digest: Not specified; assume disabled
Data Digest: Not specified; assume disabled
Authentication Type: Disabled
Default Initiator name and Data Digest settings: iscsi:192.168.0.2:3261:000000000000000E:iqn.2009-4.com:1234567890::E::E

Target IP address: 192.168.0.2
Target TCP port: 3261
Target boot LUN: 0x0E
Target iqn name: iqn.2009-04.com:1234567890
Initiator name: Not specified; use the initiator name already configured, or the default name if none was configured
Header Digest: Enabled
Data Digest: Not specified; assume disabled
Authentication Type: 1-way CHAP
Default TCP Port and Mutual CHAP:

iscsi:192.168.0.2::000000000000000E:iqn.2009-4.com:1234567890::E:D:M

Target IP address: 192.168.0.2
Target TCP port: Use default from RFC 3720 (3260)
Target boot LUN: 0x0E
Target iqn name: iqn.2009-04.com:1234567890
Initiator name: Not specified; use the initiator name already configured, or the default name if none was configured
Header Digest: Enabled
Data Digest: Data Digest disabled
Authentication Type: Mutual CHAP


Appendix 3- How to monitor an iSCSI Network?


Identifying the root cause of performance problems, bottlenecks, and various network issues is important to cover when dealing with iSCSI traffic. Let's see briefly how we can identify network problems and recommend actions.

Monitoring Disk Throughput on the iSCSI Storage System


iSCSI Devices usually provide performance information from their management software:

The performance monitoring on the iSCSI system can help you to understand the current load on the SAN and provide additional data to support decisions on issues such as the following:

Configuration options (Would network bonding help me?)
Capacity expansion (Should I add more storage systems?)
Data placement (Should this volume be on my SATA or SAS cluster?)
The performance data does not directly provide answers, but will let you analyze what is happening and provide support for these types of decisions.


Monitoring Network and Disk Throughput on the iSCSI Host


VMware vSphere

With vSphere, network performance and disk throughput are displayed in the Performance tab (Advanced view) for both the vSphere host and the virtual machines:

To check the different loads, you can switch among the Network, Disk, Datastore, and Storage Adapter views.

Virtual Machines

vSphere host


The Disk monitoring counters:

- Disk Read rate
- Disk Write rate
- Disk Usage

For more information about data analysis and recommended actions to correct the problems you identify, refer to the Storage Analysis section (page 8) and the Network Analysis section (page 11) of the following VMware technical note: http://www.vmware.com/files/pdf/perf_analysis_methods_tn.pdf


Microsoft Windows Resource Monitor

For Windows systems, Microsoft provides the Resource Monitor tool, which monitors resource usage in real time. Among other things, it can help you analyze disk and network throughput.

Microsoft also provides other performance utilities. SQLIO, designed to measure the I/O capacity of a given SQL Server configuration, can help verify that your I/O subsystem is functioning correctly under heavy loads. See SQL Deployments and Tests in an iSCSI SAN: http://technet.microsoft.com/en-us/library/bb649502%28SQL.90%29.aspx

For Microsoft Exchange, Microsoft likewise provides specific tools and counters to measure performance; see http://technet.microsoft.com/en-us/library/dd351197.aspx


Analyzing Network information from the Virtual Connect interface


The Virtual Connect Manager interface provides many counters, port statistics, and other information that can help troubleshoot a network issue. Both the GUI and the CLI provide telemetry statistics. From the Interconnect Bays link on the left navigation menu, select one of the modules:

Scroll down to the Uplink Port Information section:

This screen provides details on Port Information (uplinks and downlinks), Port Status, Port Statistics, and Remote Device Information.


The Connected to column shows the upstream device information (LLDP must be enabled on the switch).

Click on Detailed Statistics/Information for the uplink port used for iSCSI:

A statistics screen opens with a lot of information; let's briefly review the port statistics section:

The key points to check in the port statistics are the following:

- The error counters must show no errors.
- Frames discarded due to excessive transit delay.
- Frames discarded due to excessive MTU size (can indicate a Jumbo frames configuration issue).
- A high discard count may show that VC is running out of resources.
- FCS errors (can indicate a collision issue; use smaller network segments and avoid hubs).
- Jumbo frames statistics: good Jumbo frames, and Jumbo frames with a bad FCS (possible collisions).
- Error counters that must stay at the lowest possible values.
- Frames received that exceed the maximum permitted frame size.
- Excessive PAUSE frames, which are likely due to network congestion.

You can refer to the Virtual Connect User Guide for more details about the different port statistics counters.


At the end of the screen, the Remote Device Information section provides upstream switch information, which is sometimes useful when you need to troubleshoot remote network devices. The remote switch IP address, MAC address, remote port, and switch description are provided:

VCM also provides statistics for the FlexNIC ports. Go back to the VC Module Bay 1 screen and click Detailed statistics for the server port you are using:

Note: Statistics on individual FlexNICs are available since VC 3.70.


Analyzing Virtual Connect Network performance


To verify the throughput statistics of a VC port, from the VC CLI, enter:

-> show statistics-throughput <EnclosureID>:<BayNumber>:<PortLabel>

This command shows the historical throughput (bandwidth and packets) of a port.

Note: Since VC 3.70, the same throughput statistics are also available in the GUI (from Tools / Throughput Statistics).


VC also provides real-time traffic statistics over SNMP for all ports, both server ports and uplink ports. These performance statistics can be collected from VC by any SNMP tool (e.g., HP IMC, Cacti, Nagios, PRTG).
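An SNMP poller turns the raw interface octet counters (IF-MIB ifHCInOctets/ifHCOutOctets) into a bandwidth figure by differencing two successive samples. A minimal sketch of that calculation, including 64-bit counter wrap handling; the function name and sample values are illustrative, not taken from any particular tool:

```python
def bits_per_second(prev_octets, cur_octets, interval_s, counter_bits=64):
    """Rate between two successive samples of an SNMP octet counter,
    in the way pollers such as Cacti compute it."""
    delta = cur_octets - prev_octets
    if delta < 0:                      # the counter wrapped around
        delta += 2 ** counter_bits
    return delta * 8 / interval_s      # octets -> bits, per second

# Two samples of ifHCInOctets taken 30 s apart:
# bits_per_second(1_000_000, 4_750_000, 30) -> 1000000.0 bps (~1 Mbit/s)
```

The 32-bit counters (ifInOctets) wrap quickly on 10Gb links, which is why 64-bit high-capacity counters and short polling intervals are preferred for VC uplinks.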

CACTI Management Interface


Wireshark
Currently the best noncommercial tool for debugging network problems is the open source protocol analyzer Wireshark. Wireshark contains protocol dissectors for the iSCSI protocol, which can be useful for debugging network and performance problems. Wireshark is available for Windows, Linux, HP-UX, and other platforms. For more information, see http://www.wireshark.org/

Software iSCSI analysis

For software iSCSI traffic (i.e., iSCSI traffic using a network interface card), you have to select in Wireshark the correct interface carrying the iSCSI traffic for the network capture. If you are unsure, you can open the VC server profile to get the MAC addresses used by iSCSI.

Then, under Wireshark, select one of the two interfaces used by MPIO and click Start. You can use the Details button to display the corresponding interface MAC address.


For a better display of the iSCSI traffic, enter "iscsi" in the Filter field:

Here is a sample of a network capture filtering the iSCSI protocol:


iSCSI analysis for Accelerated iSCSI adapters

For Accelerated iSCSI connections (i.e., iSCSI traffic using a Host Bus Adapter), Wireshark cannot be used on the server because it captures packet data only from a network interface. It is therefore necessary to use the Virtual Connect Port Monitoring feature to monitor the Accelerated iSCSI traffic.

VC profile using Accelerated iSCSI adapters

The Virtual Connect Port Monitoring feature duplicates the network traffic of a set of server ports to an unused uplink port, so that the traffic on those server ports can be captured with Wireshark and debugged. To configure Port Monitoring under VC:

- Open VC Manager.
- From the left navigation menu, select Ethernet, then Port Monitoring.
- Select an unused uplink port as the Network Analyzer Port.
- For the ports to monitor, select the correct downlinks used by your server from the All Ports drop-down menu:


- From the new list, select the two downlinks connected to VC Bay 1 and Bay 2:

- Click OK.
- Enable Port Monitoring and click Apply.

You should see a new icon in the interface indicating that port monitoring is running:


Now you are ready to connect a laptop running Wireshark to the VC uplink port configured as the network analyzer port.

Start the capture and filter on "iscsi".


Acronyms and abbreviations


Term: Definition

BIOS: Basic Input/Output System
CLI: Command Line Interface
CNA: Converged Network Adapter
DCB: Data Center Bridging (new enhanced lossless Ethernet fabric)
GUI: Graphical User Interface
FC: Fibre Channel
FCoE: Fibre Channel over Ethernet
Flex-10 NIC Port*: A physical 10Gb port that is capable of being partitioned into 4 FlexNICs
Flex HBA**: Physical function 2 of a FlexFabric CNA; can act as either an Ethernet NIC, an FCoE connection, or an iSCSI NIC with boot and iSCSI offload capabilities
FOS: Fabric OS, the Brocade Fibre Channel operating system
HBA: Host Bus Adapter
I/O: Input/Output
IOS: Cisco OS (originally Internetwork Operating System)
IP: Internet Protocol
iSCSI: Internet Small Computer System Interface
LACP: Link Aggregation Control Protocol (see IEEE 802.3ad)
LOM: LAN-on-Motherboard; embedded network adapter on the system board
LUN: Logical Unit Number
MPIO: Multipath I/O
MZ1 or MEZZ1: Mezzanine Slot 1
NPIV: N_Port ID Virtualization
NXOS: Cisco OS for the Nexus series
OS: Operating System
POST: Power-On Self-Test
RCFC: Remote Copy over Fibre Channel
RCIP: Remote Copy over IP
ROM: Read-Only Memory
SAN: Storage Area Network
SCSI: Small Computer System Interface
SFP: Small form-factor pluggable transceiver
SPS: Storage Protocol Services
SSH: Secure Shell
VC: Virtual Connect
VC-FC: Virtual Connect Fibre Channel module
VCM: Virtual Connect Manager
VLAN: Virtual Local Area Network
VSAN: Virtual Storage Area Network
vNIC: Virtual NIC port; a software-based NIC used by virtualization managers
vNet: Virtual Connect network used to connect server NICs to the external network
WWN: World Wide Name
WWPN: World Wide Port Name

*This feature was added with Virtual Connect Flex-10.
**This feature was added with Virtual Connect FlexFabric.


Support and Other Resources


Contacting HP
Before you contact HP

Be sure to have the following information available before you contact HP:

- Technical support registration number (if applicable)
- Product serial number
- Product model name and number
- Product identification number
- Applicable error message
- Add-on boards or hardware
- Third-party hardware or software
- Operating system type and revision level
HP contact information

For help with HP Virtual Connect, see the HP Virtual Connect webpage: http://www.hp.com/go/virtualconnect

For the name of the nearest HP authorized reseller, see the Contact HP Worldwide (in English) webpage: http://www.hp.com/country/us/en/wwcontact.html

For HP technical support in the United States, see the contact options on the Contact HP United States webpage: http://welcome.hp.com/country/us/en/contact_us.html

To contact HP by phone:

- Call 1-800-HP-INVENT (1-800-474-6836). This service is available 24 hours a day, 7 days a week. For continuous quality improvement, calls may be recorded or monitored.
- If you have purchased a Care Pack (service upgrade), call 1-800-633-3600. For more information about Care Packs, refer to the HP website: http://www.hp.com/hps

In other locations, see the Contact HP Worldwide (in English) webpage: http://welcome.hp.com/country/us/en/wwcontact.html

Subscription service

HP recommends that you register your product at the Subscriber's Choice for Business website: http://www.hp.com/country/us/en/contact_us.html

After registering, you will receive email notification of product enhancements, new driver versions, firmware updates, and other product resources.


Related documentation
HP Virtual Connect Manager 4.01 Release Notes
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c03801912/c03801912.pdf

HP Virtual Connect for c-Class BladeSystem Version 4.01 User Guide
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c03791917/c03791917.pdf

HP Virtual Connect Version 4.01 CLI User Guide
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c03790895/c03790895.pdf

HP Virtual Connect for c-Class BladeSystem Setup and Installation Guide Version 4.01 and later
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c03801914/c03801914.pdf

HP Virtual Connect FlexFabric Cookbook
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02616817/c02616817.pdf

HP Virtual Connect Fibre Channel Networking Scenarios Cookbook
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01702940/c01702940.pdf

HP Virtual Connect Multi-Enclosure Stacking Reference Guide
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02102153/c02102153.pdf

iSCSI Technologies in HP ProLiant servers using advanced network adapters, Technology Brief
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01600624/c01600624.pdf

HP NC-Series ServerEngines iSCSISelect User Guide
http://h10032.www1.hp.com/ctg/Manual/c02255542.pdf

HP StoreVirtual 4000 Storage Best Practices guide
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA2-5615ENW.pdf

HP welcomes your feedback. To make comments and suggestions about product documentation, send a message to docsfeedback@hp.com. Include the document title and manufacturing part number. All submissions become the property of HP.

Get connected
hp.com/go/getconnected Current HP driver, support, and security alerts delivered directly to your desktop

Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. c02533991, Created October 2010, Updated June 2013.
