Abstract
Introduction
HP Direct Connect Shared SAS storage solution
VMware virtualization software
Guidelines for installing and configuring HP hardware
  c-Class component interconnects
  Drive array configuration
Guidelines for installing and configuring VMware ESX software
  Virtual drives and VMFS volumes
  LUN masking
  Multipathing
  Storage volume naming
Conclusion
Appendix: Procedures for setting up HP Direct Connect Shared SAS hardware
  Installing and configuring HP hardware
  Configuring hardware for boot-from-shared storage
For more information
Call to action
Abstract
The HP BladeSystem c-Class and HP Direct-Connect Shared SAS storage technology combines with
VMware ESX server software to provide a highly available, virtualized infrastructure. This technology
brief describes implementation of VMware virtualization technology on an HP Direct Connect Shared
SAS storage solution to achieve server and storage consolidation, simplified server and storage
deployment, and advanced active load balancing using virtual machine migration technology. An
appendix delineates the essential steps for installing and configuring HP hardware and for
configuring the hardware for boot-from-shared storage functionality.
Introduction
Shared storage systems such as the HP MSA2012sa Shared SAS solution reduce storage costs and
enable advanced computing architectures with multiple physical servers that require access to the
same storage assets. The ability to store the complete software image of a server (OS, applications,
and data) on one or more volumes of shared storage and to boot from shared storage enhances
software management and eliminates the need for additional local (in-server) storage (Figure 1).
Figure 1. Shared storage configuration with servers connected over an Ethernet LAN and a SAS interconnect
Virtualization tools allow administrators to migrate virtual machines dynamically from one physical
server to another to balance hardware utilization for increased system efficiency and reduced power
usage.
Used together, the following solutions provide the basis for an automated data center:
• HP Direct Connect Shared SAS storage solution
• VMware ESX 3.5 U3 and later or VMware ESX 4.0 and later, and VMotion
HP Direct Connect Shared SAS storage solution
The HP Direct Connect Shared SAS¹ storage solution combines HP BladeSystem c-Class components
with the HP StorageWorks MSA 2012sa storage array (Figure 2). This combination creates a
complete system of servers and storage contained in a single rack. Supported c-Class server blades
include at least one slot that accepts the HP Smart Array P700m Controller mezzanine card for
interconnectivity with one or two pairs of HP 3Gb SAS BL switches.
Each HP 3Gb SAS Switch provides 16 internal connections to the servers and eight external ports
for SAS storage connectivity. HP recommends using a pair of switches to provide redundant paths
between the servers and the HP StorageWorks Modular Smart Array (MSA) 2012sa.
Figure 2. HP c-Class server blade with the HP Smart Array P700m Controller mezzanine card, and the HP StorageWorks MSA2012sa Array Enclosure (front and back views)
Figure 3 shows the connection paths for a basic HP Direct Connect Shared SAS storage system. In this
example, the HP MSA2012sa Array Enclosure includes dual controllers (A and B). Each controller
provides two ports, with each port connected to a different SAS switch to maintain storage availability
should a switch, cable, or MSA controller fail.
Figure 3. Basic configuration block diagram of an HP Direct Connect Shared SAS system
¹ SAS is an acronym for Serial Attached SCSI.
Figure 4 shows the external connectivity of the configuration in Figure 3. While two connections (one
per switch/array controller) are adequate, the additional connections enhance performance and
availability.
Figure 4. External connections between the SAS switches and the array controllers
Figure 4 shows a typical c-Class configuration supporting 16 servers. A system with 32 servers (16
dual-density server blades) would require an additional pair of SAS switches and a second MSA
2012sa Array Enclosure.
The HP MSA 2012sa can operate as a standalone 12-drive array or with additional MSA2000 Drive
Enclosures. Figure 5 shows a maximum configuration of four drive arrays, each with three additional
MSA enclosures, for a total capacity of 192 (4 × 48) drive slots.
Figure 5. Maximum configuration: a c-Class blade enclosure connected through 3Gb SAS switches to four MSA 2012sa Array Enclosures, each cascading three MSA 2000 Drive Enclosures (A, B, and C)
Figure 6 shows a method of connecting cascaded MSA 2000 drive enclosures from one MSA
2012sa array enclosure as suggested previously in Figure 5. The “A” controllers of the array and
drive enclosures are cascaded together, as are the “B” controllers.
Figure 6. Cascaded connections between an HP MSA 2012sa Array Enclosure, HP MSA 2000 Drive Enclosures, and HP 3Gb SAS Switches
The HP Direct Connect SAS system offers wide flexibility in configuring a shared storage system that
meets strategic goals. Figures 7 and 8 show two configurations possible with the c-Class blade
system.
NOTE
The MSA2012sa shared storage solution primarily supports one or
two c-Class enclosures. To support more than two c-Class
enclosures with ESX servers needing access to the same shared
storage, a more extensible shared storage solution is required,
such as the MSA2012i (iSCSI) or the MSA2012fc (Fibre Channel)
product.
Figure 7 depicts a single c7000 enclosure accessing two storage enclosure groups. This approach
offers more storage space on a per-server basis for applications that use large data files.
Figure 8 depicts two c7000 enclosures sharing a storage enclosure group. This solution may apply to
higher density server environments that do not have large storage demands.
VMware virtualization software
HP has a partnership with VMware, makers of VMware ESX virtualization software. VMware ESX is
an enterprise-class virtualization solution that consolidates multiple physical servers or workloads into
a single physical server. VMware ESX implements a hypervisor operating system in which all other
supported operating systems execute as a guest OS in a virtual machine. Multiple virtual machines
running a variety of operating systems and applications can run simultaneously on the same physical
server hardware. This consolidation concept better utilizes space and power, and reduces server
acquisition and maintenance costs. Virtual machines can be quickly deployed with a standard
configuration and set of applications to respond dynamically to variable loads.
When deployed in a shared storage environment, ESX has additional capabilities, including the
ability to migrate a virtual machine from one physical ESX server to another for load balancing,
scheduled maintenance, or other reasons. If VMotion technology is enabled on these servers, the
migration can happen live while the VM is online and without affecting the OS and applications that
are running. In Figure 9, a virtual machine (VM 3) is migrated live from one VMware ESX Server to
another. Each VM runs its own (guest) operating system (Microsoft Windows, Linux, or Unix) while the
Virtualization Layer of ESX Server manages all resource requirements.
Figure 9. VMotion live migration of VM 3 from Server 1 to Server 2; each VM runs its own application and guest OS
During a live migration, the network identity and connections of the VM are preserved, and the
network router is updated with the new physical location of the VM. The process takes a matter of
seconds on a GbE network and is transparent to users.
Guidelines for installing and configuring HP hardware
The HP Direct Connect Shared SAS Storage system offers wide flexibility in two key areas of shared
storage systems: component interconnects and drive array configuration.
c-Class component interconnects
The following table lists the SAS switch interconnect bays served by each P700m mezzanine slot and
port pair in c3000 and c7000 enclosures:

Mezzanine slot and ports          c3000 switch bays    c7000 switch bays
Half-height blade:
  Slot 1, ports 1 and 2           n/a                  3 and 4
  Slot 2, ports 1 and 2           3 and 4              5 and 6
  Slot 2, ports 3 and 4           3 and 4              7 and 8
Full-height blade:
  Slot 1, ports 1 and 2           n/a                  3 and 4
  Slot 2, ports 1 and 2           3 and 4              5 and 6
  Slot 2, ports 3 and 4           3 and 4              7 and 8
  Slot 3, ports 1 and 2           3 and 4              7 and 8
  Slot 3, ports 3 and 4           3 and 4              3 and 4
Dual-density blade:
  Server A slot, ports 1 and 2    3 and 4              5 and 6
  Server B slot, ports 1 and 2    3 and 4              7 and 8
For optimum connectivity, SAS switches should be installed in pairs, even if not all server blades are
installed or configured for shared storage.
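For the half-height rows of the table above, the slot/port-to-bay mapping can be sketched as a simple lookup table; the key and value encodings below are illustrative, not an HP tool or API:

```python
# Hypothetical lookup of SAS switch interconnect bays for a half-height
# c-Class blade, transcribed from the mezzanine slot/port table above.
# Keys: (mezzanine slot, port pair); values: switch bays per enclosure model.
HALF_HEIGHT_BAYS = {
    ("slot1", "ports1-2"): {"c3000": None, "c7000": (3, 4)},   # n/a on c3000
    ("slot2", "ports1-2"): {"c3000": (3, 4), "c7000": (5, 6)},
    ("slot2", "ports3-4"): {"c3000": (3, 4), "c7000": (7, 8)},
}

def switch_bays(slot, ports, enclosure):
    """Return the interconnect bays a port pair reaches, or None if n/a."""
    return HALF_HEIGHT_BAYS[(slot, ports)][enclosure]
```

A lookup such as `switch_bays("slot2", "ports1-2", "c7000")` returns the bay pair (5, 6), matching the table.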
Guidelines for installing and configuring VMware ESX software
Virtual drives and VMFS volumes
Each virtual machine can be configured with one or more virtual disk drives. These virtual drives
appear to the VM as SCSI drives but actually are files on a Virtual Machine File System (VMFS)
volume (Figure 10). When a guest operating system on a VM issues SCSI commands to a virtual
drive, the ESX Virtualization Layer translates the commands to VMFS file operations. While a VM is
up and running, VMFS locks the associated files to prevent other ESX servers from updating the files.
The VMFS also ensures that a VM cannot be opened by more than one ESX server in the cluster.
Figure 10. Virtual machines on two ESX servers with their virtual drives stored as files on a shared VMFS volume
VMFS allows virtualization to scale beyond the boundaries of a single system, and increases resource
utilization by allowing multiple VMs to share a consolidated pool of clustered storage. A VMFS
volume can extend over multiple Logical Unit Numbers (LUNs). VMFS3 volumes must be 1200 MB
(1.2 GB) or larger and can extend over 32 physical storage elements. This allows pooling of storage
and flexibility in creating the storage volume necessary for virtual machines. To increase disk space
as required by VMs, a VMFS volume can be extended while virtual machines are running on the
volume.
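The VMFS3 sizing rules above (at least 1200 MB, spanning at most 32 physical extents) can be sketched as a simple validation check; the function name and error strings are illustrative, not part of any VMware tool:

```python
def validate_vmfs3_volume(size_mb, extent_count):
    """Check a proposed VMFS3 volume against the limits described above:
    a minimum size of 1200 MB (1.2 GB) and no more than 32 physical
    storage extents. Returns a list of violations (empty if valid)."""
    errors = []
    if size_mb < 1200:
        errors.append("VMFS3 volume must be 1200 MB (1.2 GB) or larger")
    if extent_count > 32:
        errors.append("VMFS3 volume can span at most 32 physical extents")
    return errors
```

For example, a 500 GB volume spread over 4 LUNs passes both checks, while a 1 GB volume fails the minimum-size rule.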
There are two basic approaches for setting up VMFS volumes and LUNs:
• Many small LUNs, each with a VMFS volume: With a number of smaller LUNs and VMFS volumes,
fewer ESX server hosts and VMs contend for each VMFS volume, which maximizes performance but
makes management more complex.
• One large LUN (or many LUNs) with one VMFS volume extended across all LUNs: With only one
VMFS volume, management is simpler and creating VMs and resizing virtual drives are easier;
however, performance can be affected.
The recommended strategy falls somewhere in between. Using a few VMFS volumes extended over
several LUNs more evenly distributes server access and storage load across the two MSA controllers. The HP
Direct Connect Shared SAS solution is currently limited to 64 total LUNs per ESX server when all four
paths are in use. This means 64 LUNs * 4 paths = 256 LUN-paths. This is a per-server limit
regardless of the number of shared storage arrays in the configuration. The initial installation should
strive for a balance of VMs across ESX servers and storage because a highly shared VMFS volume
can result in reduced performance.
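The per-server limit above can be expressed as a quick planning check, assuming the governing constraint is the 256 LUN-path total; the names below are illustrative:

```python
# Per-server limit from the discussion above: 64 LUNs x 4 paths = 256 LUN-paths.
MAX_LUN_PATHS_PER_SERVER = 256

def lun_budget(paths_per_lun=4):
    """Return how many LUNs a single ESX server can address, given the
    number of paths exposed per LUN (four in a fully redundant setup)."""
    return MAX_LUN_PATHS_PER_SERVER // paths_per_lun
```

With all four paths in use this yields the 64-LUN limit stated above; fewer paths per LUN would trade redundancy for a larger LUN count under the same assumption.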
LUN masking
Proper LUN masking is critical for boot-from-shared storage systems.
Each ESX server booting from shared storage must have a dedicated boot volume visible only to that
server. While it may seem convenient to configure all of the boot volumes from a single Vdisk
on the array, distributing them across the two controllers creates better balance and improves
performance. No ESX server should have access to the boot LUNs of other ESX server hosts.
A volume presented to a particular ESX server on multiple paths (multiple SAS IDs) should have a
consistent LUN value on each of the paths. A volume shared by multiple ESX servers should have a
consistent LUN value for all servers.
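As a sketch, the masking consistency rules above (the same volume must present the same LUN value on every path and to every server that shares it) could be verified with a helper like this; the data layout and names are hypothetical, not an HP or VMware API:

```python
def consistent_lun_values(presentations):
    """presentations maps each server to its {path_id: lun_number} view of
    one storage volume. The volume is consistently masked only if every
    path on every server reports the same LUN number."""
    luns = {lun for paths in presentations.values() for lun in paths.values()}
    return len(luns) == 1
```

A volume presented as LUN 100 on every path of every server passes; a volume that appears as LUN 100 on one path and LUN 101 on another does not.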
Multipathing
Multipath redundancy provided by dual-controller and dual-switch configurations causes each LUN
exposed to a particular ESX server blade to appear four times (four total paths to the LUN). Typically,
each LUN will appear as the same LUN number on four different targets. For example, the VMware
command esxcfg-mpath reveals four paths to a boot LUN:
• vmhba0:T1:L1
• vmhba0:T2:L1
• vmhba0:T3:L1
• vmhba0:T4:L1
Multipathing functionality in VMware ESX software monitors the status of each path and resolves the
paths into a single view of the LUN for the operating system. ESX chooses the most viable path based
on path availability and controller preference.
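A sketch of how those four path names resolve to a single LUN view follows; the parsing is illustrative and is not the actual ESX multipathing implementation:

```python
from collections import defaultdict

def group_paths_by_lun(paths):
    """Group path names like 'vmhba0:T1:L1' by (adapter, LUN), mirroring
    how ESX multipathing presents one LUN view regardless of which
    target (controller port) a path traverses."""
    groups = defaultdict(list)
    for p in paths:
        adapter, target, lun = p.split(":")
        groups[(adapter, lun)].append(target)
    return dict(groups)
```

Applied to the four boot-LUN paths listed above, all four targets collapse into a single (vmhba0, L1) entry, which is the single view the operating system sees.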
Conclusion
The HP Direct Connect Shared SAS storage solution integrates well with VMware ESX server and
supports the advanced functionality of VMware tools. The HP Direct Connect Shared SAS storage
solution has been tested and certified with VMware software and economically provides high levels of
hardware redundancy.
Appendix: Procedures for setting up HP Direct Connect
Shared SAS hardware
NOTE
For detailed instructions on hardware installation, refer to the user
or installation guides for the appropriate c-Class and
StorageWorks components.
NOTE
Previous HP MSA and Smart Array storage solutions were
managed with the HP Array Configuration Utility (ACU), an
application downloadable from HP. The SMU referenced in
this procedure is included in the MSA2012sa firmware and
provides functionality similar to the ACU.
Each MSA2012sa enclosure contains 12 dual-ported SAS drives accessed through the two MSA
controllers. The RAID level and Vdisk configuration depend on the desired mix of
performance/redundancy/capacity, and on the number of disks available for each RAID set.
Figure 11. Creating Vdisks with the HP Storage Management Utility
8. Using the SMU, configure the Vdisks into storage volumes.
Figure 12 shows the SMU volume management page that allows the user to add a volume to a
Vdisk. In this example, the Vdisk “vdisk2sas” has been highlighted and will be configured with a
storage volume. The size of the volume should be large enough to accommodate the ESX server
software, the applications, and data for the estimated number of virtual machines.
Figure 12. Creating storage volumes with the HP Storage Management Utility
Logical Unit Number (LUN) numbering is controlled through storage volume mapping, which is
discussed in the next step.
9. If you are configuring the system for boot from shared storage operation, go to the section
“Configuring hardware for boot-from-shared storage”; otherwise, enter the volume name
“vmfs3” and click Add Volume.
10. Map the storage volume (Figure 13).
Volume mapping controls which servers get which type of access (read/write) by
specifying the Host WWNs (SAS device IDs) to which the volume may be connected. The SAS
device ID (recorded in step 6) uniquely identifies the port on a server blade’s Smart Array P700m
controller. Each P700m controller has four ports, thus four SAS device IDs. For c3000 enclosures,
all four ports connect to SAS switches in bays 3 and 4. For c7000 enclosures, only two ports will
be connected to a pair of switches (in bays 3 and 4, 5 and 6, or 7 and 8), so that 4-port
connectivity will require two pairs of SAS switches.
The mapping of all ports (SAS device IDs) of a server’s P700m controller should specify the same
LUN number for a storage volume. For VMware ESX servers, a storage volume exposed to multiple
servers should be mapped to the same LUN number for each server. If the LUN is mapped as LUN
100 to the four SAS device IDs of the first server, it should also be mapped as LUN 100 to all
other ESX servers that require shared access to that LUN.
Storage volume mapping also specifies which MSA controller ports are used for accessing the
volume, which is presented to the server as a LUN. Typically, all four MSA2012sa ports (A0, A1, B0,
and B1) are enabled to provide the best redundancy and minimize management overhead.
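As a sketch, the mapping guidance above (the same LUN number on every host port, with all four controller ports enabled) can be captured in a record builder; all identifiers below are invented for illustration and do not correspond to the SMU interface:

```python
def build_volume_mapping(volume, lun, sas_device_ids,
                         controller_ports=("A0", "A1", "B0", "B1"),
                         access="read/write"):
    """Build a mapping record that presents `volume` as the same LUN number
    on every host port (SAS device ID) of a server's P700m controller,
    through all four MSA controller ports, as recommended above."""
    return {
        "volume": volume,
        "lun": lun,
        "access": access,
        "host_ports": list(sas_device_ids),
        "controller_ports": list(controller_ports),
    }
```

To share a volume among several ESX servers, the same record (same LUN number) would be repeated for each server's four SAS device IDs.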
NOTE
When configuring storage, be aware of the controller-to-Vdisk
assignments. Ensure that the load is distributed evenly across
both MSA controllers. A numerically even distribution of
Vdisks across both controllers does not necessarily equate to
an even load distribution, since some Vdisks may contain
more heavily used applications than others.
Configuring hardware for boot-from-shared storage
Boot-from-shared storage is an optional capability of the HP Direct Connect Shared SAS storage
system. The following procedure establishes the relationship between one server blade and a boot
volume.
NOTE
The following steps are described in detail in the white paper
“Booting From Shared Storage with Direct Connect SAS Storage
for HP BladeSystem and the HP StorageWorks MSA2012sa”
available at http://h20195.www2.hp.com/PDF/4AA2-3667ENW.pdf.
1. Using the Onboard Administrator of the c-Class enclosure, determine and record the Device IDs for
the P700m controller ports.
2. Using a web browser and the appropriate IP address of the MSA2012sa enclosure, access the
Storage Management Utility (SMU) to configure and set access to a server boot volume. Create a
storage volume large enough to contain the ESX OS. Use the volume mapping function to enable
server blade access to the volume by entering the Device IDs recorded in step 1.
For more information
For additional information, refer to the resources listed below.
Call to action
Send comments about this paper to: TechCom@HP.com.