
HP BladeSystem Matrix 6.3 Planning Guide

HP Part Number: 646940-001
Published: May 2011
Edition: 1

Copyright 2011 Hewlett-Packard Development Company, L.P.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. Microsoft, Windows, and Windows Server are U.S. registered trademarks of Microsoft Corporation.

Contents
1 Overview..................................................................................................5
HP BladeSystem Matrix documents..............................................5
Planning summary.............................................................6
HP BladeSystem Matrix infrastructure.........................................7
HP BladeSystem Matrix components.............................................9

2 HP BladeSystem Matrix services planning....................................................14


Servers and services to be deployed in HP BladeSystem Matrix................14
Application services........................................................14
Management services.........................................................16

3 HP BladeSystem Matrix customer facility planning.........................................29


Racks and enclosures planning...............................................29
Data center requirements....................................................29
Virtual Connect domains.....................................................30

4 HP BladeSystem Matrix solution storage......................................................35


Virtual Connect technology..................................................35
Storage connections.........................................................35
Storage volumes.............................................................37

5 HP BladeSystem Matrix solution networking.................................................40


Network planning............................................................40
Virtual Connect Ethernet uplink connections.................................42
Virtual Connect Flex-10 Ethernet services connections.......................43
Manageability connections...................................................45

6 HP BladeSystem Matrix pre-delivery planning checklist.....................49
7 Next steps................................................................50
8 Support and other resources...............................................51
Contacting HP...............................................................51
Related information.........................................................53

A Dynamic infrastructure provisioning with HP BladeSystem Matrix.....................54


Example 1: An agile test and development infrastructure using logical servers...54
Example 2: An agile test and development infrastructure with IO.............60

B Sample configuration templates............................................68
C Optional Management Services integration notes............................76


HP BladeSystem Matrix and HP Server Automation..............................76
HP BladeSystem Matrix and Insight Recovery..................................76
HP BladeSystem Matrix and Insight Control for VMware vCenter Server.........76
HP BladeSystem Matrix and Insight Control for Microsoft System Center.......77

D HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines..................................................................................................80


Virtual Connect FlexFabric hardware components..............................80
FlexFabric interconnects/mezzanines: HP BladeSystem c7000 port mapping......81
HP BladeSystem c7000 enclosure FlexFabric module placement..................82
FlexFabric configurations using only HP G7 BladeSystem servers..............83
FlexFabric configurations using only HP G6 or i2 BladeSystem servers........85
FlexFabric configurations using a mixture of HP G7 with G6 and/or i2 BladeSystem servers...87


HP BladeSystem Matrix configuration guidelines for mixing FlexFabric with Flex-10 .........................90

Glossary....................................................................91
Index.......................................................................95


1 Overview
This guide is the recommended initial document for planning an HP BladeSystem Matrix infrastructure solution. The intended audience is pre-sales and HP Services personnel involved in the planning, ordering, and installation of an HP BladeSystem Matrix-based solution. Planning is the key to success; early planning is the key to creating an HP BladeSystem Matrix order that moves on to a smooth, successful, and satisfactory delivery. Use this guide, along with a planning worksheet, to capture planning decisions, customer-provided details, and HP BladeSystem Matrix configuration parameters for the implementation.

Effective planning requires knowledge of BladeSystem technology, including Virtual Connect (VC) FlexFabric, VC Flex-10 Ethernet, and Fibre Channel (FC); knowledge of FC shared storage, including fabric zoning, redundant paths, N_Port ID Virtualization (NPIV), and logical unit number (LUN) provisioning; and knowledge of software configuration planning and functionality, including HP Insight Orchestration (IO), Central Management Server (CMS) software, OS deployment, and any customer-provided management software used with the HP BladeSystem Matrix implementation.

The HP BladeSystem Matrix Starter Kits and optional Expansion Kits provide configuration options that enable integration into a customer's existing environment. This document guides you through the planning process by outlining the decisions involved and the data collected in preparing for an HP BladeSystem Matrix solution implementation. There are two points during the HP BladeSystem Matrix implementation delivery process where design decision input and user action are required; this document outlines both sets of input information:
1. Pre-Order: Before placing the HP BladeSystem Matrix order, you must plan and specify requirements and order options.
2. Pre-Delivery: Before the delivery of the HP BladeSystem Matrix physical infrastructure, you must coordinate the environmental and configuration details so that the on-site implementation service can begin immediately.

HP BladeSystem Matrix documents


The HP BladeSystem Matrix documents table shows the documentation hierarchy of the HP BladeSystem Matrix infrastructure solution. Read this guide before ordering and configuring the HP BladeSystem Matrix, and use the guide in conjunction with the HP BladeSystem Matrix Release Notes and HP BladeSystem Matrix Compatibility Chart.

Table 1 HP BladeSystem Matrix documents

Phase: Planning
• HP BladeSystem Matrix 6.3 Compatibility Chart (on Documentation CD: Yes). The Compatibility Chart provides version information for HP BladeSystem Matrix components.
• Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide (on Documentation CD: No). The Before you begin section of this document describes storage, networking, and SAN zoning considerations when implementing an HP BladeSystem Matrix Recovery Management (Insight Recovery) configuration.

Phase: Using HP BladeSystem Matrix
• HP BladeSystem Matrix 6.3 Release Notes (on Documentation CD: Yes). The release notes provide key information on HP BladeSystem Matrix and its components.
• HP BladeSystem Matrix 6.3 Getting Started Guide (on Documentation CD: Yes). The getting started guide provides instructions on how to design your first HP BladeSystem Matrix infrastructure template and then create (or provision) an infrastructure service using that template after the installation is complete.
• HP BladeSystem Matrix 6.3 Troubleshooting Guide (on Documentation CD: Yes). The troubleshooting guide provides information on troubleshooting tools and how to recover from errors in an HP BladeSystem Matrix environment.
• HP BladeSystem Matrix Step-by-Step Use Case Guides and demo videos (on Documentation CD: Yes). The use cases provide text and video instructions on how to build six different solutions corresponding to the six included demos.

The latest updates to the HP BladeSystem Matrix solution are located on the HP website, http://www.hp.com/go/matrixcompatibility. The supported hardware, software, and firmware versions are listed in the HP BladeSystem Matrix Compatibility Chart. Updates to issues and solutions are listed in the HP BladeSystem Matrix Release Notes. White papers and external documentation listed above are located on the HP BladeSystem Matrix Infrastructure 6.x product manuals page or on the HP BladeSystem Matrix Documentation CD. HP BladeSystem Matrix QuickSpecs are located at http://h18004.www1.hp.com/products/quickspecs/13297_div/13297_div.pdf and, for HP-UX, http://h18004.www1.hp.com/products/quickspecs/13755_div/13755_div.pdf.

Planning summary
HP BladeSystem Matrix is a platform that optimally creates an HP Converged Infrastructure environment that is simple and straightforward to buy and use. This document presents steps to guide you through the HP BladeSystem Matrix planning process.


Figure 1 HP BladeSystem Matrix planning steps

HP BladeSystem Matrix infrastructure


HP BladeSystem Matrix embodies the HP Converged Infrastructure, enabling provisioning, deployment, and management of application services. The following key components enable this infrastructure:
• Converged infrastructure consisting of virtual I/O, shared storage, and compute resources
• Management environment with physical and virtual machine provisioning and workflow automation, capacity planning, Disaster Recovery (DR)-ready and auto-spare failover, continuous optimization, and power management
• Factory and on-site integration services

Planning begins with understanding what makes up each component. Some components might include existing services found in the customer data center. Other components are automatically provided by, or optionally ordered with, HP BladeSystem Matrix. The physical infrastructure provided by HP BladeSystem Matrix consists of the following components:

HP BladeSystem Matrix FlexFabric enclosures include the following:
• HP BladeSystem c7000 Enclosure with power and redundant HP Onboard Administrator (OA) modules
• Redundant pair of HP VC FlexFabric 10Gb/24-Port modules


HP BladeSystem Matrix Flex-10 enclosures include the following:
• HP BladeSystem c7000 Enclosure with power and redundant OA modules
• Redundant pair of HP VC Flex-10 10Gb Ethernet modules
• Redundant pair of HP VC 8Gb 24-Port FC modules

The following components are included by default, but can be deselected:
• HP 10000 G2 series rack
• HP ProLiant DL360 G7 server functioning as a Central Management Server

The following figure illustrates a basic HP BladeSystem Matrix configuration. Many components displayed in the diagram are discussed in detail in this guide and are carried through to the HP BladeSystem Matrix Setup and Installation Guide. The examples in this document are based on this sample configuration. Additional detailed application examples are located in Appendix A, Dynamic infrastructure provisioning with HP BladeSystem Matrix (page 54). For an Insight Recovery implementation, these steps are required for the HP BladeSystem Matrix configurations at both the primary and recovery sites.

Figure 2 Basic HP BladeSystem Matrix infrastructure

The physical infrastructure provided by the customer's data center includes power, cooling, and floor space.

Management infrastructure


The management infrastructure as provided by HP BladeSystem Matrix consists of the following components:
• HP Insight Software Advisor
• HP Insight Dynamics
  • HP Insight Dynamics capacity planning, configuration, and workload management
  • IO
  • HP Insight Recovery (HP IR) (setup requires an additional per-event service)
• HP Insight Control
  • HP Insight Control performance management
  • HP Insight Control power management
  • HP Insight Control virtual machine management
  • HP Insight Control server migration
  • HP Insight Control server deployment
  • HP Insight Control licensing and reports
  • HP iLO Advanced for BladeSystem
• HP Virtual Connect Enterprise Manager (HP VCEM) software
• HP Insight Remote Support Advanced (formerly Remote Support Pack)
• HP Systems Insight Manager (HP SIM)
• HP System Management Homepage (HP SMH)
• HP Version Control Repository Manager (HP VCRM)
• Windows Management Instrumentation (WMI) Mapper
• HP Insight managed system setup wizard

Optional management infrastructure, which can integrate with HP BladeSystem Matrix, includes the following components (discussed throughout this guide):
• Insight Control for Microsoft System Center (additional per-event service required)
• Insight Control for VMware vCenter Server (additional per-event service required)
• HP Server Automation software (customer-provided)
• HP Ignite-UX software (customer-provided)
• Microsoft System Center server (customer-provided)
• VMware vCenter server (customer-provided)

The customer-provided components also include network connectivity, SAN fabric, and network management services such as domain name system (DNS), dynamic host configuration protocol (DHCP), time source, and domain services. The HP BladeSystem Matrix management components integrate with the customer's existing management infrastructure. The factory integration and integration services are described in the HP BladeSystem Matrix QuickSpecs.

HP BladeSystem Matrix components


The following components are available when ordering an HP BladeSystem Matrix infrastructure:
• Four or more blade servers, which form the server pools
• One or more CMS servers to host the management services for the environment

• Starter Kits, which contain the infrastructure needed for a fully working environment when populated with additional server blades
• Expansion Kits, which extend the HP BladeSystem Matrix with additional enclosures, infrastructure, and blades
• HP BladeSystem Matrix enclosure licenses
• Rack infrastructure
• Power infrastructure
• FC SAN storage
• iSCSI SAN storage (optional)
• Switches, transceivers, and signal cables
• Other licenses to enable the HP BladeSystem Matrix environment

For all HP BladeSystem Matrix components and support options, see the HP BladeSystem Matrix QuickSpecs. Additional components such as FC SAN switches and network switches might be required to integrate the HP BladeSystem Matrix solution with the customer's existing infrastructure and can be included with the HP BladeSystem Matrix order.

Table 2 HP BladeSystem Matrix components
Choose blades
• Blades configured to order: Fill Starter and Expansion Kits to capacity; these blades will form your server resource pool for HP BladeSystem Matrix. See the Compatibility Chart for supported blade hardware.
• In HP BladeSystem Matrix Flex-10 Starter or Expansion Kits, all blades require a host bus adapter (HBA) mezzanine card. When ProLiant G6 or Integrity i2 blades are integrated within HP BladeSystem Matrix FlexFabric Starter or Expansion Kits, a NIC FlexFabric Adapter is required for all blades in the enclosure. For solutions with all ProLiant G7 blades, the NIC FlexFabric Adapter LOM is embedded on the blade, so no additional modules or mezzanines are required. See HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines (page 80) for more information about these configuration options.

Choose 1 or more CMS servers
• DL360 G7 Matrix CMS Server: Default selection for CMS. Includes 10Gb NIC. Does not include SFPs or cables.
• BL460c G6 Matrix CMS Server: Selection for an all-blade solution.
• Alternate CMS Server: Right-sized per specific customer needs, ordered or customer provided. The alternative CMS host must meet all the CMS hardware requirements listed by the HP Insight Software 6.3 Support Matrix and within this document.

Choose 1 HP BladeSystem Matrix Starter Kit
Each Starter Kit includes an HP BladeSystem c7000 Enclosure with redundant OA modules, fully populated with 10 active cool fans and six 2400W power supplies (six C19/C20 single-phase power inputs available). HP BladeSystem Matrix licenses are required, but not included with Starter Kits (see "Select HP BladeSystem Matrix licenses" in this table).
• Flex-10 Starter Kit for Integrity with HP-UX: Redundant VC-Enet Flex-10 modules; redundant VC-FC 8Gb 24-port modules; 8 full-height blade bays available.
• Flex-10 Starter Kit for ProLiant: Redundant VC-Enet Flex-10 modules; redundant VC-FC 8Gb 24-port modules; 16 half-height server blade bays available.
• FlexFabric Starter Kit for ProLiant: Redundant VC-FlexFabric modules; 16 half-height blade bays available.

Choose 1 or more Expansion Kits to grow the HP BladeSystem Matrix
Each Expansion Kit includes an HP BladeSystem c7000 Enclosure with redundant OA modules, fully populated with 10 active cool fans and six 2400W power supplies (six C19/C20 single-phase power inputs available).
• Flex-10 Expansion Kit for Integrity: 8 full-height blade bays available; HP BladeSystem Matrix license not included.
• Flex-10 Expansion Kit for ProLiant: 16 half-height blade bays available; HP BladeSystem Matrix licenses included.
• FlexFabric Expansion Kit for ProLiant: 16 half-height blade bays available; HP BladeSystem Matrix licenses included.

Select HP BladeSystem Matrix licenses
HP BladeSystem Matrix licenses are either offered as a required order option, or included in the kit. Software license ordering requirements are outlined in the HP BladeSystem Matrix QuickSpecs. HP BladeSystem Matrix licenses for Integrity are required for both Starter Kits and Expansion Kits (minimum 8 licenses required). HP BladeSystem Matrix licenses for ProLiant must be purchased for the Starter Kit; no license purchase is needed for Expansion Kits (already included).
• HP BL Matrix SW 16-Svr 24x7 Supp Insight Software: One required for each ProLiant Starter Kit. This HP BladeSystem Matrix license is included with both ProLiant Expansion Kits.
• HP-UX 11i Matrix Blade 2Skt PSL LTU: Per-socket licenses for BL860c i2.
• HP-UX 11i Matrix Blade 4Skt PSL LTU: Per-socket licenses for BL870c i2.
• HP-UX 11i Matrix Blade 8Skt PSL LTU: Per-socket licenses for BL890c i2.
• HP VCEM BL7000 one enclosure license: Required for each HP BladeSystem Matrix with HP-UX Starter or Expansion Kit.

Choose 1 or more racks
• HP 10000 G2 racks
• Customer provided

Choose power infrastructure
Each HP BladeSystem Matrix enclosure requires six C19/C20 connections. A redundant power configuration is recommended (that is, order PDUs in pairs).
• HP PDUs: Monitored power distribution units (PDUs) are recommended for manageability and to reduce the number of power connections required per rack.
• Customer provided PDUs

Choose supported FC SAN storage
If the customer chooses to provide an existing array, the SAN array must be certified for HP BladeSystem c-Class servers (see the HP StorageWorks and BladeSystem c-Class Support Matrix). SAN storage must be qualified with the VC-FC or VC-FlexFabric modules by the storage vendor (see SPOCK for qualified SAN storage).
• HP 3PAR F-Class and T-Class storage systems: At this time, HP 3PAR storage systems can be purchased individually, on a separate order, installed in a separate rack. A single 3PAR system may consist of multiple cabinets.
• HP StorageWorks EVA: EVAs may be ordered in an HP BladeSystem Matrix rack, or in separate racks for better expandability.
• HP StorageWorks XP Array: Ordered in a separate rack.
• Other HP StorageWorks FC storage
• Customer provided third-party FC storage

(Optional) Add supported iSCSI SAN storage
Supported in HP BladeSystem Matrix as a backing store for VM guests. See the HP BladeSystem Matrix QuickSpecs and HP BladeSystem Matrix Compatibility Chart for recommendations and requirements.
• HP StorageWorks P4300 G2 7.2TB SAS Starter SAN Solution: Order up to 8 of these to build a 16-node cluster.
• HP StorageWorks P4500 G2 10.8TB SAS Virtualization SAN Solution: Add the 10Gb NIC option for high-bandwidth storage applications.
• Other HP StorageWorks iSCSI solutions
• Customer provided third-party iSCSI storage

Add switches, transceivers, and signal cables
See the HP BladeSystem Matrix QuickSpecs and HP BladeSystem Matrix Compatibility Chart for recommendations and requirements.
• Configured to order: Ethernet switches and FC SAN switches are required to complete the solution. Transceivers and signal cables are required for uplinks to switches. The number and type of uplinks for Ethernet, SAN, and VC stacking may be determined upon completion of this document. Consult the QuickSpecs of individual components for compatible transceiver or cable choices.
• Customer provided: FC SAN switches must support NPIV.

Other licenses to enable the HP BladeSystem Matrix environment
• Storage licenses: Purchase requirements depend on the choice of storage. Examples: HP StorageWorks XP Command View Advanced Edition (if an HP XP array is ordered, although the Remote Web Console can be used alternatively); HP Command View EVA License To Use to host boot and data LUNs (if an HP EVA is purchased).
• Hypervisor licenses: Refer to the QuickSpecs for order options. Examples: VMware licenses; Hyper-V licenses.

Customer responsibilities

The customer can select and configure multiple physical Integrity or ProLiant server blades and additional HP BladeSystem Matrix Expansion Kits. If the default HP ProLiant DL360 G7 management server is not selected, the customer is required to provide a compatible ProLiant server to function as the CMS. The customer also provides connectivity to the HP BladeSystem Matrix infrastructure. The number and type of LAN connections is determined during the network planning phase of this document.


IMPORTANT: Be sure that FC SAN SFP+ transceivers are used for FC SAN uplinks and Ethernet SFP/SFP+ transceivers are used for Ethernet uplinks. VC Flex-10 modules only support Ethernet uplinks, and VC FC modules only support FC SAN uplinks.

IMPORTANT: VC FlexFabric modules have dual-personality faceplate ports; only ports 1 through 4 may be used as FC SAN uplinks (4Gb/8Gb). Additionally, although all VC FlexFabric ports support 10Gb Ethernet uplinks, only ports 5 through 8 support both 1Gb and 10Gb Ethernet uplinks. Using the wrong port or SFP/SFP+ transceiver for any uplink results in an invalid and unsupported configuration.

IMPORTANT: Two additional VC FlexFabric interconnect modules must be purchased when a NIC FlexFabric Adapter mezzanine card is purchased for each blade. This includes any ProLiant G6 or Integrity i2 configuration.

When the optional StorageWorks EVA4400 Array is ordered, two embedded FC SAN switches provide connectivity from HP BladeSystem Matrix enclosures to the array. If the EVA is not included with the Starter Kit, the customer must provide connectivity to a compatible FC SAN array. Customer-supplied FC switches to the external SAN must support boot from SAN and NPIV functionality. Refer to the HP website (http://www.hp.com/storage/spock) for a list of switches and storage that are supported by VC FC. Registration is required. After logging in, go to the left navigation and click Other Hardware > Virtual Connect. Then click the module applicable to the customer's solution:
• HP Virtual Connect FlexFabric 10Gb/24-port Module for c-Class BladeSystem
• HP Virtual Connect 8Gb 24-Port Fibre Channel Module for c-Class BladeSystem
• HP Virtual Connect 4Gb/8Gb 20-Port Fibre Channel Module for c-Class BladeSystem
NOTE: This module is not in a Starter or Expansion Kit. Matrix conversion services are required.

Provisioning of suitable computer room space, power, and cooling is based on specifications described in the HP BladeSystem Matrix QuickSpecs. When hardware is to be installed in customer-provided racks, the customer must order hardware integration services. If the customer elects not to order these services, hardware installation must be done properly prior to any HP BladeSystem Matrix implementation services.

When implementing Insight Recovery, two data center sites are used: a primary site that is used for production operations and a recovery site that is used in the event of a planned or unplanned outage at the primary site. Each site contains a complete HP BladeSystem Matrix configuration with an intersite link that connects the sites. Protecting data at the primary site is accomplished by using data replication to the recovery site. Network and data replication requirements for implementing Insight Recovery are described in Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.

Using this document to formulate a plan early on is an essential part of the order process for HP BladeSystem Matrix.

IMPORTANT: Each secondary Matrix CMS in a federated environment requires the purchase of a Matrix Starter Kit and corresponding services, just as with the primary Matrix CMS implementation. The following chapter covers the planning considerations of a federated CMS in further detail.
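The uplink port rules in the IMPORTANT notes above lend themselves to a simple pre-order sanity check. The following Python sketch encodes those rules for illustration only; the function and its call style are invented here and are not part of any HP tool:

# A sketch encoding the VC FlexFabric faceplate-port rules from the
# IMPORTANT notes above: only ports 1-4 may carry FC SAN uplinks (4Gb/8Gb);
# all ports support 10Gb Ethernet uplinks, but only ports 5-8 also support 1Gb.
def valid_uplink(port: int, kind: str, speed: str) -> bool:
    if kind == "fc":
        return port in range(1, 5) and speed in ("4Gb", "8Gb")
    if kind == "ethernet":
        if speed == "10Gb":
            return port in range(1, 9)
        if speed == "1Gb":
            return port in range(5, 9)
    return False

assert valid_uplink(2, "fc", "8Gb")            # FC on ports 1-4: supported
assert not valid_uplink(5, "fc", "8Gb")        # FC on ports 5-8: invalid configuration
assert valid_uplink(3, "ethernet", "10Gb")     # 10Gb Ethernet on any port
assert not valid_uplink(3, "ethernet", "1Gb")  # 1Gb Ethernet only on ports 5-8
print("all port-rule checks passed")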


2 HP BladeSystem Matrix services planning


Servers and services to be deployed in HP BladeSystem Matrix
Begin planning the HP BladeSystem Matrix configuration and implementation by analyzing your application services and their infrastructure requirements. Application services can consist of simple or multi-tier, multi-node physical and virtual servers with their associated operating systems, storage, and network requirements. For example, a two-tier database service can consist of an application tier that includes two to four virtual machines, while the database tier consists of one or two physical server blades. Management services can include the monitoring, provisioning, and control of application services using such components as Insight Dynamics, server deployment, and VMware vCenter server.

Server planning required for the HP BladeSystem Matrix Installation and Startup Service
Plan management servers to be installed and configured as follows:
• Management servers hosting the following services:
  • Insight Software CMS
  • Insight Control server deployment, for environments with ProLiant blade servers
  • HP Ignite-UX (pre-existing), for environments with HP-UX and Integrity blade servers
  • SQL Server (or can be installed in a customer-provided SQL server farm)
  • Required storage management software: HP Command View Enterprise Virtual Array (EVA), XP Command View Advanced Edition, or other storage management software as required

• Hypervisor hosts A and B (Integrity VM, Microsoft Hyper-V, VMware ESX, ESXi)
• Windows, Linux, or HP-UX operating system for a newly created logical server
• Unused server as the target for a logical server move operation demonstration
• (Optional) Servers allocated as IO automated deployment targets

When implementing Insight Recovery, a similar plan is required for the recovery site. When implementing a federated CMS, the first CMS installed becomes the primary CMS. Any subsequent CMS that is installed and joined with the federation is called a secondary CMS. A federated CMS may consist of up to five CMS servers (one primary and four secondary). The Insight Orchestration software is installed only on the primary CMS. Each secondary CMS contains the full Insight Software stack, except for Insight Orchestration.

IMPORTANT: Each secondary Matrix CMS of a federated CMS requires purchase of a Matrix Starter Kit and corresponding services, just as with the primary Matrix CMS implementation.

Application services
This section outlines the type of information you need when planning application services deployed on HP BladeSystem Matrix. These services may be deployed as logical servers or automatically provisioned by the infrastructure orchestration capabilities of Insight Dynamics.


The following defines the information to collect when describing HP BladeSystem Matrix application services:
• Service name: a label used to identify the application or management service
• Optionally, one or more tiers of a multi-tiered application
• The server name on which the application or management service is hosted
• Host type and configuration:
  • Physical blades: server model (for example, BL870c i2); processor and memory requirements
  • Virtual machines: hypervisor (ESX, Hyper-V, HP VM); processor and memory requirements
• Software and OS requirements:
  • List of applications or management services running on the server
  • Operating system types: Windows Server; Red Hat Enterprise Linux; SUSE Linux Enterprise Server; HP-UX
  • Hypervisor OS: VMware ESX; Hyper-V on Windows Server 2008; HP Integrity VM on HP-UX

• SAN storage and fabric:
  • Boot from SAN, required for directly deployed physical servers
  • Boot from SAN, recommended for VM hosts
  • FC or iSCSI SAN, required for VM guest backing store
  • LUN size and RAID level
  • Remote storage for recovery
• Network connectivity:
  • Connectivity to the corporate network
  • Private network requirements, for example, VMware service console and VMotion networks
  • Bandwidth requirements

The application services examples used in this document are based on use cases described in Exploring the Technology behind Key Use Cases for HP Insight Dynamics for ProLiant servers. For details on how the HP BladeSystem Matrix infrastructure solution can be used to provision a dynamic test and development infrastructure using logical servers or IO templates, see the examples in Appendix A, Dynamic infrastructure provisioning with HP BladeSystem Matrix (page 54).


For Insight Recovery implementations, discuss Insight Recovery's DR capabilities with the customer and determine the VC-hosted physical blades and/or VM-hosted logical servers that the customer wants Insight Recovery to protect. These logical servers are known as DR-protected logical servers. In addition, sufficient compute resources (physical blades and VM hosts) must be available at the recovery site for a successful Insight Recovery failover. See Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for more information. Some customers may not yet be able to articulate the specific details of their failover requirements. In this case, HP recommends that several of the logical servers created as part of the HP BladeSystem Matrix SIG implementation be used as DR-protected logical servers to demonstrate an HP IR configuration and its failover functionality.

Planning Step 1a: Define application services


Use the following template to list the services to be deployed by the HP BladeSystem Matrix infrastructure. If the management service will be hosted by HP BladeSystem Matrix, make sure to include the Management Service description previously provided.

Table 3 Application services in the HP BladeSystem Matrix environment

Service: (service name)
• (tier #1 of service): Host configuration: (server), (server type); Software: (installed software); Storage requirements: (SAN requirements); Network requirements: (LAN requirements)
• (tier #2 of service): Host configuration: (server), (server type); Software: (installed software); Storage requirements: (SAN requirements); Network requirements: (LAN requirements)

IMPORTANT: HP BladeSystem Matrix requires boot from SAN for directly deployed physical servers and recommends it for VM hosts.
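During planning it can help to capture each Table 3 row in a small, machine-readable structure so the worksheet can be checked programmatically (for example, verifying that every physical tier specifies boot from SAN). The following Python sketch is a minimal illustration; the class names, fields, and example service are invented for this guide and are not part of any HP tool:

# A minimal sketch of the Table 3 template as a data structure;
# names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tier:
    name: str            # e.g. "tier #1 of service"
    server: str          # server name
    host_type: str       # server type: physical blade or VM
    software: List[str]  # installed software
    storage: str         # SAN requirements (boot from SAN, LUN size, RAID)
    networks: List[str]  # LAN requirements

@dataclass
class ApplicationService:
    name: str
    tiers: List[Tier] = field(default_factory=list)

# Hypothetical entry for a two-tier database service like the one described earlier:
db_service = ApplicationService(
    name="payroll-db",
    tiers=[
        Tier("app tier", "app01", "ESX virtual machine",
             ["Windows Server"], "VM guest backing store on FC SAN",
             ["Production", "Management"]),
        Tier("database tier", "db01", "BL870c i2 physical blade",
             ["HP-UX", "database"], "boot from SAN; data LUNs RAID 1",
             ["Production", "Management"]),
    ],
)
print(db_service.name, "-", len(db_service.tiers), "tiers")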

Management services
The HP BladeSystem Matrix solution requires an Insight Software management environment. This environment consists of a CMS running Insight Software, a deployment server, storage management (for example, HP Command View EVA), and a SQL server. This environment may also include separate customer-provided servers for the optional management infrastructure mentioned previously; separate servers are discussed in the following sections.

Planning the Insight Software CMS


If you have not already performed detailed planning for the CMS, download and run the HP Systems Insight Manager Sizer, currently found online at HP ActiveAnswers (an approximately 40 MB zip file that contains a Windows setup.exe). The sizer does not include all the Insight Software being installed in this example; additional disk space requirements are given later in this section. Additional CMS planning information is available in the HP Insight Software SIM information library: http://h18004.www1.hp.com/products/servers/management/unified/infolibraryis.html


NOTE: When planning a federated CMS, the plan for the primary and each secondary CMS must include exclusion ranges in its VCEM instance to remove overlap between all the current and planned instances of VCEM residing in the same data center.

NOTE: If you are considering configuring the CMS in a high availability cluster, either now or in the future, the CMS must be configured within a Windows domain and not as a standalone workgroup. HP does not currently support data migration of a CMS from a workgroup to a Windows domain.

Server hardware

Table 4 Confirm the CMS meets the minimum hardware requirements
• Server: HP ProLiant BladeSystem c-Class server blades (G6 or higher series server is recommended), or an HP ProLiant ML300, DL300, DL500, or DL700 (G3 or higher series server is recommended)
• Memory: 12GB for 32-bit Windows management servers (deprecated); 32GB for 64-bit Windows management servers, appropriate for maximum scalability (see below)
• Processor: 2 processors, dual core (2.4 GHz or faster recommended)
• Disk space: 150GB disk space is recommended. If usage details are known in advance, a better estimate may be obtained from the disk requirements section below.
• File structure: New Technology File System (NTFS)
• DVD drive: Local or virtual/mapped DVD drive required

There are several commonly used choices for installing and configuring a CMS with the HP BladeSystem Matrix:
• CMS on a rack-mounted ProLiant DL or ML server
• CMS on a ProLiant server blade
• CMS running from mirrored local disks
• CMS running from a SAN-based disk image (boot from SAN)
• A federated CMS, consisting of a primary CMS and one to four secondary CMSs

Each of these has benefits and tradeoffs. When choosing between a server blade and a racked server configuration, consider the environment's purpose. When choosing to implement the CMS as a server blade, keep in mind that an improper change to the VC-Ethernet network, server profile, or SAN network definitions can render the CMS on a blade unable to manage any other device, including the OA or VC modules. Well-defined processes for management and maintenance operations can mitigate this risk. When hosting the HP BladeSystem Matrix CMS within an HP BladeSystem Matrix enclosure, exercise greater care when accessing VCEM or the VC modules.

When choosing the storage medium for the CMS, the default choice is to run the CMS from a SAN-based disk image. In environments where SAN availability may not be guaranteed (or uniform), it may be preferable to install a fully functional CMS on mirrored local disks. However, this limits the choices, process, and time for recovery in the event of a hardware failure or planned maintenance.


NOTE: If this server is deselected, the customer must supply or order another server that meets the requirements for CMS.

Considerations when a CMS is not a server blade


When a server other than a server blade in HP BladeSystem Matrix is used as the CMS, consider the following requirements in addition to the requirements listed in the HP Insight Dynamics Installation and Configuration Guide.

Networking connections
The CMS must connect to multiple networks, which are common with those defined inside the HP BladeSystem Matrix environment. In the default configuration for HP BladeSystem Matrix, these networks are named:
• Management
• Production

If the CMS is also the deployment server for HP BladeSystem Matrix, the server must also connect to:
• Deployment

If vCenter is not running, the VMotion networks do not need to be brought into the CMS in either the BL or external server case. Also ensure that the server has adequate physical ports and is configured for virtual local area networks (VLANs) for any other networks to be used with HP BladeSystem Matrix. When implementing Insight Recovery, the CMSs at the primary and recovery sites must be accessible to each other using a fully qualified domain name (FQDN).

SAN connections
In configurations where the CMS is either booted from SAN or also running storage software, the server requires necessary SAN HBAs and connectivity into the HP BladeSystem Matrix SAN.

Disk requirements
See the HP Insight Software Support Matrix, which shows several different supported combinations of HP SIM, Insight Control server deployment (RDP), and their databases. In addition to the disk space required for the CMS operating system, the requirements for Insight Software are summarized here for planning purposes:
• 20GB for installation of Windows Server 2008 R2 Enterprise Edition (recommended CMS operating system)
• 20GB for installation or upgrade of HP Insight Software
• 8GB for OS temp space
• 4GB for each OS to deploy; this additional storage must be accessible to the Insight Control server deployment software
• 65MB per workload on Windows or Linux managed systems, or 35MB per workload on HP-UX managed systems; these allotments are for collecting and preserving a maximum of four years of data for use by Insight Capacity Advisor
• 4GB (CMS DB) per 100 workloads to preserve historical data for Insight Global Workload Manager
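Because these allotments are simple arithmetic, a rough estimator is easy to write. The following Python sketch applies the figures above; the example counts (OS images and workloads) are placeholders, and this is a planning aid only, not an HP sizing tool (use the HP SIM Sizer described below for an authoritative estimate):

# A rough CMS disk-space estimator based on the allotments listed above.
# All figures in GB; workload history allotments are 65MB and 35MB per workload.
def cms_disk_estimate_gb(os_images: int, win_linux_workloads: int,
                         hpux_workloads: int) -> float:
    base = 20 + 20 + 8                        # Windows Server 2008 R2 + Insight Software + OS temp
    deploy = 4 * os_images                    # 4GB for each OS to deploy
    capacity_data = (0.065 * win_linux_workloads +
                     0.035 * hpux_workloads)  # Capacity Advisor history
    workloads = win_linux_workloads + hpux_workloads
    gwlm_db = 4 * (workloads / 100)           # 4GB CMS DB per 100 workloads
    return base + deploy + capacity_data + gwlm_db

# Example: 5 OS images, 200 Windows/Linux workloads, 50 HP-UX workloads
print(f"{cms_disk_estimate_gb(5, 200, 50):.0f} GB")  # ~93 GB, under the 150GB recommendation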

The HP SIM Sizer can help estimate the long-term disk space requirements for logging events and other historic data based on your number of managed nodes and retention plans.


Ignite-UX server
Ignite-UX is required for all HP BladeSystem Matrix with HP-UX installations.

Considerations for a federated CMS


In IO, scalability can be increased through a federated CMS configuration, which contains one primary CMS with the full HP Insight Software installation and up to four secondary CMSs with Insight Software but without IO. IO provisioning is managed through the primary CMS and executed across all CMSs in the federated CMS environment.

In a federated CMS configuration, DNS lookups of participating CMSs are required for successful IO operation. DNS is used to resolve CMS hostnames to IP addresses. On the primary CMS, forward and reverse DNS lookups must work for each secondary CMS. DNS lookups must be resolved using the FQDN of each system.

In a federated CMS configuration, primary and secondary CMSs share the same deployment servers, such as the Insight Control deployment server and Ignite-UX server. Deployment servers should be registered in the primary CMS, and they must each have their own deployment network that the physical blade servers can access for enabling physical and virtual deployment. Registering the deployment server on the primary CMS requires network access between these servers (via the deployment or management LAN).

Creating a federated CMS configuration can always be achieved for new installations, and sometimes can be achieved for upgrade scenarios. New installations (6.3 or later) are always in federated mode, so you may add a secondary CMS provided that exclusion ranges are configured appropriately in VCEM on the primary and new secondary CMS. When upgrading from a prior version to 6.3 or later, the CMS will not be in federated mode. If this existing CMS has IO installed, then upgrading to a primary CMS requires ATC engagement to preserve IO services and templates. An existing CMS could also become a secondary CMS, but the IO services will be lost, because IO must be uninstalled first.

Table 5 (page 19) outlines supported configurations of a federated CMS with associated management software. See Figure 5 (page 26) for an illustrated example configuration of a federated CMS.

NOTE: The configuration of VMM templates takes place on the CMS that manages the Hyper-V hosts.
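The forward and reverse DNS requirement above can be verified from the primary CMS before installation. A minimal Python sketch, assuming hypothetical CMS hostnames (replace them with the real FQDNs in your environment):

# Sanity check for the federated-CMS DNS requirement: forward and reverse
# lookups must work for every CMS FQDN. Hostnames below are placeholders.
import socket

cms_fqdns = ["cms-primary.example.com", "cms-sec1.example.com"]

for fqdn in cms_fqdns:
    try:
        ip = socket.gethostbyname(fqdn)        # forward lookup (FQDN -> IP)
        name, _, _ = socket.gethostbyaddr(ip)  # reverse lookup (IP -> FQDN)
        status = "OK" if name.lower() == fqdn.lower() else f"MISMATCH ({name})"
    except socket.error as err:
        status = f"FAILED ({err})"
    print(f"{fqdn}: {status}")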

Table 5 Supported management software with a federated CMS

Management software | Supported for a federated CMS? | Single instance, shared between primary and secondary CMSs | Multiple instances, one per CMS
HP Server Automation | Yes | Yes | Yes (1)
Ignite-UX Server | Yes | Yes | Yes (1)
HP Insight Control server deployment (RDP) | Yes | Yes | Yes (1)
vCenter Server | Yes | Yes | Yes
CommandView Server | Yes | Yes | Yes (2)
HP Insight Orchestration | Yes | Yes | No
HP Insight Control (except RDP) | Yes | No | Yes
HP Insight Control for Microsoft System Center | Yes | No | Yes
HP Insight Control for VMware vCenter Server | Yes | No | Yes
HP Insight Foundation | Yes | No | Yes
HP Insight Dynamics capacity planning, configuration, and workload management | Yes | No | Yes
HP VCEM | Yes | No | Yes (2)
Microsoft SQL Server (CMS database) | Yes | No | Yes
Microsoft System Center | Yes | No | Yes
HP Insight Recovery | No | N/A | N/A
HP Cloud Service Automation (CSA) | No | N/A | N/A

(1) The primary CMS must have access to all deployment servers in a federated CMS configuration.
(2) Multiple VCEM instances co-exist in a single data center with federated CMS configurations; there is one instance for each primary and secondary CMS. When these instances share CommandView and/or networks, it is critical to avoid any media access control (MAC) and worldwide name (WWN) conflicts by configuring exclusion ranges for each instance of VCEM.
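Footnote (2) describes a constraint that is easy to check mechanically once each CMS's exclusion range is known. The following Python sketch shows the idea with invented MAC ranges (the same check applies to WWN ranges); it is illustrative only and not a VCEM feature:

# Checks VCEM exclusion ranges for overlap, assuming each CMS's MAC/WWN
# range is expressed as an inclusive (start, end) pair of integers,
# e.g. int("00-17-A4-77-00-00".replace("-", ""), 16). Ranges are invented.
def overlaps(a, b):
    """True if two inclusive (start, end) ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

ranges = {
    "primary":    (0x0017A477000000, 0x0017A4770FFFFF),
    "secondary1": (0x0017A477100000, 0x0017A4771FFFFF),
}

cms_names = list(ranges)
for i, a in enumerate(cms_names):
    for b in cms_names[i + 1:]:
        if overlaps(ranges[a], ranges[b]):
            print(f"CONFLICT: {a} and {b} exclusion ranges overlap")
        else:
            print(f"OK: {a} and {b} do not overlap")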

Additional management servers


• If you plan to use HP StorageWorks XP Command View Advanced Edition, a separate storage management server must be allocated for the XP CV AE software.
• When implementing Insight Recovery, there must be separate storage management servers at each site to manage the local array storage (EVA or XP). See Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for more information.
• In environments where the number of managed nodes and virtual machines is large, HP recommends a separate database server to host the CMS information.
• VMware vCenter Server must be provided and managed by the customer on a separate server if the customer is managing VMware ESX hosts in the HP BladeSystem Matrix environment. Insight Control for VMware vCenter Server should not be installed on the CMS.
• HP Server Automation or HP Ignite-UX must be provided and managed on separate servers if the customer is using either of these software technologies for HP BladeSystem Matrix deployments. HP BladeSystem Matrix is capable of performing operating system deployment, operating system customization, and application deployment through HP Server Automation. To plan for integration of HP Server Automation with HP BladeSystem Matrix, become familiar with the instructions detailed in Integrating HP Server Automation with HP BladeSystem Matrix/Insight Dynamics.
• Microsoft System Center must be provided on separate servers if the customer desires to use this software technology as an additional management console for servers in an HP BladeSystem Matrix environment. If used, Insight Control for Microsoft System Center is installed on the separate servers.

Management server scenarios


When planning for this environment, take into consideration the purpose of HP BladeSystem Matrix deployment and current and future growth. The following scenarios assist in determining the configuration of the management environment.

Limited environment: Demo, evaluation, testing, or POC


Enclosures
• 1 to 2 enclosures
• Mix of up to 250 physical and virtual servers

Management server
• DL360 G7 with 2 processors and 32GB memory
• Windows Server 2008 R2 Enterprise Edition
• Insight Software
• Insight Control server deployment
• SQL Express 2005 or 2008 (installed by Insight Software); SQL Express is not recommended for medium or large environments
• Storage management software, for example HP Command View EVA (can be installed on a separate server if required by the customer)

Network connections
• Production LAN (uplinked to data center)
• Management LAN (uplinked to data center)
• Deployment LAN (uplinked to data center)

For an illustration of a limited HP BladeSystem Matrix infrastructure as described above, see Figure 2 (page 8) in the overview chapter.

ProLiant standard environment


Enclosures
• 1 to 4 enclosures
• Up to 70 VM hosts. A VM host is a system with a hypervisor installed on it to host virtual machines; a host machine can host more than one virtual machine.
• Any size environment, physical and virtual, up to the HP BladeSystem Matrix scalability limits of a non-federated CMS. This limit is 1,500 logical servers (ProLiant nodes, virtual and physical) when using 64-bit Windows (the 32-bit CMS has been deprecated).

Management servers

Server 1
• DL360 G7 with 2 processors and 32GB memory
• Windows Server 2008 R2 Enterprise Edition
• Insight Software
• Insight Control server deployment

Server 2
• DL360 G7 with 2 processors and 32GB memory
• Windows Server 2008 R2 Enterprise Edition
• SQL Server 2005 (or can be installed in a separate SQL server farm)
• Storage management software (may also be installed on a separate server)


Network connections
• Production LAN, Management Servers #1, #2
• Management LAN, Management Servers #1, #2
• Deployment LAN, Management Server #1 only

Figure 3 HP BladeSystem Matrix infrastructure configured with ProLiant managed nodes

Integrity standard environment


Enclosures
• 1 to 4 enclosures
• Any size environment, physical and virtual, up to the HP BladeSystem Matrix scalability limits of a non-federated CMS. This limit is 800 logical servers (count of HP-UX nodes, virtual and physical).


Management servers
• Server 1: Insight Software
• Server 2: SQL Server 2005 (can be installed in a separate SQL server farm)
• Server 3: HP Ignite-UX server
• Server 4: Storage management software (can be combined with Server 2 if required)

Network connections
• Production LAN, Management Servers #1, #2
• Management LAN, Management Servers #1, #2, #3, #4
• Deployment LAN, Management Server #3 only
• SAN A and B: Management Server #4 and each Starter and Expansion Kit


Figure 4 HP BladeSystem Matrix infrastructure configured with Integrity managed nodes

Federated environment: positioning for growth


Enclosures
• Any size environment, physical and virtual, up to the HP BladeSystem Matrix scalability limits of a federated CMS and positioned for additional growth. Each CMS's resource pool starts with a BladeSystem Matrix Starter Kit and is expanded with BladeSystem Matrix Expansion Kits. All infrastructure, management servers, and resource pools of the federation must be collocated in the same data center.
• Limits when all logical servers are ProLiant managed nodes:
  • 1 primary CMS and up to 4 secondary CMSs
  • 1,500 nodes for each secondary CMS


  • 1,000 nodes for the primary CMS
  • 6,000 nodes maximum across primary and secondary CMS resource pools
• Limits when all logical servers are Integrity managed nodes:
  • 1 primary CMS and up to 4 secondary CMSs
  • 800 nodes for each secondary CMS
  • 600 nodes for the primary CMS
  • 3,200 nodes maximum across primary and secondary CMS resource pools
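As a quick arithmetic check of the ProLiant limits above: one primary CMS at 1,000 nodes plus four secondary CMSs at 1,500 nodes each would sum to 7,000 nodes, so the 6,000-node aggregate cap is the binding constraint in a fully built-out federation. A small Python sketch using only the constants stated above (the helper function is illustrative, not an HP tool):

# Federated-CMS capacity for ProLiant managed nodes, per the limits above.
PRIMARY_MAX, SECONDARY_MAX, AGGREGATE_MAX = 1000, 1500, 6000

def federation_capacity(secondaries: int) -> int:
    per_cms_total = PRIMARY_MAX + secondaries * SECONDARY_MAX
    return min(per_cms_total, AGGREGATE_MAX)  # aggregate cap may bind first

for n in range(5):
    print(f"{n} secondary CMS(s): up to {federation_capacity(n)} nodes")
# With 4 secondaries, 1000 + 4*1500 = 7000, capped at 6000 by the aggregate limit.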

Management servers
• Server 1 (primary CMS): Insight Software
• Servers 2 through 5 (secondary CMSs): Insight Software, excluding Insight Orchestration
• Servers 6 through 10 (SQL servers): SQL Server 2005
• Server 11 (deployment server): Ignite-UX, Server Automation, or Insight Control server deployment
• Server 12 (deployment server): additional deployment server (optional)
• Server 13 (storage management server): storage management software, for example HP CommandView EVA or XP edition (can be combined with another server only for the EVA edition; the XP edition must be installed on a separate server)
• Server 14 (storage management server): other/additional storage management software

Network connections
• Production LAN: Management Servers #1 through #10
• Management LAN: Management Servers #1 through #14
• Deployment LAN: Management Servers #11 and #12
• SAN A and B: Management Server #13 and the primary CMS's virtual connect domain group (VCDG)
• SAN C and D: Management Server #14 and some secondary CMSs' VCDGs


NOTE: SAN switch infrastructure and storage management servers may be shared across CMS boundaries only if VCEM exclusion ranges are configured so that each CMS has a non-overlapping range of WWNs. An example of this is SAN C and D, illustrated in Figure 5.

NOTE: When VCDGs share any networks but are managed in resource pools of more than one CMS (as shown in Figure 5), VCEM exclusion ranges are mandatory to prevent overlap of MAC addresses.

Figure 5 HP BladeSystem Matrix infrastructure configured with a federated CMS


Planning Step 1b: Determine management servers


When deploying the BladeSystem management environment, the Insight Software components are placed on the same ProLiant server along with infrastructure management tools. This configuration is outlined in the HP BladeSystem Matrix Setup and Installation Guide. The example table below shows management services implemented using the following configuration choices. Note that most environments will not require all of the servers and services shown here. See Optional Management Services integration notes (page 76) for more information.
• All Insight Software components that make up the management environment reside on the same physical blade.
• The optional StorageWorks EVA4400 is included with HP BladeSystem Matrix and is managed through HP Command View EVA hosted by the Management Service.
• The production network carries application data traffic and is connected to the data center.
• The management network provides operating system control and is connected to the data center.
• The deployment LAN is used by the Insight Control server deployment server exclusively to respond to PXE boot requests and perform automated operating system installation. Other deployment technologies require a separate deployment network.


Table 6 Example management services for the HP BladeSystem Matrix environment


Matrix CMS #1
• Host configuration: Physical DL360 G7; 2 processors; 32GB memory
• Software: Windows Server 2008 SP2 (64-bit); Insight Software; Insight Control server deployment; HP Command View EVA; SQL Server (installed by Insight Software)
• Storage requirements: Boot from SAN
• Network requirements: Production, Management, Deployment

Ignite-UX Server
• Host configuration: Provided by customer; physical (rack mount); Itanium-based
• Software: HP-UX 11i V3; HP Ignite-UX; Integrity OVMM
• Network requirements: Production, Management, Deployment

SA primary core
• Host configuration: Provided by customer; physical (rack mount)
• Software: HP Server Automation software
• Network requirements: Production, Management, Deployment

MSC #1
• Host configuration: Provided by customer; physical (rack mount)
• Software: W2K3 R2; Microsoft System Center Configuration Manager; Insight Control for MSC (CM integration modules)
• Network requirements: Production, Management

MSC #2
• Host configuration: Provided by customer; physical (rack mount)
• Software: W2K8 R2; MSC Operations Manager; MSC VM Manager; Insight Control for MSC (OM & VMM modules)
• Network requirements: Production, Management

vCenter Server
• Host configuration: Provided by customer
• Software: W2K8 R2; VMware vCenter software; Insight Control for VMware vCenter Server
• Network requirements: Production, Management, VMotion

IMPORTANT: When multiple OS deployment technologies (Ignite-UX, Insight Control server deployment, Server Automation) are planned for an HP BladeSystem Matrix installation, a unique and separate deployment LAN must exist for each deployment server.

IMPORTANT: Insight Software, SQL Server, and at least one deployment technology are present in all HP BladeSystem Matrix implementations. Storage software for FC storage (such as Command View EVA) is also required in HP BladeSystem Matrix implementations. Any other services must run on customer-provided servers. Separate installation services may be ordered with HP BladeSystem Matrix implementations to deliver Insight Control for Microsoft System Center and/or Insight Control for VMware vCenter Server. See the Appendix for additional integration details.


3 HP BladeSystem Matrix customer facility planning


Customer facility planning is not just about floor space, power, and cooling. It is the physical realization of all the services, networking and storage that combine to form an HP BladeSystem Matrix solution. A good facility plan contains known requirements balanced with consideration of future requirements.

Racks and enclosures planning


In this section, various infrastructure services are identified to enable HP BladeSystem Matrix implementation. If the service exists in the current customer environment, note the server name, IP or other relevant parameters adjacent to the infrastructure service.

Planning Step 2a: HP BladeSystem Matrix rack and enclosure parameters


Complete the following template identifying basic information about the racks and enclosures in this order. Be sure to include the choice of enclosure implemented (Matrix Flex-10, Matrix FlexFabric, or Matrix with HP-UX).

Table 7 Racks and enclosures plan

Matrix rack #1
• Rack Model: (value)
• Rack Name: (value)

Matrix Enclosure #1 (Starter Kit)
• Enclosure Model: (value)
• Enclosure Name: (value)
• Enclosure Location (Rack Name, U#): (value)

Data center requirements


Customer responsibility

Data center facility planning for a BladeSystem installation is located in the HP BladeSystem c-Class Site Planning Guide.

Planning Step 2b: Determine HP BladeSystem Matrix facility requirements


Table 8 Facility requirements
Facility power connection characteristics
• Voltage, phase: (value)
• Receptacle type: (value)
• Circuit rating: (value)
• Circuit de-rating percentage for the locality: (20% for NA/JP, 0% for much of the EU, or custom percent)
• UPS or WALL: (value)
• Power redundancy? (If yes, specify labeling scheme): (value)

Planning metrics for rack
• Rack weight estimate (in kg or lbs): (value)
• Airflow estimate (in CMM/CFM): (value)
• Watts (W), Volt-Amps (VA) estimate for rack: (value)
• Thermal limit per rack (in Watts): (customer requirement; compare to estimate)
• Quantity and type of PDUs for rack: (value)
• Monitored PDUs only:
  • Additional uplink and IP address (for example, IP address on management LAN)
  • SNMP community strings (for example, set to match current infrastructure)

Installation characteristics
• Identify data center location
• Side clearances/floor space allocation
• Verify ready to receive and install rack

Virtual Connect domains


A VC domain represents the set of VC-Ethernet modules, VC-FC modules, and server blades that are managed together in a single c7000 enclosure or in multiple connected enclosures (up to four). VC domains are managed by a Virtual Connect Manager (VCM). A VC domain group is a collection of one or more VC domains. HP VCEM is used to define the VC domain group and to manage the pool of MAC addresses, WWNs, and server profiles within the domains.

The following steps show how to determine whether to connect multiple enclosures into a single domain or to use standalone domains under VCEM. The steps also show how to select unique MAC addresses, WWN addresses, and virtual serial numbers.

Determine enclosure stacking


If one or more HP BladeSystem Matrix expansion kits within the rack are being considered, review the following information to determine whether a multi-enclosure VC domain configuration is required. Stacking is used only for VC-Ethernet modules (Flex-10 or FlexFabric).

For enclosures with VC Flex-10 and VC-FC modules, HP recommends defining one VC domain per rack. This simplifies cabling, conserves data center switch ports, and is straightforward to implement. For enclosures with VC FlexFabric modules, HP recommends one VC domain per enclosure to maximize available bandwidth for FC SAN and LAN uplinks.

Interconnecting the modules to create a multi-enclosure domain allows all Ethernet NICs on all server blades in the VC domain to access any VC uplink port. Only LAN traffic routes through stacking links; FC SAN traffic does not flow over stacking links. Perform multi-enclosure stacking with VC FlexFabric only if the stacking link requirements do not conflict with the per-enclosure SAN uplink requirements. By using these module-to-module links, a single pair of uplinks can serve as the data center network connections for the entire VC domain, which allows any server blade to be connected to any Ethernet network.
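The recommendations above can be reduced to a small decision helper. This Python sketch simply encodes the two rules of thumb stated in this section; the module-type labels are hypothetical names, not VCM identifiers:

    def recommended_vc_domain_scope(module_type):
        # Encodes the recommendations above; 'flex10' and 'flexfabric'
        # are illustrative labels for the enclosure's VC-Ethernet modules.
        if module_type == "flex10":
            # Flex-10 plus VC-FC: stacking carries only LAN traffic, so one
            # domain per rack conserves data center switch ports.
            return "one VC domain per rack (multi-enclosure stacking)"
        if module_type == "flexfabric":
            # FlexFabric: stacking links would compete with per-enclosure
            # FC SAN uplinks, so keep one domain per enclosure.
            return "one VC domain per enclosure"
        raise ValueError("unknown module type: " + module_type)

    print(recommended_vc_domain_scope("flex10"))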


Reasons to configure multi-enclosure domains


• Data center switch ports or switch bandwidth are in short supply. VC stacking creates bandwidth sharing among enclosures, which conserves data center switch bandwidth.
• The customer desires a multi-enclosure domain configuration.

Reasons to configure single-enclosure domains


• All traffic must be routed through the network. VC routes intra-enclosure traffic (for example, server port to server port) within the domain via the cross-links. If the customer requires further manageability of this traffic, use a single VC domain for each enclosure.
• Physical isolation: the services, networking, and storage environments of each enclosure must remain physically isolated.
• Any other situations in which bandwidth sharing between enclosures is not desirable or allowed.
• The customer desires a single-enclosure domain configuration.

Stacking link configurations


The following considerations apply to stacking VC Flex-10 Ethernet modules as well as VC FlexFabric modules:
• All VC-Ethernet modules within the VC domain must be interconnected. Any combination of cables can be used to interconnect the VC modules.
• Two built-in 10Gb links are provided between modules in horizontally adjacent bays.
• Faceplate ports 7 and 8 are shared with the two built-in links: when port 7 or 8 is enabled (that is, used as an uplink), the corresponding built-in stacking link is disabled.

• Supported cable lengths on 10Gb stacking links are 0.5 to 7 meters.
• Supported cable lengths on 10Gb uplinks are 3 to 15 meters.
• VC FC uplinks must always exist per enclosure, because FC traffic is not transmitted across stacking links.

Simple stacking examples are diagrammed in the QuickSpecs for the HP Virtual Connect Flex-10 10Gb Ethernet Module for c-Class BladeSystem: http://h18004.www1.hp.com/products/quickspecs/13127_div/13127_div.pdf.


Figure 6 Multi-enclosure stacking enclosure cabling (VC modules are in Bays 1 & 2 for each enclosure)

Example VC domain stacking configurations based upon the number of enclosures are shown above. One-meter cables are sufficient for stacking links to adjacent enclosures, while three-meter cables are sufficient for stacking links that span multiple adjacent enclosures. The OA linking cables required for stacking are not shown in the figure. HP recommends that uplinks alternate between left and right sides, as shown in green. The examples show stacking of ports 5 and 6 while keeping the two internal cross-links active in a multi-enclosure domain configuration; this is a total of four 10GbE stacking ports of shared bandwidth across enclosures (80Gbps line rate). The two internal cross-links remain active as long as ports 7 and 8 are unused.

Order the following cables for each multi-enclosure domain:
• Quantity 1, 2, or 3 of Ethernet Cable 4ft CAT5 RJ45 for 2, 3, or 4 enclosures, respectively, to be used as OA backplane links (not in figure).
• Quantity 2, 4, or 6 of HP 1m SFP+ 10GbE Copper Cable for 2, 3, or 4 enclosures, respectively, to be used as VC stacking links.
• A fixed quantity of 2 of HP 3m SFP+ 10GbE Copper Cable to be used as wrap-around VC stacking links in VC domains with 3 or 4 enclosures.
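Because the cable-ordering rules above are mechanical, they can be captured in a short function. This Python sketch implements exactly the quantities listed; the dictionary keys are abbreviated labels, not orderable part numbers:

    def stacking_cable_bom(enclosures):
        # Cable counts for one multi-enclosure VC domain, per the ordering
        # rules above (a domain spans 2, 3, or 4 enclosures).
        if not 2 <= enclosures <= 4:
            raise ValueError("a multi-enclosure VC domain spans 2-4 enclosures")
        return {
            "4ft CAT5 RJ45 (OA backplane links)": enclosures - 1,
            "HP 1m SFP+ 10GbE copper (VC stacking)": 2 * (enclosures - 1),
            "HP 3m SFP+ 10GbE copper (wrap-around)": 2 if enclosures >= 3 else 0,
        }

    for n in (2, 3, 4):
        print(n, "enclosures:", stacking_cable_bom(n))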

Assign unique Virtual Connect MAC addresses


The MAC addresses assigned by VCEM must be unique throughout the data center. In the data center, there may be other BladeSystem enclosures with a range of assigned MAC addresses. Make sure to assign a range that does not conflict with those enclosures.

Federated CMS configurations have VCEM instances for each primary and secondary CMS. When VCDGs in multiple VCEM instances share networks now, or may share them in the future, it is critical to avoid any MAC conflicts by configuring exclusion ranges so that non-overlapping usable ranges exist for each CMS.

When implementing an HP IR configuration, if the primary and recovery site DR-protected servers share a common subnet, make sure that there is no conflict between the MAC addresses that VCEM assigns on both sites. One way to avoid conflicts is by using the sets of 64 MAC address ranges that VCEM provides with the exclusion ranges feature. An example of using exclusion ranges is included in Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.

Assign unique Virtual Connect WWN addresses


The WWN addresses assigned by VCEM must be unique throughout the data center. You may have existing BladeSystem enclosures with a range of assigned WWN addresses. Make sure to assign a range that does not conflict with those enclosures.

Federated CMS configurations have VCEM instances for each primary and secondary CMS. When VCDGs in multiple VCEM instances share SANs now, or may share them in the future, it is critical to avoid any WWN conflicts by configuring exclusion ranges so that non-overlapping usable ranges exist for each CMS.

When implementing an HP IR configuration, if the primary and recovery site DR-protected servers share a common SAN fabric, make sure that there is no conflict between the WWN addresses that VCEM assigns on both sites. One possible way to avoid conflicts is by using the sets of 64 WWN address ranges that VCEM provides with the exclusion ranges feature. An example of using exclusion ranges is included in Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.
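For planning purposes, the boundaries of HP-defined ranges can be computed ahead of time. The following Python sketch assumes each VCEM-defined range spans 1,024 consecutive addresses, as suggested by the HP-defined range 1 example shown in Appendix A (00-17-xx-77-00-00 through 00-17-xx-77-03-FF); the base address used below is a placeholder, and actual boundaries should be confirmed in the VCEM console. Giving each CMS or site a disjoint subset of the 64 range indexes yields the non-overlapping usable ranges described above:

    ADDRESSES_PER_RANGE = 0x400  # 1,024, inferred from the range 1 example

    def vcem_range_bounds(base, index):
        # Start/end of HP-defined range `index` (1-64) above `base`, where
        # `base` is range 1's first MAC as a 48-bit integer. A planning
        # sketch only: confirm the real boundaries in the VCEM console.
        if not 1 <= index <= 64:
            raise ValueError("VCEM provides ranges 1-64")
        start = base + (index - 1) * ADDRESSES_PER_RANGE
        end = start + ADDRESSES_PER_RANGE - 1
        as_mac = lambda v: "-".join(
            format((v >> s) & 0xFF, "02X") for s in (40, 32, 24, 16, 8, 0))
        return as_mac(start), as_mac(end)

    # Placeholder base with the variable 'xx' octet set to 0A for illustration
    print(vcem_range_bounds(0x00170A770000, 1))  # 00-17-0A-77-00-00 .. 00-17-0A-77-03-FF
    print(vcem_range_bounds(0x00170A770000, 2))  # 00-17-0A-77-04-00 .. 00-17-0A-77-07-FF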

Select virtual serial numbers


Use virtual serial numbers to provide a virtual identity for your physical server blades; this allows you to easily move server identities. Ensure that each VC domain uses a unique range of virtual serial numbers.


Planning Step 2c: Virtual Connect domain configurations


Each VC domain and the VC domain group must be assigned names. In most cases, a single VCDG is adequate for each HP BladeSystem Matrix implementation. In a federated CMS configuration, portability groups cannot be shared between CMSs (primary and/or secondary); one VCDG is typically configured per CMS in a BladeSystem Matrix federated CMS.

Table 9 Virtual Connect domain configuration
Item | Value
Virtual Connect Domain Group #1
• Name: (VCDG name)
• List the names of each VCD in this VCDG: (VCD names)
Virtual Connect Domain #1
• Name: (VCD name)
• List the names of each enclosure in this VCD: (enclosure names)
• Multi-enclosure stacking: N/A, recommended, minimum, or other?
• MAC addresses: VCEM-defined, HP-defined, or user-defined? If HP-defined, select a unique range 1-64
• WWN addresses: VCEM-defined, HP-defined, or user-defined? If HP-defined, select a unique range 1-64
• Serial numbers: HP-defined or user-defined? If HP-defined, select a unique range 1-64
Virtual Connect Domain #2
• Name: (VCD name)
• List the names of each enclosure in this VCD: (enclosure names)
• Multi-enclosure stacking: N/A, recommended, minimum, or other?
• MAC addresses: VCEM-defined, HP-defined, or user-defined? If HP-defined, select a unique range 1-64
• WWN addresses: VCEM-defined, HP-defined, or user-defined? If HP-defined, select a unique range 1-64
• Serial numbers: HP-defined or user-defined? If HP-defined, select a unique range 1-64


4 HP BladeSystem Matrix solution storage


After you determine the application and infrastructures services included in the HP BladeSystem Matrix solution, it is time to make several decisions regarding interconnectivity options, storage requirements, and customer provided infrastructure. For more detailed information about the processes outlined in this section, see the HP BladeSystem Matrix Setup and Installation Guide. For HP Insight Recovery (IR) implementations, this process must be used for both the primary and recovery sites.

Virtual Connect technology


This section identifies the network and storage connections used by the application services running on the HP BladeSystem Matrix physical servers. The external network and storage connections are mapped to physical servers using VC virtualization technology. VC is implemented through VC FC, VC-Ethernet with Flex-10 capability, and VC FlexFabric with Flex-10 and FC capabilities. VC is managed in the HP BladeSystem Matrix environment using HP VCEM. An HP VCEM software license is included in each HP BladeSystem Matrix kit.

Storage connections
The VC FC or VC FlexFabric modules in an HP BladeSystem Matrix solution enable the c-Class administrator to reduce FC cabling by making use of NPIV. Because the modules present N-port uplinks, they must be connected to data center FC switches that support the NPIV protocol. When the server blade HBAs or FlexFabric Adapters log in to the fabric through the VC modules, the HBA WWN is visible to the FC switch name server and can be managed as if it were connected directly. The HP VC FC module acts as an HBA aggregator, where each NPIV-enabled N-port uplink can carry the FC traffic for multiple HBAs. The HP VC FlexFabric modules translate FCoE from the blades into FC protocol; with VC FlexFabric, FlexFabric Adapters on blade servers, not HBAs, send the FCoE traffic across the enclosure midplane.

IMPORTANT: The HP VC FC uplinks must be connected to a data center FC switch that supports NPIV. See the switch firmware documentation to determine whether a specific switch supports NPIV and for instructions on enabling this support.

The HP BladeSystem Matrix VC FC module has eight uplinks. The HP BladeSystem Matrix VC FlexFabric module has eight uplinks, four of which are dual-personality uplinks that can be used as FC uplinks. In either case, each uplink is completely independent of the other uplinks and can aggregate up to 16 physical server HBA N-port links into one N-port uplink through the use of NPIV. Multiple VC FC module uplinks can be grouped logically into a VC fabric when attached to the same FC SAN fabric. This feature enables access to more than one FC SAN fabric, as well as a flexible and fully redundant method of connecting server blades to FC SANs.
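Because each NPIV-enabled uplink aggregates up to 16 HBA N-port links, the minimum uplink count per fabric is a one-line calculation, as in this Python sketch (redundancy and bandwidth requirements will usually raise the final number):

    import math

    def min_npiv_uplinks(server_hba_ports, aggregation_limit=16):
        # Each NPIV-enabled N-port uplink aggregates up to 16 server HBA
        # N-port links, so this is the floor before adding redundancy.
        return math.ceil(server_hba_ports / aggregation_limit)

    print(min_npiv_uplinks(16))  # 1 uplink covers a full enclosure's worth
    print(min_npiv_uplinks(17))  # 2 uplinks once the 16-link limit is passed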

Planning Step 3a: Collect details about the customer provided SAN storage

The default configuration, as described in the HP BladeSystem Matrix installation and configuration documentation, consists of an EVA and switches in the enclosure to create a complete, self-contained SAN. If a customer chooses an alternative storage configuration, the following information is required for planning the installation. For details on supported storage options, see the HP BladeSystem Matrix Quick Specs.


Table 10 Storage and fabrics


Question | Response
• Does some or all of the SAN already exist? Will the Matrix rack and enclosures be connected to an already installed and working SAN and array, or will some or all of the SAN storage be installed for the HP BladeSystem Matrix solution?
• Number of separate SANs:
• Number of switches per SAN (assume 2):
• Number of arrays:

Planning Step 3b: FC SAN storage connections


The number of SAN connections per enclosure will vary depending on the number of redundant paths the customer chooses and the number of separate SAN environments they plan to connect. A typical solution has two SAN connections to the enclosure that connect the enclosure to an EVA. The two connections provide high availability through SAN multi-pathing.

Table 11 FC SAN storage connections
Customer SAN name | One of multiple connections to the same SAN | Storage controller WWPN | VC FC SAN profile | Note
| 1 | | | Minimum of 1
| 2 | | | Typically a second connection to the first SAN, for HA
| 3 | | |
| 4 | | |
| 5 | | |
| 6 | | |

NOTE: Every CMS in a federated CMS environment manages its own storage pool. Therefore, storage pool entries must be created on each CMS for the portability groups that the CMS is managing.

Planning Step 3c: iSCSI SAN storage connections


IMPORTANT: iSCSI is not supported with Integrity nodes.

Whenever iSCSI is used as a VM guest backing store, follow the best practice of separating iSCSI traffic from other network traffic. Physical separation is preferred for providing dedicated bandwidth (independent VC Ethernet uplinks), and logical separation (VLANs) is important when sharing switching infrastructure. Any bandwidth sharing between iSCSI and other network traffic can be problematic. When implementing iSCSI as a VM backing store, make sure that an iSCSI network is added to your list of networks (in addition to the Management, Production, Deployment, and vMotion networks). Relevant examples of network configurations applicable to HP BladeSystem Matrix environments for VMware with HP StorageWorks P4000 SANs are located in the white paper Running VMware vSphere 4 on HP LeftHand P4000 SAN solutions (http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-0261ENW.pdf). For iSCSI SAN solutions in the HP portfolio, visit http://www.hp.com/go/iSCSI for more information.

Table 12 Example iSCSI SAN Storage connections


Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type (Static or DHCP)
iSCSI | P4300 G2 Node#1/Port1 | DC1-switch/Port1 | 1000Base-T | |
iSCSI | P4300 G2 Node#1/Port2 | DC2-switch/Port1 | 1000Base-T | |
iSCSI | P4300 G2 Node#2/Port1 | DC1-switch/Port2 | 1000Base-T | |
iSCSI | P4300 G2 Node#2/Port2 | DC2-switch/Port2 | 1000Base-T | |
iSCSI | P4300 G2 Node#3/Port1 | DC1-switch/Port3 | 1000Base-T | |
iSCSI | P4300 G2 Node#3/Port2 | DC2-switch/Port3 | 1000Base-T | |
iSCSI | P4300 G2 Node#4/Port1 | DC1-switch/Port4 | 1000Base-T | |
iSCSI | P4300 G2 Node#4/Port2 | DC2-switch/Port4 | 1000Base-T | |
iSCSI | Enclosure1:Bay1:Port3 | DC1-switch/Port25 | 10GBase-T | |
iSCSI | Enclosure1:Bay2:Port3 | DC2-switch/Port25 | 10GBase-T | |

Storage volumes
HP recommends that the CMS be configured to boot from SAN. To facilitate the flexible movement of management services across blades and enclosures, these services must be configured to use shared storage for the OS boot image, the application image, and the application data. HP also recommends that virtual machine hosts boot from SAN. If connectivity to customer-provided SAN storage is desired, the FC switch must support the NPIV protocol. HP Services personnel will require access to the switch to deploy boot-from-SAN LUNs. Fabric zones are required in a multi-path environment to ensure a successful operating system deployment.

Storage requirements
For each server profile, consider the boot LUN and any additional data storage requirements, and list those parameters in the following table. The HP BladeSystem Matrix Starter Kit on-site implementation services include the deployment of operating systems on a limited number of configured LUNs on the new or existing customer SAN. For more details about HP BladeSystem Matrix Starter Kit Implementation Services, see the HP BladeSystem Matrix Quick Specs.

The Replicated to column refers to the Insight Recovery remote storage controller target and the data replication group names for the replicated LUNs. HP BladeSystem Matrix is disaster recovery ready: HP IR licenses are included, and the HP IR feature can be enabled by applying Insight Dynamics licenses on supported ProLiant server blades. Application service recovery can be enabled by configuring a second HP BladeSystem Matrix infrastructure at a remote location and enabling storage replication between the two sites. Continuous Access software and licenses are also required. If XP storage is used, Cluster Extension for XP software version 3.0.1 or later is required. See Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for additional information on storage and data replication requirements.


The following table summarizes the type of information needed when planning application and management services deployed on HP BladeSystem Matrix.

Table 13 Storage volumes
Server (server name) | Use and size (LUN properties) | vDisk (LUN) name (xxxx_vdisk) | vHost name (xxxx_vhost) | Replicated to (remote target and data replication group name, if replicated) | Connected to (local SAN storage target)
The following details define the type of information needed when planning VC FC connections for application services deployed on HP BladeSystem Matrix:
• Server name: A label used to identify the application or management service. Optionally, it may consist of one or more tiers of a multi-tiered application. This is the server name on which the application or management service is hosted.
• Use and size: The purpose and characteristics of the LUNs associated with the FC connection; for example, boot LUN, the LUN ID, and the LUN size.
• vDisk (LUN) name: The vDisk label assigned to the LUN.
• vHost name: The vHost label assigned to the LUN.
• Replicated to: Specifies the remote storage controller WWPN and data replication group name, if using HP Insight Recovery.
• Connected to: Specifies the local storage controller WWPN hosting this LUN.
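When preparing many rows of this table, it can help to generate the vDisk and vHost labels programmatically so they stay consistent with the xxxx_vdisk / xxxx_vhost convention shown above. A minimal Python sketch, with field names chosen here for illustration:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StorageVolumePlan:
        # One row of the storage volumes plan; field names are illustrative.
        server: str                          # service or host name
        use_and_size: str                    # e.g. "146GB boot"
        connected_to: str                    # local storage controller/target
        replicated_to: Optional[str] = None  # remote target + DR group, if any

        @property
        def vdisk_name(self):
            # Follows the xxxx_vdisk convention from the table header
            return self.server.lower() + "_vdisk"

        @property
        def vhost_name(self):
            return self.server.lower() + "_vhost"

    row = StorageVolumePlan("matrix_cms", "146GB boot", "F400_3PAR")
    print(row.vdisk_name, row.vhost_name)  # matrix_cms_vdisk matrix_cms_vhost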

Table 14 Example storage volumes for management services
Server | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
CMS¹ | 146GB boot | matrix_cms_vdisk | matrix_cms_vhost | N/A¹ | F400_3PAR
¹ CMS storage is not replicated using HP IR, because a second CMS is required at the remote location.

Application services storage definition examples


Appendix A, Dynamic infrastructure provisioning with HP BladeSystem Matrix (page 54), provides storage definition examples for use with application services using logical servers and with IO templates.

Planning Step 3d: Define storage volumes


Based on the service template completed previously, record the shared storage requirements, size, and connections for each service. If the service will be replicated using Insight Recovery, complete the Replicated to column.


Table 15 Example storage volumes for application services
Server | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
VM Host 1 | 20GB boot | esx1_vdisk | esx1_vhost | None¹ | F400_3PAR
VM Host 2 | 20GB boot | esx2_vdisk | esx2_vhost | None¹ | F400_3PAR
ESX shared disk | 500GB VMFS | esx_shared_vdisk | esx1_vhost, esx2_vhost | None¹ | F400_3PAR
Test W2K3 Host 1 | 20GB boot | sp_w2k3_sys_01_vdisk | sp_w2k3_sys_01_vhost | None¹ | F400_3PAR
Test W2K8 Host 2 | 40GB boot | sp_2008_sys_01_vdisk | sp_2008_sys_01_vhost | None¹ | F400_3PAR
{DB1} | ###GB | xxxx_vdisk | xxxx_vhost | None¹ | (storage target)
{DB2} | ###GB | xxxx_vdisk | xxxx_vhost | None¹ | (storage target)
{App1} | ###GB | xxxx_vdisk | xxxx_vhost | None¹ | (storage target)
{App2} | ###GB | xxxx_vdisk | xxxx_vhost | None¹ | (storage target)
¹ Storage configurations for Insight Recovery are not covered in this example.

Isolating VM Guest storage from VM Host OS files


When multiple concurrent VM provisioning requests run against the system drive of a hypervisor host, disk I/O can become saturated during virtual hard drive replication, which can cause the host to become unstable, unresponsive, or both. Current and future HP Insight Dynamics orchestration service requests can then fail, because the orchestration software is unable to query the host for resource information and virtual machine-specific information. HP recommends planning hypervisors with separate disks for the hypervisor system drive and the backing storage for virtual machines. Doing so results in greater performance and a lower risk of starving the hypervisor of required I/O bandwidth. HP Insight Dynamics orchestration services offer the ability to control which devices are used for provisioning the virtual machine. To avoid this problem, see the HP BladeSystem Matrix Setup and Installation Guide for configuration steps to exclude hypervisor boot volumes from use.

Microsoft Hyper-V
Consult the Hyper-V Planning and Deployment Guide: http://www.microsoft.com/downloads/details.aspx?FamilyID=5da4058e-72cc-4b8d-bbb1-5e16a136ef42&displaylang=en. This document describes separating the network traffic of the hypervisor host from that of the virtual machines, recommending: "Use a dedicated network adapter for the management operating system of the virtualization server." The HP recommendation, which has been validated by rigorous testing, is that the principle of isolating hypervisor resources from virtual machine resources should be applied to virtual machine storage as well as networking.
• The following site recommends that administrators "Avoid storing system files on drives used for Hyper-V storage": http://blogs.technet.com/vikasma/archive/2008/06/26/hyper-v-best-practices-quick-tips-1.aspx
• The following site recommends that administrators "Place the pagefile and operating system files on separate physical disk drives": http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv.mspx

VMware ESX
Most production ESX Server customers concentrate their virtual machine disk usage on external storage, such as a FC SAN, a hardware or software initiated iSCSI storage device, or a remote NAS file server (using the NFS protocol).

5 HP BladeSystem Matrix solution networking


Network planning
This section identifies and collects the network configuration used to manage the HP BladeSystem Matrix enclosures. It is assumed that separate networks are used for production communications (for example, application-level communications) and management communications (for example, managing servers and services). Distinct networks are not required, and the two networks can be one and the same. Each deployment network can host only a single deployment service, so planning to use multiple deployment technologies requires multiple, distinct deployment networks. Collect the following customer network details, which you will use to assign enclosure management and application services network information.

Planning Step 4a: Collect details about the customer provided networks


The following details define information you need when planning networks for HP BladeSystem Matrix:
• Network name: The VC network profile name
• IP address (network number): The representative (masked) address for the network
• Subnet mask: A bit mask used to determine the membership of an IP address in a network
• Deployment server: The server that handles deployment to the network
• IP range for auto-provisioning: The addresses available to HP Insight Dynamics for static assignment to servers when HP Insight Dynamics provisions an instance of an application service
• VLAN tag: The VLAN ID or tag associated with this network
• Preferred link connection speed: The default speed for server profile connections mapped to this network
• DHCP server: The address of the DHCP server for each network
• DNS server: The DNS server addresses for each network
• Gateway IP address: The default gateway for address routing external to the network
• DNS domain name: The DNS suffix specific to a network
• SMTP host: SMTP mail services are required for HP Insight Dynamics workflow notifications; the CMS or another host can be configured to forward notifications
• Time source: Having a time source is essential for services to function as designed
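Several of these parameters can be cross-checked mechanically before they are committed to the plan. The following Python sketch uses the standard ipaddress module; the /24 network, pool bounds, and gateway address are illustrative assumptions:

    import ipaddress

    # Illustrative values: a deployment-style /24 with a gateway at .1
    net = ipaddress.ip_network("192.168.1.0/24")
    gateway = ipaddress.ip_address("192.168.1.1")
    auto_pool = [ipaddress.ip_address("192.168.1.%d" % h) for h in range(100, 151)]

    # The subnet mask determines membership of an address in the network
    print(ipaddress.ip_address("192.168.1.42") in net)  # True
    print(ipaddress.ip_address("192.168.2.42") in net)  # False

    # The auto-provisioning range must sit inside the network and avoid
    # reserved addresses such as the gateway
    assert all(a in net and a != gateway for a in auto_pool)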

IMPORTANT: When multiple OS deployment technologies (Ignite-UX, Insight Control server deployment, Server Automation) are planned for an HP BladeSystem Matrix installation, a unique and separate deployment LAN must exist for each deployment server.

IMPORTANT: A federated CMS is highly dependent on the DNS configuration. On the primary CMS, forward and reverse DNS lookups must work for each secondary CMS. DNS lookups must be resolvable using the FQDN of each system.

Table 16 Configuration of networks and switches
Item | Value
Production LAN
• IP address (network number):
• Subnet mask:
• IP range for auto-provisioning:
• VLAN tag:
• Preferred link connection speed:
• Gateway IP address:
• DHCP server:
• DNS server #1:
• DNS server #2:
• DNS domain name:
Management LAN
• IP address (network number):
• Subnet mask:
• IP range for auto-provisioning:
• VLAN tag:
• Preferred link connection speed:
• DHCP server:
• DNS server #1:
• DNS server #2:
• Gateway IP address:
• DNS domain name:
Deployment LAN
• IP address (network number): 192.168.1.0
• Subnet mask: 255.255.255.0
• Deployment server (Insight Control server deployment, HP Server Automation, or HP Ignite-UX):
• VLAN tag:
• Preferred link connection speed:
• DHCP server: N/A
• DNS server #1: N/A
• DNS server #2: N/A
• Gateway IP address: N/A
• DNS domain name:
VMotion LAN
• IP address (network number): 192.168.2.0
• Subnet mask: 255.255.255.0
• VLAN tag:
• Preferred link connection speed:
• DHCP server: N/A
• DNS server #1: N/A
• DNS server #2: N/A
• Gateway IP address: N/A
• DNS domain name:
Other network services
• SMTP host:
• Time source:

Virtual Connect Ethernet uplink connections


Each Flex-10 interconnect module has several numbered Ethernet connectors. All of these connectors can be used to connect to a data center switch (uplink ports), or they can be used to stack VC modules as part of a single VC domain (stacking ports). Networks must be defined within the VCM so that specific, named networks can be associated with specific external data center connections. These named networks can then be used to specify networking connectivity for individual servers and application services.

The simplest approach to connecting the defined networks to the data center is to map each network to a specific uplink port. Whether a single or multi-enclosure domain is defined, any server has access to any Ethernet port. For a minimal production-ready configuration, HP recommends that you define a single network using multiple uplinks (an uplink port set). This configuration can provide improved throughput and availability. One data center uplink port is defined using the A-side (for example, Bay 1, or left side) VC Ethernet module, and the second port is defined on the B-side (for example, Bay 2, or right side) VC Ethernet module.

The following table is an example of how the networks can be defined in a multi-enclosure domain. The Production and Management networks are defined with redundant, cross-enclosure A/B uplink connections to the data center switches; the Deployment network traffic, such as a network dedicated to deployment services, is routed entirely within the enclosures, so a data center uplink is not required.

Table 17 VC Ethernet uplink connections example
Network name | VC uplinks (enclosure VC module ports) | Router uplinks (data center switch and port)
Production | Enclosure1:Bay1:Port2 | DC1net port #4
Production | Enclosure2:Bay2:Port2 | DC1net port #5
Management | Enclosure1:Bay1:Port3 | DC1net port #6
Management | Enclosure2:Bay2:Port3 | DC1net port #7
Deployment | N/A | N/A

In situations where the customer has VLANs in place on the data center networks, or the number of uplinks is constrained, you can combine a number of networks in a shared uplink set.


Table 18 VC Ethernet uplink connections example using shared uplink sets
SUS name (networks) | VC uplinks | Router uplinks | Signal type
SUS 1 (Production, Management) | Enclosure1:Bay1:Port1 | DC1 port #4 | 10GBase-SR
SUS 1 (Production, Management) | Enclosure2:Bay2:Port1 | DC1 port #5 | 10GBase-SR

The following details define information you need when planning VC Ethernet connections for HP BladeSystem Matrix:
• Network name: The VC network profile name
• Shared uplink set (SUS) name: Optionally, the VC shared uplink set name, when multiple networks share uplinks
• VC uplinks (enclosure VC module ports): The VC uplink Ethernet ports. If deploying redundant connections, specify additional ports as required. One VC Flex-10 transceiver must be ordered for each uplink port; verify compatibility with data center switch transceivers and optical cables.
• Router uplinks (data center switch and port): The uplink data center switch name and port number that is the destination of this connection
• Signal type: The physical signal cabling standard for the connection

Planning Step 4b: Virtual Connect Ethernet uplinks


The VC uplink recommendation for a typical production environment is described in the HP BladeSystem Matrix Setup and Installation Guide. Complete the table by identifying the VC Ethernet ports used for uplink connections to the data center networks, and VLAN tags if required. If the same uplink ports will share Production and Management data traffic, then VLAN tags and a SUS must be defined.

Table 19 VC Ethernet uplink connections with sample list of networks
Network name | VC uplinks (enclosure.bay.port) | Router uplinks (switch.port) | Signal type
Production | | |
Management | | |
Deployment | | |
VMotion | | |
iSCSI | | |
Integrity OVMM | | |
SG heartbeat | | |
SG failover | | |
(other network) | | |

Virtual Connect Flex-10 Ethernet services connections


Flex-10 technology is a hardware-based solution that enables users to partition a 10-Gb/s Ethernet (10GbE) connection and regulate the data speed of each partition. While capable of supporting 10 Gb/s of bandwidth, the VC-Ethernet interconnect is compatible with lower-speed switches. Each Flex-10 network connection can be dynamically fine-tuned from 100 Mb/s to 10 Gb/s to help eliminate bottlenecks and conserve network capacity. Data center bandwidth requirements vary depending on the application. For example, TCP/IP communications such as email, file share, and web services may consume 1 Gb/s of bandwidth; data center management traffic, such as remote desktop or virtual machine traffic, may consume 2 Gb/s; and inter-process communications used in cluster control could consume upward of 4 Gb/s.

Using VC Flex-10, you can define a network that does not use any external uplinks. This creates a cableless network within the VC domain.

The following details define information you need when planning VC Flex-10 Ethernet connections for application services deployed on HP BladeSystem Matrix:
• Server name: A label used to identify the application or management service. Optionally, it can consist of one or more tiers of a multi-tiered application. This is the server name on which the application or management service is hosted.
• Network: The VC network profile name.
• Port assignment: The Flex NIC port connected to this network. Used when specifying a physical blade not auto-provisioned by IO.
• Flex-10 bandwidth: Specifies the Flex-10 bandwidth allocation for this NIC. Used when specifying a physical blade not auto-provisioned by IO.
• PXE settings: Specifies the PXE options (Enabled, Disabled, Use BIOS) for this NIC. Used when specifying a physical blade not auto-provisioned by IO.
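Since the FlexNIC allocations on a physical port share that port's 10 Gb/s, the numbers in a planning spreadsheet can be validated with a few lines of code. A Python sketch, assuming the usual Flex-10 limits of four FlexNICs per port and a 100 Mb/s minimum per FlexNIC:

    def validate_flex10_port(allocations_gbps, port_capacity=10.0):
        # Checks one physical port's FlexNIC partitions: at most four
        # FlexNICs, each 0.1-10 Gb/s, together sharing the port's 10 Gb/s.
        if len(allocations_gbps) > 4:
            raise ValueError("a Flex-10 port exposes at most four FlexNICs")
        for bw in allocations_gbps:
            if not 0.1 <= bw <= port_capacity:
                raise ValueError("per-FlexNIC bandwidth out of range: %s" % bw)
        if sum(allocations_gbps) > port_capacity:
            raise ValueError("combined FlexNIC bandwidth exceeds the port")

    # 1 Gb/s TCP/IP services, 2 Gb/s management, 4 Gb/s cluster IPC, 3 Gb/s spare
    validate_flex10_port([1.0, 2.0, 4.0, 3.0])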

Continuing with the services examples developed previously in the Servers and services to be deployed in HP BladeSystem Matrix section, and using the following table, define VC Ethernet parameters for those services.

Management services network configuration


The management server network connections consist of connections to the production and management subnets. The deployment network is used by the deployment server.

Table 20 Network host connections example for management services
Server | Connection | Port assignment | Flex-10 bandwidth allotment | PXE setting
CMS | Management | 1a, 1b | 1Gb | Disabled
CMS | Production | 2a, 2b | 2Gb | Disabled
Insight Control server deployment | Deployment | 1a, 1b | 1Gb | Enabled
Insight Control server deployment | Production | 2a, 2b | 2Gb | Disabled

Application Services network connectivity examples


Appendix A, Dynamic infrastructure provisioning with HP BladeSystem Matrix, provides network connectivity examples for use with application services using logical servers and with IO templates.


Planning Step 4c: Define services VC Ethernet connections


Record the connections, type, and destination for each service based on the service template you completed previously.

Table 21 Network host connections example for application services
Server | Connection | Port assignment¹ | Flex-10 bandwidth allotment¹ | PXE setting¹
(server name) | (VC Ethernet connection #1) | (connection type) | (connection bandwidth) | (uplink destination)
(server name) | (VC Ethernet connection #2) | (connection type) | (connection bandwidth) | (uplink destination)
¹ These parameters can be specified when defining network connections to physical blades not auto-provisioned by IO, such as the CMS, deployment server, SQL Server, and ESX hosts.

NOTE: Currently, IO can only provision a network with a single VLAN ID mapped to a single Flex NIC port. Even though the VC profile network port definition allows traffic from multiple networks to be trunked over a single NIC (with VLAN ID tagging), IO cannot express this in a service template. Ensure that any server blade provisioned by IO has enough NIC ports to individually carry the defined networks.
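A corresponding planning check for the note above is trivial but easy to forget. This Python sketch assumes each defined network needs its own FlexNIC port on any IO-provisioned blade:

    def enough_flexnic_ports(networks, flexnic_ports):
        # IO maps each network to its own FlexNIC port (one VLAN per port),
        # so a blade needs at least one port per defined network.
        return flexnic_ports >= len(networks)

    # Two Flex-10 adapter ports expose eight FlexNICs on a typical blade
    print(enough_flexnic_ports(["Production", "Management", "Deployment", "VMotion"], 8))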

Manageability connections
The following table lists the network connections required to properly configure and manage each HP BladeSystem Matrix enclosure. Connections can be provisioned using static addresses, DHCP, or the recommended EBIPA. The table lists the physical network connections and IP address requirements for the BladeSystem enclosure management connections.

Table 22 Required management connections for HP BladeSystem Matrix enclosures
Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Management | OA #1 | | 1000Base-T | | Static
Management | OA #2 | | 1000Base-T | | Static
Management | VC Ethernet #1 | Through OA connection | Multiplexed | | EBIPA
Management | VC Ethernet #2 | Through OA connection | Multiplexed | | EBIPA
Management | VC Fibre #1 | Through OA connection | Multiplexed | | EBIPA
Management | VC Fibre #2 | Through OA connection | Multiplexed | | EBIPA
Management | Optional VC Domain IP | Through OA connection | Multiplexed | | Static
Management | Enclosure iLO Range | Through OA connection | Multiplexed | | EBIPA
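For EBIPA pools such as the enclosure iLO range, the individual addresses can be enumerated from the starting address and bay count. A Python sketch (the starting address below matches the Appendix A example, where the iLO range is 16.89.129.50-65):

    import ipaddress

    def ebipa_range(start, count):
        # Enumerates the consecutive addresses of an EBIPA pool, e.g. one
        # iLO address per device bay in the enclosure.
        first = ipaddress.ip_address(start)
        return [str(first + i) for i in range(count)]

    # A c7000 has 16 device bays, so 16 consecutive iLO addresses
    print(ebipa_range("16.89.129.50", 16))  # 16.89.129.50 .. 16.89.129.65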

If an EVA4400 (or EVA6400, EVA8400), P4300 G2 (or P4500 G2), or other storage solution is included in the HP BladeSystem Matrix configuration, the following is a sample of the required network connections. Other storage solutions, such as an HP StorageWorks XP array, an HP 3PAR F-Class InServ storage system, or an HP 3PAR T-Class InServ storage system, have similar network connection requirements.


Table 23 Additional network connections for storage management
Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type
Management | EVA4400 ABM MGMT port | | 100Base-T | | Static
Management | EVA4400 Fibre switch #1 | | 100Base-T | | Static
Management | EVA4400 Fibre switch #2 | | 100Base-T | | Static
Management | P4300 G2 Node #1 | | 100Base-T | | Static
Management | P4300 G2 Node #2 | | 100Base-T | | Static
Management | Other SAN switch | | 100Base-T | | Static
Management | Other FC storage controller | | 100Base-T | | Static

Other devices included in the HP BladeSystem Matrix configuration, such as monitored PDUs and network switches, also require management network connections.

Table 24 Other additional network connections
Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type
Management | Monitored PDU #1 | | 100Base-T | | Static
Management | Monitored PDU #2 | | 100Base-T | | Static
Management | Network Switch #1 | N/A | N/A | | Static
Management | Network Switch #2 | N/A | N/A | | Static

Planning Step 4d: Define manageability connections


Based on the HP BladeSystem Matrix required network connections provided in the previous tables, use the following template to record the various IP addresses required to manage the BladeSystem enclosures. Use one template for each enclosure ordered.

Table 25 HP BladeSystem Matrix management connections
Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Management | Starter Kit OA #1 | | 1000Base-T | |
Management | Starter Kit OA #2 | | 1000Base-T | |
Management | Starter Kit VC-Enet #1 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-Enet #2 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-FC #1 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-FC #2 | Through OA connection | Multiplexed | |
Management | Optional VC Domain IP | Through OA connection | Multiplexed | |
Management | Starter Kit iLO Range | Through OA connection | Multiplexed | (starting IP - ending IP) |
Management | Expansion Kit #1 OA #1 | | 1000Base-T | |
Management | Expansion Kit #1 OA #2 | | 1000Base-T | |
Management | Expansion Kit #1 VC-Enet #1 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-Enet #2 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-FC #1 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-FC #2 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 iLO Range | Through OA connection | Multiplexed | (starting IP - ending IP) |
Management | EVA4400 ABM MGMT port | | 100Base-T | |
Management | EVA4400 Fibre switch #1 | | 100Base-T | |
Management | EVA4400 Fibre switch #2 | | 100Base-T | |
Management | P4300 G2 Node #1 MGMT | | 100Base-T | |
Management | P4300 G2 Node #2 MGMT | | 100Base-T | |
Management | Other SAN switch | | 100Base-T | |
Management | Other FC storage controller | | 100Base-T | |
Management | Monitored PDU #1 | | 100Base-T | |
Management | Monitored PDU #2 | | 100Base-T | |
Management | Network Switch #1 | N/A | N/A | |
Management | Network Switch #2 | N/A | N/A | |

Access requirements
The following table shows the various access credentials needed to support the HP BladeSystem Matrix implementation. Each enclosure included in the HP BladeSystem Matrix configuration (for example, the Starter Kit enclosure and one or more expansion kits) requires OA and VC credentials. Plan or identify credentials for all management services, including the CMS, deployment servers, hypervisor consoles, and storage management consoles. Also identify and plan SNMP settings and user credentials for managed device consoles such as SAN switches, VC, iLO, and OA.


IMPORTANT: It is the user's responsibility to ensure that user accounts are synchronized across all CMSs in a federated CMS environment. Create the same user accounts on the primary and secondary CMSs.

Planning Step 4e: Determine BladeSystem infrastructure credentials


Complete the following BladeSystem credentials template, identifying the credentials that will be used to manage the HP BladeSystem Matrix environment.

Table 26 Access credentials sample list
Access | Domain | Credential | Password
Starter Kit OA | | |
Starter Kit VC | | |
Starter Kit iLOs | | |
SNMP READ ONLY community string | N/A | N/A |
SNMP READ WRITE community string | N/A | N/A |
CMS service account¹ | | |
Insight Control server deployment | | |
SQL Server | | |
VMware vCenter Server² | | |
Ignite-UX server | | |
HP Command View EVA | | |
SAN switch account | | |
IO User | | |
IO Architect | | |
IO Administrator | | |
Expansion Kit #1 OA | | |
Expansion Kit #1 VC | | |
Expansion Kit #1 iLOs | | |
¹ Service account credentials are used to run the various Insight Software services. Typically, this account is used only to install and authorize these services. Set the password property to Never Expire.
² The credentials used to communicate with vCenter will, in part, determine which ESX hosts are visible to HP BladeSystem Matrix. You can provide full access or restricted visibility based on your vCenter configuration.


6 HP BladeSystem Matrix pre-delivery planning checklist


Review the following user responsibility items before HP BladeSystem Matrix is delivered and before you meet with HP Services personnel. Making sure these items are in place and ready prevents delays in the HP BladeSystem Matrix solution implementation. For HP IR implementations, user responsibility items are required for both the primary and recovery sites.

Table 27 User responsibility summary
Item | Ready?
• Floor space assigned and reserved?
• Power connections available?
• LAN connections to production and management LAN ports assigned and reserved?
• IP addresses assigned as enumerated above?
• If customer-provided SAN storage, SAN fabric zoning defined?
• If customer-provided SAN storage, are the LUNs created and ready to be presented?
• If customer-provided CMS: record the system IP, hostname, iLO, virtual or physical media present, and credentials.
• If customer-provided deployment server: record IP and credentials.
• If customer-provided vCenter server: record IP and credentials.
• If a separate (remote) SQL server: record hostname, SQL port, instance name, and administrator credentials.
• For HP IR implementations: customer-determined, DR-protected logical servers? Intersite link operational between sites?


7 Next steps
After completing the preceding planning and pre-delivery steps, the customer environment is ready for physical delivery of the HP BladeSystem Matrix components. Upon their arrival, continue with setup and installation, following the procedures described in the HP BladeSystem Matrix Setup and Installation Guide.


8 Support and other resources


Contacting HP
Information to collect before contacting HP
Be sure to have the following information available before you contact HP:
• HP BladeSystem Matrix Starter Kit or Expansion Kit c7000 enclosure serial number and/or SAID (Service Agreement Identifier), if applicable
• Software product name
• Hardware product model number
• Operating system type and version
• Applicable error message
• Third-party hardware or software
• Technical support registration number (if applicable)

IMPORTANT: Be sure to mention that this is an HP BladeSystem Matrix configuration when you call for support. Each HP BladeSystem Matrix Starter Kit or Expansion Kit c7000 serial number identifies it as an HP BladeSystem Matrix installation.

How to contact HP
Use the following methods to contact HP technical support:
• In the United States, see the Customer Service / Contact HP United States website for contact options: http://www.hp.com/go/assistance
• In the United States, call 1-800-334-5144 to contact HP by telephone. This service is available 24 hours a day, 7 days a week. For continuous quality improvement, conversations might be recorded or monitored.
• In other locations, see the Contact HP Worldwide website for contact options: http://www.hp.com/go/assistance

Registering for software technical support and update service


HP BladeSystem Matrix includes as standard one or three years of 24 x 7 HP Software Technical Support and Update Service, and 24 x 7, four-hour-response HP Hardware Support Service. This service provides access to HP technical resources for assistance in resolving software implementation or operations problems. The service also provides access to software updates and reference manuals, in electronic form, as they are made available from HP. Customers who purchase an electronic license are eligible for electronic updates. With this service, Insight software customers benefit from expedited problem resolution, as well as proactive notification and delivery of software updates. For more information about this service, see the following website: http://www.hp.com/services/insight. Registration for this service takes place following online redemption of the license certificate.


How to use your software technical support and update service


After you have registered, you will receive a service contract in the mail containing the Customer Service phone number and your Service Agreement Identifier (SAID). You need your SAID when you contact technical support. Using your SAID, you can also go to the Software Update Manager (SUM) webpage at http://www.itrc.hp.com to view your contract online.

Warranty information
HP will replace defective delivery media for a period of 90 days from the date of purchase. This warranty applies to all Insight software products.

HP authorized resellers
For the name of the nearest HP authorized reseller, see the following sources:
• In the United States, see the HP U.S. service locator website: http://www.hp.com/service_locator
• In other locations, see the Contact HP Worldwide website: http://www.hp.com/go/assistance

Documentation feedback
HP welcomes your feedback. To make comments and suggestions about product documentation, send a message to: docsfeedback@hp.com Include the document title and manufacturing part number in your message. All submissions become the property of HP.

Security bulletin and alert policy for non-HP owned software components
Open source software (such as OpenSSL) or third-party software (such as Java) are sometimes included in HP products. HP discloses that the non-HP owned software components listed in the HP Insight Dynamics end user license agreement (EULA) are included with HP Insight Dynamics. To view the EULA, use a text editor to open the /opt/vse/src/README file on an HP-UX CMS, or the <installation-directory>\src\README file on a Windows CMS. (The default installation directory on a Windows CMS is C:\Program Files\HP\Virtual Server Environment, but this directory can be changed at installation time.) HP addresses security bulletins for the software components listed in the EULA with the same level of support afforded HP products. HP is committed to reducing security defects and helping you mitigate the risks associated with security defects when they do occur. HP has a well defined process when a security defect is found that culminates with the publication of a security bulletin. The security bulletin provides you with a high level description of the problem and explains how to mitigate the security defect.

Subscribing to security bulletins


To receive security information (bulletins and alerts) from HP:
1. Open a browser to the HP home page: http://www.hp.com
2. Click the Support & Drivers tab.
3. Click Sign up: driver, support, & security alerts, which appears under Additional Resources in the right navigation pane.


4. Select Business & IT Professionals to open the Subscriber's Choice webpage.
5. Do one of the following:
• Sign in if you are a registered customer.
• Enter your email address to sign up now. Then, select the box next to Driver and Support alerts and click Continue. Select HP BladeSystem Matrix Converged Infrastructure from product family section A, then select each of the BladeSystem Matrix entries in section B.

Related information
The latest versions of manuals (only the HP BladeSystem Matrix Compatibility Chart and HP BladeSystem Matrix Release Notes are available) and white papers for HP BladeSystem Matrix and related products can be downloaded from http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64180&taskId=115&prodTypeId=3709945&prodSeriesId=4223779.


A Dynamic infrastructure provisioning with HP BladeSystem Matrix


This section provides examples of how the HP BladeSystem Matrix solution, through dynamic infrastructure provisioning, allows the user to quickly re-purpose infrastructure without reinstallation. Compared with production environments, test and development environments are characterized by the need to quickly set up and then tear down an environment after a relatively short period of time. The workloads are characterized by short periods of high utilization, followed by long periods of minimal or no use. Static, stand-alone server environments are not particularly suited to this type of use case. The following examples are real-world implementations and can be leveraged as guides to the planning process. The first example describes the services implementing dynamic infrastructure provisioning using logical servers. The second example describes the services implementing dynamic infrastructure provisioning using Insight Orchestration (IO); this example is a database service consisting of multiple tiers, each with multiple physical servers.

Example 1: An agile test and development infrastructure using logical servers
In this example, logical servers deployed as physical server blades are used to create a dynamic test and development infrastructure. Logical server management operations are used to rapidly activate and deactivate test and development environments, to quickly re-purpose infrastructure without reinstallation. Resources may be pooled and shared, improving utilization and reducing cost. Test and development teams share a pool of VC server blades used to develop and test applications on multiple operating systems and versions. A number of different environments are needed, but not all are required at the same time. At any one time, several logical servers are active, making the currently required test and development environments available for use. Other logical servers, for those test and development environments that are not currently needed, are inactive. A deactivated logical server does not consume computer resources such as CPU, memory, or power, but its profile, including associated storage, is retained, making it easy to reactivate the logical server to quickly make the environment again available for use.


Figure 7 An agile test and development infrastructure using logical servers


Bill of materials
Table 28 Bill of materials
Item | HP BladeSystem Matrix included/orderable separately
HP BladeSystem c7000 Enclosure | HP BladeSystem Matrix included
HP VC Flex-10 10Gb Ethernet module x2 | HP BladeSystem Matrix included
HP VC 8Gb 8-Port FC module x2 | HP BladeSystem Matrix included
HP StorageWorks EVA4400 | HP BladeSystem Matrix included (optional)
HP ProLiant BL460c G6 CMS server blade | HP BladeSystem Matrix included (default, deselectable)
HP Insight Software and Licenses | HP BladeSystem Matrix included
HP ProCurve 6600 with 10Gb support | Orderable
HP ProLiant BL460c x3 | Orderable
Windows Server 2003 SP2 EE | Orderable
Red Hat Enterprise Linux | Orderable

Step 1: Define services


Table 29 Define services
Service | Host configuration | Software | Storage requirements | Network requirements
Management Service (CMS) | ProLiant BL460c | Insight Software, Command View EVA, SQL Express | Boot from SAN | Corporate Network #1 1Gb; Deployment Network #2 2Gb
TestV1Windows | Physical ProLiant BL460c | Windows Server 2003 R2 SP2 SE | Boot from SAN | Corporate Network #1 1Gb; Deployment Network #2 2Gb
DevV2Windows | Physical ProLiant BL460c | Windows Server 2003 R2 SP2 SE | Boot from SAN | Corporate Network #1 1Gb; Deployment Network #2 2Gb

Step 2a: Racks and enclosures


Table 30 Racks and enclosures plan
Item | Value
Matrix Rack #1
• Rack Model: 10642 G2
• Rack Name: Mx-Rack1
Matrix Enclosure #1 (Starter kit)
• Enclosure Model: Matrix Starter Kit
• Enclosure Name: Mx-Rack1enc1
• Enclosure Location (Rack Name, U#): Mx-Rack1, U1

Step 2b: Facility planning


Table 31 Facility requirements
Facility planning | Value
Facility power connections:
• Voltage, Phase: 208V, 3 phase (Delta)
• Receptacle type: NEMA L15-30R
• Circuit rating: 30A breakers
• Circuit de-rating percentage for the locality: 20%
• UPS or WALL: WALL
• Power redundancy? (If yes, specify labeling scheme): PWR1 & PWR2
Planning metrics for racks:
• Rack weight estimate: 698 lbs
• Airflow estimate: 551 CFM
• Watts (W), Volt-Amps (VA) estimate for rack: 8088 W, 8255 VA (Watson order tool); 5355 W, 5464 VA (BladeSystem Power Sizing tool)
• Thermal limit per rack (in Watts): 5000 Watts; customer desires power capping to manage limitations
• Quantity and type of PDUs for rack: Quantity two of 8.6 kVA Modular PDU, 24A 3 Phase (AF512A); customer will add PDUs when expanding
Monitored PDUs only:
• Additional uplink & IP address:
• SNMP community strings:
Installation characteristics:
• Data center location for rack: 123 Main St. Springfield, IL, USA
• Side clearances/floor space allocation: Customer familiar with space requirements
• Verify ready to receive and install rack: Logistics handled by customer upon delivery

Step 2c: Virtual Connect domain configuration


Table 32 Virtual Connect domain configuration
Item | Value
Virtual Connect Domain Group #1
• Name: VCDG-Matrix1
• List the names of each VCD in this VCDG: VCD1-Matrix1
Virtual Connect Domain #1
• Name: VCD1-Matrix1
• List the names of each enclosure in this VCD: MatrixRack1Enc1
• Multi-enclosure stacking (N/A, recommended, minimum, or other?): N/A
• MAC addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64): HP-defined, range 1 (00-17-xx-77-00-00 : 00-17-xx-77-03-FF)
• WWN addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64): HP-defined, range 1 (50:06:xx:00:03:C2:62:00 : 50:06:xx:00:03:C2:65:FF)
• Serial numbers (HP-defined or user-defined? If HP-defined, select unique range 1-64): HP-defined, range 1 (range: ??? - ???)

Step 3c: Storage volumes


Table 33 Storage volumes
Server | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
CMS | 50GB boot | matrix_cms_vdisk | matrix_cms_vhost | N/A | EVA4400
TestV1Windows | 16GB boot | TestV1_w2k3_vdisk | TestV1_w2k3_vhost | None | EVA4400
DevV2Windows | 16GB boot | DevV2_w2k8_vdisk | DevV2_w2k8_vhost | None | EVA4400
TestV1RHEL | 16GB boot | TestV1_rhel5_vdisk | TestV1_rhel5_vhost | None | EVA4400
DevV1Windows | 16GB boot | DevV1_w2k8_vdisk | DevV1_w2k8_vhost | None | EVA4400

Step 4a: Network configuration


Table 34 Configuration of networks and switches
Item | Value
Production LAN IP range | 16.89.128.1 - 16.89.128.254
Management LAN IP range | 10.1.1.1 - 10.1.1.254

Step 4b: VC Ethernet uplinks


Table 35 VC Ethernet uplink connections
Network name | VC uplinks | Router uplinks | Signal type
Production | Mx-Rack1-enc1:bay 1:port 1 | ProCurve Switch #1, port 25 | 10Gbase-SR
Production | Mx-Rack1-enc1:bay 2:port 1 | ProCurve Switch #1, port 26 | 10Gbase-SR
Management | Mx-Rack1-enc1:bay 1:port 2 | ProCurve Switch #1, port 27 | 10Gbase-SR
Management | Mx-Rack1-enc1:bay 2:port 2 | ProCurve Switch #1, port 28 | 10Gbase-SR
Deployment | Mx-Rack1-enc1:bay 1:port 3 | ProCurve Switch #1, port 29 | 10Gbase-SR
Deployment | Mx-Rack1-enc1:bay 2:port 3 | ProCurve Switch #1, port 30 | 10Gbase-SR


Step 4c: Services network connections


Table 36 Network host connections

Server             | Connection                               | IP            | Flex-10 Bandwidth allotment | Connected to (1)
Management Service | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.100 | 1Gb | Production LAN segment
Management Service | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.100    | 2Gb | Management LAN segment
TestV1Windows      | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.101 | 1Gb | Production LAN segment
TestV1Windows      | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.101    | 2Gb | Management LAN segment
DevV2Windows       | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.102 | 1Gb | Production LAN segment
DevV2Windows       | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.102    | 2Gb | Management LAN segment
TestV1RHEL         | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.103 | 1Gb | Production LAN segment
TestV1RHEL         | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.103    | 2Gb | Management LAN segment
DevV1Windows       | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.104 | 1Gb | Production LAN segment
DevV1Windows       | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.104    | 2Gb | Management LAN segment

(1) To enable network redundancy, multiple connections can be made to the same LAN segment.
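The Flex-10 allotments in Table 36 can be checked mechanically: the allotments a server carries on one 10Gb Flex-10 port must not add up to more than 10 Gb. A minimal sketch (Python; illustrative only, not an HP tool), using the TestV1Windows rows above and assuming both connections ride the same physical port:

    FLEX10_PORT_GB = 10

    # (server, connection) -> allotted Gb, copied from Table 36
    allotments = {
        ("TestV1Windows", "Corporate Network #1"): 1,
        ("TestV1Windows", "Deployment Network #2"): 2,
    }

    def port_usage(server, conns):
        """Sum the Flex-10 allotments a server places on its 10Gb port."""
        return sum(gb for (srv, _), gb in conns.items() if srv == server)

    used = port_usage("TestV1Windows", allotments)
    print(f"TestV1Windows: {used} of {FLEX10_PORT_GB} Gb allotted -> "
          f"{'OK' if used <= FLEX10_PORT_GB else 'over-subscribed'}")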

Step 4d: Management network connections


Table 37 Management connections

Network    | Host uplink                | Router uplink (data center switch and port) | Signal type | IP address      | Provision type (EBIPA, Static, or DHCP)
Production | Starter Kit OA #1          | ProCurve Switch #1, port 1 | 1000Base-T  | 16.89.129.10    | Static
Production | Starter Kit OA #2          | ProCurve Switch #1, port 2 | 1000Base-T  | 16.89.129.11    | Static
Production | Starter Kit VC Ethernet #1 | Through OA connection      | Multiplexed | 16.89.129.70    | EBIPA
Production | Starter Kit VC Ethernet #2 | Through OA connection      | Multiplexed | 16.89.129.71    | EBIPA
Production | Starter Kit VC Fibre #1    | Through OA connection      | Multiplexed | 16.89.129.72    | EBIPA
Production | Starter Kit VC Fibre #2    | Through OA connection      | Multiplexed | 16.89.129.73    | EBIPA
Production | Optional VC Domain IP      | Through OA connection      | Multiplexed | 16.89.129.12    | Static
Production | iLO Range                  | Through OA connection      | Multiplexed | 16.89.129.50-65 | EBIPA


Table 37 Management connections (continued)

Network    | Host uplink             | Router uplink (data center switch and port) | Signal type | IP address   | Provision type
Production | EVA4400 ABM MGMT port   | ProCurve Switch #1, port 3 | 100Base-T  | 16.89.129.7  | Static
Production | EVA4400 Fibre Switch #1 | ProCurve Switch #1, port 4 | 100Base-T  | 16.89.129.8  | Static
Production | EVA4400 Fibre Switch #2 | ProCurve Switch #1, port 5 | 100Base-T  | 16.89.129.9  | Static
Production | ProCurve Switch #1      | DC switch #1, port 7       | 1000Base-T | 16.89.129.1  | Static
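EBIPA hands out consecutive addresses from the configured range, one per device bay. A minimal sketch (Python) that expands the iLO range 16.89.129.50-65 above into per-bay addresses; the count of 16 half-height device bays in the c7000 is our assumption:

    import ipaddress

    start = ipaddress.IPv4Address("16.89.129.50")
    bays = 16  # c7000 device bays for half-height blades (assumption)

    for bay in range(1, bays + 1):
        # EBIPA assigns start, start+1, ... in bay order
        print(f"bay {bay:2d} -> iLO {start + (bay - 1)}")
    # the last bay gets 16.89.129.65, matching the range in Table 37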

Step 4e: Access credentials


Table 38 Access credentials

Access                            | Domain   | Credential      | Password
Starter Kit OA                    | N/A      | acmeAdmin       |
Starter Kit VC                    | N/A      | acmeAdmin       |
SNMP READ ONLY community          | N/A      | AcmeDC1-public  |
SNMP READ WRITE community         | N/A      | AcmeDC1-private |
CMS service account               | Acme.net | Administrator   |
Insight Control server deployment | Acme.net | Administrator   |
HP Command View EVA               | Acme.net | Administrator   |

Example 2: An agile test and development infrastructure with IO


IO enables IT architects to visually design a catalog of infrastructure service templates, including multi-tier, multi-node configurations that can be activated in minutes. To ease the load on your IT staff and streamline processes, IO enables self-service provisioning. Authorized users can provision infrastructure from pools of shared server and storage resources using a self-service portal. This portal includes features to automate and streamline the approval process for requested infrastructure. In addition, IO is designed to integrate with existing IT management tools and processes using an embedded workflow automation tool. Integration can make the infrastructure delivery process work more efficiently and reliably across IT architects, administrators, and operations teams.

In this solution, IO is used with Insight Dynamics to create a dynamic multi-server, multi-tier test and development infrastructure using both logical servers deployed as physical server blades and VMware ESX VM logical servers. Use of IO reduces cost, saves time, and improves quality by automatically provisioning easily managed services, as needed, using published service templates that incorporate best practices and policies. IO delivers advanced template-driven design, provisioning, and ongoing operations for multi-server, multi-tier infrastructure services.

IO provides role-based user interfaces and functionality. An architect uses the IO Designer to create and publish service templates that incorporate best practices and policies. Service templates can also include workflows to automate and integrate customer-specific actions, such as approvals or notifications, into automated service provisioning and resource allocation. Using the IO Self-Service Portal, an end user can easily create and manage a service, as needed. An administrator uses the IO console integrated with HP SIM to manage users, resource pools, and self-service requests.


Figure 8 An agile test and development infrastructure with IO templates

Bill of materials
Table 39 Bill of materials

Item                                          | HP BladeSystem Matrix included / orderable separately
HP BladeSystem c7000 Enclosure                | HP BladeSystem Matrix included
HP VC Flex10 10Gb Ethernet module x2          | HP BladeSystem Matrix included
HP VC 8Gb 8Port FC module x2                  | HP BladeSystem Matrix included
HP StorageWorks EVA4400                       | HP BladeSystem Matrix included (optional)
HP ProLiant BL460c G6 Server CMS server blade | HP BladeSystem Matrix included (default, deselectable)
HP Insight Software and Licenses              | HP BladeSystem Matrix included

Table 39 Bill of materials (continued)

Item                               | HP BladeSystem Matrix included / orderable separately
HP ProCurve 6600 with 10Gb support | Orderable
HP ProLiant BL680c Server x2       | Orderable
HP ProLiant BL460c Server x2       | Orderable
Windows Server 2003 SP2 EE         | Orderable
Red Hat Enterprise Linux           | Orderable
VMware ESX Server with VMotion     | Orderable
VMware vCenter server              | Orderable

Step 1: Define services


Table 40 Define services

Service | Host configuration | Software | Storage requirements | Network requirements

Management service
Management service | BL460c (CMS) | Insight Software, HP Command View EVA, SQL Express | Boot from SAN | Corporate Network #1 1 Gb/s; Deployment Network #2 2 Gb/s

Application tier
VMApp1  | Virtual Server  | Windows Server 2003 R2 SP2 SE | VMFS                                    | Production LAN#1 1 Gb/s; Management LAN#2 2 Gb/s
VMApp2  | Virtual Server  | Windows Server 2003 R2 SP2 SE | VMFS                                    | Production LAN#1 1 Gb/s; Management LAN#2 2 Gb/s
VMHost1 | Physical BL680c | ESX v4.1                      | Boot from SAN; VMFS Storage Pool 1 90Gb | Production LAN#1 1 Gb/s; Management LAN#2 2 Gb/s; VMotion LAN#3 4 Gb/s
VMHost2 | Physical BL680c | ESX v4.1                      | Boot from SAN; VMFS Storage Pool 1 90Gb | Production LAN#1 1 Gb/s; Management LAN#2 2 Gb/s; VMotion LAN#3 4 Gb/s


Table 40 Define services (continued)

Service | Host configuration | Software | Storage requirements | Network requirements

Database tier
DB1 | Physical BL460c / Virtual Server | Windows Server 2003 R2 SP2 EE | Boot from SAN | Production LAN#1 1 Gb/s; Management LAN#2 2 Gb/s
DB2 | Physical BL460c / Virtual Server | Windows Server 2003 R2 SP2 EE | Boot from SAN | Production LAN#1 1 Gb/s; Management LAN#2 2 Gb/s

Step 2a: Racks and enclosures


Table 41 Racks and enclosures plan

Matrix Rack #1
  Rack Model: 10642 G2
  Rack Name: Mx-Rack1

Matrix Enclosure #1 (Starter kit)
  Enclosure Model: HP BladeSystem Matrix Starter Kit
  Enclosure Name: Mx-Rack1-enc1
  Enclosure Location (Rack Name, U#): Mx-Rack1, U1

Step 2b: Facility planning


Table 42 Facility requirements

Facility power connection:
  Voltage, Phase: 200-240 VAC, Three phase
  Receptacle type: IEC 60309, 2 pole, 3 wire
  Circuit rating: 32A
  Circuit de-rating percentage for the locality: 0%
  UPS or WALL: WALL
  Power redundancy? (If yes, specify labeling scheme): Yes; PWR1 & PWR2

Planning metrics for rack:
  Rack weight estimate (in kg or lbs): 365 kg
  Airflow estimate (in CMM/CFM): 16.596 CMM (BladeSystem Power Sizing tool)
  Watts (W), Volt-Amps (VA) estimate for rack: 8088 W, 8255 VA (Watson order tool); 5355 W, 5464 VA (BladeSystem Power Sizing tool)
  Thermal limit per rack (in Watts) (customer requirement, compare to estimate): No limit specified; customer to confirm facility limits exceed estimates


Table 42 Facility requirements (continued)

  Quantity and type of PDUs for rack: Quantity four of HP 32A HV Core Only Modular Power Distribution Unit selected; two extra installed for future expansion
  Monitored PDUs only (additional uplink & IP address, SNMP community strings):

Installation characteristics:
  Identify data center location: Customer's site in Brussels, Belgium
  Side clearances / floor space allocation: Hot/cold aisles at adequate distances; bayed to end of existing row
  Verify ready to receive and install rack: Customer requires 10 business days after delivery before scheduling install
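The airflow and thermal rows above use different units than Example 1 (CMM here, CFM there); the conversion and the thermal comparison are easy to script. A minimal sketch (Python; the standard factor 1 CMM = 35.31 CFM is the only outside fact used):

    CFM_PER_CMM = 35.31   # 1 cubic meter per minute in cubic feet per minute

    airflow_cmm = 16.596                   # Table 42, BladeSystem Power Sizing tool
    print(f"{airflow_cmm} CMM = {airflow_cmm * CFM_PER_CMM:.0f} CFM")

    rack_watts = 5355                      # Power Sizing tool estimate from Table 42
    thermal_limit_watts = None             # Table 42: no limit specified
    if thermal_limit_watts is None:
        print("no facility thermal limit specified; customer to confirm")
    elif rack_watts > thermal_limit_watts:
        print("rack estimate exceeds the facility thermal limit")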

Step 2c: Virtual Connect domain configuration


Table 43 Virtual Connect domain configuration

Virtual Connect Domain Group #1
  Name: VCDG-Matrix1
  List the names of each VCD in this VCDG: VCD1-Matrix1

Virtual Connect Domain #1
  Name: VCD1-Matrix1
  List the names of each enclosure in this VCD: MatrixRack1Enc1
  Multi-enclosure stacking (N/A, recommended, minimum, or other?): N/A
  MAC addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64; range: 00-17-xx-77-00-00 : 00-17-xx-77-03-FF): 1
  WWN addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64; range: 50:06:xx:00:03:C2:62:00 : 50:06:xx:00:03:C2:65:FF): 1
  Serial numbers (HP-defined or user-defined? If HP-defined, select unique range 1-64): 1


Step 3c: Storage volumes


Table 44 Storage volumes

Service         | Use and size | vDisk (LUN) name | vHost name                   | Replicated to | Connected to
CMS             | 50GB boot    | matrix_cms_vdisk | matrix_cms_vhost             | N/A           | EVA4400
VMApp1          | 16GB boot    | VMApp1_vdisk     | VMApp1_vhost                 | None          | EVA4400
VMApp2          | 16GB boot    | VMApp2_vdisk     | VMApp2_vhost                 | None          | EVA4400
VMHost1         | 16GB boot    | VMHost1_vdisk    | VMHost1_vhost                | None          | EVA4400
VMHost2         | 16GB boot    | VMHost2_vdisk    | VMHost2_vhost                | None          | EVA4400
ESX shared disk | 250GB VMFS   | ESX_shared_vdisk | VMHost1_vhost, VMHost2_vhost | None          | EVA4400
DB1             | 16GB boot    | DB1_boot_vdisk   | DB1_vhost                    | None          | EVA4400
DB1             | 50GB data    | DB1_sql_vdisk    | DB1_vhost                    | None          | EVA4400
DB2             | 16GB boot    | DB2_boot_vdisk   | DB2_vhost                    | None          | EVA4400
DB2             | 50GB data    | DB2_cstore_vdisk | DB2_vhost                    | None          | EVA4400

Step 4a: Network configuration


Table 45 Configuration of networks and switches

Item                    | Value
Production LAN IP range | 16.89.128.1 - 16.89.128.254
Management LAN IP range | 10.1.1.1 - 10.1.1.254

Step 4b: Virtual Connect Ethernet uplinks


Table 46 VC Ethernet uplink connections

Network name      | VC uplinks                                             | Router uplinks                      | Signal type
Production        | Mx-Rack1-enc1:bay 1:port 1, Mx-Rack1-enc1:bay 2:port 1 | ProCurve Switch #1, ports 25 and 26 | 10Gbase-SR
Management (SUS1) | Mx-Rack1-enc1:bay 1:port 2, Mx-Rack1-enc1:bay 2:port 2 | ProCurve Switch #1, ports 27 and 28 | 10Gbase-SR
Deployment        | (carried on shared uplink set SUS1, above)             |                                     |

Step 4c: Services network connections


Table 47 Network host connections

Service            | Connection                               | IP            | Flex-10 Bandwidth allotment | Connected to (1)
Management Service | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.100 | 1 Gb/s | Production LAN segment
Management Service | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.100    | 2 Gb/s | Management LAN segment


Table 47 Network host connections (continued)

Service | Connection | IP | Flex-10 Bandwidth allotment | Connected to (1)

Application tier
VMApp1  | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.101 | 1 Gb/s | Production LAN segment
VMApp1  | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.101    | 2 Gb/s | Management LAN segment
VMApp2  | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.102 | 1 Gb/s | Production LAN segment
VMApp2  | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.102    | 2 Gb/s | Management LAN segment
VMHost1 | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.103 | 1 Gb/s | Production LAN segment
VMHost1 | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.103    | 2 Gb/s | Management LAN segment
VMHost1 | VC VMotion Network #3                    | 10.1.1.113    | 4 Gb/s | None (internal)
VMHost2 | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.104 | 1 Gb/s | Production LAN segment
VMHost2 | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.104    | 2 Gb/s | Management LAN segment
VMHost2 | VC VMotion Network #3                    | 10.1.1.114    | 4 Gb/s | None (internal)

Database tier
DB1 | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.105 | 1 Gb/s | Production LAN segment
DB1 | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.105    | 2 Gb/s | Management LAN segment
DB2 | VC-Ethernet Uplink Corporate Network #1  | 16.89.129.106 | 1 Gb/s | Production LAN segment
DB2 | VC-Ethernet Uplink Deployment Network #2 | 10.1.1.106    | 2 Gb/s | Management LAN segment

(1) To enable network redundancy, multiple connections can be made to the same LAN segment.
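Note that the host addresses in Table 47 sit in 16.89.129.x while Table 45 lists the production range as 16.89.128.1-16.89.128.254; a quick script catches exactly this kind of mismatch. A minimal sketch (Python; the /23 supernet covering both is our assumption, not a value from the worksheet):

    import ipaddress

    production = ipaddress.ip_network("16.89.128.0/23")  # assumed supernet
    management = ipaddress.ip_network("10.1.1.0/24")

    hosts = {  # (production IP, management IP), copied from Table 47
        "VMHost1": ("16.89.129.103", "10.1.1.103"),
        "VMHost2": ("16.89.129.104", "10.1.1.104"),
        "DB1":     ("16.89.129.105", "10.1.1.105"),
    }

    for name, (prod_ip, mgmt_ip) in hosts.items():
        ok = (ipaddress.ip_address(prod_ip) in production and
              ipaddress.ip_address(mgmt_ip) in management)
        print(f"{name}: {'OK' if ok else 'outside planned ranges'}")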

Step 4d: Management network connections


Table 48 Management connections

Network    | Host uplink                | Router uplink (data center switch and port) | Signal type | IP address   | Provision type (EBIPA, Static, or DHCP)
Production | Starter Kit OA #1          | ProCurve Switch #1, port 1 | 1000Base-T  | 16.89.129.10 | Static
Production | Starter Kit OA #2          | ProCurve Switch #1, port 2 | 1000Base-T  | 16.89.129.11 | Static
Production | Starter Kit VC Ethernet #1 | Through OA connection      | Multiplexed | 16.89.129.70 | EBIPA
Production | Starter Kit VC Ethernet #2 | Through OA connection      | Multiplexed | 16.89.129.71 | EBIPA
Production | Starter Kit VC Fibre #1    | Through OA connection      | Multiplexed | 16.89.129.72 | EBIPA


Table 48 Management connections (continued)

Network    | Host uplink             | Router uplink (data center switch and port) | Signal type | IP address      | Provision type
Production | Starter Kit VC Fibre #2 | Through OA connection      | Multiplexed | 16.89.129.73    | EBIPA
Production | Optional VC Domain IP   | Through OA connection      | Multiplexed | 16.89.129.12    | Static
Production | iLO Range               | Through OA connection      | Multiplexed | 16.89.129.50-65 | EBIPA
Production | EVA4400 ABM MGMT port   | ProCurve Switch #1, port 3 | 1000Base-T  | 16.89.129.7     | Static
Production | EVA4400 Fibre Switch #1 | ProCurve Switch #1, port 4 | 1000Base-T  | 16.89.129.8     | Static
Production | EVA4400 Fibre Switch #2 | ProCurve Switch #1, port 5 | 1000Base-T  | 16.89.129.9     | Static
Production | ProCurve Switch #1      | DC switch #1, port 7       | 1000Base-T  | 16.89.129.1     | Static

Step 4e: Access credentials


Table 49 Access credentials

Access                    | Domain   | Credential      | Password
Starter Kit OA            | N/A      | Admin           |
Starter Kit VC            | N/A      | Admin           |
SNMP READ ONLY community  | N/A      | AcmeDC1-public  |
SNMP READ WRITE community | N/A      | AcmeDC1-private |
CMS service account       | Acme.net | Administrator   |
Deployment server         | Acme.net | Administrator   |
HP Command View EVA       | Acme.net | Administrator   |


B Sample configuration templates


This section provides sample templates. The following templates are also available in the accompanying HP BladeSystem Matrix Planning Guide Worksheet.

Step 1a: Application services template


Table 50 Application services

Service | Host configuration | Software | Storage requirements | Network requirements

(service name)
(service tier #1 name)
(server) | (server type) |  | (SAN requirements) | (LAN requirements)
(service tier #2 name)
(server) | (server type) |  | (SAN requirements) | (LAN requirements)

Step 1b: Management servers template


Table 51 Required management services

Service | Host configuration | Software | Storage requirements | Network requirements

Management environment
CMS #1                            | (server type) | Insight Software | (SAN requirements) | (LAN requirements)
Insight Control server deployment | (server type) |                  | (SAN requirements) | (LAN requirements)
HP Command View EVA               | (server type) |                  | (SAN requirements) | (LAN requirements)
SQL server                        | (server type) |                  | (SAN requirements) | (LAN requirements)
(Other)                           | (server type) |                  | (SAN requirements) | (LAN requirements)

Step 2a: Rack and enclosures template


Table 52 Racks and enclosures plan

Matrix Rack #1
  Rack Model:
  Rack Name:

Matrix Enclosure #1 (Starter kit)


Table 52 Racks and enclosures plan (continued)

  Enclosure Model:
  Enclosure Name:
  Enclosure Location (Rack Name, U#):

Step 2b: Facility planning template


Table 53 Facility requirements

Facility power connection characteristics:
  Voltage, Phase:
  Receptacle type:
  Circuit rating:
  Circuit de-rating percentage for the locality:
  UPS or WALL:
  Power redundancy? (If yes, specify labeling scheme):

Planning metrics for rack:
  Rack weight estimate (in kg or lbs):
  Airflow estimate (in CMM/CFM):
  Watts (W), Volt-Amps (VA) estimate for rack:
  Thermal limit per rack (in Watts) (customer requirement, compare to estimate):
  Quantity and type of PDUs for rack:
  Monitored PDUs only (additional uplink & IP address, SNMP community strings):

Installation characteristics:
  Identify data center location:
  Side clearances / floor space allocation:
  Verify ready to receive and install rack:

Step 2c: Virtual Connect domain configuration template


Table 54 Virtual Connect domain configuration

Virtual Connect Domain Group #1
  Name:
  List the names of each VCD in this VCDG:

Virtual Connect Domain #1


Table 54 Virtual Connect domain configuration (continued)

  Name:
  List the names of each enclosure in this VCD:
  Multi-enclosure stacking (N/A, recommended, minimum, or other?):
  MAC addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64):
  WWN addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64):
  Serial numbers (HP-defined or user-defined? If HP-defined, select unique range 1-64):

Virtual Connect Domain #2
  Name:
  List the names of each enclosure in this VCD:
  Multi-enclosure stacking (N/A, recommended, minimum, or other?):
  MAC addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64):
  WWN addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64):
  Serial numbers (HP-defined or user-defined? If HP-defined, select unique range 1-64):

Step 3a: Collect details about the customer-provided SAN storage


Table 55 Storage and fabrics

Question: Does some or all of the SAN already exist? Will the Matrix rack and enclosures be connected to an already installed and working SAN and array, or will some or all of the SAN storage be installed for the HP BladeSystem Matrix solution?
Response:

Number of separate SANs:
Number of switches per SAN (assume 2):
Number of arrays:


Step 3b: FC SAN storage connections


Table 56 FC SAN connections

Customer SAN name | Connection | Storage controller WWPN | VC FC SAN profile | Note
                  | 1          |                         |                   | Minimum of 1
                  | 2          |                         |                   | Typically a second connection to the first SAN for HA
                  | 3          |                         |                   |
                  | 4          |                         |                   |
                  | 5          |                         |                   |
                  | 6          |                         |                   |

Step 3c: iSCSI SAN storage connections


Table 57 Example iSCSI SAN storage connections

Network | Host uplink           | Router uplink (data center switch and port) | Signal type | IP address | Provision type (Static or DHCP)
iSCSI   | P4300 G2 Node#1/Port1 | DC1-switch/Port1  | 1000Base-T |  |
iSCSI   | P4300 G2 Node#1/Port2 | DC2-switch/Port1  | 1000Base-T |  |
iSCSI   | P4300 G2 Node#2/Port1 | DC1-switch/Port2  | 1000Base-T |  |
iSCSI   | P4300 G2 Node#2/Port2 | DC2-switch/Port2  | 1000Base-T |  |
iSCSI   | P4300 G2 Node#3/Port1 | DC1-switch/Port3  | 1000Base-T |  |
iSCSI   | P4300 G2 Node#3/Port2 | DC2-switch/Port3  | 1000Base-T |  |
iSCSI   | P4300 G2 Node#4/Port1 | DC1-switch/Port4  | 1000Base-T |  |
iSCSI   | P4300 G2 Node#4/Port2 | DC2-switch/Port4  | 1000Base-T |  |
iSCSI   | Enclosure1:Bay1:Port3 | DC1-switch/Port25 | 10GBase-T  |  |
iSCSI   | Enclosure1:Bay2:Port3 | DC2-switch/Port25 | 10GBase-T  |  |
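The example wiring above follows a regular pattern: each P4300 node's first port lands on DC1-switch and its second port on DC2-switch, using the node number as the switch port. A minimal sketch (Python) that regenerates the plan for any node count; illustrative only:

    def iscsi_wiring(nodes=4):
        """Redundant P4300 cabling: port 1 -> DC1-switch, port 2 -> DC2-switch."""
        rows = []
        for node in range(1, nodes + 1):
            for port in (1, 2):
                rows.append((f"P4300 G2 Node#{node}/Port{port}",
                             f"DC{port}-switch/Port{node}",
                             "1000Base-T"))
        return rows

    for host, uplink, signal in iscsi_wiring():
        print(f"{host:24s} -> {uplink:18s} {signal}")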

Step 3d: Define storage volumes


Table 58 Storage volumes

Server        | Use and size     | vDisk (LUN) name | vHost name   | Replicated to                                                   | Connected to
(server name) | (LUN properties) | (xxxx_vdisk)     | (xxxx_vhost) | (remote target and data replication group name, if replicated) | (local SAN storage target)


Step 4a: Network configuration details


Table 59 Configuration of networks and switches

Production LAN
  IP address (network number):
  Subnet mask:
  IP range for auto-provisioning:
  VLAN tag:
  Preferred link connection speed:
  Gateway IP address:
  DHCP server:
  DNS server #1:
  DNS server #2:
  DNS domain name:

Management LAN
  IP address (network number):
  Subnet mask:
  IP range for auto-provisioning:
  VLAN tag:
  Preferred link connection speed:
  DHCP server:
  DNS server #1:
  DNS server #2:
  DNS domain name:

Deployment LAN
  IP address (network number): 192.168.1.0
  Subnet mask: 255.255.255.0
  Deployment server: (Insight Control server deployment, HP Server Automation, or HP Ignite-UX)
  VLAN tag:
  Preferred link connection speed:
  DHCP server:
  DNS server #1:


Table 59 Configuration of networks and switches (continued)

Deployment LAN (continued)
  DNS server #2: N/A
  Gateway IP address: N/A
  DNS domain name: N/A

VMotion LAN
  IP address (network number): 192.168.2.0
  Subnet mask: 255.255.255.0
  VLAN tag:
  Preferred link connection speed:
  DHCP server: N/A
  DNS server #1: N/A
  DNS server #2: N/A
  Gateway IP address: N/A
  DNS domain name:

Other network services
  SMTP host:
  Time source:
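When the worksheet is filled in electronically, the per-LAN sections of Table 59 are easy to validate before the installation visit. A minimal sketch (Python; the field names mirror the table, the checker itself is ours):

    REQUIRED = ("ip_network", "subnet_mask", "vlan_tag", "dns_domain_name")

    deployment_lan = {
        "ip_network": "192.168.1.0",
        "subnet_mask": "255.255.255.0",
        "vlan_tag": None,            # still to be supplied by the customer
        "dns_domain_name": "N/A",    # N/A is an acceptable, deliberate answer
    }

    missing = [f for f in REQUIRED if deployment_lan.get(f) in (None, "")]
    print("missing fields:", ", ".join(missing) if missing else "none")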

Step 4b: Virtual Connect Ethernet uplinks template


Table 60 VC Ethernet uplink connections

Network name    | VC uplinks (Enclosure.bay.port) | Router uplinks (Switch.port) | Signal type
Production      |  |  |
Management      |  |  |
Deployment      |  |  |
VMotion         |  |  |
iSCSI           |  |  |
Integrity OVMM  |  |  |
SG heartbeat    |  |  |
SG failover     |  |  |
(other network) |  |  |


Step 4c: Services network connections template


Table 61 Network host connections

Server         | Connection                  | Port assignment (1) | Flex-10 Bandwidth allotment (1) | PXE setting (1)
(server names) | (VC Ethernet connection #1) | (connection type)   | (connection bandwidth)          | (uplink destination)
(server names) | (VC Ethernet connection #2) | (connection type)   | (connection bandwidth)          | (uplink destination)
VMotion        | N/A                         | N/A                 |                                 |
Deployment     | N/A                         | N/A                 |                                 |

(1) These parameters can be specified when defining network connections to physical blades not auto-provisioned by IO, such as the CMS, deployment server, SQL Server, and ESX hosts.

Step 4d: Management network connections template


Table 62 Management connections

Network    | Host uplink                 | Router uplink (data center switch and port) | Signal type | IP address                | Provision type (EBIPA, Static, or DHCP)
Management | Starter Kit OA #1           |                       | 1000Base-T  |                           |
Management | Starter Kit OA #2           |                       | 1000Base-T  |                           |
Management | Starter Kit VC-Enet #1      | Through OA connection | Multiplexed |                           |
Management | Starter Kit VC-Enet #2      | Through OA connection | Multiplexed |                           |
Management | Starter Kit VC-FC #1        | Through OA connection | Multiplexed |                           |
Management | Starter Kit VC-FC #2        | Through OA connection | Multiplexed |                           |
Management | Optional VC Domain IP       | Through OA connection | Multiplexed |                           |
Management | Starter Kit iLO Range       | Through OA connection | Multiplexed | (starting IP - ending IP) |
Management | Expansion Kit #1 OA #1      |                       | 1000Base-T  |                           |
Management | Expansion Kit #1 OA #2      |                       | 1000Base-T  |                           |
Management | Expansion Kit #1 VC-Enet #1 | Through OA connection | Multiplexed |                           |
Management | Expansion Kit #1 VC-Enet #2 | Through OA connection | Multiplexed |                           |
Management | Expansion Kit #1 VC-FC #1   | Through OA connection | Multiplexed |                           |
Management | Expansion Kit #1 VC-FC #2   | Through OA connection | Multiplexed |                           |
Management | Expansion Kit #1 iLO Range  | Through OA connection | Multiplexed | (starting IP - ending IP) |
Management | EVA4400 ABM MGMT port       |                       | 100Base-T   |                           |

Table 62 Management connections (continued)

Network    | Host uplink                 | Router uplink (data center switch and port) | Signal type | IP address | Provision type
Management | EVA4400 Fibre switch #1     |     | 100Base-T |  |
Management | EVA4400 Fibre switch #2     |     | 100Base-T |  |
Management | P4300 G2 Node #1 MGMT       |     | 100Base-T |  |
Management | P4300 G2 Node #2 MGMT       |     | 100Base-T |  |
Management | Other SAN switch            |     | 100Base-T |  |
Management | Other FC Storage controller |     | 100Base-T |  |
Management | Monitored PDU #1            | N/A | 100Base-T |  |
Management | Monitored PDU #2            | N/A | 100Base-T |  |
Management | Network Switch #1           | N/A |           |  |
Management | Network Switch #2           | N/A |           |  |

Step 4e: Access credentials template


Table 63 Access credentials

Access                           | Domain | Credentials | Password
Starter Kit OA                   |        |             |
Starter Kit VC                   |        |             |
SNMP READ ONLY community string  | N/A    |             |
SNMP READ WRITE community string | N/A    |             |
CMS service account              |        |             |
Deployment server                |        |             |
SQL Server                       |        |             |
VMware vCenter                   |        |             |
HP Command View EVA              |        |             |
SAN switch account               |        |             |
IO User                          |        |             |
IO Architect                     |        |             |
IO Administrator                 |        |             |
Expansion Kit #1 OA              | N/A    |             |
Expansion Kit #1 VC              | N/A    |             |


C Optional Management Services integration notes


This section explains how to integrate an HP BladeSystem Matrix implementation with optional management services. The following implementation services may be delivered in conjunction with an HP BladeSystem Matrix implementation service, subject to meeting all requirements:
Insight Recovery implementation
Insight Control for Microsoft System Center implementation
Insight Control for VMware vCenter Server implementation

These services are discussed in further detail below. If planned for integration, the following management services must be implemented before HP BladeSystem Matrix is delivered:
HP Server Automation software
HP Ignite-UX software
VMware vCenter Server software
Microsoft System Center software

If implementation services are ordered for these other management services, all required planning for them must be completed in addition to the planning for this implementation service.

HP BladeSystem Matrix and HP Server Automation


HP Server Automation is supported for integration with HP BladeSystem Matrix, subject to meeting all requirements. A functional overview of this integration is detailed in Deliver Infrastructure and Applications in minutes with HP Server Automation and HP BladeSystem Matrix with Insight Dynamics. The implementation services and licensing for HP Server Automation are not included in the HP BladeSystem Matrix implementation service. The requirements and prerequisites related to this integration, including limitations, version compatibility, setup instructions, and troubleshooting, are detailed in Integrating HP Server Automation with HP BladeSystem Matrix/Insight Dynamics.

HP BladeSystem Matrix and Insight Recovery


HP BladeSystem Matrix includes licensing of servers for IR. However, the implementation service for IR is not included in the HP BladeSystem Matrix implementation service. Per-event installation services should be ordered if the following service is delivered with the HP BladeSystem Matrix installation: Quantity 1 of HP Startup Insight Recovery SVC. See the Service Delivery Guide (SDG) for all other requirements and prerequisites related to fulfilling this service.

HP BladeSystem Matrix and Insight Control for VMware vCenter Server


HP BladeSystem Matrix provides a complete and comprehensive set of infrastructure management tools. However, some users may also choose to use VMware vCenter Server to manage their virtual infrastructure. HP provides an extension called HP Insight Control for VMware vCenter Server, which is part of the Insight Control that is included with HP BladeSystem Matrix. Per-event installation services should be ordered if the following service is delivered with the HP BladeSystem Matrix installation: Quantity 1 of HP Startup Insight Control VMware vCenter SVC. See the Service Delivery Guide (SDG) for all other requirements and prerequisites related to fulfilling this service.

HP Insight Control for VMware vCenter Server


HP Insight Control for VMware vCenter Server is a plug-in module that delivers HP ProLiant status and events into the vCenter console and enables direct, in-context launch of HP troubleshooting tools. The plug-in can be installed on any server with network access to the HP BladeSystem Matrix CMS, the vCenter CMS and the managed host systems.

HP BladeSystem Matrix and Insight Control for Microsoft System Center


Some users may also choose to use Microsoft System Center for some of their management functions. HP provides a set of integrated extensions called HP Insight Control for Microsoft System Center, which is part of the Insight Control that is included with HP BladeSystem Matrix. Per-event installation services should be ordered if the following service is delivered with the HP BladeSystem Matrix installation: Quantity 1 of HP Startup Insight Control System Center SVC. This section provides the supported configurations and guidelines for each of the components of HP Insight Control for Microsoft System Center when used in an HP BladeSystem Matrix environment. See the Service Delivery Guide (SDG) for all other requirements and prerequisites related to fulfilling this service.

Microsoft System Center Components


There are four components in the Microsoft System Center Management Suite, each of which has its own management console. Although they may be installed onto common servers, they are usually separated onto individual servers for performance reasons:
System Center Configuration Manager
System Center Operations Manager
System Center Virtual Machine Manager
System Center Data Protection Manager
NOTE: System Center Data Protection Manager is not required by Insight Control for Microsoft System Center.

HP Insight Control for Microsoft System Center


HP Insight Control for Microsoft System Center is a set of six integration modules that are installed directly onto the System Center consoles to provide seamless integration of the unique ProLiant and BladeSystem manageability features into the Microsoft System Center environment.
HP ProLiant Server Management Packs for Operations Manager 2007 integrate with System Center Operations Manager to expose the native capabilities of ProLiant servers, including monitoring and alerting.
HP BladeSystem Management Pack for Operations Manager 2007 integrates with System Center Operations Manager to expose the native capabilities of BladeSystem c-Class enclosures, including monitoring and alerting.
HP ProLiant PRO Management Pack for Virtual Machine Manager 2008 works in conjunction with System Center Operations Manager and System Center Virtual Machine Manager to proactively guide and automate movement of virtual machines based upon host hardware alerts.
HP ProLiant Server OS Deployment for Configuration Manager 2007 tightly integrates with System Center Configuration Manager to automatically deploy bare-metal servers. This includes support for pre-deployment hardware and BIOS configuration, and post-OS driver and agent installation.


HP Hardware Inventory Tool for Configuration Manager 2007 uses native System Center Hardware Inventory to provide detailed component-level inventory of every managed server.
HP Server Updates Catalog for System Center Configuration Manager 2007 uses System Center Configuration Manager to install and update ProLiant drivers and firmware using a rules-based model.

HP BladeSystem Matrix CMS


HP does not support installing any of the Microsoft System Center consoles onto the same server as the HP BladeSystem Matrix CMS components. However, some components of Microsoft System Center may manage an HP BladeSystem Matrix CMS. The following table lists the supported configurations for the HP BladeSystem Matrix CMS as a managed node of Microsoft System Center:

Table 64 Supported configurations for the HP BladeSystem Matrix CMS

HP Insight Control for Microsoft System Center component | Supported | Comments
ProLiant Server Management Packs | Yes | The HP BladeSystem Matrix CMS may be a System Center Operations Manager managed node.
BladeSystem Management Pack      | Yes | HP BladeSystem Matrix c-Class enclosures may be managed by System Center Operations Manager. However, the BladeSystem Monitor Service should not be installed on the BladeSystem CMS.
ProLiant PRO Management Pack     | No  | Do not use the ProLiant PRO Management Pack to move any virtual machine that may reside on an HP BladeSystem Matrix host.
ProLiant Server OS Deployment    | No  | Do not use ProLiant Server OS Deployment to deploy operating systems on the HP BladeSystem Matrix CMS.
Hardware Inventory Tool          | Yes | HP BladeSystem Matrix CMS inventory may be viewed by System Center Configuration Manager.
Server Updates Catalog           | No  | Do not use the Server Updates Catalog to update the HP BladeSystem Matrix CMS.

The BladeSystem Management Pack uses a special Windows service to manage and monitor HP BladeSystem c-Class enclosures. This service is normally installed on the System Center Operations Manager console, but it can also be installed on any other Windows server. Do not install the BladeSystem Monitor Service on the HP BladeSystem Matrix CMS.

Other Managed Nodes in a HP BladeSystem Matrix Environment


Other server nodes in a HP BladeSystem Matrix environment (those that are not HP BladeSystem Matrix CMS) may be managed by Microsoft System Center. The following table provides a list of supported configurations for other nodes in a HP BladeSystem Matrix environment:


Table 65 Supported configurations for other nodes in an HP BladeSystem Matrix environment

HP Insight Control for Microsoft System Center component | Supported | Comments
ProLiant Server Management Packs | Yes | Other nodes may be System Center Operations Manager managed nodes.
BladeSystem Management Pack      | Yes | HP BladeSystem Matrix c-Class enclosures may be managed by System Center Operations Manager.
ProLiant PRO Management Pack     | Yes | The ProLiant PRO Management Pack may be used to move virtual machines on other nodes. Ensure that these configuration changes are comprehended when using HP BladeSystem Matrix CMS components to view virtual machine configurations.
ProLiant Server OS Deployment    | No  | Do not use ProLiant Server OS Deployment to deploy operating systems on other nodes in an HP BladeSystem Matrix environment.
Hardware Inventory Tool          | Yes | Inventory of other nodes may be viewed by System Center Configuration Manager.
Server Updates Catalog           | Yes | The Server Updates Catalog may be used to upgrade other nodes, but only when the firmware or driver version matches the supported HP BladeSystem Matrix version. You must explicitly filter the HP BladeSystem Matrix CMS out of the System Center Server Collection before deploying updates.

When using the HP Server Updates Catalog for System Center Configuration Manager to upgrade firmware or drivers, verify that the firmware or driver version being applied to the managed nodes adheres to the supported version for HP BladeSystem Matrix. If it does not, do not perform the upgrade. You may explicitly exclude servers from the System Center Server Collection before deploying updates. A Server Collection is a group of servers on which you wish to perform a System Center Configuration Manager function, such as updating firmware or drivers. To maintain the matched set of HP BladeSystem Matrix CMS firmware and driver versions, exclude the HP BladeSystem Matrix CMS from the Server Collection. This can be done by creating a separate Collection for all ProLiant servers except the CMS server. See the System Center Configuration Manager User Guide for instructions on creating Collections.
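The exclusion itself amounts to a set difference over the server list. A minimal sketch (Python; the server names are invented for illustration, and real Collections are built in the System Center Configuration Manager console, not with a script like this):

    all_proliant_servers = ["cms01", "vmhost1", "vmhost2", "db1", "db2"]
    matrix_cms = {"cms01"}  # the CMS must keep its matched firmware/driver set

    # membership for an "all ProLiant except CMS" update Collection
    update_collection = [s for s in all_proliant_servers if s not in matrix_cms]
    print(update_collection)   # ['vmhost1', 'vmhost2', 'db1', 'db2']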


D HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines


This appendix provides customers and technical services with the information necessary to plan for an HP BladeSystem Matrix solution that utilizes the HP BladeSystem Matrix FlexFabric Starter and HP BladeSystem Matrix Expansion Kits.

Virtual Connect FlexFabric hardware components


HP Virtual Connect FlexFabric 10Gb/24-Port module
The HP Virtual Connect FlexFabric 10Gb/24-Port module is a new interconnect built upon Virtual Connect's Flex-10 technology. The Virtual Connect FlexFabric module consolidates Ethernet and storage networks into a single converged network with the goal of reducing network costs and complexity. It extends previous-generation HP Virtual Connect Flex-10 technology with the inclusion of iSCSI, Converged Enhanced Ethernet (CEE), and Fibre Channel protocols. With FlexFabric, customers no longer need to purchase multiple interconnects for Ethernet and FC.

HP FlexFabric Converged Network Adapters


Customers need blade servers with one of the following supported FlexFabric adapters:
HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
HP NC553i 10Gb 2-port FlexFabric Converged Network Adapter
HP NC553m 10Gb 2-port FlexFabric Converged Network Adapter

HP G7 blade servers have a Converged Network Adapter chip integrated on the motherboard (LOM) and are compatible with FlexFabric interconnect modules. For additional bandwidth, the NC551m is supported in the BL465c G7, the BL685c G7, and all G6 BladeSystem servers. The NC553m is supported in all HP G7 and G6 BladeSystem servers. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.

Both the HP NC553m and HP NC551m mezzanine adapters support Virtual Connect Flex-10, which allows each 10Gb port to be divided into four physical NICs to optimize bandwidth management for virtualized servers. When connected to a CEE-capable switch, FC and Ethernet I/O are separated and routed to the corresponding network. For iSCSI storage, the NC553m and NC551m support full protocol offload, providing better CPU efficiency than software initiators and enabling the server to handle increased virtualization workloads and compute-intensive applications. This combination of high-performance network and storage connectivity reduces cost and complexity and provides the flexibility and scalability to fully optimize BladeSystem servers. The NC553m and NC551m deliver the performance benefits and cost savings of converged network connectivity for HP BladeSystem servers. The dual-port NC553m and NC551m optimize network and storage traffic with hardware acceleration and offloads for stateless TCP/IP, TCP Offload Engine (TOE), FC, and iSCSI.

Older-generation G6 blade servers have 10Gb Network Interface Cards (NICs) embedded on the motherboard as a LOM and are not readily compatible with FlexFabric. Customers need to purchase either the HP NC553m or the HP NC551m Converged Network Adapter mezzanine card and plug it into the server's mezzanine slot to enable any G6 BladeSystem server to support FlexFabric.


FlexFabric interconnects/mezzanines: HP BladeSystem c7000 port mapping


Figure 9 HP BladeSystem c7000 enclosure - rear view

1. Upper Fan System
2. Interconnect Bays 1/2
3. Interconnect Bays 3/4
4. Interconnect Bays 5/6
5. Interconnect Bays 7/8
6. Onboard Administrator
7. Lower Fan System
8. Rear Redundant Power Complex

Understanding the concept of port mapping is critical to properly planning your HP BladeSystem Matrix FlexFabric configuration. The diagram above shows the proper placement of the Virtual Connect FlexFabric 10Gb/24-Port modules and the integrated or mezzanine adapters that will be used for your supported BladeSystem Matrix FlexFabric configurations. Port mapping differs slightly between full-height and half-height server blades due to the support of additional mezzanine cards on the full-height version. HP has simplified the processes of mapping mezzanine ports to switch ports by providing intelligent management tools via the Onboard Administrator and HP Insight Manager Software. The HP BladeSystem Onboard Administrator User Guide and HP BladeSystem c7000 Enclosure Setup and Installation Guide provide detailed information on port mapping. The following diagrams show the port mappings for half-height and full-height blades. The following tables represent a number of recommended and supported configurations for an HP BladeSystem c7000 enclosure with one or more redundant pairs of Virtual Connect FlexFabric modules.


Figure 10 Half-height server blade port mapping

Figure 11 Full-height server blade port mapping

HP BladeSystem c7000 enclosure FlexFabric module placement


The port mapping diagrams show an association between the mezzanine type and placement (integrated adapter or mezzanine adapter located in Mezz 1, Mezz 2, or Mezz 3) and the HP BladeSystem c-Class interconnect bay that is utilized via the internal port mappings of the HP BladeSystem c7000 enclosure. When an integrated FlexFabric adapter is utilized, the blade communicates through the FlexFabric 24-port VC module redundant pair located in interconnect bays 1 & 2. When a FlexFabric mezzanine adapter in Mezz 1 is utilized, then the blade communicates from that mezzanine through the FlexFabric 24-port VC module redundant pair located in interconnect bays 3 & 4. When a FlexFabric mezzanine adapter in Mezz 2 is utilized for half-height or full-height blades, then the blade communicates through the FlexFabric 24-port VC module redundant pair located in interconnect bays 5 and 6, and so forth.
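The rule in the preceding paragraph reduces to a fixed lookup from adapter location to interconnect bay pair. A minimal sketch (Python) of that mapping exactly as the text describes it:

    # adapter location -> interconnect bay pair (per the paragraph above)
    BAY_PAIRS = {
        "Integrated (LOM)": (1, 2),
        "Mezz 1": (3, 4),
        "Mezz 2": (5, 6),
        "Mezz 3": (7, 8),   # full-height blades only
    }

    def interconnect_bays(adapter_location):
        return BAY_PAIRS[adapter_location]

    print(interconnect_bays("Mezz 2"))   # (5, 6)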


FlexFabric configurations using only HP G7 BladeSystem servers


The following tables show the recommended configuration as well as a number of supported configurations for an HP BladeSystem c7000 enclosure with Virtual Connect FlexFabric modules in support of HP G7 BladeSystem servers.

IMPORTANT: Configurations which support the G7 BladeSystem servers require the following components:
A minimum of two total (one redundant pair of) HP Virtual Connect FlexFabric 10Gb/24-Port modules, located in interconnect bays 1-2
An integrated HP FlexFabric adapter

HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules. The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G7 blades only. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use only HP G7 BladeSystem servers, for the following reasons:
Better performance
Newer FlexFabric Converged Network Adapter technology
Easier standardization, since it is supported in all G6 and G7 blades

Best Practice recommended configuration

2 FlexFabric modules
All G7 blades with an integrated FlexFabric adapter

Interconnect module configuration for G7 blades using the integrated NC551i or NC553i adapter:

Bays 1 & 2: VC FlexFabric module pair | Server network adapters required: Integrated FlexFabric adapter (1)
Bays 3 & 4: Empty
Bays 5 & 6: Empty
Bays 7 & 8: Empty

(1) Requires HP ProLiant BL G7 server blades with the HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter.

This configuration enables the customer to introduce FlexFabric technology most cost-effectively while keeping configuration complexity to a minimum. It also provides the easiest management of SAN connectivity. For the cost of two FlexFabric modules, each blade has up to 6 Ethernet ports and 2 FC ports, for a total of 8 ports sharing up to 20 Gb of I/O bandwidth.
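The port and bandwidth totals quoted in this and the following configurations all come from the same arithmetic: each redundant pair of FlexFabric modules gives a blade one more dual-port 10Gb adapter, that is, up to 8 more FlexNIC/FlexHBA ports and 20 Gb more shared I/O bandwidth. A minimal sketch (Python) restating that rule:

    def blade_io(module_pairs):
        """Per-blade totals for a given number of FlexFabric module pairs."""
        ports = 8 * module_pairs          # e.g. 6 Ethernet + 2 FC per pair
        bandwidth_gb = 20 * module_pairs  # two 10Gb adapter ports per pair
        return ports, bandwidth_gb

    for pairs in (1, 2, 3, 4):
        ports, gb = blade_io(pairs)
        print(f"{pairs} pair(s): up to {ports} ports sharing {gb} Gb")
    # 1 -> 8 ports / 20 Gb ... 4 -> 32 ports / 80 Gb, matching the text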


Supported configuration

4 FlexFabric modules
All G7 blades with an integrated FlexFabric adapter
1 additional FlexFabric mezzanine adapter

Interconnect module configuration for G7 blades using the integrated FlexFabric adapter plus 1 additional FlexFabric mezzanine:

Bays 1 & 2: VC FlexFabric module pair | Server network adapters used: Integrated FlexFabric adapter (1)
Bays 3 & 4: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 1 (2)
Bays 5 & 6: Empty
Bays 7 & 8: Empty

(1) Requires HP ProLiant BL G7 server blades with the HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.

This configuration is supported, but it has complexities that must be carefully weighed against the opportunity to achieve potentially higher performance. It is less cost-effective, more complex to configure, and makes SAN connectivity more difficult to manage. For the cost of 4 FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 12 or 14 Ethernet ports and 2 or 4 FC ports, for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN.

Supported configuration

6 FlexFabric modules
All G7 blades with an integrated FlexFabric adapter
2 additional FlexFabric mezzanine adapters

Interconnect module configuration for G7 blades using the integrated FlexFabric adapter plus 2 additional FlexFabric mezzanines:

Bays 1 & 2: VC FlexFabric module pair | Server network adapters used: Integrated FlexFabric adapter (1)
Bays 3 & 4: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 1 (2)
Bays 5 & 6: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 2 (2)
Bays 7 & 8: Empty

(1) Requires HP ProLiant BL G7 server blades with the HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.

This configuration is supported, but it has complexities that must be carefully weighed against the opportunity to achieve even higher performance. It is even less cost-effective, more complex to configure, and introduces increased difficulty when managing SAN connectivity. For the cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18, 20, or 22 Ethernet ports and 2, 4, or 6 FC ports, for a total of 24 ports sharing up to 60 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN.

Supported configuration

8 FlexFabric modules
All G7 blades with an integrated FlexFabric adapter
3 additional FlexFabric mezzanine adapters

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the 4th pair of FlexFabric modules will not be utilized by the FlexFabric mezzanine adapters.

Interconnect module configuration for G7 blades using the integrated FlexFabric adapter plus 3 additional FlexFabric mezzanines:

Bays 1 & 2: VC FlexFabric module pair | Server network adapters used: Integrated FlexFabric adapter (1)
Bays 3 & 4: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 1 (2)
Bays 5 & 6: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 2 (2)
Bays 7 & 8: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 3 (2)

(1) Requires HP ProLiant BL G7 server blades with the HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.

This configuration is supported, but it has complexities that must be carefully weighed against the opportunity to achieve even higher performance. It is the least cost-effective, the most complex to configure, and the most difficult for managing SAN connectivity. For the cost of eight FlexFabric modules and three FlexFabric adapters (or two for half-height blades) for every blade in the enclosure, each blade can have 24, 26, 28, or 30 Ethernet ports and 2, 4, 6, or 8 FC ports, for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN.

FlexFabric configurations using only HP G6 or i2 BladeSystem servers


The following tables show a number of supported configurations for an HP BladeSystem c7000 enclosure with Virtual Connect FlexFabric modules in support of HP G6 or i2 BladeSystem servers.

IMPORTANT: Configurations which support the G6 or i2 BladeSystem servers require the following components:
A minimum of four total (two redundant pairs of) HP Virtual Connect FlexFabric 10Gb/24-Port modules, located in interconnect bays 1-4
A minimum of one FlexFabric mezzanine adapter, placed in Mezz 1

HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules. The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G6 or i2 blades only. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use only HP G6 BladeSystem servers, for the following reasons:
Better performance
Newer FlexFabric Converged Network Adapter technology
Easier standardization, since it is supported in all G6 and G7 blades
Supported configuration

4 FlexFabric modules
All G6 and i2 blades
1 additional FlexFabric mezzanine adapter

Interconnect module configuration for G6 or i2 blades using additional FlexFabric mezzanine(s):

Bays 1 & 2: VC FlexFabric module pair | Server network adapters required: Flex-10/Enet LOM
Bays 3 & 4: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 1 (1)
Bays 5 & 6: Empty
Bays 7 & 8: Empty

(1) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.

This configuration is supported, but it has complexities that must be carefully weighed against the opportunity to achieve potentially higher performance. It is less cost-effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of four FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 14 Ethernet ports and 2 FC ports, for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so the VC FlexFabric modules in bays 1 and 2 will be used for Ethernet only.

Supported configuration

6 FlexFabric modules
All G6 and i2 blades with an integrated Flex-10 adapter
2 additional FlexFabric mezzanine adapters

Interconnect module configuration for G6 or i2 blades using additional FlexFabric mezzanine(s):

Bays 1 & 2: VC FlexFabric module pair | Server network adapters used: Flex-10/Enet LOM
Bays 3 & 4: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 1 (1)
Bays 5 & 6: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 2 (2)
Bays 7 & 8: Empty

(1) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.

This configuration is supported, but it has complexities that must be carefully weighed against the opportunity to achieve higher performance. It is even less cost-effective, more complex to configure, and more difficult for managing SAN and network connectivity. For the cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18 or 20 Ethernet ports and 2 or 4 FC ports, for a total of 24 ports sharing up to 60 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so the VC FlexFabric modules in bays 1 and 2 will be used for Ethernet only.

Supported configuration

8 FlexFabric modules
All G6 and i2 blades with an integrated Flex-10 adapter
3 additional FlexFabric mezzanine adapters

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the 4th pair of FlexFabric modules will not be utilized by the FlexFabric mezzanine adapters.

Interconnect module configuration for G6 or i2 blades using additional FlexFabric mezzanine(s):

Bays 1 & 2: VC FlexFabric module pair | Server network adapters used: Flex-10/Enet LOM
Bays 3 & 4: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 1 (1)
Bays 5 & 6: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 2 (2)
Bays 7 & 8: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 3 (2)

(1) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.

This configuration is supported, but it has complexities that must be carefully weighed against the opportunity to achieve higher performance. It is the least cost-effective, most complex to configure, and most difficult for managing SAN and network connectivity. For the cost of eight FlexFabric modules and three FlexFabric adapters (or two for half-height blades) for every blade in the enclosure, each blade can have 26, 28, or 30 Ethernet ports and 2, 4, or 6 FC ports, for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so the VC FlexFabric modules in bays 1 and 2 will be used for Ethernet only.

FlexFabric configurations using a mixture of HP G7 with G6 and/or i2 BladeSystem servers


The following tables show a number of supported configurations for an HP BladeSystem c7000 enclosure with Virtual Connect FlexFabric modules in support of a mixture of HP G7 with G6 and/or i2 BladeSystem servers.

IMPORTANT: Configurations which support a mixture of HP G7 with G6 and/or i2 BladeSystem servers require the following components:
A minimum of four total (two redundant pairs of) HP Virtual Connect FlexFabric 10Gb/24-Port modules, located in interconnect bays 1-4
A minimum of one FlexFabric mezzanine adapter, placed in Mezz 1 of every BladeSystem server in the enclosure

HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth if desired. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter. The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G6 and G7 blades. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use a mixture of G6 and G7 BladeSystem servers, for the following reasons:
Better performance
Newer FlexFabric Converged Network Adapter technology
Easier standardization, since it is supported in all G6 and G7 blades

Supported configuration

4 FlexFabric modules
Mixed G7 with G6 and/or i2 blades
1 additional FlexFabric mezzanine adapter

Interconnect module configuration for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanine(s):

Bays 1 & 2: VC FlexFabric module pair | Server network adapters required: Flex-10/Enet LOM or integrated FlexFabric adapter
Bays 3 & 4: VC FlexFabric module pair | FlexFabric adapter in Mezzanine slot 1 (1)
Bays 5 & 6: Empty
Bays 7 & 8: Empty

(1) Requires an HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.

This configuration is supported, but it has complexities that must be carefully weighed against the opportunity to achieve potentially higher performance. It is less cost-effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of four FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 12 or 14 Ethernet ports and 2 or 4 (G7s only) FC ports, for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades will not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.


Supported configuration:
- 6 FlexFabric modules
- Mixed G7 with G6 and/or i2 blades
- 2 additional FlexFabric mezzanine adapters

Interconnect module configurations for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanine(s), showing the server network adapter used for each module pair:

Bay 1: VC FlexFabric module   Bay 2: VC FlexFabric module   Server adapter: Flex-10/Enet LOM or FlexFabric Integrated Adapter
Bay 3: VC FlexFabric module   Bay 4: VC FlexFabric module   Server adapter: FlexFabric adapter in Mezzanine slot 1 (1)
Bay 5: VC FlexFabric module   Bay 6: VC FlexFabric module   Server adapter: FlexFabric adapter in Mezzanine slot 2 (2)
Bay 7: Empty                  Bay 8: Empty

(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules

This configuration is supported, but its complexities must be carefully weighed against the potential for higher performance. It is even less cost effective, more complex to configure, and more difficult for managing SAN and network connectivity. For the cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18, 20, or 22 Ethernet ports and 2, 4, or 6 (G7s only) FC ports, for a total of 24 ports sharing up to 60 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades do not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.

Supported configuration:
- 8 FlexFabric modules
- Mixed G7 with G6 and/or i2 blades
- 3 additional FlexFabric mezzanine adapters

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the fourth pair of FlexFabric modules is not utilized by the FlexFabric mezzanine adapters.

Interconnect module configurations for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanine(s), showing the server network adapter used for each module pair:

Bay 1: VC FlexFabric module   Bay 2: VC FlexFabric module   Server adapter: Flex-10/Enet LOM or FlexFabric Integrated Adapter
Bay 3: VC FlexFabric module   Bay 4: VC FlexFabric module   Server adapter: FlexFabric adapter in Mezzanine slot 1 (1)
Bay 5: VC FlexFabric module   Bay 6: VC FlexFabric module   Server adapter: FlexFabric adapter in Mezzanine slot 2 (2)
Bay 7: VC FlexFabric module   Bay 8: VC FlexFabric module   Server adapter: FlexFabric adapter in Mezzanine slot 3 (2)

(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules


This configuration is supported, but its complexities must be carefully weighed against the potential for higher performance. It is the least cost effective, the most complex to configure, and the most difficult for managing SAN and network connectivity. For the cost of eight FlexFabric modules and three FlexFabric adapters (two for half-height blades) for every blade in the enclosure, each blade can have 24, 26, 28, or 30 Ethernet ports and 2, 4, 6, or 8 (G7s only) FC ports, for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades do not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.

HP BladeSystem Matrix configuration guidelines for mixing FlexFabric with Flex-10


HP BladeSystem Matrix solutions do not support mixing Flex-10 and FlexFabric modules within the same enclosure or within the same Virtual Connect domain group. When planning an HP BladeSystem Matrix solution for a single VC domain group or a single enclosure, the configuration choices are limited to either Flex-10 with VC-FC modules or FlexFabric modules. The HP BladeSystem Matrix CMS can manage both Flex-10 and FlexFabric enclosures as separate VC domain groups. HP BladeSystem Matrix solutions also do not support mixing FlexFabric modules and VC-FC modules within the same enclosure. If VC-FC support is desired, the customer must use the HP BladeSystem Matrix Flex-10 Starter and Expansion Kits.
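These constraints can also be encoded as a planning-time check. The following is a minimal sketch assuming a simple list-of-module-types model per enclosure or VC domain group; the names and strings are illustrative, not an HP API.

```python
# Illustrative check for the mixing rules above: an enclosure (or VC domain
# group) may use Flex-10 with VC-FC, or FlexFabric, but never both families.

def check_interconnect_mix(modules):
    """modules: list of module type strings for one enclosure or VC domain
    group, e.g. ["VC Flex-10", "VC Flex-10", "VC-FC", "VC-FC"]."""
    has_flexfabric = any(m == "VC FlexFabric" for m in modules)
    has_flex10 = any(m == "VC Flex-10" for m in modules)
    has_vcfc = any(m == "VC-FC" for m in modules)

    if has_flexfabric and (has_flex10 or has_vcfc):
        return "Unsupported: FlexFabric cannot mix with Flex-10 or VC-FC here"
    return "Supported mix"

print(check_interconnect_mix(["VC Flex-10", "VC Flex-10", "VC-FC", "VC-FC"]))
print(check_interconnect_mix(["VC FlexFabric"] * 4))
print(check_interconnect_mix(["VC FlexFabric", "VC Flex-10"]))  # unsupported
```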


Glossary

ABM: Array-based management
API: Application program interface
AWE: Address Windowing Extensions. A method used by Windows OS to make more than 4 GB available to applications through system calls.
BFS: Boot from SAN
BOOTP: Bootstrap Protocol
CapAd: HP Capacity Advisor
CEE: Converged Enhanced Ethernet
CIM: Common Information Model
CIMOM: Common Information Model Object Manager
CLI: Command line interface. An interface comprised of various commands which are used to control operating system responses.
CLX: HP Cluster Extensions
CMS: Central management server
CNA: Converged network adapter
DAC: Direct Attach Cable
DEP: Data execution prevention
DG: Device group
DHCP: Dynamic Host Configuration Protocol
DMI: Desktop Management Interface
DNS: Domain Name System
DR: Disaster Recovery
DSM: Device specific module
EBIPA: Enclosure Bay IP addressing
EFI: Extensible Firmware Interface
ESA: HP Extensible Server and Storage Adapter
EVA: HP Enterprise Virtual Array. An HP storage array product line.
FC: Fibre Channel. A network technology primarily used for storage networks.
FCoE: Fibre Channel over Ethernet
FDT: Firmware development tool
FQDN: Fully qualified domain name
FTP: File Transfer Protocol
GUI: Graphical user interface
gWLM: HP Global Workload Manager
HA: High availability
HBA: Host Bus Adapter. A circuit board and/or integrated circuit adapter that provides input/output processing and physical connectivity between a server and a storage device.
HP OO: See OO.
HP SIM: HP Systems Insight Manager
HP SUM: HP Smart Update Manager
HPIO: See IO.
HPVM: HP Virtual Machine. Common name for HP Integrity Virtual Machines product.
HTTP: Hypertext Transfer Protocol
HTTPS: Hypertext Transfer Protocol Secure
IC: HP Insight Control
iCAP: Instant Capacity
ICE: HP Insight Control Suite
ICMP: Internet Control Message Protocol
ID: HP Insight Dynamics
IIS: Internet Information Services
iLO: Formerly HP Insight Control remote management. Renamed HP Integrated Lights-Out.
IO: HP Insight Orchestration. A web application that enables you to deploy, manage, and monitor the overall behavior of Insight Orchestration and its users, templates, services, and resources.
IPM: Formerly HP Insight Power Manager. Renamed HP Insight Control power management.
IPv4: Internet Protocol version 4
IPv6: Internet Protocol version 6
IR: HP Insight Recovery
JVM: Java Virtual Machine
KB: Knowledge base
LDAP: Lightweight Directory Access Protocol
LinuxPE: Linux pre-boot environment
LOM: LAN on motherboard
LS: Logical server
LSM: Logical server management
LUN: Logical unit number. The identifier of a SCSI, Fibre Channel or iSCSI logical unit.
LV: Logical volume
MAC: Media Access Control. A unique identifier assigned by the manufacturer to most network interface cards (NICs) or network adapters. In computer networking, a Media Access Control address. Also known as an Ethernet Hardware Address (EHA), hardware address, adapter address or physical address.
MMC: Microsoft Management Console
MPIO: HP Multipath I/O
MSA: HP StorageWorks Modular Smart Array. An HP storage array product line (also known as the P2000).
MSC: Microsoft System Center
MSCS: Microsoft Cluster Server/Service
MSSW: HP Insight Managed System Setup Wizard
NCU: Network Configuration Utility
NFS: Network File System
NFT: Network file transfer
NIC: Network interface card. A device that handles communication between a device and other devices on a network.
NPIV: N_Port ID Virtualization
NTP: Network Time Protocol
NVRAM: Non-volatile random access memory
OA: HP Onboard Administrator
OE: Operating environment
OO: HP Operations Orchestration
OS: Operating system
P-VOL: Primary Volume
PAE: Physical Address Extension. A feature of x86 processors to allow addressing more than 4 GB of memory.
PDR: Power distribution rack
PDU: Power distribution unit. The rack device that distributes conditioned AC or DC power within a rack.
POC: Proof of concept
POST: Power on self test
PSP: HP ProLiant Support Pack
PSUE: Pair suspended-error
PSUS: Pair suspended-split
PXE: Preboot Execution Environment
RAID: Redundant Array of Independent Disks
RBAC: Role-based access control
RDP: Formerly HP Rapid Deployment Pack. Renamed HP Insight Control server deployment.
RG: Recovery group
RM: HP Matrix recovery management
S-VOL: Secondary Volume
SA: HP Server Automation
SAID: Service agreement identifier
SAM: System administration manager
SAN: Storage area network. A network of storage devices available to one or more servers.
SCP: State change pending
SFIP: Stress-free installation plan
SG: HP Serviceguard
SIM: See HP SIM.
SLVM: Shared Logical Volume Manager
SMA: Storage Management Appliance
SMH: HP System Management Homepage
SMI-S: Storage Management Initiative Specification
SMP: Formerly HP Server Migration Pack. Renamed HP Insight Control server migration.
SMTP: Simple Mail Transfer Protocol. A protocol for sending email messages between servers and from mail clients to mail servers. The messages can then be retrieved with an email client using either POP or IMAP.
SN: Serial number
SNMP: Simple network management protocol
SPE: Storage pool entry
SPM: HP Storage Provisioning Manager. A means of defining logical server storage requirements by specifying volumes and their properties.
SQL: Structured Query Language
SRD: Shared Resource Domain
SSH: Secure Shell
SSL: Secure Sockets Layer
SSO: Single sign-on
STM: System Type Manager
SUS: Shared uplink set
TCP/IP: Transmission Control Protocol/Internet Protocol
TFTP: Trivial File Transfer Protocol
TOE: TCP Offload Engine
UAC: User Account Control
UDP: User Datagram Protocol
URC: Utility Ready Computing
URS: Utility Ready Storage
USB: Universal Serial Bus. A serial bus standard used to interface devices.
UUID: Universally Unique Identifier
VC: HP Virtual Connect
VCA: Version Control Agent
VCDG: Virtual Connect domain group
VCEM: HP Virtual Connect Enterprise Manager. HP VCEM centralizes network connection management and workload mobility for HP BladeSystem servers that use Virtual Connect to access local area networks (LANs), storage area networks (SANs), and converged network environments.
VCM: Virtual Connect Manager
VCPU: Virtual CPU
VCRM: HP Version Control Repository Manager
VCSU: Virtual Connect Support Utility
Vdisk: Virtual disk
VG: Volume group
VLAN: Virtual local area network
VM: Virtual machine
VMAN: HP Insight Virtualization Manager
VMM: Formerly HP Virtual Machine Manager. Renamed HP Insight Control virtual machine management.
VPort: Virtual port
VSE: Formerly Virtual Server Environment. Renamed Insight Dynamics.
WAIK: Windows Automation Installation Kit
WBEM: Web-Based Enterprise Management
WinPE: Windows pre-boot environment
WMI: Windows Management Instrumentation
WWID: See WWN.
WWN: Worldwide Name. A unique 64-bit address assigned to a FC device.
WWNN: Worldwide Node Name. A WWN that identifies a device in a FC fabric.
WWPN: Worldwide Port Name. A WWN that identifies a port on a device in an FC fabric.
XML: Extensible Markup Language

Index

A
access credentials
    infrastructure, 48
    templates, 75
    test and development infrastructure using logical servers, 60
    test and development infrastructure with IO, 67
access requirements, 47
application services, 14
    define, 16
    templates, 68
    SAN storage, 35
    SAN storage templates, 70

B
bill of materials
    test and development infrastructure using logical servers, 56
    test and development infrastructure with IO, 61

C
c7000 enclosure, 8, 10, 30, 61
CMS, 13
    configuration, 37
    disk requirements, 18
    Insight Software, 14, 16
    Microsoft System Center, 78
    network connections, 18
    non server blade, 18
    planning, 16
    SAN connections, 18
    supported configurations, 78
components
    customer responsibility, 12
    HP BladeSystem Matrix, 9
    Microsoft System Center, 77
configuration
    CMS, 37
    FlexFabric, 80, 83, 85, 87, 90
    management services network, 44
    sample templates, 68
    VC domain, 34
connections
    define manageability, 46
    FC, 38
    FC SAN storage, 36
    iSCSI SAN storage, 36
    manageability, 45
    storage, 35
    VC ethernet, 45
    VC ethernet uplink, 42
converged infrastructure, 7
customer
    facility planning, 29
    network details, 40
    responsibility, 12, 29

D
data center
    customer responsibility, 29
    requirements, 29
define
    application services, 16
    manageability connections, 46
    services VC ethernet connections, 45
    storage volumes, 38
define services
    test and development infrastructure, 62
    test and development infrastructure using logical servers, 56
deployed servers and services, 14
disk requirements, CMS, 18
documents, HP BladeSystem Matrix, 5
domain
    VC configuration, 34
    Virtual Connect, 30

E
enclosure
    Flex-10, 8, 29
    FlexFabric, 7, 29
    parameters, 29
    planning, 29
    stacking, 30
enclosure stacking
    Flex-10, 30
    FlexFabric, 30
ethernet
    define services connections, 45
    VC, 8
    VC Flex-10 services, 43
    VC uplink connections, 42
    VC uplinks, 43
Expansion Kit, Flex-10, 11

F
facility
    planning, 29
    planning templates, 69
    planning test and development infrastructure using logical servers, 57
    requirements, 29
facility planning, test and development infrastructure using IO, 63
FC
    connections, 38
    module, 30, 35
    SAN, 13, 36, 39
    SAN storage, 71
    storage, 28
    switch, 37
    VC, 8, 35
FC SAN storage
    connections, 36
    templates, 71
federated CMS, 14
    access requirements, 48
    DNS configuration, 40
    HP BladeSystem Matrix infrastructure, 26
    planning, 19
    storage pool, 36
    supported management software, 19
federated environment, 24
Flex-10
    capability, 35
    enclosure, 8, 29
    enclosure stacking, 30
    Expansion Kit, 11
    FlexFabric, 90
    module, 31, 42
    Starter Kit, 10
    VC ethernet services, 43
FlexFabric
    configuration, 83, 85, 87, 90
    configuration guidelines, 80
    enclosure, 7, 29
    enclosure stacking, 30
    Flex-10, 90
    hardware components, 80
    Integrity, 80, 85, 87
    interconnects or mezzanines, 81
    module, 8, 31, 35
    module placement, 82
    Starter Kit, 10

H
HP BladeSystem c7000
    FlexFabric, 82
    port mapping, 81
HP BladeSystem Matrix
    basic infrastructure, 8
    components, 9
    customer facility planning, 29
    documents, 5
    infrastructure, 7
    pre-delivery, 5
    pre-delivery planning, 49
    pre-order, 5
    solution networking, 40
    solution storage, 35
HP BladeSystem Matrix infrastructure
    basic, 8
    federated CMS, 26
    Integrity managed nodes, 23
    overview, 5
    ProLiant managed nodes, 22
HP Insight Control, see Insight Control
HP Insight Dynamics, see Insight Dynamics
HP Insight Orchestration, see IO
HP IR, see IR
HP server automation
    additional management servers, 20
    optional management services, 76

I
Ignite-UX server, 19
infrastructure
    access credentials, 48
    dynamic provisioning, 54-67
    HP BladeSystem Matrix, 7
    management, 8
    test and development, 54
        IO, 60
infrastructure, converged, 7
Insight Control
    Microsoft System Center, 9, 20, 28, 76, 77
    server deployment, 14, 18, 21, 27, 28, 40
    VMware vCenter Server, 9, 20, 28, 76, 77
Insight Dynamics, 9, 14, 15, 16, 18, 20, 60, 76
    orchestration service requests, 39
Insight Orchestration, see IO
Insight Recovery, see IR
Insight Software, CMS planning, 16
integration, optional management services, 76
Integrity, FlexFabric, 80, 85, 87
Integrity managed nodes, 23
Integrity server environment, standard, 22
intended audience, 5
IO, 44, 45
    templates, 16, 38, 44, 61
    test and development infrastructure, 60
        access credentials, 67
        bill of materials, 61, 62
        facility planning, 63
        managed network connections, 66
        network configuration, 65
        racks and enclosures, 63
        services network connections, 65
        storage volumes, 65
        VC domain configuration, 64
        VC Ethernet uplinks, 65
IR, optional management services, 76
iSCSI
    SAN storage, 71
    SAN storage connections templates, 71
isolating VM guest/host, 39

L
limited environment, 21
link configurations, stacking, 31
logical servers
    test and development infrastructure, 54
        access credentials, 60
        bill of materials, 56
        define services, 56
        facility planning, 57
        management network connections, 59
        network configuration, 58
        racks and enclosures, 56
        services network connections, 59
        storage volumes, 58
        VC domain configuration, 57
        VC Ethernet uplinks, 58

M
MAC address, assign, 33
manageability connections, 45
managed network connections, test and development infrastructure, 66
management
    additional servers, 20
    determine servers, 27
    infrastructure, 8
    network connections, 74
    server scenarios, 20
    services, 16, 44, 76
management network connections
    templates, 74
    test and development infrastructure using logical servers, 59
management servers
    additional, 20
    templates, 68
Microsoft System Center
    CMS, 78
    components, 77
    Insight Control, 9, 28, 76, 77
    optional management services, 77, 78
    other managed nodes, 78
module
    FC, 30, 35
    Flex-10, 31, 42
    FlexFabric, 8, 31, 35

N
network
    management services, 44
    planning, 40
network configuration
    templates, 72
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
network connections
    CMS, 18
    management, 74
    services, 74
network details, customer, 40
next steps, 50
NPIV, 12, 13, 35, 37

O
optional management services, 76
    HP server automation, 76
    Insight Control for Microsoft System Center, 77
    Insight Control for VMware vCenter Server, 76, 77
    integration, 76
    IR, 76
    Microsoft System Center, 77, 78
orchestration service requests, 39
overview, HP BladeSystem Matrix infrastructure, 5

P
planning
    checklist, 49
    customer facility, 29
    federated CMS, 19
    Insight Software CMS, 16
    network, 40
    racks and enclosures, 29
    server, 14
    services, 14
    summary, 6
planning step
    1a-define application services, 16
    1b-determine management servers, 27
    2a-rack & enclosure parameters, 29
    2b-determine facility requirements, 29
    2c-VC domain configuration, 34
    3a-collect customer SAN storage details, 35
    3b-FC SAN storage connections, 36
    3c-iSCSI SAN storage connections, 36
    3d-define storage volumes, 38
    4a-collect customer provided network details, 40
    4b-VC ethernet uplinks, 43
    4c-define services VC ethernet connections, 45
    4d-define manageability connections, 46
    4e-determine infrastructure credentials, 48
port mapping, 81
pre-delivery, 5
pre-order, 5
ProLiant managed nodes, HP BladeSystem Matrix infrastructure, 22
ProLiant server environment, standard, 21

R
rack
    parameters, 29
    planning, 29
racks and enclosures
    templates, 68
    test and development infrastructure, 63
    test and development infrastructure using logical servers, 56
requirements
    data center, 29
    facility, 29
    storage volumes, 37

S
sample configuration templates, 68
SAN
    connections, 18
    FC, 13, 39
SAN storage
    customer details, 35
    FC, 71
    FC connections, 36
    iSCSI, 71
    iSCSI connections, 36
    templates, 70
server
    deployed, 14
    Ignite-UX, 19
    management, 27
    management scenarios, 20
    planning, 14
server environment
    federated, 24
    limited, 21
services
    application, 14
    deployed, 14
    Flex-10 ethernet, 43
    management, 16
    network configuration, 44
    network connections, 74
    planning, 14
services network connections
    templates, 74
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 59
solution networking, 40
stacking
    enclosure, 30
    link configurations, 31
standard environment
    Integrity, 22
    ProLiant, 21
Starter Kit
    Flex-10, 10
    FlexFabric, 10
storage
    connections, 35
    FC, 28
    solution, 35
    volumes, 37
storage pool, federated CMS, 36
storage volumes, 37
    define, 38
    requirements, 37
    templates, 71
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
switch, FC, 37

T
templates
    access credentials, 75
    application services, 68
    customer-provided SAN storage, 70
    facility planning, 69
    FC SAN storage connections, 71
    IO, 16, 38, 44, 61
    iSCSI SAN storage connections, 71
    management network connections, 74
    management servers, 68
    network configuration, 72
    racks and enclosures, 68
    sample configuration, 68-75
    SAN storage, 70
    services network connections, 74
    storage volumes, 71
    VC domain configuration, 69
    VC Ethernet uplinks, 73
test and development infrastructure
    IO, 60
        access credentials, 67
        bill of materials, 61
        define services, 62
        management network connections, 66
        network configuration, 65
        racks and enclosures, 63
        services network connections, 65
        storage volumes, 65
        VC domain configuration, 64
        VC Ethernet uplinks, 65
    logical servers, 54
        access credentials, 60
        bill of materials, 56
        define services, 56
        facility planning, 57
        management network connections, 59
        network configuration, 58
        racks and enclosures, 56
        services network connections, 59
        storage volumes, 58
        VC domain configuration, 57
        VC Ethernet uplinks, 58
test and development infrastructure with IO
    access credentials, 67
    facility planning, 63

V
VC
    assign MAC address, 33
    assign WWN address, 33
    define services ethernet connections, 45
    domain, 30
    ethernet module, 8
    ethernet uplink connections, 42
    ethernet uplinks, 43
    FC, 8, 35
    Flex-10 ethernet services, 43
    technology, 35
VC domain configuration
    templates, 69
    test and development infrastructure, 64
    test and development infrastructure using logical servers, 57
VC Ethernet uplinks
    templates, 73
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
VC FlexFabric
    configuration, 80
    hardware components, 80
Virtual Connect, see VC
virtual serial numbers, 33
VM guest storage, isolating from VM host, 39
VM host, isolating from VM guest, 39
VMware vCenter Server, 76
    Insight Control, 9, 20, 28

W
WWN address, 33
