HP BladeSystem Matrix 6.3 Planning Guide
Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. Microsoft, Windows, and Windows Server are U.S. registered trademarks of Microsoft Corporation.
Contents
1 Overview..................................................................................................5
HP BladeSystem Matrix documents.......5
Planning summary.......6
HP BladeSystem Matrix infrastructure.......7
HP BladeSystem Matrix components.......9
6 HP BladeSystem Matrix pre-delivery planning checklist.......49
7 Next steps.......50
8 Support and other resources.......51
Contacting HP.......51
Related information.......53
HP BladeSystem Matrix configuration guidelines for mixing FlexFabric with Flex-10 .........................90
Glossary.......91
Index.......95
1 Overview
This guide is the recommended starting document for planning an HP BladeSystem Matrix infrastructure solution. The intended audience is pre-sales staff and HP Services personnel involved in the planning, ordering, and installation of an HP BladeSystem Matrix-based solution. Planning is the key to success: an HP BladeSystem Matrix order that is planned early moves on to a smooth, successful, and satisfactory delivery. Use this guide together with a planning worksheet to capture planning decisions, customer-provided details, and HP BladeSystem Matrix configuration parameters for the subsequent implementation.

Effective planning requires:
- Knowledge of BladeSystem technology, including Virtual Connect (VC) FlexFabric, VC Flex-10 Ethernet, and Fibre Channel (FC)
- Knowledge of FC shared storage, including fabric zoning, redundant paths, N_Port ID Virtualization (NPIV), and logical unit number (LUN) provisioning
- Knowledge of software configuration planning and functionality, including HP Insight Orchestration (IO), Central Management Server (CMS) software, OS deployment, and any customer-provided management software used with the HP BladeSystem Matrix implementation

The HP BladeSystem Matrix Starter Kits and optional expansion kits provide configuration options that enable integration into a customer's existing environment. This document guides you through the planning process by outlining the decisions involved and the data collected in preparing for an HP BladeSystem Matrix solution implementation. There are two points during the HP BladeSystem Matrix implementation delivery process where design decision input and user action are required; this document outlines both sets of input information:
1. Pre-Order: Before placing the HP BladeSystem Matrix order, you must plan and specify requirements and order options.
2. Pre-Delivery: Before the delivery of the HP BladeSystem Matrix physical infrastructure, you must coordinate the environmental and configuration details so that the on-site implementation service can begin immediately.
The latest updates to the HP BladeSystem Matrix solution are located on the HP website at http://www.hp.com/go/matrixcompatibility. The supported hardware, software, and firmware versions are listed in the HP BladeSystem Matrix Compatibility Chart. Updates to issues and solutions are listed in the HP BladeSystem Matrix Release Notes. White papers and external documentation are located on the HP BladeSystem Matrix Infrastructure 6.x product manuals page and on the HP BladeSystem Matrix Documentation CD. HP BladeSystem Matrix QuickSpecs are located at http://h18004.www1.hp.com/products/quickspecs/13297_div/13297_div.pdf; for HP-UX, see http://h18004.www1.hp.com/products/quickspecs/13755_div/13755_div.pdf.
Planning summary
HP BladeSystem Matrix is a platform for building an HP Converged Infrastructure environment that is simple and straightforward to buy and use. This document presents steps to guide you through the HP BladeSystem Matrix planning process.
Planning begins with understanding what makes up each component. Some components might include existing services found in the customer data center. Other components are automatically provided by, or optionally ordered with, HP BladeSystem Matrix. The physical infrastructure provided by HP BladeSystem Matrix consists of the following components:

HP BladeSystem Matrix FlexFabric enclosures include the following:
- HP BladeSystem c7000 Enclosure with power and redundant HP Onboard Administrator (OA) modules
- Redundant pair of HP VC FlexFabric 10Gb/24-Port modules
HP BladeSystem Matrix Flex-10 enclosures include the following:
- HP BladeSystem c7000 Enclosure with power and redundant OA modules
- Redundant pair of HP VC Flex-10 10Gb Ethernet modules
- Redundant pair of HP VC 8Gb 24-Port FC modules

Also part of the physical infrastructure:
- HP 10000 G2 series rack
- HP ProLiant DL360 G7 server functioning as the Central Management Server
The following figure illustrates a basic HP BladeSystem Matrix configuration. Many components displayed in the diagram are discussed in detail in this guide, and are carried through to the HP BladeSystem Matrix Setup and Installation Guide. The examples in this document are based on this sample configuration. Additional detailed application examples are located in Appendix A, Dynamic infrastructure provisioning with HP BladeSystem Matrix (page 54). For an Insight Recovery implementation, these steps are required for the HP BladeSystem Matrix configurations at both the primary and recovery sites.

Figure 2 Basic HP BladeSystem Matrix infrastructure
Management infrastructure

The physical infrastructure provided by the customer's data center includes power, cooling, and floor space.
The management infrastructure provided by HP BladeSystem Matrix consists of the following components:
- HP Insight Software Advisor
- HP Insight Dynamics
  - HP Insight Dynamics capacity planning, configuration, and workload management
  - HP Insight Orchestration (IO)
  - HP Insight Recovery (HP IR) (setup requires an additional per-event service)
- HP Insight Control
  - HP Insight Control performance management
  - HP Insight Control power management
  - HP Insight Control virtual machine management
  - HP Insight Control server migration
  - HP Insight Control server deployment
  - HP Insight Control licensing and reports
  - HP iLO Advanced for BladeSystem
- HP Virtual Connect Enterprise Manager (HP VCEM) software
- HP Insight Remote Support Advanced (formerly Remote Support Pack)
- HP Systems Insight Manager (HP SIM)
- HP System Management Homepage (HP SMH)
- HP Version Control Repository Manager (HP VCRM)
- Windows Management Instrumentation (WMI) Mapper
Optional management infrastructure, which can integrate with HP BladeSystem Matrix, includes the following components (discussed throughout this guide):
- Insight Control for Microsoft System Center (additional per-event service required)
- Insight Control for VMware vCenter Server (additional per-event service required)
- HP Server Automation software (customer-provided)
- HP Ignite-UX software (customer-provided)
- Microsoft System Center server (customer-provided)
- VMware vCenter server (customer-provided)
The customer-provided components also include network connectivity, SAN fabric, and network management such as domain name system (DNS), dynamic host configuration protocol (DHCP), time source, and domain services. The HP BladeSystem Matrix management components integrate with the customer's existing management infrastructure. The factory integration and integration services are described in the HP BladeSystem Matrix QuickSpecs.
HP BladeSystem Matrix components

An HP BladeSystem Matrix order consists of the following:
- Starter Kits, which contain the infrastructure needed for a fully working environment when populated with additional server blades
- Expansion Kits, which extend the HP BladeSystem Matrix with additional enclosures, infrastructure, and blades
- HP BladeSystem Matrix enclosure licenses
- Rack infrastructure
- Power infrastructure
- FC SAN storage
- iSCSI SAN storage (optional)
- Switches, transceivers, and signal cables
- Other licenses to enable the HP BladeSystem Matrix environment
For all HP BladeSystem Matrix components and support options, see the HP BladeSystem Matrix QuickSpecs. Additional components such as FC SAN switches and network switches might be required to integrate the HP BladeSystem Matrix solution with the customer's existing infrastructure and can be included with the HP BladeSystem Matrix order.

Table 2 HP BladeSystem Matrix components
Choose blades (configured to order)
- Fill Starter and Expansion Kits to capacity; these blades form your server resource pool for HP BladeSystem Matrix. See the Compatibility Chart for supported blade hardware.
- In HP BladeSystem Matrix Flex-10 Starter or Expansion Kits, all blades require a host bus adapter (HBA) mezzanine card.
- When ProLiant G6 or Integrity i2 blades are integrated within HP BladeSystem Matrix FlexFabric Starter or Expansion Kits, a NIC FlexFabric Adapter is required for all blades in the enclosure. For solutions with all ProLiant G7 blades, the NIC FlexFabric Adapter LOM is embedded on the blade, so no additional modules or mezzanines are required. See HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines (page 80) for more information about these configuration options.

Choose 1 or more CMS servers
- DL360 G7 Matrix CMS Server: default selection for the CMS. Includes a 10Gb NIC; does not include SFPs or cables.
- BL460c G6 Matrix CMS Server: selection for an all-blade solution.
- Alternate CMS Server: right-sized per specific customer needs, ordered or customer-provided. The alternative CMS host must meet all the CMS hardware requirements listed in the HP Insight Software 6.3 Support Matrix and within this document.

Choose 1 HP BladeSystem Matrix Starter Kit
- Flex-10 Starter Kit for Integrity with HP-UX (8 full-height blade bays available)
- Flex-10 Starter Kit for ProLiant
Both Flex-10 Starter Kits include an HP BladeSystem c7000 Enclosure, redundant VC-Enet Flex-10 modules, redundant VC-FC 8Gb 24-port modules, and redundant OA modules, fully populated with 10 active cool fans.
- FlexFabric Starter Kit for ProLiant (16 half-height blade bays available): redundant VC-FlexFabric modules.
HP BladeSystem Matrix licenses are required but not included with Starter Kits (see "Select HP BladeSystem Matrix licenses" in this table).

Choose 1 or more Expansion Kits to grow the HP BladeSystem Matrix
- Flex-10 Expansion Kit for Integrity (8 full-height blade bays available): HP BladeSystem Matrix license not included.
- Flex-10 Expansion Kit for ProLiant (16 half-height blade bays available): HP BladeSystem Matrix licenses included.
- FlexFabric Expansion Kit for ProLiant (16 half-height blade bays available): HP BladeSystem Matrix licenses included.
Each Expansion Kit includes an HP BladeSystem c7000 Enclosure and redundant OA modules, fully populated with 10 active cool fans and six 2400W power supplies, with six C19/C20 single-phase power inputs available.

Select HP BladeSystem Matrix licenses
HP BladeSystem Matrix licenses are either offered as a required order option or included in the kit. Software license ordering requirements are outlined in the HP BladeSystem Matrix QuickSpecs.
- HP BL Matrix SW 16-Svr 24x7 Supp Insight Software: one required for each ProLiant Starter Kit. This HP BladeSystem Matrix license is included with both ProLiant Expansion Kits, so for ProLiant a license purchase is needed only for the Starter Kit.
- HP-UX 11i Matrix Blade 2Skt PSL LTU: per-socket licenses for the BL860c i2.
- HP-UX 11i Matrix Blade 4Skt PSL LTU: per-socket licenses for the BL870c i2.
- HP-UX 11i Matrix Blade 8Skt PSL LTU: per-socket licenses for the BL890c i2.
For Integrity, HP BladeSystem Matrix licenses are required for both Starter Kits and Expansion Kits; a minimum of 8 licenses is required.
- HP VCEM BL7000 one-enclosure license: required for each HP BladeSystem Matrix with HP-UX Starter or Expansion Kit.

Choose 1 or more racks
- HP 10000 G2 racks
- Customer-provided racks

Choose power infrastructure
Each HP BladeSystem Matrix enclosure requires six C19/C20 connections. A redundant power configuration is recommended (that is, order PDUs in pairs).
- HP PDUs: monitored power distribution units (PDUs) are recommended for manageability and to reduce the number of power connections required per rack.
- Customer-provided PDUs

Choose supported FC SAN storage
If the customer chooses to provide an existing array, the SAN array must be certified for HP BladeSystem c-Class servers (see the HP StorageWorks and BladeSystem c-Class Support Matrix).
- HP 3PAR F-Class and T-Class storage systems: at this time, HP 3PAR storage systems can be purchased individually on a separate order and installed in a separate rack. A single 3PAR system may consist of multiple cabinets.
SAN storage must be qualified with the VC-FC or VC-FlexFabric modules by the storage vendor (see SPOCK for qualified SAN storage).
- HP StorageWorks EVA: EVAs may be ordered in an HP BladeSystem Matrix rack, or in separate racks for better expandability.
- HP StorageWorks XP Array: ordered in a separate rack.
- Other HP StorageWorks FC storage
- Customer-provided third-party FC storage

(Optional) Add supported iSCSI SAN storage
Supported in HP BladeSystem Matrix as a backing store for VM guests. See the HP BladeSystem Matrix QuickSpecs and HP BladeSystem Matrix Compatibility Chart for recommendations and requirements.
- HP StorageWorks P4300 G2 7.2TB SAS Starter SAN Solution: order up to 8 of these to build a 16-node cluster.
- HP StorageWorks P4500 G2 10.8TB SAS Virtualization SAN Solution: add the 10Gb NIC option for high-bandwidth storage applications.
- Other HP StorageWorks iSCSI solutions
- Customer-provided third-party iSCSI storage

Add switches, transceivers, and signal cables
See the HP BladeSystem Matrix QuickSpecs and HP BladeSystem Matrix Compatibility Chart for recommendations and requirements. Configured-to-order Ethernet switches and FC SAN switches are required to complete the solution. Transceivers and signal cables are required for uplinks to switches. The number and type of uplinks for Ethernet, SAN, and VC stacking may be determined upon completion of this document. Consult the QuickSpecs of individual components for compatible transceiver or cable choices. Customer-provided FC SAN switches must support NPIV.

Other licenses to enable the HP BladeSystem Matrix environment
Storage license purchase requirements depend on the choice of storage; for example:
- HP StorageWorks XP Command View Advanced Edition (if an HP XP array is ordered, although the Remote Web Console can be used alternatively)
- HP Command View EVA License To Use, to host boot and data LUNs (if an HP EVA is purchased)
Hypervisor licenses (refer to the QuickSpecs for order options):
- VMware licenses
- Hyper-V licenses
Customer responsibilities

The customer can select and configure multiple physical Integrity or ProLiant server blades and additional HP BladeSystem Matrix Expansion Kits. If the default HP ProLiant DL360 G7 management server is not selected, the customer is required to provide a compatible ProLiant server to function as the CMS. The customer also provides connectivity to the HP BladeSystem Matrix infrastructure; the number and type of LAN connections is determined in the network planning phase of this document.
IMPORTANT: Be sure that FC SAN SFP+ transceivers are used for FC SAN uplinks, and Ethernet SFP/SFP+ transceivers are used for Ethernet uplinks. VC Flex-10 modules support only Ethernet uplinks, and VC FC modules support only FC SAN uplinks.

IMPORTANT: VC FlexFabric modules have dual-personality faceplate ports; only ports 1 through 4 may be used as FC SAN uplinks (4Gb/8Gb). Additionally, although all VC FlexFabric ports support 10Gb Ethernet uplinks, only ports 5 through 8 support both 1Gb and 10Gb Ethernet uplinks. Using the wrong port or SFP/SFP+ transceiver for any uplink results in an invalid and unsupported configuration.

IMPORTANT: Two additional VC FlexFabric interconnect modules must be purchased when a NIC FlexFabric Adapter mezzanine card is purchased for each blade. This includes any ProLiant G6 or Integrity i2 configuration.

When the optional StorageWorks EVA4400 array is ordered, two embedded FC SAN switches provide connectivity from HP BladeSystem Matrix enclosures to the array. If the EVA is not included with the Starter Kit, the customer must provide connectivity to a compatible FC SAN array. Customer-supplied FC switches to the external SAN must support boot from SAN and NPIV functionality. See the HP website (http://www.hp.com/storage/spock) for a list of switches and storage supported by VC FC. Registration is required. After logging in, go to the left navigation, click Other Hardware > Virtual Connect, and then click the module applicable to the customer's solution:
- HP Virtual Connect FlexFabric 10Gb/24-port Module for c-Class BladeSystem
- HP Virtual Connect 8Gb 24-Port Fibre Channel Module for c-Class BladeSystem
- HP Virtual Connect 4Gb/8Gb 20-Port Fibre Channel Module for c-Class BladeSystem
  NOTE: This module is not in a Starter or Expansion Kit. Matrix conversion services are required.
Provisioning of suitable computer room space, power, and cooling is based on specifications described in the HP BladeSystem Matrix QuickSpecs. When hardware is to be installed in customer-provided racks, the customer must order hardware integration services. If the customer elects not to order these services, hardware installation must be completed properly before any HP BladeSystem Matrix implementation services begin.

When implementing Insight Recovery, two data center sites are used: a primary site for production operations and a recovery site used in the event of a planned or unplanned outage at the primary site. Each site contains a complete HP BladeSystem Matrix configuration, with an intersite link connecting the sites. Data at the primary site is protected by replicating it to the recovery site. Network and data replication requirements for implementing Insight Recovery are described in Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide. Using this document to formulate a plan early on is an essential part of the order process for HP BladeSystem Matrix.

IMPORTANT: Each secondary Matrix CMS in a federated environment requires the purchase of a Matrix Starter Kit and corresponding services, just as with the primary Matrix CMS implementation. The following chapter covers the planning considerations of a federated CMS in further detail.
Server planning required for the HP BladeSystem Matrix Installation and Startup Service
Plan management servers to be installed and configured as follows:
- Management servers hosting the following services:
  - Insight Software CMS
  - Insight Control server deployment, for environments with ProLiant blade servers
  - HP Ignite-UX (pre-existing), for environments with HP-UX and Integrity blade servers
  - SQL Server (or can be installed in a customer-provided SQL server farm)
- Required storage management software: HP Command View Enterprise Virtual Array (EVA), XP Command View Advanced Edition, or other storage management software as required
- Hypervisor hosts A and B (Integrity VM, Microsoft Hyper-V, VMware ESX, ESXi)
- Windows, Linux, or HP-UX operating system for a newly created logical server
- Unused server for a logical server move operation target demonstration
- (Optional) Allocated for IO automated deployment targets
When implementing Insight Recovery, a similar plan is required for the recovery site. When implementing a federated CMS, the first CMS installed becomes the primary CMS. Any subsequent CMS that is installed and joined with the federation is called a secondary CMS. A federated CMS may consist of up to five CMS servers (1 primary and 4 secondary). The Insight Orchestration software is installed only on the primary CMS; each secondary CMS contains the full Insight Software stack except for Insight Orchestration.

IMPORTANT: Each secondary Matrix CMS of a federated CMS requires the purchase of a Matrix Starter Kit and corresponding services, just as with the primary Matrix CMS implementation.
Application services
This section outlines the type of information you need when planning application services deployed on HP BladeSystem Matrix. These services may be deployed as logical servers or automatically provisioned by the infrastructure orchestration capabilities of Insight Dynamics.
The following defines the information to collect when describing HP BladeSystem Matrix application services:
- Service name: a label used to identify the application or management service
- Optionally, one or more tiers of a multi-tiered application
- The server name on which the application or management service is hosted
- Physical blades:
  - Server model (for example, BL870c i2)
  - Processor and memory requirements
- Virtual machines:
  - Hypervisor (ESX, Hyper-V, HP VM)
  - Processor and memory requirements
- Software and OS requirements:
  - List of applications or management services running on the server
  - Operating system types: Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, HP-UX
  - Hypervisor OS: VMware ESX, Hyper-V on Windows Server 2008, HP Integrity VM on HP-UX
- SAN storage and fabric:
  - Boot from SAN required for directly deployed physical servers
  - Boot from SAN recommended for VM hosts
  - FC or iSCSI SAN required for VM guest backing store
  - LUN size and RAID level
  - Remote storage for recovery
- Network connectivity:
  - Connectivity to corporate network
  - Private network requirements, for example, VMware service console, VMotion network
  - Bandwidth requirements
The application services examples used in this document are based on use cases described in Exploring the Technology behind Key Use Cases for HP Insight Dynamics for ProLiant servers. For details on how the HP BladeSystem Matrix infrastructure solution can be used to provision a dynamic test and development infrastructure using logical servers or IO templates, see the examples in Appendix A, Dynamic infrastructure provisioning with HP BladeSystem Matrix (page 54).
For Insight Recovery implementations, discuss Insight Recovery's DR capabilities with the customer and determine the VC-hosted physical blades and/or VM-hosted logical servers the customer wants Insight Recovery to protect. These logical servers are known as DR-protected logical servers. In addition, sufficient compute resources (physical blades and VM hosts) must be available at the recovery site for a successful Insight Recovery failover. See Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for more information. Some customers may not yet be able to articulate the specific details of their failover requirements. In this case, HP recommends that several of the logical servers created as part of the HP BladeSystem Matrix SIG implementation be used as DR-protected logical servers to demonstrate an HP IR configuration and its failover functionality.
IMPORTANT: The HP BladeSystem Matrix infrastructure requires boot from SAN for directly deployed physical servers, and recommends it for VM hosts.
Management services
The HP BladeSystem Matrix solution requires an Insight Software management environment. This environment consists of a CMS running Insight Software, a deployment server, storage management (for example, HP Command View EVA), and a SQL server. This environment may also include separate customer-provided servers for the optional management infrastructure mentioned previously, as discussed in the following paragraphs.
NOTE: When planning a federated CMS, the plan for the primary and each secondary CMS must include exclusion ranges in its VCEM instance to remove overlap between all the current and planned instances of VCEM residing in the same data center.

NOTE: If you are considering configuring the CMS in a high-availability cluster, either now or in the future, the CMS must be configured within a Windows domain and not as a standalone workgroup. HP does not currently support data migration of a CMS from a workgroup to a Windows domain.

Server hardware

Table 4 Confirm the CMS meets the minimum hardware requirements
Server: HP ProLiant BladeSystem c-Class server blades (G6 or higher series recommended), or an HP ProLiant ML300, DL300, DL500, or DL700 (G3 or higher series recommended)
Memory: 12GB for 32-bit Windows management servers (deprecated); 32GB for 64-bit Windows management servers, appropriate for maximum scalability (see below)
Processor: 2 dual-core processors (2.4 GHz or faster recommended)
Disk space: 150GB is recommended. If usage details are known in advance, a better estimate may be obtained from the disk requirements section below.
File structure: New Technology File System (NTFS)
DVD drive: local or virtual/mapped DVD drive required
There are several commonly used choices for installing and configuring a CMS with HP BladeSystem Matrix:
- CMS on a rack-mounted ProLiant DL or ML server
- CMS on a ProLiant server blade
- CMS running from mirrored local disks
- CMS running from a SAN-based disk image (boot from SAN)
- A federated CMS, consisting of a primary CMS and one to four secondary CMSs
Each of these has benefits and tradeoffs. When choosing between a server blade and a racked server configuration, consider the environment's purpose. When implementing the CMS on a server blade, keep in mind that an improper change to the VC-Ethernet network, server profile, or SAN network definitions can render the CMS unable to manage any other device, including the OA or VC modules. Well-defined processes for management and maintenance operations can mitigate this risk. When hosting the HP BladeSystem Matrix CMS within an HP BladeSystem Matrix enclosure, exercise greater care when accessing VCEM or the VC modules. When choosing the storage medium for the CMS, the default choice is to run the CMS from a SAN-based disk image. In environments where SAN availability cannot be guaranteed (or is not uniform), it may be preferable to install a fully functional CMS on mirrored local disks; however, this limits the choices, process, and time for recovery in the event of a hardware failure or planned maintenance.
NOTE: If this server is deselected, the customer must supply or order another server that meets the requirements for CMS.
Networking connections
The CMS must connect to multiple networks, which are common with those defined inside the HP BladeSystem Matrix environment. In the default configuration for HP BladeSystem Matrix, these networks are named:
- Management
- Production
If the CMS is also the deployment server for HP BladeSystem Matrix, the server must also connect to: Deployment If vCenter is not running, the VMotion networks do not need to be brought into the CMS in either the BL or external server case. Also ensure that the server has adequate physical ports and are configured for virtual local area networks (VLAN)s for any other networks to be used with HP BladeSystem Matrix. When implementing Insight Recovery, the CMS at the primary and recovery sites must be accessible to each other using a fully qualified domain name (FQDN).
SAN connections
In configurations where the CMS is either booted from SAN or also running storage software, the server requires the necessary SAN HBAs and connectivity into the HP BladeSystem Matrix SAN.
Disk requirements
See the HP Insight Software Support Matrix, which shows several different supported combinations of HP SIM, Insight Control server deployment (RDP), and their databases. In addition to the disk space required for the CMS operating system, the requirements for Insight Software are summarized here for planning purposes:
- 20GB for installation of Windows Server 2008 R2 Enterprise Edition (the recommended CMS operating system)
- 20GB for installation or upgrade of HP Insight Software
- 8GB for OS temp space
- 4GB for each OS to deploy. This additional storage must be accessible to the Insight Control server deployment software.
- 65MB per workload on Windows or Linux managed systems, or 35MB per workload on HP-UX managed systems. These allotments are for collecting and preserving a maximum of four years of data for use by Insight Capacity Advisor.
- 4GB (CMS database) per 100 workloads to preserve historical data for Insight Global Workload Manager.
The HP SIM Sizer can help estimate the long-term disk space requirements for logging events and other historic data based on your number of managed nodes and retention plans.
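As a rough illustration (the HP SIM Sizer remains the authoritative estimator), the allotments above can be combined into a back-of-the-envelope estimate. The rounding behavior for the per-100-workload database allotment is an assumption, not an HP-specified rule.

```python
import math

def cms_disk_estimate_gb(os_images=0, win_linux_workloads=0, hpux_workloads=0):
    """Rough CMS disk estimate (GB) from the Insight Software allotments above.

    Assumption: the 4GB CMS database allotment applies per started block of
    100 workloads (hence math.ceil), and per-workload figures are in MB.
    """
    base_gb = 20 + 20 + 8                        # OS install + Insight Software + OS temp space
    deploy_gb = 4 * os_images                    # 4GB per deployable OS image
    capacity_mb = 65 * win_linux_workloads + 35 * hpux_workloads  # Capacity Advisor data
    workloads = win_linux_workloads + hpux_workloads
    gwlm_gb = 4 * math.ceil(workloads / 100)     # Global Workload Manager history
    return base_gb + deploy_gb + capacity_mb / 1024 + gwlm_gb

# Example: 3 deployable OS images, 200 Windows workloads, 50 HP-UX workloads
estimate = cms_disk_estimate_gb(os_images=3, win_linux_workloads=200, hpux_workloads=50)
```

For this example the estimate lands near 86GB, well under the 150GB general recommendation, which leaves headroom for event logs and other historic data.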
Ignite-UX server
Ignite-UX is required for all HP BladeSystem Matrix with HP-UX installations.
- HP Server Automation
- Ignite-UX server
- HP Insight Control server deployment (RDP)
- vCenter Server
- CommandView server
- HP Insight Orchestration
- HP Insight Control (except RDP)
- HP Insight Control for Microsoft System Center
- HP Insight Dynamics capacity planning, configuration, and workload management
- HP VCEM
- Microsoft SQL Server (CMS database)
- Microsoft System Center
- HP Insight Recovery
- HP Cloud Service Automation (CSA)
The primary CMS must have access to all deployment servers in a federated CMS configuration. Multiple VCEM instances co-exist in a single data center with federated CMS configurations. There is one instance for each primary and secondary CMS. When these instances share CommandView and/or networks, it is critical to avoid any media access control (MAC) and worldwide name (WWN) conflicts by configuring exclusion ranges for each instance of VCEM.
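A minimal sketch of that planning step is to record each VCEM instance's exclusion range and verify the ranges are mutually disjoint before implementation. The range values below are hypothetical placeholders, not HP-predefined MAC/WWN pools, and this is an illustration rather than a VCEM feature.

```python
def wwn_to_int(wwn: str) -> int:
    """Convert a colon- or hyphen-separated WWN (or MAC) string to an integer."""
    return int(wwn.replace(":", "").replace("-", ""), 16)

def overlapping_ranges(ranges):
    """Return name pairs whose inclusive (start, end) ranges overlap.

    ranges: dict mapping a VCEM instance name to a (start_wwn, end_wwn)
    tuple of colon-separated strings.
    """
    parsed = {name: (wwn_to_int(lo), wwn_to_int(hi)) for name, (lo, hi) in ranges.items()}
    names = sorted(parsed)
    conflicts = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (a_lo, a_hi), (b_lo, b_hi) = parsed[a], parsed[b]
            if a_lo <= b_hi and b_lo <= a_hi:   # inclusive interval overlap test
                conflicts.append((a, b))
    return conflicts

# Hypothetical plan: primary and one secondary CMS with adjacent, disjoint blocks
plan = {
    "primary-cms":   ("50:06:0B:00:00:C2:62:00", "50:06:0B:00:00:C2:65:FF"),
    "secondary-cms": ("50:06:0B:00:00:C2:66:00", "50:06:0B:00:00:C2:69:FF"),
}
```

An empty result means no two instances can assign the same address; any returned pair points at instances whose exclusion ranges must be re-planned.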
Management server
- DL360 G7 with 2 processors and 32GB memory
- Windows Server 2008 R2 Enterprise Edition
- Insight Software
- Insight Control server deployment
- SQL Express 2005 or 2008 (installed by Insight Software). SQL Express is not recommended for medium or large environments.
- Storage management software, for example HP Command View EVA (can be installed on a separate server if required by the customer)
Network connections
- Production LAN (uplinked to data center)
- Management LAN (uplinked to data center)
- Deployment LAN (uplinked to data center)
For an illustration of a limited HP BladeSystem Matrix infrastructure as described above, please see Figure 2 (page 8) in the overview chapter.
Management servers
Server 1
• DL360 G7 with 2 processors and 32GB memory
• Windows Server 2008 R2 Enterprise Edition
• Insight Software
• Insight Control server deployment
Server 2
• DL360 G7 with 2 processors and 32GB memory
• Windows Server 2008 R2 Enterprise Edition
• SQL Server 2005 (or can be installed in a separate SQL server farm)
• Storage management software (may also be installed on a separate server)
Network connections
• Production LAN: Management Servers #1 and #2
• Management LAN: Management Servers #1 and #2
• Deployment LAN: Management Server #1 only
Management servers
• Server 1: Insight Software
• Server 2: SQL Server 2005 (can be installed in a separate SQL server farm)
• Server 3: HP Ignite-UX server
• Server 4: Storage management software (can be combined with Server 2 if required)
Network connections
• Production LAN: Management Servers #1 and #2
• Management LAN: Management Servers #1, #2, #3, and #4
• Deployment LAN: Management Server #3 only
• SAN A and B: Management Server #4 and each Starter and Expansion Kit
• 1,000 nodes for the primary CMS
• 6,000 nodes maximum across primary and secondary CMS resource pools
• 1 primary CMS and up to 4 secondary CMSs
• 800 nodes for each secondary CMS
• 600 nodes for the primary CMS
• 3,200 nodes maximum across primary and secondary CMS resource pools
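A planned layout can be checked against these node-count limits with a sketch like the following. It assumes the 1,000 / 800 / 6,000-node tier; substitute the 600 / 3,200 figures where those apply:

```python
# Planning limits quoted in this guide for one federated CMS tier.
LIMITS = {"primary_max": 1000, "secondary_max": 800,
          "secondaries_max": 4, "total_max": 6000}

def federated_cms_ok(primary_nodes, secondary_nodes):
    """Check a planned federated CMS layout against the limits above.

    `secondary_nodes` is a list with one node count per secondary CMS.
    """
    total = primary_nodes + sum(secondary_nodes)
    return (primary_nodes <= LIMITS["primary_max"]
            and len(secondary_nodes) <= LIMITS["secondaries_max"]
            and all(n <= LIMITS["secondary_max"] for n in secondary_nodes)
            and total <= LIMITS["total_max"])
```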
Management servers
• Server 1 (primary CMS): Insight Software
• Servers 2 through 5 (secondary CMSs): Insight Software, excluding Insight Orchestration
• Servers 6 through 10 (SQL servers): SQL Server 2005
• Server 11 (deployment server): Ignite-UX, Server Automation, or Insight Control server deployment
• Server 12 (deployment server): additional deployment server (optional)
• Server 13 (storage management server): storage management software, for example HP CommandView EVA or XP edition (can be combined with another server only for the EVA edition; the XP edition must be installed on a separate server)
• Server 14 (storage management server): other/additional storage management software
Network connections
• Production LAN: Management Servers #1 through #10
• Management LAN: Management Servers #1 through #14
• Deployment LAN: Management Servers #11 and #12
• SAN A and B: Management Server #13 and the primary CMS's virtual connect domain group (VCDG)
• SAN C and D: Management Server #14 and some secondary CMSs' VCDGs
NOTE: SAN switch infrastructure and storage management servers may be shared across CMS boundaries only if VCEM exclusion ranges are configured so that each CMS has a non-overlapping range of WWNs. An example of this is SAN C and D, illustrated in figure 5. NOTE: When VCDGs share any networks, but are managed in resource pools of more than one CMS (as shown in figure 5), VCEM exclusion ranges are mandatory to prevent overlap of MAC addresses. Figure 5 HP BladeSystem Matrix infrastructure configured with a federated CMS
IMPORTANT: When multiple OS deployment technologies (Ignite-UX, Insight Control server deployment, Server Automation) are planned for an HP BladeSystem Matrix installation, a unique and separate deployment LAN must exist for each deployment server. IMPORTANT: Insight Software, SQL Server, and at least one deployment technology are required in all HP BladeSystem Matrix implementations. Storage management software for FC storage (such as Command View EVA) is also required. Any other services must run on customer-provided servers. Separate installation services may be ordered with HP BladeSystem Matrix implementations to deliver Insight Control for Microsoft System Center and/or Insight Control for VMware vCenter Server. See the Appendix for additional integration details.
Physical isolation
• Any other situation in which bandwidth sharing between enclosures is not desirable or allowed
• Customer desires a single-enclosure domain configuration
• Supported cable lengths on 10Gb stacking links are 0.5 to 7 meters.
• Supported cable lengths on 10Gb uplinks are 3 to 15 meters.
• VC FC uplinks must exist in every enclosure, because FC traffic is not transmitted across stacking links.
Simple stacking examples are diagrammed in the QuickSpecs for the HP Virtual Connect Flex-10 10Gb Ethernet Module for c-Class BladeSystem: http://h18004.www1.hp.com/products/quickspecs/13127_div/13127_div.pdf.
Figure 6 Multi-enclosure stacking enclosure cabling (VC modules are in Bays 1 & 2 for each enclosure)
Example VC domain stacking configurations based upon the number of enclosures are shown above. The one-meter cables are sufficient for stacking short links to adjacent enclosures, while
three-meter cables are sufficient for stacking links that span multiple adjacent enclosures. The OA linking cables required for stacking are not shown in the figure. HP recommends that uplinks alternate between left and right sides, as shown in green. The examples show stacking of ports 5 and 6 while keeping the two internal cross-links active in a multi-enclosure domain configuration; this gives a total of four 10GbE stacking ports of shared bandwidth across enclosures (80Gbps line rate). The two internal cross-links remain active as long as ports 7 and 8 are unused. Order the following cables for each multi-enclosure domain:
• Quantity 1, 2, or 3 of Ethernet Cable 4ft CAT5 RJ45 for 2, 3, or 4 enclosures, respectively, to be used as OA backplane links (not in figure).
• Quantity 2, 4, or 6 of HP 1m SFP+ 10GbE Copper Cable for 2, 3, or 4 enclosures, respectively, to be used as VC stacking links.
• A fixed quantity of 2 of HP 3m SFP+ 10GbE Copper Cable to be used as wrap-around VC stacking links in VC domains with 3 or 4 enclosures.
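The cable quantities above reduce to a simple per-domain formula. A sketch (the function name is illustrative):

```python
def stacking_cables(enclosures):
    """Cable order for one multi-enclosure VC domain (2-4 enclosures),
    per the quantities listed above."""
    if not 2 <= enclosures <= 4:
        raise ValueError("multi-enclosure domains span 2 to 4 enclosures")
    return {
        "cat5_oa_links": enclosures - 1,          # OA backplane links
        "sfp_1m_stacking": 2 * (enclosures - 1),  # short VC stacking links
        "sfp_3m_wraparound": 2 if enclosures >= 3 else 0,  # wrap-around links
    }
```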
Storage connections
The VC FC or VC FlexFabric modules in an HP BladeSystem Matrix solution enable the c-Class administrator to reduce FC cabling by making use of NPIV. Because each module presents N-port uplinks, it connects to data center FC switches that support the NPIV protocol. When the server blade HBAs or FlexFabric Adapters log in to the fabric through the VC modules, each HBA WWN is visible to the FC switch name server and can be managed as if it were connected directly. The HP VC FC module acts as an HBA aggregator in which each NPIV-enabled N-port uplink can carry the FC traffic for multiple HBAs. The HP VC FlexFabric modules translate FCoE from the blades into FC protocol; with VC FlexFabric, FlexFabric Adapters on the blade servers, rather than HBAs, send the FCoE traffic across the enclosure midplane. IMPORTANT: The HP VC FC uplinks must be connected to a data center FC switch that supports NPIV. See the switch firmware documentation to determine whether a specific switch supports NPIV and for instructions on enabling this support. The HP BladeSystem Matrix VC FC module has eight uplinks. The HP BladeSystem Matrix VC FlexFabric module has eight uplinks, four of which are dual-personality uplinks that may be used as FC uplinks. In either case, each uplink is completely independent of the others and can aggregate up to 16 physical server HBA N-port links into a single N-port uplink through the use of NPIV. Multiple VC FC module uplinks can be grouped logically into a VC fabric when attached to the same FC SAN fabric. This feature enables access to more than one FC SAN fabric, as well as a flexible and fully redundant method of connecting server blades to FC SANs.
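Because each NPIV-enabled uplink aggregates up to 16 HBA logins, the minimum uplink count per VC fabric follows directly. A sketch; doubling for redundancy is an assumption for illustration, not a product rule:

```python
import math

def min_vc_fabric_uplinks(server_hbas, redundant=True):
    """Minimum FC uplinks for a VC fabric, given that each NPIV-enabled
    uplink can aggregate up to 16 server HBA N-port links (the module
    limit quoted above)."""
    uplinks = math.ceil(server_hbas / 16)
    return uplinks * 2 if redundant else uplinks
```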
Planning Step 3a: Collect details about the customer-provided SAN storage
The default configuration, as described in the HP BladeSystem Matrix System installation and configuration documentation, consists of an EVA and switches in the enclosure that create a complete, self-contained SAN. If the customer chooses an alternative storage configuration, the following information is required for planning the installation. For details on supported storage options, see the HP BladeSystem Matrix QuickSpecs.
NOTE: Every CMS in a federated CMS environment manages its own storage pool. Therefore, storage pool entries must be created on each CMS for the portability groups that the CMS is managing.
Storage volumes
HP recommends that the CMS be configured to boot from SAN. To facilitate the flexible movement of management services across blades and enclosures, these services must be configured to use shared storage for the OS boot image, the application image, and the application data. HP also recommends that virtual machine hosts boot from SAN. If connectivity to customer-provided SAN storage is desired, the FC switch must support the NPIV protocol. HP Services personnel will require access to the switch to deploy boot-from-SAN LUNs. Fabric zones are required in a multi-path environment to ensure a successful operating system deployment.
Storage requirements
For each server profile, consider the boot LUN and any additional data storage requirements, and list those parameters in the following table. The HP BladeSystem Matrix Starter Kit on-site implementation services include the deployment of operating systems on a limited number of configured LUNs on the new or existing customer SAN. For more details about HP BladeSystem Matrix Starter Kit Implementation Services, see the HP BladeSystem Matrix QuickSpecs. The Replicated To column refers to the Insight Recovery remote storage controller target and data replication group names for the replicated LUNs. HP BladeSystem Matrix is disaster recovery ready, which means HP IR licenses are included and the HP IR feature can be enabled by applying Insight Dynamics licenses on supported ProLiant server blades. Application service recovery can be enabled by configuring a second HP BladeSystem Matrix infrastructure at a remote location and enabling storage replication between the two sites. Continuous Access software and licenses are also required. If XP storage is used, Cluster Extension for XP software version 3.0.1 or later is required. See Volume 4, For Insight Recovery on ProLiant servers, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for additional information on storage and data replication requirements.
Storage volumes
37
The following table summarizes the type of information needed when planning application and management services deployed on HP BladeSystem Matrix. Table 13 Storage volumes
Table 13 columns: Server (server name); Use and size (LUN properties); vDisk (LUN) name (xxxx_vdisk); vHost name (xxxx_vhost); Replicated to (remote target and data replication group name, if replicated); Connected to (local SAN storage target).
The following details define the type of information needed when planning VC FC connections for application services deployed on HP BladeSystem Matrix:
• Server name: a label used to identify the application or management service; optionally, may consist of one or more tiers of a multi-tiered application; the server name on which the application or management service is hosted
• Use and size: the purpose and characteristics of the LUNs associated with the FC connection, for example boot LUN, the LUN ID, and the LUN size
• vDisk (LUN) name: the vDisk label assigned to the LUN
• vHost name: the vHost label assigned to the LUN
• Replicated to: the remote storage controller WWPN and data replication group name, if using HP Insight Recovery
• Connected to: the local storage controller WWPN hosting this LUN
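When capturing these rows programmatically (for example, to validate a planning spreadsheet), a record type mirroring the columns can help. A sketch with hypothetical field and class names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageVolumePlan:
    """One row of the storage-volumes planning table described above.
    Field names mirror the table columns; values are free-form labels."""
    server: str                          # application or management service host
    use_and_size: str                    # e.g. "20GB boot"
    vdisk_name: str                      # xxxx_vdisk label for the LUN
    vhost_name: str                      # xxxx_vhost label for the LUN
    connected_to: str                    # local storage controller WWPN / target
    replicated_to: Optional[str] = None  # IR remote target + DR group, if any

    @property
    def replicated(self):
        """True when the LUN participates in Insight Recovery replication."""
        return self.replicated_to is not None
```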
Example storage volume values for this configuration (placeholders shown as xxxx and ###):
• Use and size: 20GB boot, 20GB boot, 500GB VMFS, 20GB boot, 40GB boot, plus ###GB data volumes
• vDisk/vHost names: sp_w2k3_sys_01_vdisk / sp_w2k3_sys_01_vhost, sp_2008_sys_01_vdisk / sp_2008_sys_01_vhost, plus xxxx_vdisk / xxxx_vhost placeholders
• Connected to: F400_3PAR (the local SAN storage target) for every volume
• Replicated to: N/A. CMS storage is not replicated using HP IR, because a second CMS is required at the remote location.
Storage configurations for Insight Recovery are not covered in this example.
Microsoft Hyper-V
Consult the Hyper-V Planning and Deployment Guide: http://www.microsoft.com/downloads/details.aspx?FamilyID=5da4058e-72cc-4b8d-bbb1-5e16a136ef42&displaylang=en This document describes separating the network traffic of the hypervisor host from that of the virtual machines, recommending that you "use a dedicated network adapter for the management operating system of the virtualization server." The HP recommendation, validated by rigorous testing, is that the principle of isolating hypervisor resources from virtual machine resources should be applied to virtual machine storage as well as networking. The following site recommends that administrators "avoid storing system files on drives used for Hyper-V storage": http://blogs.technet.com/vikasma/archive/2008/06/26/hyper-v-best-practices-quick-tips-1.aspx The following site recommends that administrators "place the pagefile and operating system files on separate physical disk drives": http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv.mspx
VMware ESX
Most production ESX Server customers concentrate their virtual machine disk usage on external storage, such as a FC SAN, a hardware or software initiated iSCSI storage device, or a remote NAS file server (using the NFS protocol).
IMPORTANT: When multiple OS deployment technologies (Ignite-UX, Insight Control server deployment, Server Automation) are planned for an HP BladeSystem Matrix installation, a unique and separate deployment LAN must exist for each deployment server. IMPORTANT: A federated CMS is highly dependent on the DNS configuration. On the primary CMS, forward and reverse DNS lookups must work for each secondary CMS, and lookups must resolve using the FQDN of each system. Table 16 Configuration of networks and switches
Table 16 lists one Item row per network setting, each with a Value column to be filled in, for example: Production LAN IP address (network number); Subnet mask.
Network planning
In situations where the customer has VLANs in place on the data center networks, or the number of uplinks is constrained, you can combine a number of networks in a shared uplink set.
The following details define the information you need when planning VC Ethernet connections for HP BladeSystem Matrix:
• Network name: the VC network profile name
• Shared uplink set (SUS) name: optionally, the VC shared uplink set name, when multiple networks share uplinks
• VC uplinks (enclosure VC module ports): the VC uplink Ethernet ports. If deploying redundant connections, specify additional ports as required. One VC Flex-10 transceiver must be ordered for each uplink port. Verify compatibility with data center switch transceivers and optical cables.
• Router uplinks (data center switch and port): the uplink data center switch name and port number that is the destination of this connection
• Signal type: the physical signal cabling standard for the connection
desktop or virtual machine may consume 2 Gb/s, and inter-process communications used in cluster control could consume upward of 4 Gb/s in bandwidth. Using VC Flex-10, you can define a network that does not use any external uplinks; this creates a cableless network within the VC domain. The following details define the information you need when planning VC Flex-10 Ethernet connections for application services deployed on HP BladeSystem Matrix:
• Server name: a label used to identify the application or management service; optionally, can consist of one or more tiers of a multi-tiered application; the server name on which the application or management service is hosted
• Network name: the VC network profile name
• Flex NIC port: the Flex NIC port connected to this network. Used when specifying a physical blade not auto-provisioned by IO.
• Flex-10 bandwidth: the Flex-10 bandwidth allocation for this NIC. Used when specifying a physical blade not auto-provisioned by IO.
• PXE settings: the PXE options (Enabled, Disabled, Use BIOS) for this NIC. Used when specifying a physical blade not auto-provisioned by IO.
Continuing with the services examples developed previously in the Servers and services to be deployed in HP BladeSystem Matrix section, and using the following table, define VC Ethernet parameters for those services.
These parameters can be specified when defining network connections to physical blades not auto-provisioned by IO, such as the CMS, deployment server, SQL Server, and ESX hosts.
NOTE: Currently, IO can only provision a network with a single VLAN ID mapped to a single Flex NIC port. Even though the VC profile network port definition allows traffic from multiple networks to be trunked over a single NIC (with VLAN ID tagging), IO cannot express this in a service template. Ensure that any server blade provisioned by IO has enough NIC ports to individually carry the defined networks.
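This constraint reduces to a simple count check: one Flex NIC port per defined network. A sketch:

```python
def enough_flex_nics(networks, flexnic_ports):
    """IO maps each network to its own Flex NIC port (no VLAN trunking in
    a service template), so a blade needs at least one port per distinct
    network defined in the template."""
    return len(set(networks)) <= flexnic_ports
```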
Manageability connections
The following table lists required network connections to properly configure and manage each HP BladeSystem Matrix enclosure. Some connections can be provisioned using static address, DHCP, or the recommended EBIPA. This table lists the physical network connections and IP address requirements for the BladeSystem enclosure management connections. Table 22 Required management connections for HP BladeSystem Matrix enclosures
Table 22 columns: Network, Host uplink, Router uplink (data center switch and port), Signal type, IP address, and Provision type (EBIPA, Static, or DHCP). The two OA connections use 1000Base-T uplinks with static addresses; the VC-Enet and VC-FC module connections are multiplexed through the OA connection and use EBIPA; the optional VC domain IP is static; the iLO range uses EBIPA.
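EBIPA hands consecutive addresses to bays starting from a base address; a sketch of that numbering (the bay-to-offset mapping used here is an assumption for illustration):

```python
import ipaddress

def ebipa_map(start_ip, bays):
    """Sketch of EBIPA-style addressing: consecutive IPv4 addresses
    assigned to bays 1..n starting from `start_ip`."""
    base = ipaddress.IPv4Address(start_ip)
    return {bay: str(base + (bay - 1)) for bay in range(1, bays + 1)}
```

For example, an iLO range of 16 bays starting at 16.89.129.50 ends at 16.89.129.65, matching the sample iLO range shown later in this guide.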
If the EVA4400 (or EVA6400, EVA8400), P4300 G2 (or P4500 G2), or another storage solution is included in the HP BladeSystem Matrix configuration, the following is a sample of the required network connections. Other storage solutions, such as an HP StorageWorks XP Array, HP 3PAR F-Class InServ storage system, or HP 3PAR T-Class InServ storage system, have similar network connection requirements.
Other devices included in the HP BladeSystem Matrix configuration, such as monitored PDUs and network switches, also require management network connections. Table 24 Other additional network connections
• Monitored PDU #1: Management network, 100Base-T, static IP
• Monitored PDU #2: Management network, 100Base-T, static IP
• Network Switch #1: Management network, static IP (host uplink and signal type N/A)
• Network Switch #2: Management network, static IP (host uplink and signal type N/A)
Table 22 rows: Starter Kit OA #1; Starter Kit OA #2; Starter Kit VC-Enet #1; Starter Kit VC-Enet #2; Starter Kit VC-FC #1; Starter Kit VC-FC #2; Optional VC Domain IP.
Additional rows (all on the Management network): Starter Kit iLO Range; Expansion Kit #1 OA #1; Expansion Kit #1 OA #2; Expansion Kit #1 VC-Enet #1; Expansion Kit #1 VC-Enet #2; Expansion Kit #1 VC-FC #1 (through OA connection); Expansion Kit #1 VC-FC #2 (through OA connection); Expansion Kit #1 iLO Range; EVA4400 ABM MGMT port; EVA4400 Fibre switch #1; EVA4400 Fibre switch #2; P4300 G2 Node #1 MGMT; P4300 G2 Node #2 MGMT; Other SAN switch; Other FC storage controller; Monitored PDU #1; Monitored PDU #2; Network Switch #1; Network Switch #2.
Access requirements
The following table shows the access credentials needed to support the HP BladeSystem Matrix implementation. Each enclosure included in the HP BladeSystem Matrix configuration (for example, the Starter Kit enclosure and one or more Expansion Kits) requires OA and VC credentials. Plan or identify credentials for all management services, including the CMS, deployment servers, hypervisor consoles, and storage management consoles. Also identify and plan SNMP settings and user credentials for managed device consoles such as SAN switches, VC, iLO, and OA.
IMPORTANT: It is the user's responsibility to ensure that user accounts are kept in sync across all CMSs in a federated CMS environment. Create the same user accounts on the primary and secondary CMSs.
Table columns: Domain; Credential; Password.
7 Next steps
After completing the preceding planning and pre-delivery steps, the customer environment is ready for physical delivery of the HP BladeSystem Matrix components. Upon their arrival, continue with setup and installation following the procedures described in the HP BladeSystem Matrix Setup and Installation Guide.
IMPORTANT: Be sure to mention that this is an HP BladeSystem Matrix configuration when you call for support. Each HP BladeSystem Matrix Starter Kit or Expansion Kit c7000 serial number identifies it as an HP BladeSystem Matrix installation.
How to contact HP
Use the following methods to contact HP technical support: In the United States, see the Customer Service / Contact HP United States website for contact options: http://www.hp.com/go/assistance In the United States, call 1-800-334-5144 to contact HP by telephone. This service is available 24 hours a day, 7 days a week. For continuous quality improvement, conversations might be recorded or monitored. In other locations, see the Contact HP Worldwide website for contact options: http://www.hp.com/go/assistance
Contacting HP
51
Warranty information
HP will replace defective delivery media for a period of 90 days from the date of purchase. This warranty applies to all Insight software products.
HP authorized resellers
For the name of the nearest HP authorized reseller, see the following sources: In the United States, see the HP U.S. service locator website: http://www.hp.com/service_locator In other locations, see the Contact HP worldwide website: http://www.hp.com/go/assistance
Documentation feedback
HP welcomes your feedback. To make comments and suggestions about product documentation, send a message to: docsfeedback@hp.com Include the document title and manufacturing part number in your message. All submissions become the property of HP.
Security bulletin and alert policy for non-HP owned software components
Open source software (such as OpenSSL) and third-party software (such as Java) are sometimes included in HP products. HP discloses that the non-HP owned software components listed in the HP Insight Dynamics end user license agreement (EULA) are included with HP Insight Dynamics. To view the EULA, use a text editor to open the /opt/vse/src/README file on an HP-UX CMS, or the <installation-directory>\src\README file on a Windows CMS. (The default installation directory on a Windows CMS is C:\Program Files\HP\Virtual Server Environment, but this directory can be changed at installation time.) HP addresses security bulletins for the software components listed in the EULA with the same level of support afforded HP products. HP is committed to reducing security defects and helping you mitigate the risks associated with security defects when they do occur. HP has a well-defined process when a security defect is found, culminating in the publication of a security bulletin that provides a high-level description of the problem and explains how to mitigate the defect.
4. Select Business & IT Professionals to open the Subscriber's Choice webpage.
5. Do one of the following:
• Sign in if you are a registered customer.
• Enter your email address to sign up now, then select the box next to Driver and Support alerts and click Continue.
6. Select HP BladeSystem Matrix Converged Infrastructure from the product family in section A, then select each of the BladeSystem Matrix entries in section B.
Related information
The latest versions of manuals (only the HP BladeSystem Matrix Compatibility Chart and HP BladeSystem Matrix Release Notes are available) and white papers for HP BladeSystem Matrix and related products can be downloaded from the web at http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64180&taskId=115&prodTypeId=3709945&prodSeriesId=4223779.
Example 1: An agile test and development infrastructure using logical servers
In this example, logical servers deployed as physical server blades are used to create a dynamic test and development infrastructure. Logical server management operations are used to rapidly activate and deactivate test and development environments and to quickly re-purpose infrastructure without reinstallation. Resources may be pooled and shared, improving utilization and reducing cost. Test and development teams share a pool of VC server blades used to develop and test applications on multiple operating systems and versions. A number of different environments are needed, but not all are required at the same time. At any one time, several logical servers are active, making the currently required test and development environments available for use. Other logical servers, for those test and development environments that are not currently needed, are inactive. A deactivated logical server does not consume compute resources such as CPU, memory, or power, but its profile, including associated storage, is retained, making it easy to reactivate the logical server and quickly make the environment available for use again.
Bill of materials
Table 28 Bill of materials
• HP BladeSystem c7000 Enclosure: HP BladeSystem Matrix included
• HP VC Flex-10 10Gb Ethernet module x2: HP BladeSystem Matrix included
• HP VC 8Gb 8-Port FC module x2: HP BladeSystem Matrix included
• HP StorageWorks EVA4400: HP BladeSystem Matrix included
• HP ProLiant BL460c G6 CMS server blade: HP BladeSystem Matrix included (optional)
• HP Insight Software and Licenses: HP BladeSystem Matrix included (default, deselectable)
• HP ProCurve 6600 with 10Gb support: HP BladeSystem Matrix included
• HP ProLiant BL460c x3: orderable separately
• Windows Server 2003 SP2 EE: orderable separately
• Red Hat Enterprise Linux: orderable separately
DevV2Windows: physical ProLiant BL460c, Windows Server 2003 R2 SP2 SE, boot from SAN; Corporate network #1 (1Gb), Deployment network #2 (2Gb)
Each service (Management Service, TestV1Windows, DevV2Windows, TestV1RHEL, and DevV1Windows) has one VC-Ethernet uplink to Corporate Network #1 and one VC-Ethernet uplink to Deployment Network #2.
The services use Production LAN segment addresses 16.89.129.100 through 16.89.129.104 and Management LAN segment addresses 10.1.1.100 through 10.1.1.104, one pair per service.
To enable network redundancy, multiple connections can be made to the same LAN segment.
Starter Kit OA #1 Starter Kit OA #2 Starter Kit VC Ethernet #1 Starter Kit VC Ethernet #2 Starter Kit VC Fibre #1 Starter Kit VC Fibre #2 Optional VC Domain IP iLO Range
EVA4400 ABM MGMT port EVA4400 Fibre Switch #1 EVA4400 Fibre Switch #2 ProCurve Switch #1
Bill of materials
Table 39 Bill of materials
• HP BladeSystem c7000 Enclosure: HP BladeSystem Matrix included
• HP VC Flex-10 10Gb Ethernet module x2: HP BladeSystem Matrix included
• HP VC 8Gb 8-Port FC module x2: HP BladeSystem Matrix included
• HP StorageWorks EVA4400: HP BladeSystem Matrix included
• HP ProLiant BL460c G6 CMS server blade: HP BladeSystem Matrix included (optional)
• HP Insight Software and Licenses: HP BladeSystem Matrix included (default, deselectable)
Example 2: An agile test and development infrastructure with IO
VMApp2
VMHost1
• Monitored PDUs only: additional uplink and IP address; SNMP community strings
• Installation characteristics:
Identify data center location: customer's site in Brussels, Belgium
Side clearances / floor space allocation: hot/cold aisles at adequate distances, bayed to the end of the existing row
Verify site is ready to receive and install the rack: customer requires 10 business days after delivery before scheduling the install
Management Service
16.89.129.100
16.89.129.101
Production LAN segment Management LAN segment Production LAN segment Management LAN segment Production LAN segment Management LAN segment None (Internal) Production LAN segment Management LAN segment None (Internal)
VC-Ethernet Uplink Deployment 10.1.1.101 Network #2 VC-Ethernet Uplink Corporate Network #1 VMApp2 VC-Ethernet Uplink Deployment 10.1.1.102 Network #2 VC-Ethernet Uplink Corporate Network #1 VMHost1 16.89.129.103 16.89.129.102
VC-Ethernet Uplink Deployment 10.1.1.103 Network #2 VC VMotion Network #3 VC-Ethernet Uplink Corporate Network #1 10.1.1.113 16.89.129.104
VMHost2
Database tier VC-Ethernet Uplink Corporate Network #1 DB1 VC-Ethernet Uplink Deployment 10.1.1.105 Network #2 VC-Ethernet Uplink Corporate Network #1 DB2 VC-Ethernet Uplink Deployment 10.1.1.106 Network #2
16.89.129.105
Production LAN segment Management LAN segment Production LAN segment Management LAN segment
16.89.129.106
To enable network redundancy, multiple connections can be made to the same LAN segment.
Starter Kit OA #1 Starter Kit OA #2 Starter Kit VC Ethernet #1 Starter Kit VC Ethernet #2 Starter Kit VC Fibre #1
Starter Kit VC Fibre #2 Optional VC Domain IP iLO Range EVA4400 ABM MGMT port EVA4400 Fibre Switch #1 EVA4400 Fibre Switch #2 ProCurve Switch #1
16.89.129.73 16.89.129.12
16.89.129.50-65 EBIPA 1 16.89.129.7 16.89.129.8 16.89.129.9 16.89.129.1 Static Static Static Static
(service tier #2 name) (server) (server type) (SAN requirements) (LAN requirements)
Insight Software server (SAN requirements) deployment HP Command View EVA SQL server (Other) (SAN requirements) (SAN requirements) (SAN requirements)
Replicated to (remote target and data replication group name, if replicated); Connected to (local SAN storage target).
Connection (VC Ethernet connection #1) (VC Ethernet connection #2) N/A N/A
These parameters can be specified when defining network connections to physical blades not auto-provisioned by IO, such as the CMS, deployment server, SQL Server, and ESX hosts.
Starter Kit OA #1 Starter Kit OA #2 Starter Kit VCEnet #1 Starter Kit VC-Enet #2 Starter Kit VC-FC #1 Starter Kit VC-FC #2 Optional VC Domain IP Starter Kit iLO Range Expansion Kit #1 OA #1 Expansion Kit #1 OA #2 Expansion Kit #1 VC-Enet #1 Expansion Kit #1 VC-Enet #2
Expansion Kit #1 VC-FC Through OA #1 connection Expansion Kit #1 VC-FC Through OA #2 connection Expansion Kit #1 iLO Range EVA4400 ABM MGMT port Through OA connection
EVA4400 Fibre switch #1 EVA4400 Fibre switch #2 P4300 G2 Node #1 MGMT P4300 G2 Node #2 MGMT Other SAN switch Other FC Storage controller Monitored PDU #1 Monitored PDU #2 Network Switch #1 Network Switch #2
These services are discussed in further detail below. If planned for integration, the following management services must be implemented before HP BladeSystem Matrix is delivered:
• HP Server Automation software
• HP Ignite-UX software
• VMware vCenter Server software
• Microsoft System Center software
If any implementation services are ordered for these other management services, all required planning for them must be completed in addition to the planning for this implementation service.
HP Hardware Inventory Tool for Configuration Manager 2007 uses native System Center Hardware Inventory to provide a detailed component-level inventory of every managed server. HP Server Updates Catalog for System Center Configuration Manager 2007 uses System Center Configuration Manager to install and update ProLiant drivers and firmware using a rules-based model.
The BladeSystem Management Pack uses a special Windows service to manage and monitor HP BladeSystem c-Class enclosures. This service is normally installed on the System Center Operations Manager console, but it can also be installed on any other Windows server. Do not install the BladeSystem Monitor Service on the HP BladeSystem Matrix CMS.
When using the HP Server Updates Catalog for System Center Configuration Manager to upgrade firmware or drivers, verify that the firmware or driver version you are deploying to the managed nodes is a supported version for HP BladeSystem Matrix. If it is not, do not perform the upgrade. You may explicitly exclude servers from the System Center Server Collection before deploying updates. A Server Collection is a group of servers on which you wish to perform a System Center Configuration Manager function, such as updating firmware or drivers. To maintain the matched set of HP BladeSystem Matrix CMS firmware and driver versions, exclude the HP BladeSystem Matrix CMS from the Server Collection. This can be done by creating a separate Collection for all ProLiant servers except the CMS server. See the System Center Configuration Manager User Guide for instructions on creating Collections.
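The exclusion described above can be sketched as a simple filter. This is an illustrative sketch only, not the actual System Center Configuration Manager API, and the server names are hypothetical:

```python
# Illustrative sketch (not the ConfigMgr API): build the membership of
# a Server Collection that contains every managed ProLiant server
# except the HP BladeSystem Matrix CMS, so that firmware/driver updates
# never touch the CMS's matched firmware and driver set.
# Server names below are hypothetical examples.

def build_update_collection(managed_servers, cms_name):
    """Return the servers eligible for firmware/driver updates."""
    return [s for s in managed_servers if s != cms_name]

servers = ["BLADE-01", "BLADE-02", "MATRIX-CMS", "ESX-HOST-01"]
print(build_update_collection(servers, "MATRIX-CMS"))
# ['BLADE-01', 'BLADE-02', 'ESX-HOST-01']
```

In practice the same effect is achieved by defining the Collection's membership query so it never matches the CMS.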
HP G7 blade servers have a Converged Network Adapter chip integrated on the motherboard (LOM) and are compatible with FlexFabric interconnect modules. For additional bandwidth, the NC551m is supported in the BL465c G7, the BL685c G7, and all G6 BladeSystem servers. The NC553m is supported in all HP G7 and G6 BladeSystem servers. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.

Both the HP NC553m and the HP NC551m mezzanine adapters support Virtual Connect Flex-10, which allows each 10 Gb port to be divided into four physical NICs to optimize bandwidth management for virtualized servers. When connected to a CEE-capable switch, FC and Ethernet I/O are separated and routed to the corresponding network. For iSCSI storage, the NC553m and NC551m support full protocol offload, providing better CPU efficiency than software initiators and enabling the server to handle increased virtualization workloads and compute-intensive applications. This combination of high-performance network and storage connectivity reduces cost and complexity and provides the flexibility and scalability to fully optimize BladeSystem servers. The NC553m and NC551m deliver the performance benefits and cost savings of converged network connectivity for HP BladeSystem servers. The dual-port NC553m and NC551m optimize network and storage traffic with hardware acceleration and offloads for stateless TCP/IP, TCP Offload Engine (TOE), FC, and iSCSI.

Older generation G6 blade servers have 10 Gb Network Interface Cards (NICs) embedded on the motherboard as a LOM and are not readily compatible with FlexFabric. Customers need to purchase either the HP NC553m or the HP NC551m Converged Network Adapter mezzanine card and plug it into the server's mezzanine slot to enable any G6 BladeSystem server to support FlexFabric.
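The Flex-10 partitioning described above (each 10 Gb port divided into four FlexNICs) can be illustrated with a small validation sketch; the 100 Mb allocation granularity used in the check is an assumption for illustration:

```python
# Sketch of the Flex-10 split described above: each 10 Gb port is
# carved into at most four FlexNICs whose configured speeds cannot
# exceed the port's 10 Gb total. The 100 Mb step size is an assumption
# for illustration, not a documented constraint of this guide.

PORT_CAPACITY_MB = 10_000  # one 10 Gb FlexFabric/Flex-10 port

def validate_flexnic_split(allocations_mb):
    """Validate a proposed per-FlexNIC bandwidth split for one port."""
    if len(allocations_mb) > 4:
        raise ValueError("a Flex-10 port exposes at most four FlexNICs")
    if any(a % 100 for a in allocations_mb):
        raise ValueError("allocations assumed to be in 100 Mb steps")
    if sum(allocations_mb) > PORT_CAPACITY_MB:
        raise ValueError("allocations exceed the 10 Gb port capacity")
    return True

# Example split: management, migration, production, and storage FlexNICs
print(validate_flexnic_split([500, 2_000, 4_000, 3_500]))  # True
```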
[Figure legend: 5. Interconnect Bays 7/8; 6. Onboard Administrator; 7. Lower Fan System; 8. Rear Redundant Power Complex]
Understanding the concept of port mapping is critical to properly planning your HP BladeSystem Matrix FlexFabric configuration. The diagram above shows the proper placement of the Virtual Connect FlexFabric 10Gb/24-Port modules and the integrated or mezzanine adapters that will be used for your supported BladeSystem Matrix FlexFabric configurations. Port mapping differs slightly between full-height and half-height server blades due to the support of additional mezzanine cards on the full-height version. HP has simplified the process of mapping mezzanine ports to switch ports by providing intelligent management tools via the Onboard Administrator and HP Systems Insight Manager software. The HP BladeSystem Onboard Administrator User Guide and the HP BladeSystem c7000 Enclosure Setup and Installation Guide provide detailed information on port mapping. The following diagrams show the port mappings for half-height and full-height blades, and the following tables present recommended and supported configurations for an HP BladeSystem c7000 enclosure with one or more redundant pairs of Virtual Connect FlexFabric modules.
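As a rough summary of the adapter-to-bay relationship used throughout the configuration tables below, the pairing can be sketched as follows. This is an illustrative simplification; the Onboard Administrator documentation remains the authoritative source for per-port mappings:

```python
# Hedged sketch of the adapter-to-interconnect-bay relationship used
# by the configuration tables in this section (redundant FlexFabric
# module pairs in a c7000 enclosure).

ADAPTER_TO_BAY_PAIR = {
    "integrated adapter / LOM": (1, 2),
    "mezzanine slot 1": (3, 4),
    "mezzanine slot 2": (5, 6),
    "mezzanine slot 3": (7, 8),  # full-height blades only
}

def bay_pair(adapter):
    """Interconnect bay pair reached by a given server adapter."""
    return ADAPTER_TO_BAY_PAIR[adapter]

print(bay_pair("mezzanine slot 1"))  # (3, 4)
```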
A minimum of two (one redundant pair of) HP Virtual Connect FlexFabric 10Gb/24-Port modules located in interconnect bays 1-2, and an integrated HP FlexFabric adapter in each blade
HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules. The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G7 blades only. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use only HP G7 BladeSystem servers for the following reasons: better performance, newer FlexFabric Converged Network Adapter technology, and easier standardization, since it is supported in all G6 and G7 blades.
Interconnect module configurations for G7 blades using the integrated NC551i or NC553i adapter:
  Bays 1-2: VC FlexFabric modules; blade adapter: Integrated FlexFabric adapter (1)
  Bays 3-8: Empty
(1) Requires HP ProLiant BL G7 server blades with HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter.
This configuration enables the customer to most cost effectively introduce FlexFabric technology while keeping the configuration complexity to a minimum. This configuration also provides the ability to most easily manage SAN connectivity. For the cost of two FlexFabric modules, each blade has up to 6 Ethernet ports and 2 FC ports for a total of 8 ports sharing up to 20 Gb of I/O bandwidth.
Supported configuration: 4 FlexFabric modules; all G7 blades with an integrated FlexFabric adapter; 1 additional FlexFabric mezzanine adapter.

Interconnect module configurations for G7 blades using integrated FlexFabric adapter + 1 additional FlexFabric mezzanine (server network adapters used):
  Bays 1-2: VC FlexFabric modules; blade adapter: Integrated FlexFabric adapter (1)
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (2)
  Bays 5-6: Empty
  Bays 7-8: Empty
(1) Requires HP ProLiant BL G7 server blades with HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve higher performance. It is less cost effective, more complex to configure, and makes SAN connectivity more difficult to manage. For the cost of four FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 12 or 14 Ethernet ports and 2 or 4 FC ports for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN.
Supported configuration: 6 FlexFabric modules; all G7 blades with an integrated FlexFabric adapter; 2 additional FlexFabric mezzanine adapters.

Interconnect module configurations for G7 blades using integrated FlexFabric adapter + 2 additional FlexFabric mezzanines (server network adapters used):
  Bays 1-2: VC FlexFabric modules; blade adapter: Integrated FlexFabric adapter (1)
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (2)
  Bays 5-6: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 2 (2)
  Bays 7-8: Empty
(1) Requires HP ProLiant BL G7 server blades with HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve even higher performance. This configuration is even less cost effective, more complex to configure, and introduces increased difficulty when managing SAN connectivity. For the cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18, 20, or 22 Ethernet ports and 2, 4, or 6 FC ports for a
total of 24 ports sharing up to 60 Gb of I/O bandwidth depending on which interconnect modules have FC uplinks to the SAN.
Supported configuration: 8 FlexFabric modules; all G7 blades with an integrated FlexFabric adapter; 3 additional FlexFabric mezzanine adapters.

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the fourth pair of FlexFabric modules will not be utilized with the FlexFabric mezzanine adapters.

Interconnect module configurations for G7 blades using integrated FlexFabric adapter + 3 additional FlexFabric mezzanines (server network adapters used):
  Bays 1-2: VC FlexFabric modules; blade adapter: Integrated FlexFabric adapter (1)
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (2)
  Bays 5-6: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 2 (2)
  Bays 7-8: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 3 (2)
(1) Requires HP ProLiant BL G7 server blades with HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve even higher performance. It is the least cost effective, the most complex to configure, and makes SAN connectivity the most difficult to manage. For the cost of eight FlexFabric modules and three FlexFabric adapters (or two for half-height blades) for every blade in the enclosure, each blade can have 24, 26, 28, or 30 Ethernet ports and 2, 4, 6, or 8 FC ports for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN.
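The per-blade port counts quoted in these G7 configurations follow a simple pattern, sketched below under the assumptions stated in this section: four functions per FlexFabric port, two ports per redundant module pair per blade (2 x 10 Gb), and two FC functions per pair that carries FC uplinks:

```python
# Worked arithmetic behind the per-blade port counts quoted in these
# configurations (assumptions taken from this section, not an HP tool).

def blade_ports(module_pairs, pairs_with_fc_uplinks):
    total = 8 * module_pairs            # 2 ports/pair x 4 functions
    fc = 2 * pairs_with_fc_uplinks      # 2 FC functions per FC pair
    ethernet = total - fc
    bandwidth_gb = 20 * module_pairs    # 2 x 10 Gb per pair
    return ethernet, fc, total, bandwidth_gb

print(blade_ports(1, 1))  # (6, 2, 8, 20)
print(blade_ports(4, 1))  # (30, 2, 32, 80)
print(blade_ports(4, 4))  # (24, 8, 32, 80)
```

Varying the number of pairs with FC uplinks from one to four reproduces the "24, 26, 28, or 30 Ethernet ports and 2, 4, 6, or 8 FC ports" range quoted for the eight-module configuration.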
HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules. The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G6 or i2 blades only. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use only HP G6 BladeSystem servers for the following reasons: better performance, newer FlexFabric Converged Network Adapter technology, and easier standardization, since it is supported in all G6 and G7 blades.
Supported configuration: 4 FlexFabric modules; all G6 and i2 blades; 1 additional FlexFabric mezzanine adapter.

Interconnect module configurations for G6 or i2 blades using additional FlexFabric mezzanines (server network adapters required):
  Bays 1-2: VC FlexFabric modules; blade adapter: Flex-10/Enet LOM
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (1)
  Bays 5-6: Empty
  Bays 7-8: Empty
(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve higher performance. It is less cost effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of four FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 14 Ethernet ports and 2 FC ports for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so the VC FlexFabric modules in bays 1 and 2 will be used for Ethernet only.
Supported configuration: 6 FlexFabric modules; all G6 and i2 blades with an integrated Flex-10 adapter; 2 additional FlexFabric mezzanine adapters.

Interconnect module configurations for G6 or i2 blades using additional FlexFabric mezzanines (server network adapters used):
  Bays 1-2: VC FlexFabric modules; blade adapter: Flex-10/Enet LOM
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (1)
  Bays 5-6: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 2 (2)
  Bays 7-8: Empty
(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve higher performance. It is even less cost effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the
cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18 or 20 Ethernet ports and 2 or 4 FC ports for a total of 24 ports sharing up to 60 Gb of I/O bandwidth depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs so the VC FlexFabric modules in bays 1 and 2 will be used for Ethernet only.
Supported configuration: 8 FlexFabric modules; all G6 and i2 blades with an integrated Flex-10 adapter; 3 additional FlexFabric mezzanine adapters.

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the fourth pair of FlexFabric modules will not be utilized with the FlexFabric mezzanine adapters.

Interconnect module configurations for G6 or i2 blades using additional FlexFabric mezzanines (server network adapters used):
  Bays 1-2: VC FlexFabric modules; blade adapter: Flex-10/Enet LOM
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (1)
  Bays 5-6: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 2 (2)
  Bays 7-8: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 3 (2)
(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve higher performance. It is the least cost effective, the most complex to configure, and makes SAN and network connectivity the most difficult to manage. For the cost of eight FlexFabric modules and three FlexFabric adapters (or two for half-height blades) for every blade in the enclosure, each blade can have 26, 28, or 30 Ethernet ports and 2, 4, or 6 FC ports for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so the VC FlexFabric modules in bays 1 and 2 will be used for Ethernet only.
IMPORTANT: Configurations which support a mixture of HP G7 with G6 and/or i2 BladeSystem servers require a minimum of four total (two redundant pairs of) HP Virtual Connect FlexFabric 10Gb/24-Port modules located in Interconnect Bays 1-4, and a minimum of one FlexFabric mezzanine adapter placed in Mezz 1 of every BladeSystem server in the enclosure.

HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth if desired. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter. The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G6 and G7 blades. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use a mixture of G6 and G7 BladeSystem servers for the following reasons: better performance, newer FlexFabric Converged Network Adapter technology, and easier standardization, since it is supported in all G6 and G7 blades.
Supported configuration: 4 FlexFabric modules; mixed G7 with G6 and/or i2 blades; 1 additional FlexFabric mezzanine adapter.

Interconnect module configurations for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanines (server network adapters required):
  Bays 1-2: VC FlexFabric modules; blade adapter: Flex-10/Enet LOM or FlexFabric integrated adapter
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (1)
  Bays 5-6: Empty
  Bays 7-8: Empty
(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve higher performance. It is less cost effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of four FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 12 or 14 Ethernet ports and 2 or 4 (G7s only) FC ports for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades will not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.
Supported configuration: 6 FlexFabric modules; mixed G7 with G6 and/or i2 blades; 2 additional FlexFabric mezzanine adapters.

Interconnect module configurations for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanines (server network adapters used):
  Bays 1-2: VC FlexFabric modules; blade adapter: Flex-10/Enet LOM or FlexFabric integrated adapter
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (1)
  Bays 5-6: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 2 (2)
  Bays 7-8: Empty
(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve higher performance. It is even less cost effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18, 20, or 22 Ethernet ports and 2, 4, or 6 (G7s only) FC ports for a total of 24 ports sharing up to 60 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades will not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.
Supported configuration: 8 FlexFabric modules; mixed G7 with G6 and/or i2 blades; 3 additional FlexFabric mezzanine adapters.

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the fourth pair of FlexFabric modules will not be utilized with the FlexFabric mezzanine adapters.

Interconnect module configurations for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanines (server network adapters used):
  Bays 1-2: VC FlexFabric modules; blade adapter: Flex-10/Enet LOM or FlexFabric integrated adapter
  Bays 3-4: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 1 (1)
  Bays 5-6: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 2 (2)
  Bays 7-8: VC FlexFabric modules; blade adapter: FlexFabric adapter in Mezzanine slot 3 (2)
(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules.
This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve higher performance. It is the least cost effective, the most complex to configure, and makes SAN and network connectivity the most difficult to manage. For the cost of eight FlexFabric modules and three FlexFabric adapters (or two for half-height blades) for every blade in the enclosure, each blade can have 24, 26, 28, or 30 Ethernet ports and 2, 4, 6, or 8 (G7s only) FC ports for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades will not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.
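A minimal sketch of the FC-availability rule for mixed enclosures, assuming (per the text above) that the bays 1-2 pair is Ethernet-only for G6 and i2 blades:

```python
# Sketch of FC availability in the mixed-generation configurations
# above: G6 and Integrity i2 blades reach bays 1-2 through a Flex-10
# LOM, so the bays 1-2 pair gives them Ethernet only; their FC ports
# must come from mezzanine module pairs that carry FC uplinks.

def usable_fc_ports(generation, fc_uplink_pairs, first_pair_has_fc=True):
    """FC functions a blade can use, given pairs carrying FC uplinks."""
    usable = fc_uplink_pairs
    if generation in ("G6", "i2") and first_pair_has_fc:
        usable -= 1  # the bays 1-2 pair is Ethernet-only for G6/i2
    return 2 * usable

print(usable_fc_ports("G7", 4))  # 8
print(usable_fc_ports("G6", 4))  # 6
```

This reproduces the quoted maxima: up to 8 FC ports for G7 blades versus up to 6 for G6/i2 blades in the eight-module configuration.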
Glossary
ABM: Array-based management
API: Application program interface
AWE: Address Windowing Extensions. A method used by Windows OS to make more than 4 GB available to applications through system calls.
BFS: Boot from SAN
BOOTP: Bootstrap Protocol
CapAd: HP Capacity Advisor
CEE: Converged Enhanced Ethernet
CIM: Common Information Model
CIMOM: Common Information Model Object Manager
CLI: Command line interface. An interface composed of various commands which are used to control operating system responses.
CLX: HP Cluster Extensions
CMS: Central management server
CNA: Converged network adapter
DAC: Direct Attach Cable
DEP: Data execution prevention
DG: Device group
DHCP: Dynamic Host Configuration Protocol
DMI: Desktop Management Interface
DNS: Domain Name System
DR: Disaster Recovery
DSM: Device specific module
EBIPA: Enclosure Bay IP Addressing
EFI: Extensible Firmware Interface
ESA: HP Extensible Server and Storage Adapter
EVA: HP Enterprise Virtual Array. An HP storage array product line.
FC: Fibre Channel. A network technology primarily used for storage networks.
FCoE: Fibre Channel over Ethernet
FDT: Firmware development tool
FQDN: Fully qualified domain name
FTP: File Transfer Protocol
GUI: Graphical user interface
gWLM: HP Global Workload Manager
HA: High availability
HBA: Host Bus Adapter. A circuit board and/or integrated circuit adapter that provides input/output processing and physical connectivity between a server and a storage device.
HP OO: See OO.
HP SIM: HP Systems Insight Manager
HP SUM: HP Smart Update Manager
HPIO: See IO.
HPVM: HP Virtual Machine. Common name for the HP Integrity Virtual Machines product.
HTTP: Hypertext Transfer Protocol
HTTPS: Hypertext Transfer Protocol Secure
IC: HP Insight Control
iCAP: Instant Capacity
ICE: HP Insight Control Suite
ICMP: Internet Control Message Protocol
ID: HP Insight Dynamics
IIS: Internet Information Services
iLO: Formerly HP Insight Control remote management. Renamed HP Integrated Lights-Out.
IO: HP Insight Orchestration. A web application that enables you to deploy, manage, and monitor the overall behavior of Insight Orchestration and its users, templates, services, and resources.
IPM: Formerly HP Insight Power Manager. Renamed HP Insight Control power management.
IPv4: Internet Protocol version 4
IPv6: Internet Protocol version 6
IR: HP Insight Recovery
JVM: Java Virtual Machine
KB: Knowledge base
LDAP: Lightweight Directory Access Protocol
LinuxPE: Linux pre-boot environment
LOM: LAN on motherboard
LS: Logical server
LSM: Logical server management
LUN: Logical unit number. The identifier of a SCSI, Fibre Channel, or iSCSI logical unit.
LV: Logical volume
MAC: Media Access Control. A unique identifier assigned by the manufacturer to most network interface cards (NICs) or network adapters. Also known as an Ethernet Hardware Address (EHA), hardware address, adapter address, or physical address.
MMC: Microsoft Management Console
MPIO: HP Multipath I/O
MSA: HP StorageWorks Modular Smart Array. An HP storage array product line (also known as the P2000).
MSC: Microsoft System Center
MSCS: Microsoft Cluster Server/Service
MSSW: HP Insight Managed System Setup Wizard
NCU: Network Configuration Utility
NFS: Network File System
NFT: Network file transfer
NIC: Network interface card. A device that handles communication between a device and other devices on a network.
NPIV: N_Port ID Virtualization
NTP: Network Time Protocol
NVRAM: Non-volatile random access memory
OA: HP Onboard Administrator
OE: Operating environment
OO: HP Operations Orchestration
OS: Operating system
P-VOL: Primary Volume
PAE: Physical Address Extension. A feature of x86 processors to allow addressing more than 4 GB of memory.
PDR: Power distribution rack
PDU: Power distribution unit. The rack device that distributes conditioned AC or DC power within a rack.
POC: Proof of concept
POST: Power on self test
PSP: HP ProLiant Support Pack
PSUE: Pair suspended-error
PSUS: Pair suspended-split
PXE: Preboot Execution Environment
RAID: Redundant Array of Independent Disks
RBAC: Role-based access control
RDP: Formerly HP Rapid Deployment Pack. Renamed HP Insight Control server deployment.
RG: Recovery group
RM: HP Matrix recovery management
S-VOL: Secondary Volume
SA: HP Server Automation
SAID: Service agreement identifier
SAM: System administration manager
SAN: Storage area network. A network of storage devices available to one or more servers.
SCP: State change pending
SFIP: Stress-free installation plan
SG: HP Serviceguard
SIM: See HP SIM.
SLVM: Shared Logical Volume Manager
SMA: Storage Management Appliance
SMH: HP System Management Homepage
SMI-S: Storage Management Initiative Specification
SMP: Formerly HP Server Migration Pack. Renamed HP Insight Control server migration.
SMTP: Simple Mail Transfer Protocol. A protocol for sending email messages between servers and from mail clients to mail servers. The messages can then be retrieved with an email client using either POP or IMAP.
SN: Serial number
SNMP: Simple Network Management Protocol
SPE: Storage pool entry
SPM: HP Storage Provisioning Manager. A means of defining logical server storage requirements by specifying volumes and their properties.
SQL: Structured Query Language
SRD: Shared Resource Domain
SSH: Secure Shell
SSL: Secure Sockets Layer
SSO: Single sign-on
STM: System Type Manager
SUS: Shared uplink set
TCP/IP: Transmission Control Protocol/Internet Protocol
TFTP: Trivial File Transfer Protocol
TOE: TCP Offload Engine
UAC: User Account Control
UDP: User Datagram Protocol
URC: Utility Ready Computing
URS: Utility Ready Storage
USB: Universal Serial Bus. A serial bus standard used to interface devices.
UUID: Universally Unique Identifier
VC: HP Virtual Connect
VCA: Version Control Agent
VCDG: Virtual Connect domain group
VCEM: HP Virtual Connect Enterprise Manager. HP VCEM centralizes network connection management and workload mobility for HP BladeSystem servers that use Virtual Connect to access local area networks (LANs), storage area networks (SANs), and converged network environments.
VCM: Virtual Connect Manager
VCPU: Virtual CPU
VCRM: HP Version Control Repository Manager
VCSU: Virtual Connect Support Utility
Vdisk: Virtual disk
VG: Volume group
VLAN: Virtual local area network
VM: Virtual machine
VMAN: HP Insight Virtualization Manager
VMM: Formerly HP Virtual Machine Manager. Renamed HP Insight Control virtual machine management.
VPort: Virtual port
VSE: Formerly Virtual Server Environment. Renamed Insight Dynamics.
WAIK: Windows Automated Installation Kit
WBEM: Web-Based Enterprise Management
WinPE: Windows pre-boot environment
WMI: Windows Management Instrumentation
WWID: See WWN.
WWN: Worldwide Name. A unique 64-bit address assigned to an FC device.
WWNN: Worldwide Node Name. A WWN that identifies a device in an FC fabric.
WWPN: Worldwide Port Name. A WWN that identifies a port on a device in an FC fabric.
XML: Extensible Markup Language
Index
A
access credentials
  infrastructure, 48
  templates, 75
  test and development infrastructure using logical servers, 60
  test and development infrastructure with IO, 67
access requirements, 47
application services, 14
  define, 16
  templates, 68
  SAN storage, 35
  SAN storage templates, 70
D
data center
  customer responsibility, 29
  requirements, 29
define
  application services, 16
  manageability connections, 46
  services VC ethernet connections, 45
  storage volumes, 38
define services
  test and development infrastructure, 62
  test and development infrastructure using logical servers, 56
deployed servers and services, 14
disk requirements, CMS, 18
documents, HP BladeSystem Matrix, 5
domain
  VC configuration, 34
  Virtual Connect, 30
B
bill of materials
  test and development infrastructure using logical servers, 56
  test and development infrastructure with IO, 61
C
c7000 enclosure, 8, 10, 30, 61
CMS, 13
  configuration, 37
  disk requirements, 18
  Insight Software, 14, 16
  Microsoft System Center, 78
  network connections, 18
  non server blade, 18
  planning, 16
  SAN connections, 18
  supported configurations, 78
components
  customer responsibility, 12
  HP BladeSystem Matrix, 9
  Microsoft System Center, 77
configuration
  CMS, 37
  FlexFabric, 80, 83, 85, 87, 90
  management services network, 44
  sample templates, 68
  VC domain, 34
connections
  define manageability, 46
  FC, 38
  FC SAN storage, 36
  iSCSI SAN storage, 36
  manageability, 45
  storage, 35
  VC ethernet, 45
  VC ethernet uplink, 42
converged infrastructure, 7
customer
  facility planning, 29
  network details, 40
  responsibility, 12, 29
E
enclosure
  Flex-10, 8, 29
  FlexFabric, 7, 29
  parameters, 29
  planning, 29
  stacking, 30
enclosure stacking
  Flex-10, 30
  FlexFabric, 30
ethernet
  define services connections, 45
  VC, 8
  VC Flex-10 services, 43
  VC uplink connections, 42
  VC uplinks, 43
Expansion Kit
  Flex-10, 11
F
facility
  planning, 29
  planning templates, 69
  planning test and development infrastructure using logical servers, 57
  requirements, 29
facility planning, test and development infrastructure using IO, 63
FC
  connections, 38
  module, 30, 35
  SAN, 13, 36, 39
  SAN storage, 71
  storage, 28
  switch, 37
  VC, 8, 35
FC SAN storage
  connections, 36
  templates, 71
federated CMS, 14
  access requirements, 48
  DNS configuration, 40
  HP BladeSystem Matrix infrastructure, 26
  planning, 19
  storage pool, 36
  supported management software, 19
federated environment, 24
Flex-10
  capability, 35
  enclosure, 8, 29
  enclosure stacking, 30
  Expansion Kit, 11
  FlexFabric, 90
  module, 31, 42
  Starter Kit, 10
  VC ethernet services, 43
FlexFabric
  configuration, 83, 85, 87, 90
  configuration guidelines, 80
  enclosure, 7, 29
  enclosure stacking, 30
  Flex-10, 90
  hardware components, 80
  Integrity, 80, 85, 87
  interconnects or mezzanines, 81
  module, 8, 31, 35
  module placement, 82
  Starter Kit, 10
H

HP BladeSystem c7000
    FlexFabric, 82
    port mapping, 81
HP BladeSystem Matrix
    basic infrastructure, 8
    components, 9
    customer facility planning, 29
    documents, 5
    infrastructure, 7
    pre-delivery, 5
    pre-delivery planning, 49
    pre-order, 5
    solution networking, 40
    solution storage, 35
HP BladeSystem Matrix infrastructure
    basic, 8
    federated CMS, 26
    Integrity managed nodes, 23
    overview, 5
    ProLiant managed nodes, 22
HP Insight Control see Insight Control
HP Insight Dynamics see Insight Dynamics
HP Insight Orchestration see IO
HP IR see IR
HP server automation
    additional management servers, 20
    optional management services, 76

I

Ignite-UX server, 19
infrastructure
    access credentials, 48
    dynamic provisioning, 54–67
    HP BladeSystem Matrix, 7
    management, 8
    test and development, 54
        IO, 60
infrastructure, converged, 7
Insight Control
    Microsoft System Center, 9, 20, 28, 76, 77
    server deployment, 14, 18, 21, 27, 28, 40
    VMware vCenter Server, 9, 20, 28, 76, 77
Insight Dynamics, 9, 14, 15, 16, 18, 20, 60, 76
    orchestration service requests, 39
Insight Orchestration see IO
Insight Recovery see IR
Insight Software
    CMS planning, 16
integration, optional management services, 76
Integrity
    FlexFabric, 80, 85, 87
Integrity managed nodes, 23
Integrity server environment, standard, 22
intended audience, 5
IO, 44, 45
    templates, 16, 38, 44, 61
    test and development infrastructure, 60
        access credentials, 67
        bill of materials, 61, 62
        facility planning, 63
        managed network connections, 66
        network configuration, 65
        racks and enclosures, 63
        services network connections, 65
        storage volumes, 65
        VC domain configuration, 64
        VC Ethernet uplinks, 65
IR, optional management services, 76
iSCSI
    SAN storage, 71
    SAN storage connections templates, 71
isolating VM guest/host, 39
L
limited environment, 21
link configurations, stacking, 31
logical servers
    test and development infrastructure, 54
        access credentials, 60
        bill of materials, 56
        define services, 56
        facility planning, 57
        management network connections, 59
        network configuration, 58
        racks and enclosures, 56
        services network connections, 59
        storage volumes, 58
        VC domain configuration, 57
        VC Ethernet uplinks, 58
M

MAC address, assign, 33
manageability connections, 45
managed network connections
    test and development infrastructure, 66
management
    additional servers, 20
    determine servers, 27
    infrastructure, 8
    network connections, 74
    server scenarios, 20
    services, 16, 44, 76
management network connections
    templates, 74
    test and development infrastructure using logical servers, 59
management servers
    additional, 20
    templates, 68
Microsoft System Center
    CMS, 78
    components, 77
    Insight Control, 9, 28, 76, 77
    optional management services, 77, 78
    other managed nodes, 78
module
    FC, 30, 35
    Flex-10, 31, 42
    FlexFabric, 8, 31, 35

N

network
    management services, 44
    planning, 40
network configuration
    templates, 72
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
network connections
    CMS, 18
    management, 74
    services, 74
network details, customer, 40
next steps, 50
NPIV, 12, 13, 35, 37

O

optional management services, 76
    HP server automation, 76
    Insight Control for Microsoft System Center, 77
    Insight Control for VMware vCenter Server, 76, 77
    integration, 76
    IR, 76
    Microsoft System Center, 77, 78
orchestration service requests, 39
overview, HP BladeSystem Matrix infrastructure, 5

P

planning
    checklist, 49
    customer facility, 29
    federated CMS, 19
    Insight Software CMS, 16
    network, 40
    racks and enclosures, 29
    server, 14
    services, 14
    summary, 6
planning step
    1a-define application services, 16
    1b-determine management servers, 27
    2a-rack & enclosure parameters, 29
    2b-determine facility requirements, 29
    2c-VC domain configuration, 34
    3a-collect customer SAN storage details, 35
    3b-FC SAN storage connections, 36
    3c-iSCSI SAN storage connections, 36
    3d-define storage volumes, 38
    4a-collect customer provided network details, 40
    4b-VC ethernet uplinks, 43
    4c-define services VC ethernet connections, 45
    4d-define manageability connections, 46
    4e-determine infrastructure credentials, 48
port mapping, 81
pre-delivery, 5
pre-order, 5
ProLiant managed nodes
    HP BladeSystem Matrix infrastructure, 22
ProLiant server environment
    standard, 21

R

rack
    parameters, 29
    planning, 29
racks and enclosures
    templates, 68
    test and development infrastructure, 63
    test and development infrastructure using logical servers, 56
requirements
    data center, 29
    facility, 29
    storage volumes, 37

S

sample configuration templates, 68
SAN
    connections, 18
    FC, 13, 39
SAN storage
    customer details, 35
    FC, 71
    FC connections, 36
    iSCSI, 71
    iSCSI connections, 36
    templates, 70
server
    deployed, 14
    Ignite-UX, 19
    management, 27
    management scenarios, 20
    planning, 14
server environment
    federated, 24
    limited, 21
services
    application, 14
    deployed, 14
    Flex-10 ethernet, 43
    management, 16
    network configuration, 44
    network connections, 74
    planning, 14
services network connections
    templates, 74
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 59
solution networking, 40
stacking
    enclosure, 30
    link configurations, 31
standard environment
    Integrity, 22
    ProLiant, 21
Starter Kit
    Flex-10, 10
    FlexFabric, 10
storage
    connections, 35
    FC, 28
    solution, 35
    volumes, 37
storage pool
    federated CMS, 36
storage volumes, 37
    define, 38
    requirements, 37
    templates, 71
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
switch
    FC, 37
T
templates
    access credentials, 75
    application services, 68
    customer-provided SAN storage, 70
    facility planning, 69
    FC SAN storage connections, 71
    IO, 16, 38, 44, 61
    iSCSI SAN storage connections, 71
    management network connections, 74
    management servers, 68
    network configuration, 72
    racks and enclosures, 68
    sample configuration, 68–75
    SAN storage, 70
    services network connections, 74
    storage volumes, 71
    VC domain configuration, 69
    VC Ethernet uplinks, 73
test and development infrastructure
    IO, 60
        access credentials, 67
        bill of materials, 61
        define services, 62
        management network connections, 66
        network configuration, 65
        racks and enclosures, 63
        services network connections, 65
        storage volumes, 65
        VC domain configuration, 64
        VC Ethernet uplinks, 65
    logical servers, 54
        access credentials, 60
        bill of materials, 56
        define services, 56
        facility planning, 57
        management network connections, 59
        network configuration, 58
        racks and enclosures, 56
        services network connections, 59
        storage volumes, 58
        VC domain configuration, 57
        VC Ethernet uplinks, 58
test and development infrastructure with IO
    access credentials, 67
    facility planning, 63
V
VC
    assign MAC address, 33
    assign WWN address, 33
    define services ethernet connections, 45
    domain, 30
    ethernet module, 8
    ethernet uplink connections, 42
    ethernet uplinks, 43
    FC, 8, 35
    Flex-10 ethernet services, 43
    technology, 35
VC domain configuration
    templates, 69
    test and development infrastructure, 64
    test and development infrastructure using logical servers, 57
VC Ethernet uplinks
    templates, 73
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
VC FlexFabric
    configuration, 80
    hardware components, 80
virtual serial numbers, 33
VM guest storage
    isolating from VM host, 39
VM host, isolating from VM guest, 39
VMware vCenter Server, 76
    Insight Control, 9, 20, 28
W
WWN address, 33