
IBM Virtualization Reference Architecture for Microsoft Hyper-V on System x

Configuration and Implementation Guide using IBM System x Servers and DS3500

Scott Smith


Contents
IBM Virtualization Reference Architecture for Microsoft Hyper-V on System x - User's Guide
Overview
Intended Audience
Microsoft Hyper-V and Failover Clustering
Components
    IBM System x3650 M4
    IBM System Storage DS3500
    Juniper Ethernet Switch EX2200
IBM Virtualization Reference Configuration Best Practice and Implementation Guidelines
Racking and Power Distribution
Networking and VLANs
    VLAN Description
    iSCSI Storage Network (VLAN 10 & 20)
        Storage Controller Access
        Physical Host Storage Access
    Cluster Heartbeat & CSV Networks (VLAN 30)
    Production Live Migration Network (VLAN 40)
    Production Communication Network (VLAN 50)
    IBM DS3500 Network Ports
    IBM x3650 M4 Network Ports
    Juniper EX2200 Ethernet Configuration
Active Directory
Storage
    Overview
    Cabling
    Management
    Microsoft Hyper-V Cluster Storage Considerations
    Configuration
    Multipath I/O (MPIO) Fault-Tolerance Driver
IBM System x3650 M4 Setup
    Pre-OS Installation
    OS Installation and Configuration
    Network Configuration
    Storage Connections
    Cluster Creation
    Virtual Machine Setup and Configuration
Summary
Related Links
Bill of Materials
Networking Worksheets
The team who wrote this paper
Trademarks and special notices


Overview
The IBM Virtualization Reference Architecture for Microsoft Hyper-V on System x provides businesses with an affordable, interoperable, and reliable industry-leading virtualization solution. Validated by the Microsoft Private Cloud Fast Track program, the reference architecture combines Microsoft software with consolidated guidance and validated configurations for compute, network, and storage. The Microsoft program requires a minimum level of redundancy and fault tolerance across the servers, storage, and networking of the Hyper-V clusters to help ensure a baseline level of fault tolerance while managing private cloud pooled resources.

This Virtualization Reference Configuration and Implementation guide provides ordering, setup, and configuration details for the IBM two-node highly available virtualization environment that has been validated as a Microsoft Hyper-V Fast Track Small configuration. The design consists of two IBM System x3650 M4 servers attached to IBM DS3500 iSCSI storage and networked together with Juniper Networks EX2200 top-of-rack switches. This fault-tolerant hardware configuration is clustered using Microsoft's Windows Server 2012 operating system. A short summary of the IBM Virtualization Reference Architecture software and hardware components is listed below, followed by best practice implementation guidelines.

The IBM Virtualization Reference Configuration is constructed of the following enterprise-class components:

- Two IBM System x3650 M4 servers in a Windows Failover Cluster running Hyper-V
- One DS3500 shared highly available (HA) storage system with dual iSCSI controllers
- Two Juniper Networks EX2200 switches providing redundant networking

Together, these software and hardware components form a high-performance, cost-effective solution that supports Microsoft Hyper-V cloud environments for the most popular business-critical applications and many custom third-party solutions.
Equally important, these components meet the criteria set by Microsoft for the Private Cloud Fast Track program which promotes robust cloud environments to help satisfy even the most demanding virtualization requirements.

A diagram of the overall architecture is illustrated in Figure 1 below:

Figure 1) IBM Virtualization Reference Configuration for Microsoft Private Cloud Fast Track Architecture


Intended Audience
This Virtualization Reference Architecture configuration and implementation guide targets smaller organizations implementing Hyper-V and IT engineers familiar with the hardware and software that make up the IBM Virtualization Reference Architecture. Additionally, System x sales teams and their customers evaluating or pursuing Hyper-V virtualization solutions will benefit from this previously validated configuration. Broad experience with the various Virtualization Reference Configuration components is recommended.

Microsoft Hyper-V and Failover Clustering


Microsoft Hyper-V technology continues to gain competitive traction as a key cloud component in many customer virtualization environments. Hyper-V is included as a role in the x64 versions of the Windows Server 2012 Standard and Datacenter editions. Under Windows Server 2012, Hyper-V virtual machines support up to 64 virtual processors and 1 TB of memory. Individual virtual machines (VMs) have their own operating system instance and are completely isolated from the host operating system as well as from other VMs. VM isolation helps promote higher business-critical application availability, while the Microsoft failover clustering feature, found in the Windows Server 2012 Standard and Datacenter editions, can dramatically improve production uptime. Microsoft failover clustering helps eliminate single points of failure so users have near-continuous access to important server-based business-productivity resources. In the event of physical or logical outages linked to unplanned failures or scheduled maintenance, VMs can automatically migrate to other cluster member nodes; as a result, clients experience little to no downtime. This seamless operation is attractive for organizations trying to win new business and maintain healthy service level agreements. Additionally, Windows Server 2012 now supports in-box NIC teaming to improve network fault tolerance, and failover clustering further improves physical resource utilization by load-balancing VMs across cluster members in active/active configurations.

Components
This highly available IBM virtualization architecture comprises IBM System x servers, storage, and networking running Microsoft's Windows Server 2012 operating system. Each component provides a key element of the overall solution.

IBM System x3650 M4


At the core of the IBM Virtualization Reference Configuration solution, the 2U IBM System x3650 M4 servers deliver the performance and reliability required for virtualizing business-critical applications in Hyper-V cloud environments. To provide the virtualization performance expected of any Microsoft production environment, IBM System x3650 M4 servers can be equipped with up to two 8-core Intel Xeon E5-2600 series processors and up to 768 GB of memory. The IBM System x3650 M4 is highly scalable, with a storage capacity of up to sixteen 2.5-inch hot-swappable SAS/SATA hard disks or SSDs. It also contains hot-swappable power supplies and fans, as well as optional remote management via keyboard, video, and mouse, which enable continuous management capabilities. All of these key features, including many not listed, help solidify the dependability IBM customers have grown accustomed to with System x servers.

By virtualizing with Microsoft Hyper-V technology on IBM System x3650 M4 servers (Figure 2), businesses reduce physical server sprawl, power consumption, and total cost of ownership (TCO). Virtualizing the server environment also lowers server administrative overhead, giving IT administrators the capability to manage more systems than in exclusively physical environments. Highly available critical applications residing on clustered host servers can be managed with greater flexibility and minimal downtime thanks to Microsoft's Hyper-V live and quick migration capabilities.

Figure 2) IBM System x3650 M4

IBM System Storage DS3500


IBM System Storage DS3500 combines best-of-breed storage development with leading 6 Gbps host interface and drive technology. With its simple, efficient, and flexible approach to storage, the DS3500 is a cost-effective, fully integrated complement to IBM System x servers and IBM BladeCenter systems. By offering substantial features at a price that fits most budgets, the DS3500 delivers superior price/performance, functionality, scalability, and ease of use for the entry-level storage user. The DS3500 offers:

- Scalability to midrange performance and features, starting at entry-level prices
- Efficiency to help reduce annual energy expenditures and environmental footprints
- Simplicity that does not sacrifice control, combining robustness and ease of use

IBM System Storage DS3500 (Figure 3) is well suited to Microsoft virtualized cloud environments. The IBM DS3500 complements the IBM System x3650 M4 servers and the Juniper networking infrastructure in an end-to-end Microsoft Hyper-V private cloud solution by delivering proven disk storage in flexible, scalable configurations. By connecting optional EXP3500 enclosures, a DS3500 can scale up to 192 SAS, SATA, and SSD disks with up to 192 TB of raw capacity. The DS3500 has 1 GB of cache per controller, upgradable to 2 GB. It comes with four storage partitions activated, with the option of purchasing additional capability up to 128 partitions. Optional premium features such as FlashCopy, VolumeCopy, and Enhanced Remote Mirroring are also available on the DS3500.

Figure 3) IBM DS3524 Storage


Juniper Networks EX2200 Top-of-Rack Ethernet Switch


The Juniper Networks EX2200 top-of-rack Ethernet switch offers a compact, high-performance solution for supporting today's network access deployments. Each EX2200 switch includes an application-specific integrated circuit (ASIC)-based Packet Forwarding Engine (PFE) with an integrated CPU to consistently deliver wire-rate forwarding, even with all control plane features enabled. Based on existing, field-proven Juniper Networks technology, the PFE brings the same level of carrier-class performance and reliability to the EX2200 switches that Juniper Networks routers bring to the world's largest service providers. This IBM virtualization architecture uses the Juniper EX2200 (Figure 4) switch to provide a secure, redundant back-end management network to the attached hosts, virtual machines, and iSCSI storage.

Figure 4) Juniper EX2200 Switch

IBM Virtualization Reference Configuration Best Practice and Implementation Guidelines


A successful Microsoft Hyper-V deployment and operation can be significantly attributed to a set of test-proven planning and deployment techniques. Proper planning includes sizing the server resources (CPU and memory), storage (space and IOPS), and networking bandwidth needed to support the infrastructure. This information can then be implemented using industry best practices to achieve the optimal performance and growth headroom necessary for the solution. The Microsoft Private Cloud Fast Track program, combined with IBM's enterprise-class hardware, prepares IT administrators to successfully meet their virtualization performance and growth objectives by deploying private clouds efficiently and reliably. A collection of IBM and Microsoft collaboration-based Virtualization Reference Configuration best practices and implementation guidelines, which aid in planning and configuring the solution, is shared in the remaining sections below. Categorically, they are broken down into the following topics:

- Racking location and power distribution
- Networking and VLANs
- Storage setup and configuration
- Setup of the x3650 M4
- Windows Server Failover Cluster setup and Hyper-V

Racking and Power Distribution


The installation of power distribution units (PDUs) and their cabling should be performed before any system is racked. When cabling the PDUs, keep the following in mind:

- Ensure there are sufficient, separate electrical circuits and receptacles to support the required PDUs.
- To minimize the chance of a single electrical circuit failure taking down a device, ensure there are sufficient PDUs to feed redundant power supplies from separate electrical circuits.



- For devices that have redundant power supplies, plan for individual electrical cords from separate PDUs.
- Each Ethernet switch should be powered by a separate PDU.
- Locate switches to promote optimal cabling.
- Maintain appropriate shielding and surge suppression practices, and employ appropriate battery backup techniques.

Networking and VLANs


Combinations of physical and virtual isolated networks are configured at the host, switch, and storage layers to satisfy isolation requirements. At the physical host layer, there are eight 1 GbE Ethernet ports on each Hyper-V server (four on board, and four on one quad-port Intel I340 network interface card). At the physical switch layer, there are two redundant Juniper EX2200 24-port 1 GbE Ethernet switches for storage and host connectivity. The servers and storage maintain connectivity via multiple iSCSI connections using Multipath I/O (MPIO). Windows Server 2012 NIC teaming is used to provide fault tolerance for the host management and VM communication networks.

Ethernet traffic is isolated by type through the use of VLANs on the switch. At the physical switch layer, VLANs provide logical isolation between the various storage and data traffic sharing the switch. A key element is properly configuring the switches to maximize available bandwidth and reduce congestion. Based on individual environment preferences, there is flexibility regarding how many VLANs are created and what type of role-based traffic they handle; however, once a final selection is made, ensure the switch configurations are saved or backed up. Switch ports used for iSCSI, Cluster Private, and Live Migration traffic should be configured in access mode. This limits each port to a single VLAN, and the Ethernet frame is tagged with the VLAN at the switch (no settings are needed at the OS level).

VLAN Description
The five VLANs are described in Table 1 below. Additional information, such as an example of port layouts and configuration, is shown in Figure 5. Worksheets to assist in planning the network layout can be found at the end of this document.

Network   Name                            Description
VLAN 10   iSCSI Storage Network           Used for iSCSI storage traffic
VLAN 20   iSCSI Storage Network           Used for iSCSI storage traffic
VLAN 30   Cluster Private Network         Used for private cluster communication and Cluster Shared Volume traffic
VLAN 40   Cluster Live Migration Network  Used for cluster VM Live Migration traffic
VLAN 50   Cluster Public Network          Used for host management and VM communication

Table 1) VLAN definitions


iSCSI Storage Network (VLAN 10 & 20)


At the physical storage layer, the IBM DS3500 uses iSCSI ports for connectivity; each controller has four 1 GbE Ethernet ports for iSCSI traffic. The DS3500 Device Specific Module (DSM) manages the multiple I/O paths between the host servers and storage, and optimizes the storage paths for maximum performance. VLANs are used to isolate storage traffic from other data traffic on the switches, and Ethernet jumbo frames are set on the hosts and storage to maximize storage throughput. VLAN 10 and VLAN 20 are reserved for server access to the iSCSI storage, and all iSCSI traffic should be isolated on these VLANs. One switch will host VLAN 10, and the second switch will host VLAN 20.

Storage Controller iSCSI Access


To help balance iSCSI workloads, each DS3500 controller will maintain two iSCSI connections to the networks:

- One connection from each controller to each switch (Figure 5)
- Jumbo frames on both EX2200 switches for all iSCSI ports (9216 bytes)
- Jumbo frames in the DS3500 iSCSI settings (9000 bytes)
- Ports assigned their VLAN in access mode

Physical Host iSCSI Storage Access


Each physical host will have two connections to the iSCSI networks (one to each VLAN):

- One connection should be made from an onboard NIC and the second from a port on the NIC card (Figure 6).
- Since the switch ports are configured for a single VLAN in access mode, no VLAN ID needs to be specified on the NIC itself; traffic isolation occurs at the switch.
- Each NIC port connected to these VLANs should be set for jumbo frames in the advanced properties of the NIC under Windows Device Manager.
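The per-NIC jumbo frame setting can also be applied from an elevated PowerShell prompt instead of Device Manager. The following is a minimal sketch; the adapter names ("iSCSI-A", "iSCSI-B") are hypothetical, and the exact keyword and value strings ("Jumbo Packet", "9014 Bytes") vary by NIC driver, so check your adapter's advanced properties first.

```powershell
# Sketch: enable jumbo frames on the two iSCSI-facing adapters.
# Adapter names and the "Jumbo Packet" keyword are assumptions; run
# Get-NetAdapterAdvancedProperty to find your driver's exact keyword.
Get-NetAdapterAdvancedProperty -Name "iSCSI-A" -DisplayName "Jumbo Packet"

Set-NetAdapterAdvancedProperty -Name "iSCSI-A" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
Set-NetAdapterAdvancedProperty -Name "iSCSI-B" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
```

Setting the value through PowerShell makes the step repeatable across both cluster nodes, which helps keep the iSCSI paths consistently configured.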

Cluster Private & CSV Networks (VLAN 30)


This network is reserved for cluster private (heartbeat) communication between clustered servers. Switch ports should be configured to appropriately limit the scope of each of these VLANs. There should be no IP routing or default gateways for cluster private networks. These network ports should be assigned the VLAN in Access mode.

Production Live Migration Network (VLAN 40)


A separate VLAN should be created to support Live Migration for the cluster. There should be no routing on the Live Migration VLAN. These network ports should be assigned the VLAN in Access mode.

Production Communication Network (VLAN 50)


This network supports communication for the hosts as well as the virtual machines. Two teams, created using the Windows Server 2012 in-box NIC teaming feature, provide fault tolerance and load balancing for host server and virtual machine communication. These switch ports should be configured with their assigned VLAN ID in trunk mode, and VLAN IDs will have to be assigned in Windows Server 2012 to enable Ethernet traffic across these ports. For further IBM Virtualization Reference Architecture network planning and configuration assistance, please review the appendices located at the end of the document.
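The in-box teaming described here is driven by the NetLbfo cmdlets introduced in Windows Server 2012. The sketch below shows one possible team; the adapter names ("NIC1", "NIC5") and team name are hypothetical examples, not names prescribed by the architecture.

```powershell
# Sketch: create a management/VM team from one onboard port and one
# Intel I340 port (names are placeholders for your environment).
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC1","NIC5" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Because the switch ports are in trunk mode, the VLAN ID is set on the
# team interface in Windows rather than on the switch port.
Set-NetLbfoTeamNic -Name "MgmtTeam" -VlanID 50
```

Pairing one onboard port with one I340 port in each team follows the good-practice guidance given later for the x3650 M4 network ports.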

IBM DS3500 Network Ports


At the physical storage layer, the IBM DS3500 uses iSCSI ports for storage connectivity; each controller has four 1 GbE Ethernet ports for iSCSI traffic. The DS3500 Device Specific Module (DSM) manages the multiple I/O paths between the host servers and storage, and optimizes the storage paths for maximum performance. VLANs are used to isolate storage traffic from other data traffic on the switches, and Ethernet jumbo frames are set on the hosts and storage to maximize storage throughput. Two Ethernet ports on each controller are reserved for management of the DS3500. At a minimum, one management connection from each controller should be connected to the network; connecting each controller to both switches provides additional redundancy.

Figure 5) IBM DS3500 backside

IBM x3650 M4 Network Ports


The host servers have a total of eight 1 GbE network ports, which will be used for iSCSI storage connectivity, public and private cluster communication, and VM communication. Windows Server 2012 NIC teaming is used on certain networks to provide fault tolerance and spread the workload across the network interfaces. Following good practice, each NIC team has members from both the onboard interfaces and the installed PCI NIC card.

Figure 6) IBM System x3650 M4 Switch Port Layout

Juniper EX2200 Ethernet Configuration


The IBM Virtualization Reference Architecture for Microsoft Hyper-V on System x uses two Juniper EX2200 switches containing twenty-four 1 GbE ports each. The EX2200 switches provide primary storage access and data communication services, as well as the in-band and out-of-band server and device management that is needed. Each switch also has four 1 GbE uplink ports with optional SFP devices for uplinks to the corporate network. The redundant EX2200 switch ports should be configured to allow the following VLAN traffic:

- Ports 0-3: iSCSI traffic
    - VLAN 10 (top switch)
    - VLAN 20 (bottom switch)
    - Jumbo frames (9216 bytes)
- Ports 6-7: Cluster private network(s)
    - VLAN 30 (top switch - cluster communication/CSV)
    - VLAN 40 (bottom switch - Live Migration)
- Ports 11-19: Cluster public (production network)
    - VLAN 50
- Ports 22-23: Switch link aggregation
    - LACP aggregation
    - VLAN 50
- Uplink ports 0-1: Uplinks
    - LACP aggregation
    - VLAN 50

Figure 7) Switch Port Layout for EX2200

Management of the Juniper EX2200 switches can be performed either via a command-line interface (SSH) or a web-based user interface. If the default management IP address (192.168.1.1) is not available, a serial connection (serial to RJ-45) can be used to set these values. The default user name for the Juniper switches is root, with no password. It is recommended that these be changed to something suitably secure for your corporate environment during the configuration process; in addition, changes cannot be saved until a password is set.

Creating a LAG interface cannot be performed solely with the web interface without first deleting a default port property. This can only be done using the command line or the J-Web interface to the CLI toolset. The steps below are performed from within SSH, or by traversing the J-Web configuration interface to the CLI Tools menu and editing the port properties under interfaces:

- cli (to enter the command-line interface)
- edit (to enter edit mode)
- edit interfaces ge-0/0/x (where x is the port number to modify)
- show (to confirm that the default unit 0 family ethernet-switching property is still on the port)
- delete unit 0
- commit (to save and apply the change)

The remaining steps can be performed using the web interface. Spanning Tree should be enabled on both switches according to your organization's requirements. An in-band management IP address can be set during setup; this should be created on the management network, if implemented. Otherwise, an out-of-band address can be assigned to the management port on the back of the switch.



Aggregated ports in a LAG that need to pass more than one VLAN ID should be created in trunk mode. VLAN IDs will have to be assigned at the host for Ethernet frames to be properly identified.

Active Directory
The IBM Virtualization Reference Architecture must be part of an Active Directory domain; this is required to form Microsoft Windows Server 2012 clusters. An Active Directory (AD) server is presumed to pre-exist, and connectivity to it can be provided either through the switch ports on the EX2200 or through your uplink ports to your organization's network.

Storage
Overview
The DS3500 Introduction and Implementation Guide can be found on the IBM Redbook site at the following link: http://www.redbooks.ibm.com/redbooks.nsf/RedbookAbstracts/sg247914.html?Open

Cabling
There are four 1 GbE iSCSI connections per controller. Two cables from each controller should be plugged into the EX2200 switches as described above, with one connection from each controller to each switch. Two 1 GbE connections using MPIO provide sufficient bandwidth for most configurations of this size; however, if the storage network load requires additional bandwidth, the remaining two iSCSI ports on the DS3500 may be connected as well. These switch ports should already be configured for access mode and their respective VLAN IDs.
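Once the cabling is in place, the corresponding host-side iSCSI sessions can later be established with the in-box initiator cmdlets. This is a sketch under assumptions: the portal addresses shown are hypothetical placeholders for your VLAN 10 and VLAN 20 iSCSI subnets.

```powershell
# Sketch: register one DS3500 iSCSI portal per storage VLAN.
# The portal addresses are placeholders for your iSCSI subnets.
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.101
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.101

# Connect every discovered target persistently, allowing multiple
# paths so the DS3500 DSM can manage and balance them.
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
```

Connecting with -IsMultipathEnabled is what allows the second path (on the other VLAN) to be added without the initiator rejecting it as a duplicate session.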

There are two management Ethernet ports on the back of the DS3500. Distribute the management connections across both EX2200 switches to help ensure connectivity in the event one switch is temporarily down. These switch ports should also be configured with VLAN 50 in access mode to communicate properly.

Figure 8 below shows the storage connections for both iSCSI and management to the EX2200 switches.

Figure 8) Storage Ethernet Connections


Management
Management of the DS3500 is performed using the IBM Total Storage Manager tools, available for download at the IBM support website (support account registration required); the MPIO driver is also included in this package. To begin management of the IBM DS3500, establish an out-of-band connection with Total Storage Manager using the default TCP/IP addresses (Figure 9):

- Management Interface 1:
    - Controller A - 192.168.128.101
    - Controller B - 192.168.128.102
- Management Interface 2:
    - Controller A - 192.168.129.101
    - Controller B - 192.168.129.102

Figure 9) DS3500 Storage Management Connections

Navigate to the Setup page to change the management and iSCSI port TCP/IP addresses to the addresses to be used in production (Figure 10).


Figure 10) Configure Management and iSCSI ports

Set the iSCSI port TCP/IP addresses, and set jumbo frames (9000 bytes) under Advanced Port Settings (Figure 11).

Figure 11) iSCSI port settings


Microsoft Hyper-V Cluster Storage Considerations


Microsoft Windows Failover Clustering supports Cluster Shared Volumes (CSVs). Cluster Shared Volumes provide the primary storage for the virtual machines' configuration files and virtual hard disks. All CSVs are concurrently visible to all cluster nodes and allow simultaneous access from each node. Two disks will be created on the DS3500 and used to create CSV1 and CSV2 as primary storage for this cluster. Additional disks will be used to create the cluster quorum and CSV3, which can be used for sequential workloads such as transaction logging. Figure 12 shows a recommended disk configuration for the DS3500.

Figure 12) DS3524 Storage Configuration

This is a suggested configuration; the storage I/O profiling performed previously will be very useful in balancing workloads across these volumes. It is recommended to balance the workload on the IBM DS3500 volumes across both controllers by assigning each volume a preferred controller. Because CSV1 and CSV2 will be the primary VM storage, one should be assigned to each controller.
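Once the DS3500 disks are mapped to the hosts and brought into the cluster, the Windows-side promotion to Cluster Shared Volumes is a short PowerShell step. A sketch, assuming the clustered disks received the default Windows resource names "Cluster Disk 1" and "Cluster Disk 2" (these names may differ in practice):

```powershell
# Sketch: promote the two primary clustered disks to Cluster Shared
# Volumes (CSV1/CSV2). Resource names are the Windows defaults and
# should be checked with Get-ClusterResource first.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# List the CSVs; each node will see them under C:\ClusterStorage.
Get-ClusterSharedVolume
```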

Configuration
The following step-by-step process is shared with sample screenshots to illustrate how quickly and easily DS3500 storage can be configured. Create each of the arrays needed for the production configuration: assign an array name, and select the number of disks and the RAID type to be used (Figure 13).

Figure 13) DS3500 Array Creation


Disks can now be created from each of the arrays (Figure 14).

Figure 14) DS3500 Drive Creation

The primary Cluster Shared Volumes will host mostly VHD and other large files, so the segment size should be increased to 512 KB for the disks that will become CSV1 and CSV2 (Figure 15).

Figure 15) DS3500 Disk Segment size specification


A Host Group is a logical group that will contain the host servers that should all see the same storage volumes. Create a Host Group that will contain each of the host servers (Figure 16).

Figure 16) DS3500 Host Group creation

Multipath I/O (MPIO) Fault-Tolerance Driver


Multipath I/O is used to provide balanced, fault-tolerant paths to the DS3500. This requires an additional driver to be installed on the host servers before attaching the storage.


IBM System x3650 M4 Setup


This Windows Server cluster consists of two dual-socket IBM x3650 M4 servers, each with a minimum of 64GB of RAM and eight 1GbE NIC ports. Setup involves installing Windows Server 2012 Datacenter edition on each server, then confirming network and storage connectivity. After that, Hyper-V and Microsoft Failover Clustering can be enabled and configured, and highly available VMs can be created to perform the various production tasks your organization requires.

Pre-OS Installation
 Confirm the additional Intel I340 quad port NIC is installed in each server
 Flash the IBM x3650 M4 firmware to the latest level using a Bootable Media Creator image
  o Bootable Media Creator will create a bootable image of the latest IBM x3650 M4 updates (downloaded previously)
  o If no internal CD/DVD drive is available, an external device will be required
  o http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-BOMC
 Confirm each NIC port is connected to its assigned EX2200 switch port
 Confirm the switches have been configured
  o Jumbo frames are set for the iSCSI ports
  o Inter-switch links have been created and show as active under the JunOS management utility
  o Uplinks have been created and show as active under the JunOS management utility
  o VLANs are configured as defined previously for their respective ports in the JunOS management utility
 DS3500 iSCSI storage should be configured as defined above and be ready for IQN assignments to map the volumes to the servers
 Configure the two local disks as a RAID 1 array. Complete the actions under the Configuring RAID arrays section of the IBM System x3650 M4 Installation and User's Guide: http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5089516
 Set the IMM TCP/IP address in uEFI

OS Installation and Configuration


 Install Windows Server 2012 Datacenter Edition
  o Windows Server 2012 Datacenter allows unlimited Windows virtual machine rights on the host servers and is the preferred version for building private cloud configurations
  o Windows Server 2012 Standard now supports clustering as well, but only provides licensing rights for up to two Windows virtual machines (additional licenses would be needed for additional VMs). Windows Server 2012 Standard Edition is intended for physical servers that run very few or no virtual machines
 Set your server name, and join the domain
 Install the Hyper-V role and Failover Clustering feature
 Run Windows Update to ensure any new patches are installed
 Install the multipath driver. Multipath I/O is used to provide balanced, fault-tolerant paths to the DS3500, and requires an additional driver on the host servers before attaching the storage. This driver is packaged with the IBM Total Storage Manager software.
  o Running the Total Storage Manager setup wizard allows the MPIO driver to be selectively installed. The Microsoft MPIO prerequisite driver will also be installed if not already on the system.
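The role and feature installation above can also be scripted. The following PowerShell sketch uses the standard Windows Server 2012 cmdlets and would be run on each host. Note that Install-WindowsFeature Multipath-IO only enables the Microsoft MPIO framework; the IBM DS3500 device-specific module still comes from the Total Storage Manager installer as described above.

```powershell
# Add the Hyper-V role, Failover Clustering feature, and the
# Microsoft MPIO framework on the local host (repeat per node).
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
Install-WindowsFeature -Name Multipath-IO

# Hyper-V requires a reboot to finish installing.
Restart-Computer
```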


Network Configuration
 For the iSCSI network interfaces, set the MTU size to 9014 to support jumbo frames. The larger packet size will assist storage performance.
  o This is performed under the device properties of each NIC (Figure 17)
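The same setting can be applied from PowerShell. In this sketch the adapter name ("iSCSI1") and the exact display value string are assumptions; both vary by NIC driver, so check the values reported by Get-NetAdapterAdvancedProperty before applying.

```powershell
# Show the advanced properties the driver exposes, then enable
# jumbo frames on an iSCSI adapter. Names and values are driver-specific.
Get-NetAdapterAdvancedProperty -Name "iSCSI1"

Set-NetAdapterAdvancedProperty -Name "iSCSI1" `
    -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
```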

Figure 17) Jumbo Frame Settings for Host Server

One key new feature of Windows Server 2012 is in-box NIC teaming. This in-box teaming can provide fault tolerance and link aggregation, and can be tailored to host or virtual machine (VM) connectivity. Two separate Windows Server 2012 teams will be created in this configuration: one team to support host server management traffic, and a second team to provide VM communication. Windows Server 2012 in-box NIC teaming can be found in the Server Manager console seen below.

Figure 18) NIC teaming in Server Manager

 The team created to support cluster public communication with the host servers should be created using the two dedicated NIC ports as described in the networking section. This team should be created using the default switch independent mode and address hash mode (Figure 19).
  o This will provide 2Gb/s of outbound traffic bandwidth, and 1Gb/s of inbound traffic bandwidth


Figure 19) Windows Server 2012 NIC team

The VLAN properties of the team can be set under the Team Properties, and should be set to VLAN 50 (Figure 20).

Figure 20) Windows Server 2012 NIC team VLAN configuration

 The team created to support virtual machine communication should be created using the two dedicated NIC ports as described in the networking section. This team should be created using the default switch independent mode and Hyper-V port mode.
  o Ethernet traffic for each virtual machine will be assigned to one of the team members as the default path. VM traffic is spread evenly across the team; in the event of a failure, traffic will be reassigned to an alternate team member
  o VLAN settings will be configured under Hyper-V
 When Windows Server 2012 NIC teaming is complete, there should be two teams displayed under the NIC teaming management utility (Figure 21)
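Both teams can also be created with the in-box NetLbfo cmdlets. The team and member NIC names below are placeholders; in the Windows Server 2012 cmdlets, the UI's Address Hash mode corresponds to the TransportPorts load-balancing algorithm and Hyper-V Port to HyperVPort.

```powershell
# Management team: switch independent + address hash, tagged VLAN 50.
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "Mgmt1","Mgmt2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
Set-NetLbfoTeamNic -Team "MgmtTeam" -Name "MgmtTeam" -VlanID 50

# VM team: switch independent + Hyper-V port. VLANs are applied later
# per VM under Hyper-V, so no VLAN ID is set here.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "VM1","VM2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```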


Figure 21) Windows Server NIC teaming

 When this is completed, use Hyper-V Manager to create a vSwitch using the VMTeam. Uncheck the checkbox that allows management traffic on this device.
  o VLAN IDs can be set as needed in each virtual machine's configuration settings
  o Confirm the switch name is the same on all cluster nodes to ensure Live Migration works properly
 Assign TCP/IP addresses and confirm network connectivity for all network connections on each VLAN
 The cluster public network should be at the top of the network binding order (VLAN 50)
 The iSCSI, Cluster Private, and Live Migration networks should not have any default gateway defined. In addition, the Client for Microsoft Networks and File and Printer Sharing bindings can be disabled for these interfaces
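Creating the external switch on the VM team can likewise be done from PowerShell. The switch name below is a placeholder, but it must match on every node for Live Migration to work; -AllowManagementOS $false is the equivalent of unchecking the management traffic checkbox.

```powershell
# External vSwitch bound to the VM team, with no host vNIC exposed.
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" `
    -AllowManagementOS $false
```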

Storage Connections
The IBM DS3500 provides the shared storage used to create highly available, fault-tolerant drives for use by the cluster. The following steps complete the configuration and presentation of the disks on the DS3500, and also step through the process of making the iSCSI connections from Windows Server 2012 back to these disks.

 Host mappings are used to ensure the DS3500 storage volumes are only accessible to the specific servers assigned to them. IQN names are assigned to each server, and can be seen in the Microsoft iSCSI Initiator properties found in Control Panel.
 The IQN name for each server will change after the host servers join the Windows domain.
 Record these names for each server to complete the host mapping in the DS3500 Storage Manager (Figure 22)
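Rather than opening the iSCSI Initiator control panel, the IQN can also be read with one line of PowerShell, which is convenient when recording the post-domain-join names for both hosts:

```powershell
# Print the local iSCSI qualified name (IQN). Run this after the
# server joins the domain, since the IQN changes at that point.
(Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "iSCSI" }).NodeAddress
```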

Figure 22) Server IQN name in Windows Server 2012 iSCSI Initiator Properties



From the Total Storage Manager application, add the cluster hosts to the Host Group (Figure 23).

Figure 23) Add Host to Host Group

Select iSCSI for the interface type, add the unique IQN name for the host, and assign a chosen name (Figure 24).

Figure 24) Host Definition

Choose Windows Clustered when queried for Host Type (Figure 25).

Figure 25) Host Type

The DS3500 disks should now be ready and visible to the host servers. iSCSI connections will be made from each server to the DS3500 to complete the storage connections.
 Using the Microsoft iSCSI Initiator, a connection should be made for each host-to-storage path. The Quick Connect option may be used if no advanced features are needed.
 If a CHAP secret has been defined on the target (DS3500), use the Discovery tab, enter the target IP, and then select the Advanced button at the bottom of the dialog box (Figure 26)
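When no CHAP secret is in use, the connections can be scripted instead of using Quick Connect. The portal addresses below are placeholders for the DS3500 controller iSCSI ports on VLANs 10 and 20; for tighter per-path control, Connect-IscsiTarget also accepts -InitiatorPortalAddress and -TargetPortalAddress.

```powershell
# Register one portal per iSCSI VLAN, then connect every discovered
# target persistently with multipath enabled.
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.101"
New-IscsiTargetPortal -TargetPortalAddress "192.168.20.101"

Get-IscsiTarget |
    Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
```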


Figure 26) Target discovery with advanced options

When completed there should be a minimum of four paths defined between the server and the storage (Figure 27).

Figure 27) iSCSI storage paths

 The Volumes and Devices tab should now display the targets that are available to the host server. The disks should also appear in Windows Disk Management.
  o A disk rescan may be required
 From a single server, bring each disk online and format it as a GPT disk for use by the cluster. Assigning drive letters is optional; because the disks will be used for specific clustering roles such as CSV and quorum, drive letters are not required.
  o Validate that each potential host server can see the disks and bring them online
  o Note that only one server can have the disks online at a time, until they have been added to Cluster Shared Volumes
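Bringing the disks online and formatting them can be sketched with the Storage module cmdlets. The disk numbers are assumptions; confirm them with Get-Disk first, and run this on one node only.

```powershell
# Online, initialize (GPT), partition, and format each cluster disk.
$clusterDisks = 1..4   # placeholder disk numbers for CSV1, CSV2, CSV3, quorum
foreach ($n in $clusterDisks) {
    Set-Disk -Number $n -IsOffline $false
    Set-Disk -Number $n -IsReadOnly $false
    Initialize-Disk -Number $n -PartitionStyle GPT
    New-Partition -DiskNumber $n -UseMaximumSize |
        Format-Volume -FileSystem NTFS -Confirm:$false
}
```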


Cluster Creation
Microsoft Windows Failover Clustering will be used to join the host servers together in a highly available configuration that allows both servers to run virtual machines supporting a production environment. Virtual machine workloads should be balanced across both hosts, and careful attention should be paid to ensure that the combined resources of all virtual machines do not exceed those available on N-1 cluster nodes. Staying below this threshold allows any single server to be taken out of the cluster while minimizing the impact on your production servers. A policy of monitoring resource utilization such as CPU, memory, and disk (space and I/O) will help keep the cluster running at optimal levels, and allow proper planning to add resources as needed.

 Using the Failover Cluster Manager, run the cluster validation wizard to assess the two physical host servers as potential cluster candidates and address any errors. The cluster validation wizard checks for available cluster-compatible host servers, storage, and networking (Figure 28)
 Make sure the intended cluster storage is online to only one of the cluster nodes
 Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster nodes, since it causes the validation to issue a warning during network detection due to all nodes sharing the same IP address
 Address any issues that are flagged during the validation

Figure 28) Cluster Validation Wizard

Use the Failover Cluster Manager to create a cluster with the two physical host servers. o You will need a cluster name and IP address
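Validation and cluster creation are also available as FailoverClusters cmdlets; the host names, cluster name, and static address below are placeholders for your environment.

```powershell
# Run the full validation report, then create the two-node cluster.
Test-Cluster -Node "HOST1","HOST2"

New-Cluster -Name "HVCLUSTER" -Node "HOST1","HOST2" `
    -StaticAddress "192.168.50.20"
```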



Figure 29 shows the Failover Cluster Manager with the two hosts visible

Figure 29) Failover Cluster Manager

 Add the disks to Cluster Shared Volumes
 Using Hyper-V Manager, set the default paths for VM creation to use the Cluster Shared Volumes
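These last two steps can be scripted as shown below. The cluster disk resource names are placeholders (list them with Get-ClusterResource first); CSVs mount under C:\ClusterStorage on every node, which is where the Hyper-V default paths should point.

```powershell
# Promote the data disks to Cluster Shared Volumes.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Point Hyper-V's default VM and VHD paths at the first CSV.
Set-VMHost -VirtualMachinePath "C:\ClusterStorage\Volume1" `
    -VirtualHardDiskPath "C:\ClusterStorage\Volume1"
```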

Virtual Machine Setup and Configuration


Setup and configuration of new virtual machines should be performed using the Failover Cluster Manager utility. This automatically makes the virtual machine highly available and able to Live Migrate between cluster members. The operating system can be installed on a virtual machine using a variety of methods. A straightforward approach is to modify the VM DVD drive settings to specify an image file that points to the Windows installation ISO image, then start the VM to begin the install. Other deployment methods, such as a VHD file with a Sysprep'd image, a WDS server, or SCCM, are acceptable as well. With the operating system installed and the VM running, the following actions should be performed before installing application software:
 Run Windows Update
 Install the integration services in the VM. Although the latest Windows Server builds have integration services built in, it is important to ensure the Hyper-V child and parent run the same version
 Activate Windows

Hyper-V supports Dynamic Memory in virtual machines. This allows flexibility in the assignment of resources to VMs. However, some applications may experience performance-related issues if the virtual machine's memory settings are not configured correctly. It is suggested to research how Dynamic Memory may affect specific applications before implementing it.
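As a sketch of the whole flow, a highly available VM with Dynamic Memory can also be created from PowerShell; the VM name, sizes, paths, and switch name below are placeholders.

```powershell
# Create the VM on a CSV, attach it to the team-backed vSwitch,
# enable Dynamic Memory, and register it as a clustered role.
New-VM -Name "VM01" -MemoryStartupBytes 2GB `
    -Path "C:\ClusterStorage\Volume1" `
    -NewVHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" `
    -NewVHDSizeBytes 60GB -SwitchName "VMSwitch"

Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 2GB -MaximumBytes 8GB

Add-ClusterVirtualMachineRole -VMName "VM01"
```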


Summary
Upon completing the implementation steps, an operational, highly available Microsoft Hyper-V failover cluster helps form a high-performing, interoperable, and reliable IBM private cloud architecture. Enterprise-class, multi-level software and hardware fault tolerance is achieved by configuring a robust collection of industry-leading IBM System x servers, storage systems, and Juniper networking components to meet Microsoft's Private Cloud Fast Track program guidelines. The program's unique framework promotes standardized and highly manageable cloud environments which help satisfy even the most challenging business-critical virtualization demands.

IBM Reseller Option Kit


Getting your customers the operating system they want has never been easier. The IBM Reseller Option Kit (ROK) is a software delivery option that enables distributors and resellers to order Microsoft Windows Server products separately from IBM server hardware. Each IBM ROK package is tuned for IBM servers but not yet installed. This product is purchased as a server option, like RAM, hard drives, or processors. The install-ready reseller kit provides the Windows Server license separately from IBM-branded servers, with all the benefits and reliability of an IBM-provided Windows Server image. Tuned to run on IBM System x servers, it includes certified and tested drivers and an OS image, as well as IBM ServerGuide, a tool that helps simplify and automate installation and configuration. For more information follow the hyperlink below:
http://www-01.ibm.com/common/ssi/rep_ca/7/897/enus212-317/unus212-317.pdf


Related Links
IBM Support:
http://www.ibm.com/support
IBM System x3650 M4 Installation and User's Guide:
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5089516
IBM Firmware update and best practices guide:
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5082923
IBM Bootable Media Creator:
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-BOMC
IBM Director Agent Download (Platform Agent):
http://www-03.ibm.com/systems/software/director/downloads/agents.html

IBM DS3500 Storage User's Guide:
http://www-03.ibm.com/systems/networking/hardware/ethernet/b-type/b48y/index.html
Juniper EX2200 Installation and Connection Guide:
http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/task/installation/ex2200-installing-and-connecting.html
Juniper EX2200 Hardware Guide:
http://www.juniper.net/techpubs/en_US/release-independent/junos/information-products/topic-collections/hardware/ex-series/ex2200/book-hw-ex2200.pdf
Juniper EX2200 Hardware Setup Page:
http://www.juniper.net/techpubs/en_US/release-independent/junos/information-products/pathway-pages/ex-series/ex2200/ex2200.html#installation
IBM Reseller Option Kit for Windows Server 2012:
http://www-01.ibm.com/common/ssi/rep_ca/7/897/enus212-317/unus212-317.pdf


Bill of Materials
Part Number   Description                                                           Quantity
7915AC1       IBM System x3650 M4                                                   2
A1JT          x3650 M4 PCIe Riser Card 1 (1 x8 FH/FL + 2 x8 FH/HL Slots)            2
81Y6892       x3650 M4 PCIe Gen-III Riser Card 2 (1 x8 FH/FL + 2 x8 FH/HL Slots)    2
94Y6676       Intel Xeon Processor E5-2690 8C 2.9GHz 20MB Cache 1600MHz 135W        2
94Y6679       Addl Intel Xeon Processor E5-2690 8C 2.9GHz 20MB 135W W/Fan           2
90Y8928       IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD                             4
44W3251       IBM UltraSlim Enhanced SATA DVD-ROM (optional)                        2
49Y4243       Intel Ethernet Quad Port Server Adapter I340-T4 for IBM System x      2
94Y6666       IBM System x 900W High Efficiency Platinum AC Power Supply            4
49Y1379       8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM     16
80Y9626       IBM Integrated Management Module Advanced Upgrade                     2
39Y7979       4.3m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable               4
90Y6458       IBM System x Gen-III Slides Kit                                       2
00Y6325       Windows Server Datacenter 2012 2 Socket                               2

Associated feature codes: A1JU, A2VN, A2QL, A2XB, 4161, 5768, A2EB, 8923, A1ML, 6263, A228

Part Number   Feature Code   Description                                  Quantity
1746C4A       2812           IBM System Storage DS3524 Express            1
68Y8434       3630           2GB Cache Upgrade                            2
68Y8433       3612           1Gb iSCSI 4 Port Daughter Card               2
49Y2048       5220           600GB 10,000 rpm 6Gb SAS 2.5" HDD            24

Part Number   Feature Code   Description                                                   Quantity
6630HC1       A13L           Juniper 24 Port 1Gb EX2200 Ethernet Switch for IBM System x   2
40K5581       3798           3m Blue Cat5e Cable                                           24
88Y6836       A1AH           1000B-SX GbE 850nm 550m SFP                                   4


Networking Worksheets
EX2200 Switch Layout (Switch1) - connections on ports 0-23 plus four uplink ports

Device                                       VLANs
Controller-A iSCSI Port 1                    VLAN 10
Server 1 iSCSI Port 1                        VLAN 10
Controller-B iSCSI Port 1                    VLAN 10
Server 2 iSCSI Port 1                        VLAN 10
Server 1 - Cluster Private/CSV               VLAN 30
Server 2 - Cluster Private/CSV               VLAN 30
Active Directory Server                      VLAN 50
Server 1 Cluster Public/Management           VLAN 50
Server 2 Cluster Public/Management           VLAN 50
Server 1 VM Communication                    VLAN 50
Server 2 VM Communication                    VLAN 50
Storage Controller-A (Management)*           VLAN 50
Storage Controller-B (Management)*           VLAN 50
Server 1 IMM*                                VLAN 50
Switch1 Management*                          VLAN 50
Uplink1 - LACP Link to Switch2               VLAN 50
Uplink2 - LACP Link to Switch2               VLAN 50
Uplink3 - Uplink to CorpNet                  VLAN 50
Uplink4 - Uplink to CorpNet                  VLAN 50

*Hardware management connections may be implemented separately



EX2200 Switch Layout (Switch2) - connections on ports 0-23 plus four uplink ports

Device                                       VLANs
Controller-A iSCSI Port 2                    VLAN 20
Server 1 iSCSI Port 2                        VLAN 20
Controller-B iSCSI Port 2                    VLAN 20
Server 2 iSCSI Port 2                        VLAN 20
Server 1 - Cluster Private/Live Migration    VLAN 40
Server 2 - Cluster Private/Live Migration    VLAN 40
Active Directory Server                      VLAN 50
Server 1 Cluster Public/Management           VLAN 50
Server 2 Cluster Public/Management           VLAN 50
Server 1 VM Communication                    VLAN 50
Server 2 VM Communication                    VLAN 50
Storage Controller-A (Management)*           VLAN 50
Storage Controller-B (Management)*           VLAN 50
Server 2 IMM*                                VLAN 50
Switch2 Management*                          VLAN 50
Uplink1 - LACP Link to Switch1               VLAN 50
Uplink2 - LACP Link to Switch1               VLAN 50
Uplink3 - Uplink to CorpNet                  VLAN 50
Uplink4 - Uplink to CorpNet                  VLAN 50

*Hardware management connections may be implemented separately



VLAN Layout

VLAN 10 (iSCSI) - IP Addresses 192.168.10.xx
  Controller-A iSCSI Port 1
  Controller-B iSCSI Port 1
  Server 1 iSCSI Port 1
  Server 2 iSCSI Port 1

VLAN 20 (iSCSI) - IP Addresses 192.168.20.xx
  Controller-A iSCSI Port 2
  Controller-B iSCSI Port 2
  Server 1 iSCSI Port 2
  Server 2 iSCSI Port 2

VLAN 30 (Cluster Private/CSV) - IP Addresses 192.168.30.xx
  Server 1 Cluster Private/CSV
  Server 2 Cluster Private/CSV

VLAN 40 (Cluster Private/Live Migration) - IP Addresses 192.168.40.xx
  Server 1 Cluster Private/Live Migration
  Server 2 Cluster Private/Live Migration


VLAN 50 (Cluster Public/Management/VM Communication) - IP Addresses 192.168.50.xx
  Server 1 (WS12 Team - Cluster Public)
  Server 1 (WS12 Team - VM Communication) - No Host Exposure
  Server 2 (WS12 Team - Cluster Public)
  Server 2 (WS12 Team - VM Communication) - No Host Exposure
  Hyper-V Cluster IP
  Storage Controller-A (Management - Switch1)
  Storage Controller-A (Management - Switch2)
  Storage Controller-B (Management - Switch1)
  Storage Controller-B (Management - Switch2)
  EX2200 Switch Management - Switch1
  EX2200 Switch Management - Switch2
  IBM x3650 M4 IMM - Server1
  IBM x3650 M4 IMM - Server2


The team who wrote this paper


Thanks to the authors of this paper, working with the International Technical Support Organization, Raleigh Center.

Scott Smith is an IBM System x Systems Engineer working at the IBM Center for Microsoft Technology. Over the past 15 years, Scott has worked to optimize the performance of IBM x86-based servers running the Microsoft Windows Server operating system and Microsoft application software. Recently his focus has been on Microsoft Hyper-V based solutions with IBM System x servers, storage, and networking. He has extensive experience in helping IBM customers understand the issues that they are facing and developing solutions that address them.

Thanks to the following people for their contributions to this project:
David Ye, IBM Solutions Architect
Kent Swalin, IBM Infrastructure Architect
Stephen Nicklis, IBM Product Marketing Manager
George Rainovic, Juniper Networks Engineer

Trademarks and special notices


© Copyright IBM Corporation 2012. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

SET and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, or service names may be trademarks or service marks of others.



Information is provided "AS IS" without warranty of any kind. All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer. Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products. All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction. Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. 
The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here. Photographs shown are of engineering prototypes. Changes may be incorporated in production models. Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.



© Copyright 2012 by International Business Machines Corporation. IBM Systems and Technology Group, Dept. U2SA, 3039 Cornwallis Road, Research Triangle Park, NC 27709. Produced in the USA, September 2012. All rights reserved.

Warranty Information: For a copy of applicable product warranties, write to: Warranty Information, P.O. Box 12195, RTP, NC 27709, Attn: Dept. JDJA/B203. IBM makes no representation or warranty regarding third-party products or services including those designated as ServerProven or ClusterProven. Telephone support may be subject to additional charges. For onsite labor, IBM will attempt to diagnose and resolve the problem remotely before sending a technician.

IBM, the IBM logo, System x, X-Architecture and System Storage are trademarks or registered trademarks of IBM Corporation in the United States and/or other countries. For a list of additional IBM trademarks, please see http://ibm.com/legal/copytrade.shtml. Intel and Xeon are registered trademarks of Intel Corporation. Microsoft, Windows, SQL Server, and Hyper-V are registered trademarks of Microsoft Corporation in the United States and/or other countries. All other company/product names and service marks may be trademarks or registered trademarks of their respective companies.

This document could include technical inaccuracies or typographical errors. IBM may make changes, improvements or alterations to the products, programs and services described in this document, including termination of such products, programs and services, at any time and without notice. Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The information contained in this document is current as of the initial date of publication only, and IBM shall have no responsibility to update such information.
Performance data for IBM and non-IBM products and services contained in this document was derived under specific operating and environmental conditions. The actual results obtained by any party implementing such products or services will depend on a large number of factors specific to such party's operating environment and may vary significantly. IBM makes no representation that these results can be expected or obtained in any implementation of any such products or services.

THE INFORMATION IN THIS DOCUMENT IS PROVIDED "AS-IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT.

References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Any reference to an IBM program or product in this document is not intended to state or imply that only that program or product may be used. Any functionally equivalent program or product that does not infringe IBM's intellectual property rights may be used instead. It is the user's responsibility to evaluate and verify the operation of any non-IBM product, program or service.

Information in this presentation concerning non-IBM products was obtained from the suppliers of these products, published announcement material or other publicly available sources. IBM has not tested these products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights.
Inquiries regarding patent or copyright licenses should be made, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A.
