
Sydney CPOC

Nexus 7000, 5000 & UCS High Availability, Blade Switching, FCoE, Storage
CPOC Lab Customer Test Plan - Cisco Systems, November 18, 2009

Dates for Test Setup and Customer Visit (TBD by CPOC Scheduler)

Network Build (Setup) Start Date: Nov 18, 2009
End Date: Nov 28, 2009

Note: Lab operation hours are 9 a.m. to 5 p.m., Monday to Friday.

List of Attendees, Customer Arrival/Testing (required to obtain badge access)

Name            Company    Position
Paul Qiu        Cisco      CPOC Engineer
Xiao-hu Chen    Cisco      CPOC Engineer
Peter Jung      Cisco      CPOC Engineer


Table of Contents
INTRODUCTION ................................................................ 4
EQUIPMENT ................................................................... 7
CPOC TEST BED DIAGRAM ....................................................... 8
TESTING STEPS AND RESULTS ................................................... 9
   TEST 1: Establishing Baseline Network Connectivity ....................... 9
   TEST 2: UCS Manager, EMC Storage and VMware Overview .................... 11
   TEST 3: UCS Manager Service Profile Provisioning ........................ 25
   TEST 4: UCS Stateless Migration ......................................... 35
   TEST 5: Single OS SAN Boot on the UCS Blade ............................. 44
   TEST 6: High Availability / Redundancy / Resiliency Testing ............. 48
CONCLUSIONS ................................................................ 56


Introduction
This CPOC provides an overview of the Cisco Unified Computing System (UCS), a set of pre-integrated data center components (blade servers, adapters, fabric interconnects, and fabric extenders) that are managed holistically under a common embedded management system. This approach results in far fewer system components than comparable data center platforms and greatly simplified operations. This CPOC also provides an overview of the Cisco Unified Computing System integrating with EMC storage, VMware ESX 4.0 and the Nexus 1000V. The Nexus 7000 provides the data center core and aggregation layers, and the Nexus 5000 provides the access layer, delivering high availability, storage connectivity and an FCoE solution.

In this CPOC we will test UCS and UCS Manager. This hardware and software is designed specifically for data centers and focuses on performance, high availability and seamless management. The following are key aspects of UCS, the Nexus 7010 and the Nexus 5020:

Unified Fabric - dramatically reduces the number of network adapters, blade-server switches and cables by passing all network traffic to the parent fabric interconnects, where it can be processed and managed centrally. This improves performance and dramatically reduces the number of devices that must be powered, cooled, secured and managed.

Embedded Multi-Role Management - management is embedded in the fabric interconnects, which manage all attached systems as a single, redundant management domain. UCS Manager coordinates all aspects of system configuration and operation through Service Profiles, which contain abstracted server state. This stateless approach eliminates the need for multiple, separate element managers for each system component. Additionally, UCS Manager offers role-based access control that allows IT administrators to maintain responsibility and accountability for their respective domain policies within a single integrated management environment. By eliminating the traditional system of manual coordination and handoffs between disciplines, it is now possible to fully provision one or more workloads and their associated network services in minutes rather than hours or days.

Cisco Extended Memory Technology - provides more than double the amount of memory (384 GB) of traditional 2-socket servers, maximizing performance and capacity for demanding virtualization and large-dataset workloads.

VN-Link Virtualization Support / Virtualization Adapter - virtual machines receive virtual links that can be managed in the same manner as physical links. These virtual links can be centrally configured and managed without the complexity of traditional systems that interpose multiple switching layers in virtualized environments.

Energy Efficiency - the optimized design of UCS can reduce the number of components that need to be powered and cooled by more than 50 percent compared to traditional blade server environments.

FCoE - the Nexus 5000 extends Fibre Channel traffic over 10 Gigabit Ethernet networks, consolidating I/O onto one set of cables and eliminating redundant adapters, cables and ports. A single adapter and set of cables connects servers to both the Ethernet and Fibre Channel networks, allowing a single cabling infrastructure within server racks.

High Availability
1. Operating System - NX-OS is an extremely resilient operating system that offers capabilities such as hitless ISSU, stateful process restart, process memory protection and standards-based graceful restart for many Layer 3 protocols.
2. Hardware - the Nexus 7000 chassis has built-in redundancy for almost every critical system component, including supervisors, system fan trays, fabric fan trays, power supplies and fabric modules.

Business Impact:
Optimized for virtualization, UCS dramatically reduces data center total cost of ownership while simultaneously increasing IT agility and responsiveness. It delivers:

- Increased IT staff productivity and business agility through just-in-time provisioning and mobility support for both virtualized and non-virtualized environments
- Reduced total cost of ownership at the platform, site and organizational levels
- A seamlessly integrated system that is managed, serviced and tested as a whole
- Scalability through a design for up to 320 discrete servers and thousands of virtual machines, and the ability to scale I/O bandwidth to match demand
- Open industry standards supported by a partner ecosystem of industry leaders

The Cisco Nexus 7000 and 5000 Series provide modularity, high availability and integrated resilience specifically for mission-critical data centers. The Cisco Nexus 7000 and 5000 Series switches comprise a modular, data-center-class product line designed to provide highly scalable 10 Gigabit Ethernet networks with future support for 40 and 100 Gigabit Ethernet interfaces. Purpose-built to meet the requirements of the most mission-critical data centers, they deliver continuous system operation and virtualized, pervasive services. The Cisco Nexus 7000 and 5000 Series are based on a proven operating system (Cisco NX-OS), with enhanced features that deliver real-time system upgrades with exceptional manageability and serviceability. Their innovative design supports end-to-end data center connectivity, consolidating IP, storage and interprocess communication (IPC) networks onto a single Ethernet fabric.

Technical Components Covered:

UCS, NEXUS 7000, NEXUS 5000, NEXUS 1000, DATA CENTER, FCoE, VMware, ESX4.0, Storage, EMC, Windows 2008


The following six tests will be undertaken in this CPOC:

TEST 1: Establishing baseline network connectivity
(1) Configure and test the routing and IP addressing provided to all the network devices
(2) Pass traffic from the client PC to the server farm

TEST 2: UCS Manager, EMC storage and VMware overview
(1) Overview of UCS Manager
(2) Basic configuration of LUN, Storage Group and Host on the EMC storage
(3) Overview of VMware vCenter, ESX 4.0 and the Nexus 1000V

TEST 3: UCS Manager Service Profile provisioning
(1) Verify the different pools created for the service profile
(2) Verify all the settings for the Service Profile
(3) Verify the Nexus 5010 VSAN and VLAN configuration
(4) Verify the Fabric Interconnect configuration

TEST 4: UCS stateless migration
(1) Move a virtual machine through the Nexus 1000V and VMware VMotion
(2) Assign the Service Profile to a different blade server in the UCS
(3) Move the virtual machine back to its original ESX 4.0 host

TEST 5: Single OS SAN boot on the UCS blade
(1) SAN boot a Windows 2008 server from a UCS blade
(2) Assign this Service Profile to a different UCS blade

TEST 6: High availability / redundancy / resiliency testing
(1) Fibre Channel failover test for the UCS
(2) Ethernet connection failover test for the UCS
(3) Nexus 7000 and Nexus 5000 vPC failover for the UCS


Equipment
Equipment is detailed in the following sections.

Cisco Devices*
Item               Description                                              QTY
Core
  N7K-C7010        Nexus 7000 10 Slot Chassis                               2
  N7K-C7010-FAB-1  Nexus 7000 10 Slot Chassis 46Gbps/Slot Fabric Module     4
  N7K-AC-6.0KW     Nexus 7000 6.0KW AC Power Supply Module                  4
  N7K-SUP1         Nexus 7000 Supervisor                                    4
  N7K-148GT-11     Nexus 7000 48 Port 10/100/1000, RJ-45, 40G Fabric        2
  N7K-132XP-12     Nexus 7000 32 Port 10GbE, 80G Fabric                     2
  SFP-10G-SR       10GBASE-SR SFP+ Module                                   16
Aggregation
  WS-C6500         Catalyst 6500                                            1
  WS-X6704-10GE    Enhanced CEF720 4 port 10GE with DFC                     1
  VS-S720-10G      Supervisor Engine 720 10GE                               1
  WS-SUP720-3BXL   Supervisor Engine 720                                    1
Access
  N5K-C5010        Nexus 5010 Chassis                                       2
  N5K-C5010P-BF    Nexus 5010 Sup                                           2
  N5K-M1600        6x10GE Ethernet Module for N5k                           2
  N5K-M1404        4x10GE + 4x1/2/4G FC Module                              2
Data Center
  N20-C6508        Cisco UCS 5108                                           1
  N10-S6100        Cisco 6120 Fabric Interconnect                           2
  Nexus 1000V      Nexus 1000V                                              2

Third Party Devices


Quantity   Device        Software Revision
1          EMC Storage
2          VMware        ESX 4.0
3          Microsoft     Windows 2008


CPOC Test Bed Diagram


The following topology will be used for this test.

Figure 1: Topology

In the above diagram, the UCS sits in the server farm and contains eight server blades. VMware ESX 4.0 is installed on two of those blades; another blade can be installed with a single OS such as Linux or Windows 2008. The EMC storage sits in the SAN area on the right-hand side. Two Nexus 5000 switches function as the data center access switches, connecting the Fibre Channel and 10 Gigabit Ethernet links between the network core and the SAN. Two Nexus 7000 switches configured with vPC act as the data center core and aggregation switches, taking care of all routing functions. At the top of the diagram, a Catalyst 6500 and client PCs represent the enterprise network in which most client PCs sit.
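As a sketch of how the two Nexus 7000 switches present a single logical uplink to each Nexus 5000 through vPC, a minimal configuration for one Nexus 7000 peer is shown below. The vPC domain ID, peer-keepalive addresses and peer-link port-channel are assumptions for illustration only; the values actually used in the lab are in the attached configuration.zip, and the matching Nexus 5000 side (port-channel 30 on e1/1 and e1/2) is shown later in Test 3.

! Minimal vPC sketch for one Nexus 7000 peer (domain ID, addresses and
! peer-link ports are assumed for illustration)
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

interface port-channel1
  description vPC peer-link to the second Nexus 7000
  switchport
  switchport mode trunk
  vpc peer-link

interface port-channel30
  description vPC towards ce06-n5000-1 / ce06-n5000-2
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 604
  vpc 30

interface Ethernet1/19
  description downlink to a Nexus 5000
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 604
  channel-group 30 mode active
  no shutdown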


Testing Steps and Results


Procedures and results are given in the following sections.

Test 1: Establishing baseline network connectivity


Goals of Test: The goal of this test is to successfully configure routing and provide all the network devices with IP addresses as per the CPOC test bed diagram.

Data to Record: Capture the tested baseline configuration of all network devices.

Estimated Time: Approximately 8 hours

Procedures

The following procedures are used for this test:

1. Configure routing and IP addressing of all equipment as indicated in the diagram. Please refer to the attached configuration.zip file for the full configuration of each device.

2. Ensure we can ping end to end from VDC 2.
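As an illustration of the kind of baseline configuration applied in step 1 and the verification in step 2, a minimal sketch for one Nexus 7000 VDC is shown below. The SVI, addressing and OSPF values are assumptions for illustration; the real addressing follows the test bed diagram and the attached configuration.zip (only the VLAN name CPOC_PCNet is taken from the lab configuration).

! Baseline routing sketch for one Nexus 7000 VDC (addresses and OSPF values assumed)
feature interface-vlan
feature ospf

vlan 103
  name CPOC_PCNet

interface Vlan103
  description gateway for the client PC network
  ip address 10.10.30.1/24
  ip router ospf 1 area 0.0.0.0
  no shutdown

router ospf 1
  router-id 1.1.1.1

! End-to-end check from the VDC towards a server in the data center
ping 10.10.60.100 count 5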


Results

All of the above tests completed successfully. We have end-to-end connectivity from the client PC to the data center servers.


Test 2: UCS Manager, EMC storage and VMware Overview


Goals of Test:
1. Overview of UCS Manager
2. Basic configuration of LUN, Storage Group and Host on the EMC storage
3. Overview of VMware vCenter, ESX 4.0 and the Nexus 1000V

Data to Record: Detailed steps for the above tests

Estimated Time: 3 hours

Procedures

The following procedures are used for this test:

1. Overview of UCS Manager

(1) Log in to UCS Manager through the browser.


(2) Click Equipment > Main Topology View to see the topology view of the UCS.

(3) Click Chassis1 > Fans > Fan Module 1 to 8 to check the status of all the fan modules.


(4) Click Chassis1 > IO Modules > IO Module 1 to check the status of all the ports on IO module 1; we can do the same for IO module 2.

(5) Click Chassis1 > PSUs > PSU 1 to check the status of power supply units 1 to 4.


(6) Click Chassis1 > Servers > Server 1 to 8 to check the status of all the servers:

(7) Click Chassis1 > Server 1 > Inventory, where we can check the following components:

Motherboard


CPUs: each server blade has 4 CPUs running at 2.5 GHz with 8 threads

Memory: each server blade has 48 GB of RAM and can be expanded up to 192 GB


HBA

NIC

Storage: each server blade has two local hard disks


(8) Click Chassis1 > Fabric Interconnects > Fabric Interconnect A and B to check the status of all the ports on the 6120s.

(9) Click Servers to check all the service policies configured on the UCS.


(10) Click LAN to check all the LAN-related policies configured on the UCS.

(11) Click SAN to check all the SAN-related policies configured on the UCS.


(12) Click Admin to check all the Admin-related policies configured on the UCS, for example User Management.

2. Basic configuration of LUN, Storage Group and Host on the EMC storage

(1) Browse to the EMC storage web management interface, Navisphere.


(2) We have created a 150 GB LUN (LUN 21) to hold all the VMware images. LUN 31, LUN 32 and LUN 33 (15 GB each) will be used for SAN booting different operating systems.

(3) We have configured a Storage Group, CPOC-CHINA, which includes all the LUNs we will use for this CPOC.


(4) We also need to select all the hosts that will be using those LUNs for this storage group:

3. Overview of VMware vCenter, ESX 4.0 and the Nexus 1000V

(1) Log in to vCenter. We can manage all the ESX hosts through vCenter.


(2) We can verify or change all the configuration for the ESX hosts. Note that these ESX hosts, which sit on UCS blades, have 8 x 2.5 GHz CPUs and 48 GB of RAM.

(3) Virtual Windows 2003 servers are running on the ESX hosts:


(4) The Cisco Nexus 1000V handles the virtual networking for those VMware virtual machines:


Results

All of the above tests completed successfully. We walked through the UCS, EMC and VMware management systems.


Test 3: UCS Manager Service Profile Provisioning


Goals of Test:
1. Verify the different pools created for the service profile
2. Verify all the settings for the Service Profile
3. Verify the Nexus 5010 VSAN and VLAN configuration
4. Verify the Fabric Interconnect configuration

Data to Record: Step-by-step record of the above tests

Estimated Time: 3 hours

Procedures

The following procedures are used for this test:

1. Verify the different pools created for the service profile

Verify that we have created MAC pools for our UCS CPOC:

(1) Verify that we have created a LAN Pin Group mapping to E1/19 on Fabric A and B.


Also verify that we have created SAN WWNN Pools and WWPN Pools for our UCS CPOC:

(2) Verify that we have created a SAN Pin Group mapping to FC2/3 on Fabric A and B.

2. Verify all the settings for the Service Profile

(1) Verify the Service Profile POC_ESX_Host_1 configuration; it is associated with blade-4 at this moment.

(2) Launch the KVM console; we can see that VMware ESX 4.0 is installed on the host.


(3) Verify the Service Profile for POC_ESX_Host_1 vHBA configuration. You can see vHBA1 and vHBA2 bound to Fabric A and Fabric B to achieve redundancy; vHBA1 and vHBA2 belong to the same SAN Pin Group.

(4) Verify the Service Profile for POC_ESX_Host_1 vNIC configuration. You can see vNIC1 and vNIC2 bound to Fabric A and Fabric B with Enable Failover set to achieve redundancy; vNIC1 and vNIC2 belong to the same LAN Pin Group.


(5) Verify the Service Profile for POC_ESX_Host_1 Boot Order configuration; notice the host is using SAN Boot, with vHBA1 as the primary and vHBA2 as the secondary.

(6) Verify the Service Profile FSM status for POC_ESX_Host_1; you can see the host provisioned 100% successfully.

3. Verify the Nexus 5010 VSAN and VLAN configuration

(1) Verify the VSAN configuration on ce06-n5000-1 (ce06-n5000-2 has a similar configuration):

vsan database
  vsan 600 name "China_CPOC"
  vsan 600 interface fc2/3
  vsan 600 interface fc2/7


interface fc2/3
  no shutdown

interface fc2/7
  no shutdown

!Full Zone Database Section for vsan 600
zone name CHINA_CPOC vsan 600
  member pwwn 50:06:01:62:3c:e0:62:76
  member pwwn 20:00:00:11:88:88:88:4f

zone name CHINA_CPOC2 vsan 600
  member pwwn 50:06:01:62:3c:e0:62:76
  member pwwn 20:00:00:11:88:88:88:3f

zone name CHINA_CPOC3 vsan 600
  member pwwn 20:00:00:11:88:88:88:0f
  member pwwn 50:06:01:62:3c:e0:62:76

zoneset name POC vsan 600
  member CHINA_CPOC
  member CHINA_CPOC2
  member CHINA_CPOC3

zoneset activate name POC vsan 600

ce06-n5000-1# show flogi database vsan 600
--------------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc2/3      600   0xc10000  20:43:00:0d:ec:b4:c7:00  22:58:00:0d:ec:b4:c7:01
fc2/3      600   0xc10001  20:00:00:11:88:88:88:4f  20:00:00:00:88:88:88:4f
fc2/3      600   0xc10002  20:00:00:11:88:88:88:3f  20:00:00:00:88:88:88:5f
fc2/3      600   0xc100ef  50:06:01:62:3c:e0:62:76  50:06:01:60:bc:e0:62:76
fc2/7      600   0xc10003  20:00:00:11:88:88:88:0f  20:00:00:00:88:88:88:2f

Total number of flogi = 5.



ce06-n5000-1# show zoneset name POC active
zoneset name POC vsan 600
  zone name CHINA_CPOC vsan 600
  * fcid 0xc100ef [pwwn 50:06:01:62:3c:e0:62:76]
  * fcid 0xc10001 [pwwn 20:00:00:11:88:88:88:4f]

  zone name CHINA_CPOC2 vsan 600
  * fcid 0xc100ef [pwwn 50:06:01:62:3c:e0:62:76]
  * fcid 0xc10002 [pwwn 20:00:00:11:88:88:88:3f]

  zone name CHINA_CPOC3 vsan 600
  * fcid 0xc10003 [pwwn 20:00:00:11:88:88:88:0f]
  * fcid 0xc100ef [pwwn 50:06:01:62:3c:e0:62:76]

(2) Verify the VLAN configuration on ce06-n5000-1 (ce06-n5000-2 has a similar configuration):

vlan 101
  name CPOC_FlashNet
vlan 102
  name CPOC_HostNet
vlan 103
  name CPOC_PCNet
vlan 601-604

port-channel load-balance ethernet source-ip

interface port-channel30
  switchport mode trunk
  switchport trunk allowed vlan 604

interface Ethernet1/1
  description connection to ce03-n7000-1 e1/19
  switchport mode trunk


  switchport trunk allowed vlan 604
  channel-group 30 mode active

interface Ethernet1/2
  description connection to ce07-n7000-1 e1/19
  switchport mode trunk
  switchport trunk allowed vlan 604
  channel-group 30 mode active

interface Ethernet1/19
  description connection to fiber-interconnect-a e1/19
  switchport mode trunk

4. Verify the Fabric Interconnect configuration

Note: All of this configuration is pushed through UCS Manager to Fabric Interconnects A and B. The CLI is mainly used for troubleshooting; normally we do not need to log in to the CLI to check anything, and no configuration can be changed through the CLI at all. For more detailed configuration, please check the configuration.zip file:

ce06-ucsmgr-1-A(nxos)# sh run
switchname ce06-ucsmgr-1-A

vlan 600
  fcoe
  name fcoe-vsan-600
vlan 101-103,201-203,301-303,401-403
vlan 601-604
vlan 4044
  name SAM-vlan-management
vlan 4047
  name SAM-vlan-boot


vsan database
  vsan 600

interface vethernet1211
  switchport trunk allowed vlan 102,601-604
  bind interface Ethernet1/1/3
  pinning server sticky border-interface Ethernet1/19
  pinning server pinning-failure link-down

interface vethernet1214
  switchport trunk allowed vlan 102,601-604
  bind interface Ethernet1/1/3
  pinning server sticky border-interface Ethernet1/19
  pinning server pinning-failure link-down

interface vethernet1266
  switchport trunk native vlan 102
  switchport trunk allowed vlan 102
  bind interface Ethernet1/1/1
  pinning server sticky border-interface Ethernet1/19
  pinning server pinning-failure link-down

interface vethernet1267
  switchport trunk native vlan 102
  switchport trunk allowed vlan 102
  bind interface Ethernet1/1/1
  pinning server sticky border-interface Ethernet1/19
  pinning server pinning-failure link-down

interface vethernet1270
  switchport trunk allowed vlan 102,601-604
  bind interface Ethernet1/1/4
  pinning server sticky border-interface Ethernet1/19
  pinning server pinning-failure link-down

interface vethernet1271

  switchport trunk allowed vlan 102,601-604
  bind interface Ethernet1/1/4
  pinning server sticky border-interface Ethernet1/19
  pinning server pinning-failure link-down

interface vfc1059
  no shutdown
  description server 1/3, VHBA
  bind interface vethernet9251

interface vfc1264
  no shutdown
  description server 1/1, VHBA
  bind interface vethernet9456

interface vfc1274
  no shutdown
  description server 1/4, VHBA
  bind interface vethernet9466

interface vethernet9251
  switchport access vlan 600
  pinning server
  bind interface Ethernet1/1/3

interface vethernet9456
  switchport access vlan 600
  pinning server
  bind interface Ethernet1/1/1

interface vethernet9466
  switchport access vlan 600
  pinning server
  bind interface Ethernet1/1/4


vsan database
  vsan 600 interface vfc1059
  vsan 600 interface vfc1264
  vsan 600 interface vfc1274
  vsan 600 interface fc2/3

interface fc2/3
  no shutdown

interface Ethernet1/19
  switchport mode trunk
  switchport trunk allowed vlan 1,101-103,201-203,301-303,401-403
  switchport trunk allowed vlan add 501-502,600-604
  pinning border
  no shutdown

ce06-ucsmgr-1-A(nxos)# sh npv flogi-table
--------------------------------------------------------------------------------
SERVER                                                                  EXTERNAL
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME           INTERFACE
--------------------------------------------------------------------------------
vfc971     500   0x160002  20:00:00:00:22:22:22:2e  20:00:00:00:11:11:11:18  fc2/1
vfc977     500   0x160003  20:00:00:00:22:22:22:2c  20:00:00:00:11:11:11:17  fc2/1
vfc1059    600   0xc10002  20:00:00:11:88:88:88:3f  20:00:00:00:88:88:88:5f  fc2/3
vfc1264    600   0xc10003  20:00:00:11:88:88:88:0f  20:00:00:00:88:88:88:2f  fc2/3
vfc1274    600   0xc10001  20:00:00:11:88:88:88:4f  20:00:00:00:88:88:88:4f  fc2/3
vfc1210    500   0x160005  20:00:00:00:22:22:22:2a  20:00:00:00:11:11:11:16  fc2/2

Results
All of the above tests completed successfully. We have verified the UCS Service Profile configuration and the Nexus 5000 and Fabric Interconnect configurations.


Test 4: UCS Stateless Migration


Goals of Test:
1. Move a virtual machine through the Nexus 1000V and VMware VMotion
2. Assign the Service Profile to a different blade server in the UCS
3. Move the virtual machine back to its original ESX 4.0 host

Data to Record: Step-by-step record of the above three tests

Estimated Time: 3 hours

Procedures

The following procedures are used for this test:

1. Move a virtual machine through the Nexus 1000V and VMware VMotion

(1) Log in to ce05-server-3 and establish ping, telnet, FTP and remote desktop connections to 10.10.60.100 (the virtual server sitting on ESX-host1, which is on a UCS blade). We will monitor all the connections and any ping loss throughout this migration process.
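The plan does not record the exact client-side commands used for this monitoring; as an assumption, the following Windows commands run from ce05-server-3 provide the continuous ping and the connection checks described above.

:: Continuous ping to the test VM so packet loss is visible during the migration
ping -t 10.10.60.100

:: Connection checks for the monitored services (each run in its own window)
telnet 10.10.60.100
ftp 10.10.60.100
mstsc /v:10.10.60.100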


(2) The Nexus 1000V module on the current ESX host is the active supervisor; we need to fail it over to standby.

After the reload it shows standby mode, which means the host can safely be powered off, since the secondary N1kV module is now the active supervisor on the other ESX host.

We check ce05-server-3 again: no pings or connections are affected. This proves that N1kV active/standby supervisor failover has no impact on user traffic.
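The plan does not list the exact CLI used for this step; the sketch below, assumed from standard NX-OS practice on the Nexus 1000V VSM, shows how the active/standby roles can be checked and how a switchover is normally driven (the hostname n1kv is a placeholder).

! Check which VSM is active before the failover
n1kv# show system redundancy status
n1kv# show module

! Force the active VSM over to the standby VSM; reloading the active VSM,
! as done in this test, has the same effect
n1kv# system switchover

! After reconnecting, the former active VSM should report a standby state
n1kv# show system redundancy status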


(3) We will VMotion the virtual machine w2k3-1 from the current ESX host to the other ESX host:

(4) Once the VMotion finishes, we can see that the virtual machine has moved to the second ESX host.


(5) When we check the ping results and connection status from ce05-server-3, we may see one ping timeout or none. All the telnet, FTP and remote desktop connections are still up.

(6) Since the first ESX server no longer has any virtual machine or active N1kV module running on it, we can now safely shut it down through the UCS KVM console:


2. Assign the Service Profile to a different blade server in the UCS

(1) Before we assign the service profile to a different blade server, we need to shut the server down completely:

(2) After the server is powered off, we disassociate the Service Profile from the blade server.


(3) We re-associate the service profile to an available blade server:

(4) We check the FSM status until it reaches 100%, which means the re-association completed successfully; this takes around 3 minutes.


(5) After the FSM reaches 100%, meaning the service profile has been re-associated to the new server blade, we boot the server up:

(6) The server boots up successfully as the original ESX 4.0 host on the new blade server:


3. Move the virtual machine back to its original ESX 4.0 host

(1) Open vCenter; the ESX host automatically reconnects to vCenter, and the Nexus 1000V module boots up as the standby supervisor as well.

(2) We VMotion the virtual machine w2k3-1 from the secondary ESX host back to the previous ESX host:


(3) We notice at most one lost ping. All the telnet, FTP and remote desktop connections are still maintained:

Results
All of the above three tests completed successfully:
(1) Moved a virtual machine through the Nexus 1000V and VMware VMotion
(2) Assigned the Service Profile to a different blade server in the UCS
(3) Moved the virtual machine back to its original ESX 4.0 host
During the whole process, the connections from the client PCs all stayed up and we lost at most one ping.


Test 5: Single OS SAN Boot on the UCS blade


Goals of Test:
1. SAN boot a Windows 2008 server from a UCS blade
2. Assign this Service Profile to a different UCS blade

Data to Record: Step-by-step record of the above tests

Estimated Time: 3 hours

Procedures

The following procedures are used for this test:

1. SAN boot a Windows 2008 server from a UCS blade

(1) From the service profile perspective, this is exactly the same as TEST 3. The UCS does not care whether you install Linux, Windows or VMware ESX 4.0; the profile configuration is similar:

(2) While the server is booting, press F2 to check the BIOS and make sure Quiet Boot is Disabled:


(3) Make sure the HBA1 Port ID shows in Boot Option 1 or 2 in the BIOS settings:

(4) To verify which LUN will be used for booting, press Ctrl-Q or Alt-Q during the boot process; we can see that LUN 2 is the boot LUN. The HBA port ID is 0800EF.


(5) The above steps are the same for SAN booting ESX 4.0, Linux or Windows; the only difference is that we installed Windows 2008 Server on this server blade.

2. Assign this Service Profile to a different UCS blade

(1) When moving the Windows 2008 installation from one blade server to another inside the UCS, we follow the same procedure as in TEST 3. The only difference is that we simply shut down Windows 2008, disassociate the service profile, and associate it to a new blade server:


(2) Wait for the FSM to reach 100%.

(3) After the move to the new server, the Windows 2008 server is automatically powered up.

Results

We have successfully tested SAN booting Windows 2008 on the UCS. We were also able to move this Service Profile to a different UCS blade.


Test 6: High Availability / Redundancy / Resiliency testing


Goals of Test:
1. Fibre Channel failover test for the UCS
2. Ethernet connection failover test for the UCS
3. Nexus 7000 and Nexus 5000 vPC failover for the UCS

Data to Record: Step-by-step record of the above tests

Estimated Time: 3 hours

Procedures

The following procedures are used for this test:

1. Fibre Channel failover test for the UCS

(1) Log in to ce05-server-3 and establish ping, telnet, FTP and remote desktop connections to 10.10.60.100 (the virtual server sitting on ESX-host1, which is on a UCS blade). We will monitor all the connections and any ping loss throughout all of the following failover tests.


(2) From vCenter we can see that the Fibre Channel path going through Fabric Interconnect B is the active path:

(3) The Fibre Channel path going through Fabric Interconnect A is the standby path:


(4) We shut down the active FC path from ce06-n5000-2 to trigger the failover:

ce06-n5000-2(config)# inter fc 2/3
ce06-n5000-2(config-if)# shut

(5) We can see the previously active path has been marked as DEAD and the standby path has taken over as ACTIVE.


(6) When we check the ping results and connection status from ce05-server-3, we may see no ping timeouts at all. All the telnet, FTP and remote desktop connections are still up.

(7) We bring the dead FC path back up on ce06-n5000-2:

ce06-n5000-2(config)# inter fc 2/3
ce06-n5000-2(config-if)# no shut

(8) The dead path recovers as the standby path after the failover.
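The recovery is verified here from the vCenter path view; as an additional, assumed check, the same state can be confirmed from the Nexus 5000 CLI with the show commands below (the interface and VSAN are the ones configured in Test 3).

! Assumed CLI verification that fc2/3 is back up and the fabric logins have returned
ce06-n5000-2# show interface fc2/3 brief
ce06-n5000-2# show flogi database vsan 600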


2. Ethernet connection failover test for the UCS

(1) Shut down the active Ethernet path on ce06-n5000-2:

ce06-n5000-2(config)# inter e 1/19
ce06-n5000-2(config-if)# shut

(2) When we check the ping results and connection status from ce05-server-3, we see two ping timeouts. All the telnet, FTP and remote desktop connections are still up.

(3) Bring the dead Ethernet path back up on ce06-n5000-2:

ce06-n5000-2(config)# inter e 1/19
ce06-n5000-2(config-if)# no shut


(4) When we check the ping results and connection status from ce05-server-3, we see five ping timeouts. All the telnet, FTP and remote desktop connections are still up.

3. Nexus 7000 and Nexus 5000 vPC failover for the UCS

(1) Verify that the active path is going through port-channel member e1/1 of ce06-n5000-2; we will shut down this interface to trigger the failover:

ce06-n5000-2# sh inter e 1/1 | inc rate
  1 minute input rate 4727352 bits/sec, 421 packets/sec
  1 minute output rate 4720864 bits/sec, 410 packets/sec

ce06-n5000-2# conf t


Enter configuration commands, one per line. End with CNTL/Z.
ce06-n5000-2(config)# inter e 1/1
ce06-n5000-2(config-if)# shut

ce06-n5000-2# sh inter e 1/2 | inc rate
  1 minute input rate 3510712 bits/sec, 322 packets/sec
  1 minute output rate 3500896 bits/sec, 302 packets/sec

(2) When we check the ping results and connection status from ce05-server-3, we may see no ping timeouts. All the telnet, FTP and remote desktop connections are still up.

(3) We bring e1/1 on ce06-n5000-2 back up, which moves the traffic back through e1/1:

ce06-n5000-2(config-if)# inter e 1/1
ce06-n5000-2(config-if)# no shut

ce06-n5000-2# sh inter e 1/1 | inc rate
  1 minute input rate 6026024 bits/sec, 526 packets/sec
  1 minute output rate 6023728 bits/sec, 523 packets/sec
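Besides the interface rate counters above, an assumed quick check that both vPC member links are forwarding again is shown below, using standard NX-OS show commands on the Nexus 5000 and on one of the Nexus 7000 vPC peers.

! Assumed verification: both members of port-channel 30 should be up again
ce06-n5000-2# show port-channel summary

! And on the Nexus 7000 vPC peers, the vPC towards the Nexus 5000 should be up
ce03-n7000-1# show vpc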


(4) When we check the ping results and connection status from ce05-server-3, we may see no ping timeouts. All the telnet, FTP and remote desktop connections are still up.

(5) There are other vPC failover scenarios which we have tested thoroughly in a separate prebuilt static test (the N7k-5k-2k demo), where we used a SmartBits test tool to measure the failover convergence times very accurately. We will not repeat those tests one by one here.

Results
We have successfully tested the above three failover scenarios:
(1) Fibre Channel failover test for the UCS
(2) Ethernet connection failover test for the UCS
(3) Nexus 7000 and Nexus 5000 vPC failover for the UCS


Conclusions
This successful test provided an overview of the Cisco Unified Computing System, a set of pre-integrated data center components: blade servers, adapters and fabric interconnects. This CPOC also provided an overview of the Cisco Unified Computing System integrating with EMC storage, VMware ESX 4.0 and the Nexus 1000V. The Nexus 7000 served as the data center core and aggregation layer, and the Nexus 5000 served as the access layer, providing high availability, storage connectivity and an FCoE solution. We tested UCS and UCS Manager; this hardware and software is designed specifically for data centers and focuses on performance, high availability and seamless management. We also tested several failover scenarios for the N7k, N5k and UCS.

