
VMware vSphere Storage Appliance

Evaluation Guide
TECHNICAL WHITE PAPER


Table of Contents

About This Guide
Intended Audience
Hardware and Infrastructure Requirements
   Server Requirements
   Hard-Disk Requirements
   Hardware RAID Requirements
   Network Requirements
      Two-Node Configuration
      Three-Node Configuration
Assumptions
Considerations
VMware vSphere 5.0 Evaluation Worksheet
Section 1: Install the VSA Manager Software
Section 2: Enable the VSA Plug-in
Section 3: Configure the VSA
Section 4: Verify that the NFS Datastores Are Available on the VMware ESXi Hosts
Section 5: Manage and Monitor the VSA
Section 6: Create a Virtual Machine on a VSA Datastore
Section 7: Maintenance Mode
Section 8: Availability Features of the VSA
Conclusion
Product Documentation
Help and Support During the Evaluation
VMware vSphere and vCenter Resources
   VMware Contact Information
   Providing Feedback
About the Author


About This Guide


The purpose of the VMware vSphere Storage Appliance Evaluation Guide is to support a self-guided, hands-on evaluation of the VMware vSphere Storage Appliance (VSA) features. It highlights the VSA features usable by all customers of VMware vSphere 5.0 (vSphere).

Intended Audience
This guide covers evaluation cases that are suitable for the following IT professionals:
• They have an existing VMware virtualization environment, which includes shared storage, but still wish to evaluate the VSA.
• They have an existing VMware virtualization environment but do not have a storage array to provide shared storage. With the VSA, administrators create shared datastores using the local storage on the VMware ESXi hosts. These customers can then make use of many additional vSphere features that require shared storage, such as VMware vSphere vMotion (vMotion) and vSphere High Availability (HA).
• They do not have an existing environment but wish to deploy such an environment with shared storage using the VSA, and wish to use vSphere features that require shared storage, such as vMotion and HA.


Hardware and Infrastructure Requirements


The VSA can be deployed as a two-node or three-node configuration. In the two-node configuration, the VSA installation automatically configures a special VSA cluster service on the VMware vCenter Server to help the VSA storage cluster determine proper cluster membership in the event of a node or communications failure. Always refer to the VSA Installation and Configuration Guide for the latest information on hardware and infrastructure requirements.

Server Requirements
For the 1.0 release of VSA, only the following subset of hosts on the standard VMware vSphere Hardware Compatibility List (HCL) is supported:
• Dell PowerEdge R510 server (Intel Xeon Processor 5500 series)
• Dell PowerEdge R710 server (Intel Xeon Processor 5500 series)
• HP DL380 G7 server (Intel Xeon Processor 5600 series)
Each host in the VSA storage cluster must be configured with the following:
• 6GB RAM (minimum), 24GB RAM (recommended)
• 2GHz (or higher) processors
• 4 x network adaptor ports spread across one, two, three or four physical network adaptors
• 8 x small form factor (SFF) disks (details below)
• 1 x hardware RAID controller (details below)
NOTE: Because the VSA installer also creates an HA cluster, the admission control policy for memory will reserve one-half of each host's physical memory in a two-node configuration and one-third of each host's physical memory in a three-node configuration. This will directly impact the number of virtual machines that can be deployed on a VSA cluster (a worked example follows below).
The hosts must be configured with a fresh installation of VMware ESXi 5.0, with the default network configuration created by the ESXi installation.
NOTE: Additional hosts will be added to the HCL over time, so refer to it for updates.
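As a hypothetical illustration of the admission control impact: in a three-node VSA cluster in which each host has 24GB of physical RAM, vSphere HA reserves roughly one-third of each host's memory (about 8GB) as failover capacity, leaving roughly 16GB per host for virtual machines, before accounting for the memory consumed by the VSA appliance itself and the ESXi hypervisor. In a two-node cluster built from the same hosts, roughly 12GB per host would remain. These figures are illustrative only; the actual reservations are calculated by vSphere HA when the cluster is configured.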

Hard-Disk Requirements
Each server in the VSA storage cluster must be configured with one of the following disk configurations:
• 8 x SFF 15K revolutions per minute (RPM) serial attached SCSI (SAS) drives
• 8 x SFF 10K RPM serial advanced technology attachment (SATA) drives
Although the VSA can also use four- or six-disk configurations, the recommendation is to use eight disks for optimal performance. These disks must be attached to one of the hardware RAID controllers listed in the next section.
NOTE: Disks cannot be added after the VSA has been deployed, so care must be taken to correctly size storage requirements before installation.

Hardware RAID Requirements


Based on the set of supported servers for the VSA 1.0 release, the following hardware RAID controllers are supported:
• Dell PowerEdge R510/R710 server: PERC H700 Integrated RAID with 512MB NVRAM
• Dell PowerEdge R510/R710 server: PERC 6/i
• HP DL380 G7 server: HP Smart Array P410i controller with 256MB NVRAM


The hardware RAID controller is necessary, as the administrator will create a mirrored RAID 1+0 volume (commonly referred to as RAID 10) on each ESXi 5.0 host. This volume is used to install ESXi 5.0 on, and the remaining space is used to create a single VMFS-5 volume to be used by the VSA. Additional controllers will be added to the HCL over time, so refer to the HCL for updates. The VSA configuration requires a RAID 10 configuration to provide redundancy at the storage level. Each host in the VSA storage cluster must have a supported hardware RAID controller connected to eight physical disks. These disks must be configured into single or multiple RAID 10 volumes, which are presented as install devices for the ESXi 5.0 host.

Figure 1. VMware ESXi 5.0 Server Configuration with Local Storage Configured as a RAID 10 Array

The initial storage configuration must be done outside of vSphere on the physical server. Users will typically need to reboot the host and access the configuration menu of the RAID controller using the appropriate keystrokes, which vary from server to server. Figure 2 is an example screen showing a correctly configured HP server using the HP Smart Array P410i RAID controller. It is recommended that users refer to the appropriate vendor documentation on how to correctly use the BIOS-based RAID configuration utility of a particular host.

Figure 2. HP Smart Array P410i RAID Controller Configuration Screen


Network Requirements
Each ESXi host in the VSA storage cluster should have a minimum of four physical network adaptor ports. The VSA installer expects a network configuration identical to a fresh default installation of ESXi. This means the network adaptors must be labeled vmnic0, vmnic1, vmnic2 and vmnic3. One of these uplinks must be assigned to the vSwitch that is used by the ESXi management network and the virtual machine network port group of the ESXi 5.0 host. If you have made any changes to the default network configuration during installation, these changes must be reversed or a new installation must be performed. In Figure 3, there are two dual-port network adaptors. During the VSA deployment, three virtual machine port groups will be created, one for the VSA front-end network (VSA and VSA cluster management, and NFS server), one for the VSA back-end network (cluster communication and storage synchronization) and one for vMotion (called the feature network).

Figure 3. vSphere Storage Appliance Network Configuration Requirement

Figure 4 is an example of what vSwitch0 should look like from the vSphere UI before the VSA installation is initiated. This is a standard/default configuration following ESXi installation. The management network is already on a VLAN.

Figure 4. Default Network Configuration from a Fresh VMware ESXi 5.0 Installation


The VSA installer will automatically add a second network adaptor port to this vSwitch, forming a network adaptor team that provides dual redundancy across both network adaptors and physical switches. This vSwitch is used for the front-end networking of the VSA (in other words, the VSA node and cluster management IP addresses, and the NFS volume IP address). The administrator will provide IP addresses for each of these during the installation.

The two remaining network adaptor ports will be automatically combined to form a new network adaptor team on a new vSwitch. These will be used for the VSA back-end networking and the vSphere features network (for example, vMotion). The back-end interfaces will use the 192.168 network by default, so the administrator will need to make sure that these addresses are unused on the back-end network, or choose an alternate IP address for the back-end network during installation. The front-end and back-end networks are virtual machine networks and are used by all the deployed VSA members.

The vSphere feature (vMotion) IP address can be provided as a static IP address during the installation or can be configured to pick up an IP address using DHCP. This will be determined by the administrator during installation. The VSA management IP addresses should be on the same subnet as the vCenter Server that is being used to manage the ESXi 5.0 hosts.

When the VSA cluster installation is complete, each ESXi host's network configuration should look similar to Figure 5, which shows that the front-end networking and vMotion network are on a different VLAN than the back-end network.

Figure 5. VMware ESXi 5.0 Network Configuration After a VSA Cluster Installation
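If you want to confirm these networking changes from the command line rather than the vSphere Client, the standard vSwitch and port group layout can be listed from the ESXi Shell, assuming local or SSH shell access has been enabled on the host. This is only an optional spot check; the commands below are standard ESXi 5.0 esxcli namespaces, and the port groups they report should match the front-end, back-end and feature port groups created by the VSA installer:

# esxcli network vswitch standard list
# esxcli network vswitch standard portgroup list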


Populate one of the following tables with the IP addresses you plan to use in the cluster, based on the configuration you plan to implement.

Two-Node Configuration

There are 9, 11 or 13 IP addresses needed, depending on whether DHCP is used for the features network and whether a nondefault IP address is chosen for the back-end network:

NAME / IP ADDRESS
VSA Cluster IP Address: ____.____.____.____
VSA Cluster Service IP Address: ____.____.____.____
vCenter Server IP Address: ____.____.____.____
ESXi Host 0 Management IP Address: ____.____.____.____
VSA Host 0 Management IP Address: ____.____.____.____
VSA Host 0 NFS IP Address: ____.____.____.____
VSA Host 0 vMotion/Feature IP Address (leave blank if using DHCP): ____.____.____.____
VSA Host 0 Back-End Network IP Address (leave blank if using default 192.168.0.x): 192.168.____.____
ESXi Host 1 Management IP Address: ____.____.____.____
VSA Host 1 Management IP Address: ____.____.____.____
VSA Host 1 NFS IP Address: ____.____.____.____
VSA Host 1 vMotion/Feature IP Address (leave blank if using DHCP): ____.____.____.____
VSA Host 1 Back-End Network IP Address (leave blank if using default 192.168.0.x): 192.168.____.____
Front-End VLAN ID (leave blank if no VLAN): ________
Back-End/Features Network VLAN ID (leave blank if no VLAN): ________

Table 1. Network Details for a Two-Node VSA Cluster


Three-Node Configuration

There are 11, 14 or 17 IP addresses needed, depending on whether DHCP is used for the features network and whether a nondefault IP address is chosen for the back-end network:

NAME / IP ADDRESS
VSA Cluster IP Address: ____.____.____.____
vCenter Server IP Address: ____.____.____.____
ESXi Host 0 Management IP Address: ____.____.____.____
VSA Host 0 Management IP Address: ____.____.____.____
VSA Host 0 NFS IP Address: ____.____.____.____
VSA Host 0 vMotion/Feature IP Address (leave blank if using DHCP): ____.____.____.____
VSA Host 0 Back-End Network IP Address (leave blank if using default 192.168.0.x): 192.168.____.____
ESXi Host 1 Management IP Address: ____.____.____.____
VSA Host 1 Management IP Address: ____.____.____.____
VSA Host 1 NFS IP Address: ____.____.____.____
VSA Host 1 vMotion/Feature IP Address (leave blank if using DHCP): ____.____.____.____
VSA Host 1 Back-End Network IP Address (leave blank if using default 192.168.0.x): 192.168.____.____
ESXi Host 2 Management IP Address: ____.____.____.____
VSA Host 2 Management IP Address: ____.____.____.____
VSA Host 2 NFS IP Address: ____.____.____.____
VSA Host 2 vMotion/Feature IP Address (leave blank if using DHCP): ____.____.____.____
VSA Host 2 Back-End Network IP Address (leave blank if using default 192.168.0.x): 192.168.____.____
Front-End VLAN ID (leave blank if no VLAN): ________
Back-End/Features Network VLAN ID (leave blank if no VLAN): ________

Table 2. Network Details for a Three-Node VSA Cluster


Assumptions
To successfully utilize this guide, the following is assumed:
• Server hardware has been validated against the HCL, and all hosts that make up the VSA cluster are of a similar configuration.
• The local storage on each ESXi host has been correctly configured into one or more RAID 10 logical volumes. There is no audit check for this setting by the installer, so the assumption is that the customer has done this configuration correctly.
• Two independent VLANs or network segments are available:
   - Network segment/VLAN A: used for the ESXi management network, the front-end networking of the VSA, and vMotion
   - Network segment/VLAN B: used for the VSA back-end networking
• A vCenter Server 5.0 system used for managing the VSA is on the same subnet/VLAN/network segment as the VSA management IP addresses.

Considerations
There are a number of considerations to keep in mind when deploying a VSA cluster.
• vCenter Server is required to install the VSA cluster. For that reason, it must exist before the VSA cluster is deployed, and it must be deployed outside of the VSA to align with the best practice of not placing an administrative utility on the infrastructure that it administers. For instance, if there were a fatal cluster infrastructure failure, it would also bring down the vCenter Server if it resided on one of the VSA datastores, meaning that there would be no window into what caused the failure and no tool to troubleshoot the issue.
• The choice of running a two-node or three-node cluster must be made at the time of installation and cannot be changed later. For example, if three nodes are likely to be required eventually, but only two nodes are needed now, three nodes must be installed initially in the VSA cluster; it is not possible to install two nodes now and expand to three nodes later.
• The disk capacity of the physical hosts must be determined at deployment time and cannot be changed after installation, so capacity needs must be planned before the VSA is installed. In addition, the VSA currently supports only RAID 10 configurations in the hardware RAID controller, which reduces available storage by half. And because the VSA datastores are also mirrored across appliances, which reduces available storage by half again, the usable storage in the VSA datastores is roughly 25% of the size of the physical storage. To determine how much storage will be available to virtual machines, add the physical storage from all ESXi servers in the VSA cluster and divide the sum by four (see the worked example that follows this list). Further discussion of this appears in the VSA Installation and Configuration Guide.
• The vSphere HA cluster will reserve 33% of CPU and memory in a three-node configuration and 50% of CPU and memory in a two-node configuration. This must be considered when sizing the VSA cluster for virtual machine deployments.
• A VSA cluster currently does not support memory overcommitment, because swapping to VSA datastores can make the VSA cluster unstable. Therefore, each virtual machine that is deployed must reserve all of its memory. This is a consideration when determining how many virtual machine workloads can run in the VSA cluster, and physical memory on the ESXi servers must be sized appropriately. This is discussed in greater detail in the VSA Administration Guide.
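To make the capacity arithmetic concrete, consider a hypothetical three-node cluster in which each host has 8 x 600GB disks, or roughly 4.8TB of raw capacity per host and about 14.4TB across the cluster. RAID 10 inside each host halves this to about 7.2TB of VMFS capacity, and VSA mirroring across appliances halves it again, leaving roughly 3.6TB of usable NFS datastore capacity for virtual machines, before file system and appliance overhead. The disk size used here is purely illustrative.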


VMware vSphere 5.0 Evaluation Worksheet


You can use the following worksheet to organize your evaluation process.
HARDWARE CHECKLIST:

All servers have been validated against the Hardware Compatibility List (HCL) for the VSA.

All servers have the same CPU family.

All RAID controllers on the servers have been validated against the HCL for the VSA.

All disks on the servers have been validated against the HCL for the VSA.

Local storage on the servers has been set up into a RAID 10 configuration.

There are four physical network adaptor ports on each of the servers.

SOFTWARE CHECKLIST:

VMware vSphere ESXi 5.0 is installed.

VMware vCenter Server is installed.

VMware vSphere Client is installed.

VSA software installer is on the vCenter Server.


Section 1: Install the VSA Manager Software


Download the VSA software to your vCenter Server. VSA currently supports vCenter and VSA Manager on Windows 2008 R2 (64-bit). After the software has been downloaded, start the install. You will first be prompted for a language for the installation. In this example, English (United States) has been chosen.

After you click OK, the VMware vSphere Storage Appliance Manager splash screen is displayed:

This is soon followed by the InstallShield Wizard:


After the .msi has been extracted successfully, the InstallShield Wizard displays the welcome screen:

Initially the Next button might not be displayed, but after a few moments it will be available:

Click the Next button to proceed with the installation.


This will bring you to the End-User Patent Agreement:

Click Next to continue. This will bring you to the License Agreement. Select the option I accept the terms in the license agreement; click Next to continue:


The next screen will ask you to provide vCenter Server Information. The IP address or host name as well as the HTTPS port field should be automatically populated. The IP address here has been deliberately obfuscated. Click Next to proceed to the next screen.

The next screen in the InstallShield Wizard requires that you insert a license key. The license here has been deliberately obfuscated.


You are now ready to begin the install. Click the Install button to continue.

When the installation completes, the InstallShield Wizard Completed screen is displayed. Click the Finish button to close the wizard.

The VSA Manager installation is now completed. The next step is the actual installation of the VSA cluster. This is done via the VMware vSphere Client and will be discussed in the next sections.


Section 2: Enable the VSA Plug-in


After you have logged into vCenter via the vSphere Client, select the Datacenter icon. Then check to see if there is a VSA Manager tab available. If there is a VSA Manager tab visible, the plug-in is already enabled. You can proceed to Section 3.

If you do not see this tab when the Datacenter tab is selected, select Plug-ins from the vCenter main menu, followed by Manage Plug-ins. This will open the Plug-in Manager.


Check the status of the VSA Manager. You might find that it is disabled initially. To enable it, simply right-click the VSA Manager plug-in name and select the Enable option.

This will enable the VSA Manager plug-in. You should observe the VSA Manager tab appearing in vCenter when the Datacenter icon is selected. When the VSA Manager tab is visible, you can proceed.


Section 3: Configure the VSA


To begin the configuration of the VSA, click the VSA Manager tab. You should receive a security alert in relation to the security certificate. Click Yes to proceed.

The plug-in will now load. You will be presented with the first screen of the VSA installer. We are going to do a New Installation. The installer also enables VMware vSphere vMotion (vMotion) and vSphere High Availability (VMware HA). Click Next to continue:


The next screen is the feature review screen. It has information about the VSA cluster as well as the other features that will be enabled on the cluster, such as vMotion and VMware HA.

This installer will build a VMware HA cluster from the VMware ESXi 5.0 hosts that are chosen for the VSA cluster. The installer will also enable vMotion, which will allow virtual machines to be moved between hosts in the cluster. Click Next to continue. You might have an environment with multiple datacenters. On this screen of the VSA installer, you must choose the appropriate datacenter. There is only a single datacenter (called VSA_Eval) in this configuration.

In the 1.0 release of the VSA, there can be only a single VSA cluster per vCenter Server instance. Therefore, if you want to use multiple VSAs, you must deploy multiple vCenter Servers. Click Next after you have chosen the desired datacenter.


The next screen of the VSA installer is where you choose which ESXi 5.0 hosts participate in the VSA cluster. You can choose to deploy VSA in a two-node or three-node configuration. After it has been installed, a two-node configuration cannot be updated to three-node without a complete reinstallation. During this phase of the installation, the installer carries out a host audit. If any of the hosts do not have a valid configuration, they will not be eligible for selection. In this particular example, two of the hosts in the datacenter have a CPU speed of less than 2GHz, so these cannot be chosen. Other reasons, such as not having enough physical network adaptors, not having local storage, or the fact that the host is already running virtual machines, will make an ESXi 5.0 host ineligible for selection.

Notice the unsupported hardware message. This might be due to the fact that you do not have enough physical network adaptors available on the host. After you have selected two or three supported and compliant nodes, click Next to continue.


As previously mentioned, if you create a two-node cluster, you cannot come back at a later date and expand it into a three-node configuration. There is no facility in the 1.0 release of VSA to add nodes to the cluster.

NOTE: Although this evaluation guide utilizes three ESXi hosts for the VSA cluster, you can still evaluate the VSA using only two ESXi nodes. The two-node configuration will use a special VSA cluster service installed on the vCenter Server and will require an additional IP address.

The next screen is the networking screen. At this point, refer back to the worksheet that you populated earlier with the IP addresses that you will be using for the VSA cluster. In this example, a three-node cluster has been selected for configuration. If you were setting up a two-node cluster, an additional field would appear: the IP address for the VSA cluster service. The IP addresses used here for the VSA nodes should be on the same subnet as the vCenter Server. A helpful feature of the installer is that after you provide the VSA cluster IP address, the remaining IP addresses are automatically populated with increments of that address. Therefore, if you can assign a contiguous range of IP addresses to the VSA cluster, this step becomes much easier and quicker. The following is an example using the IP address 10.20.196.115 for the VSA cluster:


If we next check one of the ESXi hosts, we see the IP addresses for the management, NFS and back-end networks automatically populated:

The vSphere feature IP address (vMotion network) will use DHCP by default. However, if you uncheck the Use DHCP flag, it will also be automatically populated with the next increment of the IP address provided. At this point you should also supply the VLAN IDs (if any) of the management VLAN and the back-end VLAN. The VLAN IDs will be propagated to the other hosts in the configuration, so you must input them only once. Select another host to verify that the VLAN IDs have been propagated and that the back-end IP address has automatically incremented.

When the networking addresses have been correctly populated, click Next.


The next screen of the installer relates to how the disks belonging to the VSA should be formatted. There are two options: initialize the disk blocks on first access or initialize all disk blocks before using them.

The respective advantages of each option are as follows:
• First access: The installation time will be quicker, but the initial performance of the device will be slower until all of the disk blocks are initialized.
• Immediately: The installation time will be longer, but there won't be any performance degradation from the appliance, because it won't need to initialize blocks on first access.
After you have made your choice, click Next. This will take you to the Ready to Install screen. From there, you can check whether the network settings are correct. IP addresses have been deliberately obfuscated in this screenshot.


If everything looks correct, click the Install button to begin the configuration of the ESXi hosts and the deployment of the appliances to create a VSA storage cluster. A popup will appear, stating that existing data will be deleted. Click Yes.

The following are the configuration steps to be carried out:
• Configure NFS, back-end and vMotion networking on all ESXi hosts.
• Configure a vSphere HA cluster with Enhanced vMotion Compatibility (EVC) to ensure that vMotion works correctly between all hosts.
• Deploy the vSphere Storage Appliances.
• Install and configure the VSA cluster.
• Synchronize the mirror copies of the datastores across different VSA members.
• Mount the datastores from the VSA cluster nodes to the ESXi 5.0 hosts.

In the task manager, you can see tasks relating to the creation of the vSphere HA cluster:


You should also be able to see the vSphere High Availability cluster being created. In addition to vSphere High Availability, EVC is enabled. This is why it is a requirement to have only hosts within the same CPU family in the VSA cluster.

You can safely ignore the warnings against the ESXi hosts. They are there because there is currently no shared storage to configure heartbeat datastores for the vSphere HA cluster. When the VSA cluster is up and running correctly, and it is presenting shared storage to all of the ESXi hosts, these warnings will disappear.

The network changes are quite significant and are carried out on all ESXi 5.0 hosts in the VSA storage cluster. The following are the main networking changes:
• VSA front-end virtual machine port group added to vSwitch0
• vSwitch1 created with the two remaining network adaptors
• VSA back-end virtual machine port group added to vSwitch1
• VSA-vMotion (feature network) added to vSwitch1

When the vSphere High Availability cluster has been created and the network has been configured successfully, the VSA installer will move on to deploying the appliances. Here you can see that the Deploying vSphere Storage Appliance step is underway. Notice in the inventory that there are now three new virtual machines in the configuration. Each of these is a VSA. There will be one VSA deployed per ESXi 5.0 host. Because this is a three-node configuration, there will be three VSAs in total.

You might ask what advantage a vSphere HA cluster gives you when you are already using VSA. vSphere High Availability will restart virtual machines on other ESXi hosts in the cluster if one of the ESXi hosts fails. So although VSA gives you availability from a storage perspective, vSphere HA gives you availability from an ESXi host perspective.


When the appliances are up and running, the cluster framework starts to initialize. Mirroring between the respective nodes in the cluster is also set up, so that if one node fails, another node can take over the duties of presenting the datastore via NFS. These are the actions that take place during the Installing VSA cluster step. There is very little to observe during this phase, so you must wait until it completes.

After the cluster framework is in place, and the datastores are successfully being mirrored, you will see the final step, Mounting the datastores:


In the vCenter Recent Tasks view, you will now start to see operations related to the mounting of the NFS volumes from the VSA onto the ESXi 5.0 hosts.

When all of this completes successfully, the VSA Installer screen should look like the following:

That completes the installation of the VSA. The next step will be to verify that you can successfully see these NFS datastores on your ESXi 5.0 hosts. Click the Close button. This will automatically place you in the VSA Manager.


Section 4: Verify that the NFS Datastores Are Available on the ESXi Hosts
Before going any further, check that the NFS datastores from each of the VSAs have been mounted to each of the ESXi 5.0 hosts. If there are three nodes in your cluster, there should be three VSAs. So there should be three NFS datastores mounted on each ESXi host. In the UI, select one of the ESXi 5.0 hosts from the vCenter inventory. Then select the Configuration tab. Next, select Storage from the Hardware list. You should observe that there are now three datastores mounted to that ESXi 5.0 host, one datastore from each of the VSAs. You can verify that they are from the VSAs by checking the IP address against the IP address that was used for NFS when installing the VSA.

NOTE: You will also observe an alert against the local datastore. This is because the VSA appliance is consuming almost all of the available storage on the local datastore. This is to be expected. Also significant is the size of the NFS share. It is approximately half the size of the local storage of the ESXi host. This is because the other half is used for mirroring purposes. This is discussed in more detail in the Considerations section.
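As an optional cross-check, the NFS mounts can also be listed from the ESXi Shell on each host, assuming shell or SSH access is enabled. Either of the following standard ESXi 5.0 commands lists the mounted NFS datastores together with the NFS server IP address, which should match the NFS addresses you assigned during the VSA installation:

# esxcli storage nfs list
# esxcfg-nas -l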


Section 5: Manage and Monitor the VSA


Let's now return to the VSA Manager. Click the Datacenter object in the vCenter inventory. Select the VSA Manager tab. The Datastores view displays the list of datastores currently being exported from the VSA cluster, and which VSA member is doing the export. Because this is a three-node configuration, there are three datastores visible, one from each VSA cluster member. If this were a two-node configuration, there would be only two datastores visible. If you select an individual datastore, its properties, including the location of the replica (mirror) of the datastore, are displayed in the lower part of the window. At this time, no virtual machines have been deployed to the datastore, so the Capacity icon shows it as completely empty.


Another view is the Appliances view. Select the Appliances button, located to the left of the center of the window. This view will give you an appliance-centric view as opposed to a datastores-centric view. Again, this is a three-node configuration, so there are three members visible. If this were a two-node setup, only two members would be displayed. Select the individual appliances to get a more detailed view in the lower part of the window:

Finally, we have the Map view. This diagrammatically represents the relationship between ESXi hosts, VSAs and the NFS datastores that are exported from the appliances and mounted on the ESXi hosts. Using the Map Relationships check-boxes to the right of the window, you can choose which objects to include in the map:


Section 6: Create a Virtual Machine on a VSA Datastore


To look at the performance of the VSA cluster during various operations and how resilient it is to failures, it is recommended that you deploy a virtual machine to one of the datastores. In this example, I have deployed a virtual machine running Microsoft Windows 2008 R2 as the guest operating system (OS). This virtual machine has two disks. One of the disks, the boot disk, is on one VSA datastore; the other disk, the data disk, is on another VSA datastore. I have also installed Iometer (http://www.iometer.org) onto the guest OS so that a certain amount of I/O load can be driven to the virtual machine's data disk.

Disk 1: Boot disk resides on VSADs-0

Disk 2: Data disk resides on VSADs-2


After Iometer is installed, launch it and set the Disk Target to the second disk. The purpose is to generate an I/O load on the second virtual machine disk, which in turn will generate I/O to the underlying datastore, VSADs-2. It doesn't matter which I/O load you choose in the Access Specifications; the objective is simply to generate I/O. Click the green flag icon to start I/O. Then go to the Results Display tab. Move the Update Frequency (seconds) slider to 2 seconds, as in this example, and you should observe I/O going to the disk as in the following screenshot:

With I/O running, we can now proceed to look at the resilience features of the VSA.


Section 7: Maintenance Mode


At this point, I/O is going to the virtual machine's second drive, which in this case is on the VSADs-2 datastore. We will now place the appliance that is exporting this datastore into maintenance mode and observe the effect on the Iometer application in the virtual machine. Once again, this exercise is being carried out on a three-node configuration. If you are using a two-node configuration, the views will differ slightly. First, determine which VSA cluster member is presenting datastore VSADs-2. Return to the VSA Manager > Datastores view.

From here we can clearly see that VSADs-2 is running on appliance VSA-2. Click the Appliances button to go to the VSA view. Next, right-click VSA-2 and select Enter Appliance Maintenance Mode.


When prompted, click Yes to confirm that you wish to put the VSA into maintenance mode:

When the VSA enters maintenance mode, the following popup appears:

After a few moments, click the Refresh Page link in the VSA Manager > Appliances view. You should observe that the status for VSA-2 changes to Offline. You should also notice that it is no longer responsible for hosting/exporting datastores and that the VSADs-2 datastore is now hosted/exported by a new appliance, VSA-1 in this case.


Next, click the Datastores button to view the status of the datastores. You should see that there are two datastores in a degraded state. This is because VSA-2 provided both the primary for one datastore and the replica/mirror for another datastore. Therefore, two datastores are affected by this appliance being placed into maintenance mode, so two datastores are degraded.

Before we placed VSA-2 into maintenance mode, we saw that the primary datastore of VSADs-2 was running on it. Now when we look at the datastores, we can see that VSADs-2 is being exported from appliance VSA-1. By placing appliance VSA-2 into maintenance mode, the replica of the VSADs-2 datastore on VSA-1 is promoted to a primary and the export is now done from VSA-1.


Let's now check the virtual machine to see whether Iometer continues to run without any issues, by clicking back on the console of the virtual machine running Iometer.

Iometer continues to run without incident. There should not have been any errors (the last indicator in the Iometer UI), showing that the transition of I/O from the primary to the replica datastore occurred without any impact to the virtual machine or to the applications running in the virtual machine. To complete this section, take the VSA-2 appliance out of maintenance mode, starting by powering it on in the vCenter inventory (placing a VSA into maintenance mode powers it off):


Use the Refresh Page link to update the VSA Manager UI. After the appliance is powered on, it does not automatically rejoin the cluster but will in fact remain in maintenance mode.

To leave maintenance mode and have the appliance rejoin the cluster, right-click it and select Exit Appliance Maintenance Mode:

Use the Refresh Page link to update the VSA Manager UI if VSA-2 isn't being updated in a timely fashion. If you have left VSA-2 in maintenance mode for a long time, you will have to wait for the data to synchronize between the primary and the mirror volumes. This can take a few minutes. The synchronization can be monitored in the vCenter task bar.

After a while, all VSAs will be back online, as will be all the datastores.


When the synchronization completes, refresh the page and observe that the responsibility for exporting datastores is now balanced across all VSAs once again.

All datastores are now back online after the primaries and the mirrors have synchronized. Two resynchronization operations must take place because two datastores (primary and replica) were affected by placing a VSA into maintenance mode.


Section 8: Availability Features of the VSA


In this final section of the evaluation guide, we cause a worst-case scenario on one of the ESXi hosts participating in the VSA cluster. This will have a catastrophic effect on the VSA running on that ESXi host. The purpose of this is to highlight how well the VSA handles this scenario. NOTE: You will need access to your ESXi server console when doing this lab, because the availability test causes an outage that will require resetting your host after the event. At this stage, Iometer should still be running in the virtual machine, sending I/O to the second disk. Verify that this is the case. If it is not running, restart it.

Now, as we did previously, check which datastore the virtual machine is using for its disks. This can be found by selecting the virtual machine in the vCenter inventory and choosing the Edit Settings option. In this example, the first disk is on datastore VSADs-0.


The second disk is on VSADs-2. This is the datastore that is receiving I/O from Iometer running in the virtual machine.

The next step is to determine which VSA is exporting the datastores. Return to the VSA Manager UI by clicking the datacenter object in the vCenter inventory and then selecting the VSA Manager tab. Select the Appliances button, check the Hosted Datastores column and determine which VSA is exporting the datastores used by the virtual machine.

In this setup, VSADs-0 is exported from VSA-1, and VSADs-2 is exported from VSA-2. Now check the Host column and find which ESXi host each VSA resides on. In this setup, appliance VSA-1 is on host h03-h380-15.pml.local, and appliance VSA-2 is on h03-h380-16.pml.local. Because it is datastore VSADs-2 that is receiving I/O from Iometer, we will create an outage on the ESXi host on which the appliance that exports this datastore resides. Because VSADs-2 is exported from VSA-2, in this setup the outage will be created on the ESXi host h03-h380-16.pml.local.


Now we will crash this ESXi host and see how the VSA handles it. To cause the outage, use PuTTY or an equivalent tool to SSH onto the ESXi host h03-h380-16.pml.local. If SSH is not enabled, you can enable it via the Configuration > Security Profile settings in the vSphere Client. Alternatively, you can connect directly to the server console and log in that way. Use the root credentials to log in.

To verify that this is indeed the correct ESXi host, we can use the esxtop command, followed by the v option, to display the virtual machines running on this ESXi host. For this example, we should observe that appliance VSA-2 is the only virtual machine on this ESXi host. If any other virtual machines are on this ESXi host (for instance, the virtual machine that is running Iometer), move them to another ESXi host using vMotion. Remember that the VSA installer also built a vMotion network between all of your ESXi hosts. For simplicity, we want to show how the VSA handles losing a VMDK as opposed to losing a whole virtual machine, which would then involve vSphere HA restarting the virtual machine on another host. That is not something we want to demonstrate in this evaluation guide. This is all you want to see from the esxtop v option for the purposes of this exercise: the appliance that is presenting a datastore on which the Iometer virtual machine is sending I/O:

Now we are ready to initiate the outage on the ESXi host. Type q to exit esxtop.
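As an optional, non-interactive alternative to esxtop, esxcli can also list the virtual machines running on the host. This is a standard ESXi 5.0 command, shown here only as another way to confirm that the VSA appliance is the sole virtual machine on the host you are about to crash:

# esxcli vm process list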


Keep your virtual machine running Iometer. To create a crash on the ESXi host, the following command can be used from the PuTTY root SSH session:

# vsish -e set /reliability/crashMe/Panic 1

This will cause an outage on the ESXi host on which it is run. We will then see how the VSA handles this outage. You should observe a slight degradation in performance while the VSA cluster switches from the primary to the mirror datastore. But after the failover has successfully taken place, I/O should return to its previous performance.

NOTE: If you are uncomfortable causing a crash such as this on the host, you can try an alternate method, such as removing power. The objective here is to show how the VSA can handle such a catastrophic event.

There will be no outage on the Iometer virtual machine, because the VSA cluster handles the outage by exporting the mirror copy of the datastore from a different VSA in the cluster (very similar to what we saw with maintenance mode). In the vCenter inventory, the ESXi host (h03-h380-16.pml.local) and the VSA-2 appliance appear to be not responding and disconnected, respectively:

However, there has been no adverse impact on the virtual machine that was using the datastore from that VSA. The failover mechanisms built into VSA can handle such catastrophic failures. Check the VSA Manager UI once more. Again, because we have lost a complete VSA, two datastores are in a degraded state. You should observe the following from VSA and datastore perspectives:


VSA-2 is now in an offline state, and presentation of its datastore VSADs-2 has again moved to another appliance.

Verify that Iometer is still running in the virtual machine and is sending I/O to the second disk with no errors.

Even with an ESXi host failure that takes a VSA member out of the cluster, there is no adverse effect on the NFS datastores or on the virtual machines running on those datastores.


Conclusion
We have now completed all the tasks in this evaluation guide. Had this failure been a catastrophic, nonrecoverable hardware error, the VSA provides a procedure that enables you to replace the broken server with a new server without interrupting the datastores on the remaining cluster members. You can now reset your server and allow it to reboot. It will rejoin the cluster and automatically restart VSA-2. After the primary and mirror volumes have synchronized, the whole VSA cluster will be back online.

In this evaluation guide, you have seen the following:
• The VSA using local storage on ESXi hosts and presenting it as shared storage in the form of NFS datastores
• The simple and easy-to-follow installation steps
• The availability of features such as vSphere HA and vMotion, which are automatically configured by the VSA installer
• The management interface for the VSA
• The resilience features of the VSA when an outage occurs

Remember to revisit the Considerations section if you are deciding whether to deploy the VSA in a production environment. Also, refer to the Release Notes, the VSA Installation and Configuration Guide and the VSA Administration Guide, as these provide additional guidance on how to correctly deploy a VSA.

This completes the evaluation guide. Although they are outside the scope of this document, you might also like to use this setup to test out vMotion and vSphere HA, both of which are automatically configured by the VSA installer.

Product Documentation
For detailed information regarding installation, configuration, administration and usage of VMware vSphere, refer to the online documentation: http://www.vmware.com/support/pubs/vs_pubs.html.
TASKS / DOCUMENTATION REQUIRED

Install and configure vCenter Server 5.0: VMware ESXi Installable and vCenter Server Setup Guide; VMware ESXi Embedded and vCenter Server Setup Guide
Install and configure ESXi 5.0: VMware ESXi Installable and vCenter Server Setup Guide; VMware ESXi Embedded and vCenter Server Setup Guide
Install and configure VSA 1.0: HCL; VSA Release Notes; VSA Installation and Configuration Guide
Administrate VSA 1.0: VSA Administration Guide


Help and Support During the Evaluation


This guide provides an overview of the steps required to ensure a successful evaluation of VMware vSphere Storage Appliance. It is not meant to substitute for product documentation. Refer to the online product documentation for VSA for more detailed information (see the following for links). You can also consult the online VMware knowledge base if you have any additional questions. If you require further assistance, contact a VMware sales representative or channel partner.

VMware vSphere and vCenter Resources


Product documentation: http://www.vmware.com/support/pubs/
Online support: http://www.vmware.com/support/
Support offerings: http://www.vmware.com/support/services
Education services: http://mylearn1.vmware.com/mgrreg/index.cfm
Support knowledge base: http://kb.vmware.com

VMware Contact Information

For additional information or to purchase VMware vSphere, the VMware global network of solutions providers is ready to assist you. If you would like to contact VMware directly, you can reach a sales representative at 1-877-4VMWARE (650-475-5000 outside North America) or email sales@vmware.com. When emailing, include the state, country and company name from which you are inquiring. You can also visit http://www.vmware.com/vmwarestore/.

Providing Feedback

We appreciate your feedback on the material included in this guide. In particular, we would be grateful for any guidance on the following topics:
• How useful was the information in this guide?
• What other specific topics would you like to see covered?
• Overall, how would you rate this guide?
Send your feedback to the following address: tmdocfeedback@vmware.com, with "VMware vSphere Storage Appliance Evaluation Guide" in the subject line. Thank you for your help in making this guide a valuable resource.

About the Author


Cormac Hogan is a senior technical marketing manager in the Infrastructure Product Marketing group at VMware. He was one of the first VMware employees at our EMEA headquarters in Cork, Ireland, back in April 2005. He spent two years as the technical support escalation engineer for storage before moving into a support readiness training role, where he developed training materials and delivered training to technical support and VMware support partners. He is currently responsible for technical marketing of core vSphere storage technologies, including the VSA.

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com

Copyright © 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-WP-vSPHR-STOR-APP-EVAL-USLET-101-WEB
