Foundation 1.2
18-Feb-2015
Notice
Copyright
Copyright 2015 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions

  Convention            Description
  --------------------  ------------------------------------------------------
  variable_value
  ncli> command
  user@host$ command
  root@host# command    The commands are executed as the root user in the
                        hypervisor host (vSphere or KVM) shell.
  > command
  output

Default Credentials

  Interface             Target                  Username       Password
  --------------------  ----------------------  -------------  -----------
                        Nutanix Controller VM   admin          admin
  vSphere client        ESXi host               administrator  nutanix/4u
                        ESXi host               root           nutanix/4u
                        KVM host                root           nutanix/4u
  SSH client            Nutanix Controller VM   nutanix        nutanix/4u
Version
Last modified: February 18, 2015 (2015-02-18 15:23:03 GMT-8)
Contents
Release Notes................................................................................................................... 5
Release 1.2.......................................................................................................................................... 5
1: Overview................................................................................................... 6
Imaging Nodes..................................................................................................................................... 6
Summary: Imaging a Cluster.................................................................................................... 6
Summary: Imaging a Node.......................................................................................................7
Supported Hypervisors.........................................................................................................................7
4: Imaging a Node..................................................................................... 25
Installing a Hypervisor....................................................................................................................... 25
Installing ESXi......................................................................................................................... 28
Installing Hyper-V.................................................................................................................... 29
Installing the Controller VM............................................................................................................... 33
5: Foundation Portal..................................................................................36
Release Notes
Release 1.2
This release includes the following changes and enhancements:
• Hypervisor imaging support has been expanded to include Hyper-V and KVM (as well as ESXi). Hyper-V imaging is limited to a maximum of 20 nodes.
• When using a flat switch (no routing tables), a multi-homing option has been added that allows you to specify a production IP configuration (addresses across subnets) without being on the production network. This allows you to use different subnets for IPMI, hypervisor, and Controller VM, and to run cluster create with the intended production IPs during imaging. Previous Foundation releases limited imaging to a single subnet and forced you to use cluster_init.html after placing the machine in the production rack.
• The ability to specify Controller VM IP information has been added.
• The ability to specify the amount of Controller VM RAM has been added. This is especially useful if you want to use advanced features such as deduplication.
• Support has been added to create a cluster after imaging the nodes. This allows you to use Foundation to perform the cluster configuration steps previously done through the cluster_init.html page.
• Support has been added to image bare metal nodes by using the MAC address of the IPMI interface.
• A ping test feature has been added that lets you ping the specified IPs to check for potential conflicts before starting an imaging session.
• Foundation can now image up to 20 nodes simultaneously. Previous releases were limited to a maximum of eight nodes simultaneously.
• Clicking the aggregate progress bar at the top of the progress monitor page now displays the Foundation system.log contents in the log pane.
• Online help documentation is now available from the Foundation GUI (requires Internet access).
Foundation version 1.2 requires the download of a new Foundation VM, which is now distributed in the
form of an OVF. In addition, Phoenix version 1.2 ISOs are compatible only with Foundation 1.2 VMs or
software packages. Phoenix version 1.1 and 1.0 ISOs are not supported with Foundation 1.2.
1: Overview
Nutanix installs the Nutanix Operating System (NOS) Controller VM and the KVM hypervisor at the factory
before shipping each node to a customer. To use a different hypervisor (ESXi or Hyper-V) on factory nodes
or to use any hypervisor on bare metal nodes, the nodes must be imaged in the field. This guide provides
step-by-step instructions on how to image nodes (install a hypervisor and then the NOS Controller VM)
after they have been physically installed at a site.
Note: Only Nutanix sales engineers, support engineers, and partners are authorized to perform
a field installation. Field installation can be used to cleanly install new nodes (blocks) in a cluster
or to install a different hypervisor on a single node. It should not be used to upgrade the
hypervisor or switch hypervisors of nodes in an existing cluster. (You can use Foundation to
re-image nodes in an existing cluster that you no longer want by first destroying the cluster.)
Imaging Nodes
A field installation can be performed for a cluster (that is, multiple nodes, which can be configured as one or more clusters) or for a single node.
Supported Hypervisors
This table lists the hypervisor releases that can be installed on Nutanix models through this method.
  Model (Series)   ESXi (1)   Hyper-V (2)   KVM (3)
  ---------------  ---------  ------------  --------
  NX-1000
  NX-3050
  NX-6000
  NX-7000
  NX-2000
  NX-3000

(1) The supported ESXi releases are 5.0 U2 and U3, 5.1 U1 and U2, and 5.5.
(2) The supported Hyper-V release is Server 2012 R2.
(3) KVM support is transparent because it is included automatically as part of the Phoenix ISO.
2: Preparing Installation Environment
Imaging a cluster in the field requires first installing certain tools and setting the environment to run those
tools.
Video: Click here to see a video demonstration of this procedure (MP4 format). This demonstrates
the procedure for Foundation release 1.1. Some steps in release 1.1 differ from the current
procedure described here.
Installation is performed from a workstation (laptop or desktop machine) with access to the IPMI interfaces
of the nodes in the cluster. Configuring the environment for installation requires setting up network
connections, installing Oracle VM VirtualBox on the workstation, downloading ISO images, and using
VirtualBox to configure various parameters. To prepare the environment for installation, do the following:
1. Connect the first 1GbE network interface of each node (middle RJ-45 interface) to a 1GbE Ethernet
switch. The IPMI LAN interfaces of the nodes must be in failover mode (factory default setting).
Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing
tables). A flat switch is often recommended to protect against configuration errors that could
affect the production environment. Foundation includes a multi-homing feature that allows you
to image the nodes using production IP addresses despite being connected to a flat switch (see
Imaging a Cluster on page 14).
2. Connect the installation workstation (laptop or desktop machine used for this installation) to the same
1GbE switch as the nodes.
The installation workstation requires at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of
disk space (preferably SSD), and a physical (wired) network adapter.
3. Go to the Foundation portal (see Foundation Portal on page 36) and download the following files to a
temporary directory on the installation workstation.
Foundation_VM-version#.ovf. This is the Foundation VM OVF configuration file for the version#
release, for example Foundation_VM-1.2.ovf.
Foundation_VM-version#-disk1.vmdk. This is the Foundation VM VMDK file for the version# release,
for example Foundation_VM-1.2-disk1.vmdk.
Oracle VM VirtualBox is a free open source tool used to create a virtualized environment on the
workstation.
4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options.
See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).
5. Create a new folder called VirtualBox VMs in your home directory.
On a Windows system, this is typically C:\Users\user_name\VirtualBox VMs.
6. Copy the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files to the VirtualBox
VMs folder that you created in step 5.
7. Start Oracle VM VirtualBox.
b. Click OK when prompted to Open Autorun Prompt and then click Run.
c. Enter the root password (nutanix/4u) and then click Authenticate.
d. After the installation is complete, press the return key to close the VirtualBox Guest Additions
installation window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.
f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.
Note: A reboot is necessary for the changes to take effect.
g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on
the VirtualBox window for the Foundation VM.
14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to
get an IP address from the DHCP server.
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as
follows:
Note: Normally, the Foundation VM needs to be on a public network in order to copy selected
ISO files to the Foundation VM in the next two steps. This might require setting a static IP
address now and setting it again when the workstation is on a different (typically private)
network for the installation (see Imaging a Cluster on page 14).
a. Double click the set_foundation_ip_address icon on the Foundation VM desktop.
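The DHCP check in this step can also be scripted. The sketch below assumes the interface name eth0 and the CentOS-style "inet addr:" ifconfig output used by the Foundation VM; adjust both for other environments.

```shell
# Parse the inet address for an interface from ifconfig output.
# Assumes the older "inet addr:192.168.x.x" output format; the
# interface name eth0 is an example.
get_ip() {
    ifconfig "$1" 2>/dev/null | awk '/inet addr:/ {sub("addr:", "", $2); print $2; exit}'
}

ip_addr=$(get_ip eth0)
if [ -n "$ip_addr" ]; then
    echo "Foundation VM address: $ip_addr"
else
    echo "No DHCP lease; configure a static IP before continuing."
fi
```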
VMware: http://www.vmware.com/support.html
Microsoft (Hyper-V free): http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx
MSDN (subscription): http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052
Hypervisor Version: ESXi 5.0 U2
  File Name: VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso
  MD5 Sum:   fa6a00a3f0dd0cd1a677f69a236611e2

Hypervisor Version: ESXi 5.0 U3
  File Name: VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso
  MD5 Sum:   391496b995db6d0cf27f0cf79927eca6

Hypervisor Version: ESXi 5.1 U1
  File Name: VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso
  MD5 Sum:   2cd15e433aaacc7638c706e013dd673a

Hypervisor Version: ESXi 5.1 U2
  File Name: VMware-VMvisor-Installer-5.1.0.update02-1483097.x86_64.iso
  MD5 Sum:   6730d6085466c513c04e74a2c2e59dc8

Hypervisor Version: ESXi 5.5
  File Name: VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso
  MD5 Sum:   9aaa9e0daa424a7021c7dc13db7b9409

Hypervisor Version: Windows Server 2012 R2 (datacenter)
  File Name: en_windows_server_2012_r2_vl_x64_dvd_3319595.iso
  MD5 Sum:   fb101ed6d7328aca6473158006630a9d

Hypervisor Version: Windows Server 2012 R2 (datacenter)
  File Name: SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-3_MLF_X19-53588.ISO
  MD5 Sum:   b52450dd5ba8007e2934f5c6e6eda0ce

Hypervisor Version: Windows Server 2012 R2 (free)
  File Name: 9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO
  MD5 Sum:   9c9e0d82cb6301a4b88fd2f4c35caf80
  SHA1:      A73FC07C1B9F560F960F1C4A5857FAC062041235
3: Imaging a Cluster
This procedure describes how to install a selected hypervisor and the NOS Controller VM on all the new
nodes in a cluster from an ISO image on a workstation.
Before you begin:
Physically install the Nutanix cluster at your site. See the Physical Installation Guide for your model type
for installation instructions.
Set up the installation environment (see Preparing Installation Environment on page 8).
Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will
get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in
the BIOS.
Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to time out during the imaging process. Therefore, disable STP before starting Foundation.
Have ready the appropriate naming, IP address, and netmask information needed for installation. You
can use the following table to record the information prior to installation.
Note: The Foundation IP address set previously assumed a public network in order to
download the appropriate files. If you are imaging the cluster on a different (typically private)
network in which the current address is no longer correct, repeat step 15 in Preparing
Installation Environment on page 8 to configure a new static IP address for the Foundation VM.
  Parameter                                           Value
  --------------------------------------------------  -----------------
  Global Parameters
    IPMI netmask
    IPMI gateway (IP address)
    IPMI username (default is ADMIN)
    IPMI password (default is ADMIN)
    Hypervisor netmask
    Hypervisor gateway
    Hypervisor name server (DNS server IP address)
    CVM (Controller VM) netmask
    CVM gateway
    CVM memory (16 GB by default)
  Foundation VM Parameters
    IPMI IP address
    Hypervisor IP address
    CVM IP address
  Node-Specific Parameters
    Starting IP address for IPMI address range
    Starting IP address for hypervisor address range
    Starting IP address for CVM address range

Imaging a Cluster | Field Installation Guide | Foundation | 14
To install the hypervisor and Controller VM on the cluster nodes, do the following:
Video: Click here to see a video demonstration of this procedure (MP4 format).
1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.
Note: See Preparing Installation Environment on page 8 if Oracle VM VirtualBox is not started
or the Foundation VM is not running currently. You can also start the Foundation GUI by
opening a web browser and entering http://localhost:8000/gui/index.html.
The Foundation screen appears. The screen contains three sections: global parameters at the top,
node information in the middle, and ISO image information at the bottom. Upon opening the Foundation
screen, Foundation begins searching the network for unconfigured Nutanix nodes and displays
information in the middle section about the discovered nodes. The discovery process can take several
minutes (or longer) if there are many nodes on the network. Wait for the discovery process to complete
before proceeding.
Note: Foundation discovers unconfigured nodes only. If you are running Foundation on a
preconfigured block with an existing cluster and you want Foundation to image those nodes,
you must first destroy the existing cluster in order for Foundation to discover those nodes.
Note: To display the help documentation in a separate browser tab or window, select Help from the gear icon pull-down list at the top right of the screen. (Select About to display the Foundation version.) You need Internet access to view the help documentation. If you cannot access the help contents, either view Foundation from your host browser if it has Internet access or copy the help link URL to a browser on any system with Internet access.
d. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password as you type it.
e. Hypervisor Netmask: Enter the hypervisor netmask value.
f. Hypervisor Gateway: Enter an IP address for the gateway.
g. Hypervisor Name Server: Enter the IP address of the DNS name server.
h. CVM Netmask: Enter the Controller VM netmask value.
i. CVM Gateway: Enter an IP address for the gateway.
j. CVM Memory: Select a memory size for the Controller VM from the pull-down list.
This field is set initially to default. (The default amount varies according to the node model type.)
The other options allow you to specify a memory size of 16 GB, 24 GB, or 32 GB. The default setting
represents the recommended amount for the model type. Assigning more memory than the default
might be appropriate when using advanced features such as deduplication.
Note: Use the default memory setting unless Nutanix support recommends a different
setting.
3. In the upper middle section of the screen, configure the installation as follows:
a. If you are using a flat switch (no routing tables) for installation, check the Multi-Homing box.
The Multi-Homing line appears when the box is checked (and disappears when the box is
unchecked). The purpose of the multi-homing feature is to allow the Foundation VM to configure final
production IP addresses for IPMI, hypervisor, and Controller VM while using an unmanaged switch.
Enter unique IP addresses for the Foundation VM to use for communicating with IPMI,
hypervisor, and Controller VM components respectively. Make sure that the IPs are on the
matching IPMI, hypervisor, and Controller VM subnets configured in the top section of the screen
(step 2).
If this box is not checked, Foundation requires that either all addresses are on the same subnet
or that the configured IPMI, hypervisor, and Controller VM IP addresses are routable.
b. To create a cluster after imaging the nodes, click the Create cluster box.
Four Create Cluster lines appear when the box is checked (and disappear when the box is
unchecked). Enter the following information in the indicated fields:
c. To image one or more bare metal nodes (that is, nodes without any NOS software), click the Add
bare metal nodes link.
An Add Bare Metal line appears when this option is selected (and disappears when it is deselected). Enter the following information in the indicated fields:
How many blocks?: Enter the number of blocks to be added that contain bare metal nodes.
How many nodes per block?: Enter the number of bare metal nodes in each block.
Click the Add button to the right of the fields. This will add that number of blocks (and nodes per
block) to the node listing (see next step).
d. If for any reason you want to look again for unconfigured nodes, click the Retry discovery link.
This repeats the discovery process that occurred when you opened the Foundation GUI.
Note: You can retry discovery and reset all field entries to the default state by selecting Reset Configuration from the gear icon pull-down list.
e. To check which IPMI IP addresses are active and reachable, click the Ping Scan link.
This does a ping test on each IP address in the IPMI, hypervisor, and CVM IP columns (see next step). A success or failure icon appears next to each field to indicate the ping test result for that node. This feature is most useful when imaging a previously unconfigured set of nodes: none of the selected IPs should be pingable, and successful pings usually indicate a conflict with the existing infrastructure.
Note: When re-imaging a configured set of nodes using the same network configuration,
failure to ping indicates a networking issue.
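The Ping Scan behaves like a scripted reachability sweep. The sketch below illustrates the idea only — the IP addresses and ping options are examples, not Foundation's actual implementation:

```shell
# Report whether each planned address already answers on the network.
# For a previously unconfigured set of nodes every address should be
# free; a response usually means a conflict with existing infrastructure.
check_ip() {
    if ping -c 1 -W 1 "$1" >/dev/null 2>&1; then
        echo "$1 responds -- possible conflict"
    else
        echo "$1 is free"
    fi
}

for ip in 192.168.20.101 192.168.20.102 192.168.20.103; do
    check_ip "$ip"
done
```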
4. In the lower middle section of the screen, configure the nodes as follows:
This section displays information about the discovered nodes. The size of this section varies and can
be quite large when many blocks are discovered. It includes columns for the block ID, node, IPMI
Mac address, IPMI IP address, hypervisor IP address, CVM IP address, and hypervisor host name. A
section is displayed for each discovered block with lines for each node in that block. If you added bare
metal blocks in the previous step, those blocks also appear.
a. If there are discovered blocks you do not want to image, uncheck the box in the Block ID column for
those blocks.
All discovered blocks (and bare metal blocks) are checked by default. Foundation will image all
checked blocks. You can exclude individual nodes by unchecking the box in the Node field for those
nodes. To exclude and remove a block from the display, click the red X on the far right.
b. If you configured bare metal nodes, enter the MAC address of the IPMI interface for each node in
the IPMI MAC Address field.
This field is editable for bare metal nodes only; a value of N/A appears for all other nodes. The MAC
address of the IPMI interface normally appears on a label on the back of each node. (Make sure you
enter the MAC address from the label that starts with "IPMI:", not the one that starts with "LAN:".)
The MAC address appears in the standard form of six two-digit hexadecimal numbers separated by
colons, for example 00:25:90:D9:01:98.
Caution: Any existing data on the node will be destroyed during imaging. If you are using
the bare metal option to re-image a previously used node, do not proceed until you have
saved all the data on the node that you want to keep.
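A quick format check can catch typos in hand-entered MAC addresses before you start imaging. This helper is purely illustrative and not part of Foundation:

```shell
# Return success when the argument is six colon-separated two-digit
# hex pairs, e.g. 00:25:90:D9:01:98 (the form shown on the IPMI label).
is_valid_mac() {
    echo "$1" | grep -Eq '^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'
}

is_valid_mac "00:25:90:D9:01:98" && echo "MAC format OK"
```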
To specify the IPMI addresses manually, go to the line for each node and enter (or update) the IP
address in that field.
To specify the IPMI addresses automatically, enter a starting IP address in the top line of the IPMI
IP column. The entered address is assigned to the IPMI port of the first node, and consecutive
IP addresses (starting from the entered address) are assigned automatically to the remaining
nodes. Discovered nodes are sorted first by block ID and then by position, so IP assignments are
sequential. If you do not want all addresses to be consecutive, you can change the IP address for
specific nodes by updating the address in the appropriate fields for those nodes.
Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255,
because such addresses are commonly reserved by network administrators.
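The automatic assignment described above can be sketched as follows. The subnet prefix and starting octet are example values, and this mirrors only the documented skip rule, not Foundation's actual code:

```shell
# Emit `count` consecutive host addresses starting at `octet`,
# skipping host octets 0, 1, 254, and 255, which are commonly
# reserved by network administrators.
next_ips() {
    prefix=$1; octet=$2; count=$3
    while [ "$count" -gt 0 ] && [ "$octet" -le 255 ]; do
        case $octet in
            0|1|254|255) ;;                          # reserved ending: skip
            *) echo "$prefix.$octet"; count=$((count - 1)) ;;
        esac
        octet=$((octet + 1))
    done
}

next_ips 192.168.20 101 4
```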
A host name is automatically generated for each host (NTNX-unique_identifier). If these names are acceptable, do nothing in this field.
Caution: Windows computer names (used in Hyper-V) have a 15 character limit. The
automatically generated names are longer than 15 characters, which would result in the
same truncated name for multiple hosts in a Windows environment. Therefore, do not
use the automatically generated names when the hypervisor is Hyper-V.
To specify the host names manually, go to the line for each node and enter the desired name in
that field.
To specify the host names automatically, enter a base name in the top line of the Hypervisor
Hostname column. The base name with a suffix of "-1" is assigned as the host name of the first
node, and the base name with "-2", "-3", and so on is assigned automatically as the host names
of the remaining nodes. You can specify different names for selected nodes by updating the entry
in the appropriate field for those nodes.
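The base-name expansion works like the sketch below. The base name and node count are examples — remember the 15-character limit on Windows computer names when the hypervisor is Hyper-V:

```shell
# Generate host names base-1, base-2, ... for `count` nodes, the
# way the top-line base name is expanded across the remaining nodes.
gen_names() {
    base=$1; count=$2; i=1
    while [ "$i" -le "$count" ]; do
        echo "${base}-${i}"
        i=$((i + 1))
    done
}

gen_names NTNX-HV 3
```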
g. If you enabled cluster create (checked the Create cluster box), do one of the following in the
Cluster Create column:
Note: This field sets which nodes should be included in the cluster. Check boxes appear in
this field only when the Create cluster box is checked.
To select nodes individually, check the box for each node to be included in the cluster.
To select all nodes, check the box at the top of the column. You can de-select a specific node by
unchecking the box for that node.
a. In the Phoenix ISO Image field, select the Phoenix ISO image you downloaded previously from the
pull-down list (see Preparing Installation Environment on page 8).
Note: Click the Refresh link to display the current list of available images in the ~/foundation/isos/[phoenix|hypervisor] folder. If the desired Phoenix or hypervisor ISO image you downloaded is not listed, it might have been downloaded to the wrong directory (see Preparing Installation Environment on page 8).
b. In the Hypervisor ISO Image field, select the hypervisor ISO image you downloaded previously
from the pull-down list (see Preparing Installation Environment on page 8).
Note: A hypervisor ISO is required to install ESXi or Hyper-V, but KVM is included in the
Phoenix ISO. To use KVM, select KVM (no ISO required) from the pull-down list.
6. When all the fields are correct, click the Run Installation button at the bottom of the screen.
The imaging process begins. Nodes are imaged in parallel, and the imaging process takes about 45
minutes.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes.
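As a rough planning aid, total imaging time can be estimated from the 45-minutes-per-group figure quoted above (actual times vary):

```shell
# Estimate total imaging time: nodes are processed in groups of up
# to 20, each group taking roughly 45 minutes.
estimate_minutes() {
    nodes=$1
    groups=$(( (nodes + 19) / 20 ))
    echo $(( groups * 45 ))
}

estimate_minutes 48   # 48 nodes -> three groups
```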
First, the IPMI port addresses are configured. If IPMI port addressing is successful, the nodes are
imaged. The IPMI port configuration processing can take several minutes or longer depending on the
size of the cluster. You can watch server progress by clicking on the aggregate progress bar at the top,
which displays the service.log contents in the pane on the right of the screen.
When processing moves to node imaging (and subsequent cluster creation if enabled), the GUI displays
dynamic status messages and a progress bar for each node. A blue bar indicates good progress; a red
bar indicates a problem. Processing messages appear during each stage. Click on the progress bar for
a node to display the log file for that node (on the right). Click the Refresh link to refresh the displayed
log file contents.
When processing is complete, a green check mark appears next to the node name if IPMI configuration and imaging (and cluster creation) were successful, or a red X appears if they were not. At this point, do one of the following:
Status: There is a green check mark next to every node. This means IPMI configuration and imaging (both hypervisor and NOS Controller VM) across all the nodes in the cluster were successful, and cluster creation was successful (if enabled).
Status: At least one node has a red X next to the IPMI address field. This means the installation failed at the IPMI configuration step. To correct this problem, see Fixing IPMI Configuration Problems on page 21.
Status: At least one node has a red X next to the hypervisor address field. This means IPMI configuration was successful across the cluster but imaging failed. The default per-node installation timeout is 30 minutes, so you can expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem within that amount of time. To correct this problem, see Fixing Imaging Problems on page 22.
2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button
at the bottom of the screen.
3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.
4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at
the bottom of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those
nodes and continue to the imaging step for the other nodes by clicking the Proceed button. In this case
you must configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP
Address on page 40).
In the following example, a node failed to image successfully because it exceeded the
installation timeout period. (This was because the IPMI port cable was disconnected during
installation.) The progress bar turned red and a message about the problem was written to
the log.
Clicking the Back to Configuration link at the top redisplays the original Foundation
screen updated to show 192.168.20.102 failed to image successfully. After fixing the
problem, click the Image Nodes button to image that node again. (You can also retry
imaging by clicking the Retry Imaging Failed Nodes link at the top of the status bar
page.)
4: Imaging a Node
This procedure describes how to install the NOS Controller VM and selected hypervisor on a new or
replacement node from an ISO image on a workstation (laptop or desktop machine).
Before you begin: If you are adding a new node, physically install that node at your site. See the Physical
Installation Guide for your model type for installation instructions.
Imaging a new or replacement node can be done either through the IPMI interface (network connection
required) or through a direct attached USB (no network connection required). In either case the installation
is divided into two steps:
1. Install the desired hypervisor version (see Installing a Hypervisor on page 25).
2. Install the NOS Controller VM and provision the hypervisor (see Installing the Controller VM on
page 33).
Installing a Hypervisor
This procedure describes how to install a hypervisor on a single node in a cluster in the field.
Note: This procedure is for ESXi or Hyper-V only. It is not needed for KVM, because KVM is
included in the Phoenix ISO (see Installing the Controller VM on page 33).
To install a hypervisor on a new or replacement node in the field, do the following:
1. Connect the IPMI port on that node to the network.
A 1 or 10 GbE port connection is not required for imaging the node.
2. Assign an IP address (static or DHCP) to the IPMI interface on the node.
To assign a static address, see Setting IPMI Static IP Address on page 40.
3. Download the desired hypervisor ISO image (ESXi or Hyper-V) to a temporary folder on a workstation.
Customers must provide the ESXi or Hyper-V ISO image; it is not provided by Nutanix. Check with your
VMware or Microsoft representative, or download an ISO image from a VMware or Microsoft support
site:
VMware: http://www.vmware.com/support.html
Microsoft (Hyper-V free): http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx
MSDN (subscription): http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052
Hypervisor Version: ESXi 5.0 U2
  File Name: VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso
  MD5 Sum:   fa6a00a3f0dd0cd1a677f69a236611e2

Hypervisor Version: ESXi 5.0 U3
  File Name: VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso
  MD5 Sum:   391496b995db6d0cf27f0cf79927eca6

Hypervisor Version: ESXi 5.1 U1
  File Name: VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso
  MD5 Sum:   2cd15e433aaacc7638c706e013dd673a

Hypervisor Version: ESXi 5.1 U2
  File Name: VMware-VMvisor-Installer-5.1.0.update02-1483097.x86_64.iso
  MD5 Sum:   6730d6085466c513c04e74a2c2e59dc8

Hypervisor Version: ESXi 5.5
  File Name: VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso
  MD5 Sum:   9aaa9e0daa424a7021c7dc13db7b9409

Hypervisor Version: Windows Server 2012 R2 (datacenter)
  File Name: en_windows_server_2012_r2_vl_x64_dvd_3319595.iso
  MD5 Sum:   fb101ed6d7328aca6473158006630a9d

Hypervisor Version: Windows Server 2012 R2 (datacenter)
  File Name: SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-3_MLF_X19-53588.ISO
  MD5 Sum:   b52450dd5ba8007e2934f5c6e6eda0ce

Hypervisor Version: Windows Server 2012 R2 (free)
  File Name: 9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO
  MD5 Sum:   9c9e0d82cb6301a4b88fd2f4c35caf80
  SHA1:      A73FC07C1B9F560F960F1C4A5857FAC062041235
Installing ESXi
Before you begin: Complete Installing a Hypervisor on page 25.
1. Click Continue at the installation screen and then accept the end user license agreement on the next
screen.
Installing Hyper-V
Before you begin: Complete Installing a Hypervisor on page 25.
1. Press any key when the Press any key to boot from CD or DVD prompt appears.
2. Select Windows Setup [EMS Enabled] in the Windows Boot Manager screen.
Note: Do not click the Install now button. It will be used later in the procedure.
b. Find the disk in the displayed list that is about 60 GB (only one disk will be that size). Select that disk
and then run the clean command:
select disk number
clean
c. Create and format a primary partition (size 1024 and file system fat32).
create partition primary size=1024
select partition 1
format fs=fat32 quick
d. Create and format a second primary partition (default size and file system ntfs).
create partition primary
select partition 2
format fs=ntfs quick
e. Assign the drive letter "C" to the DOM install partition volume.
list volume
This displays a table of logical volumes and their associated drive letter, size, and file system type.
Locate the volume with an NTFS file system and size of approximately 50 GB. If this volume (which
is the DOM install partition) is drive letter "C", go to the next step.
Otherwise, do one of the following:
If drive letter "C" is currently assigned to another volume, enter the following commands to
remove the current "C" drive volume and reassign "C" to the DOM install partition volume:
select volume cdrive_volume_id#
remove
select volume dom_install_volume_id#
assign letter=c
If drive letter "C" is not currently assigned, enter the following commands to assign "C" to the
DOM install partition volume:
select volume dom_install_volume_id#
assign letter=c
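For reference, the partitioning in steps b through d maps onto a single diskpart script that can be run with diskpart /s <file>. This is a sketch only: the disk number (1 here) is a placeholder for the ~60 GB disk reported by list disk on your node, and the drive-letter assignment in step e is left interactive because it depends on the current "C" assignment.

```
rem dom-partition.txt -- run with: diskpart /s dom-partition.txt
rem Replace "1" with the number of the ~60 GB disk shown by "list disk".
select disk 1
clean
rem Size is in MB: this creates the 1 GB FAT32 partition.
create partition primary size=1024
select partition 1
format fs=fat32 quick
rem Remaining space becomes the NTFS DOM install partition.
create partition primary
select partition 2
format fs=ntfs quick
```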
9. The language selection screen reappears. Again, just click the Next button.
10. The install screen reappears. This time click the Install now button.
11. In the operating system screen, select Windows Server 2012 Datacenter (Server Core Installation)
and then click the Next button.
This causes a reboot and the firstboot script to run, after which the host will reboot two more times.
This process can take substantial time (possibly 15 minutes) without any progress indicators. To monitor
progress, log into the VM after the initial reboot and enter the command notepad D:\first_boot.log .
This displays a (static) snapshot of the log file. Repeat this command as desired to see an updated
version of the log file.
Note: A d:\firstboot_fail file appears when this process fails. If that file is not present, the
process is continuing (if slowly).
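As an alternative to repeatedly reopening Notepad, the log can be followed continuously; a sketch assuming PowerShell is available on the Server Core host:

```
powershell -Command "Get-Content D:\first_boot.log -Tail 20 -Wait"
```

This prints the last 20 lines and then streams new lines as the firstboot script appends them; press Ctrl+C to stop following.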
If you are imaging a U-node, select both Clean Install Hypervisor and Clean Install SVM.
If you are imaging an X-node, select Clean Install Hypervisor only.
A U-node is a fully configured node that can be added to a cluster. Both the Controller VM and the
hypervisor must be installed on a new U-node. An X-node does not include a NIC or disks; it is
the appropriate model when replacing an existing node. The disks and NIC are transferred from the
old node, so only the hypervisor needs to be installed on the X-node.
Caution: Do not select Clean Install SVM if you are replacing a node (X-node) because
this option cleans the disks as part of the process, which means existing data will be lost.
c. When all the fields are correct, click the Start button.
The node restarts with the new image. After the node starts, additional configuration tasks run and
then the host restarts again. During this time, the host name is installing-please-be-patient. Wait
approximately 20 minutes until this stage completes before accessing the node.
Caution: Do not restart the host until the configuration is complete.
Foundation Portal
The Foundation portal site provides access to many of the files required to do a field installation.
The Foundation (or Phoenix) files screen for that release appears. (For Phoenix, you must first select a
hypervisor before the files screen appears.)
5. Access or download the desired files from this screen.
Foundation Files
The following table describes the files in the foundation-1.2 directory.
File Name
Description
VirtualBox-4.3.10-xxxxx-OSX.dmg
VirtualBox-4.3.10-xxxxx-OSX.md5sum.txt
VirtualBox-4.3.10-xxxxx-Win.exe
VirtualBox-4.3.10-xxxxx-Win.md5sum.txt
Foundation-1.2_VM subdirectory
File Name
Description
Foundation_VM-1.2.ovf
Foundation_VM-1.2-disk1.vmdk
Foundation_VM-1.2-disk1.md5sum.txt
docs subdirectory
Field_Installation_Guide-v1_2.pdf
Phoenix Files
The following table describes the files in the phoenix-1.2 and phoenix-1.3 directories. The NOS release
4.0.x files are in the phoenix-1.3 directory, while the NOS release 3.5.x files are in the phoenix-1.2
directory.
Caution: Phoenix release 1.2 is the earliest supported release with Foundation 1.2; do not use an
earlier Phoenix release.
File Name
Description
ESXi subdirectory
phoenix-1.3_ESX_NOS-4.0.1-stable.iso
phoenix-1.3_ESX_NOS-4.0.1.md5sum.txt
phoenix-1.2_ESX_NOS-3.5.4.iso
phoenix-1.2_ESX_NOS-3.5.4.md5sum.txt
phoenix-1.2_ESX_NOS-3.5.3.1.iso
phoenix-1.2_ESX_NOS-3.5.3.1.md5sum.txt
phoenix-1.2_ESX_NOS-3.5.2.iso
phoenix-1.2_ESX_NOS-3.5.2.md5sum.txt
phoenix-1.2_ESX_NOS-3.5.1.iso
phoenix-1.2_ESX_NOS-3.5.1.md5sum.txt
phoenix-1.2_ESX_NOS-3.1.3.1.iso
phoenix-1.2_ESX_NOS-3.1.3.1.md5sum.txt
HyperV subdirectory
phoenix-1.3_HYPERV_NOS-4.0.1-stable.iso
phoenix-1.3_HYPERV_NOS-4.0.1-stable.md5sum.txt
phoenix-1.2_HYPERV_NOS-3.5.4.iso
phoenix-1.2_HYPERV_NOS-3.5.4.md5sum.txt
phoenix-1.2_HYPERV_NOS-3.5.3.1.iso
phoenix-1.2_HYPERV_NOS-3.5.3.1.md5sum.txt
KVM subdirectory
phoenix-1.3_KVM_NOS-4.0.1-stable.iso
phoenix-1.3_KVM_NOS-4.0.1-stable.md5sum.txt
phoenix-1.2_KVM_NOS-3.5.4.iso
phoenix-1.2_KVM_NOS-3.5.4.md5sum.txt
phoenix-1.2_KVM_NOS-3.5.3.1.iso
phoenix-1.2_KVM_NOS-3.5.3.1.md5sum.txt
phoenix-1.2_KVM_NOS-3.1.3.1.iso
phoenix-1.2_KVM_NOS-3.1.3.1.md5sum.txt
Setting IPMI Static IP Address
You can assign a static IP address for an IPMI port by resetting the BIOS configuration.
To configure a static IP address for the IPMI port on a node, do the following:
1. Connect a VGA monitor and USB keyboard to the node.
2. Power on the node.
3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.
4. Click the IPMI tab to display the IPMI screen.
7. Select Configuration Address Source, press Enter, and then select Static in the pop-up window.
8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in
the pop-up window.
9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up
window.
10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network
gateway in the pop-up window.
11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup
mode.
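If a node is already booted into a hypervisor that has ipmitool installed, the same IPMI network settings can typically be applied from the host shell instead of BIOS setup. A sketch only: the addresses are examples, and LAN channel 1 is an assumption that varies by platform (check your hardware documentation).

```
root@host# ipmitool lan set 1 ipsrc static
root@host# ipmitool lan set 1 ipaddr 192.0.2.10
root@host# ipmitool lan set 1 netmask 255.255.255.0
root@host# ipmitool lan set 1 defgw ipaddr 192.0.2.1
root@host# ipmitool lan print 1
```

The final lan print command displays the channel's current configuration so you can confirm the static settings took effect.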