1. Virtualization Management.................................................................................. 4
Storage Overview...................................................................................................................................................... 6
Virtualization Management Web Console Interface.................................................................................... 7
NVIDIA GRID Virtual GPU Support on AHV.................................................................................... 43
Windows VM Provisioning...................................................................................................................................45
Nutanix VirtIO for Windows................................................................................................................... 45
Installing Windows on a VM...................................................................................................................55
PXE Configuration for AHV VMs...................................................................................................................... 56
Configuring the PXE Environment for AHV VMs........................................................................... 57
Configuring a VM to Boot over a Network...................................................................................... 58
Uploading Files to DSF for Microsoft Windows Users............................................................................ 59
Enabling Load Balancing of vDisks in a Volume Group..........................................................................59
Performing Power Operations on VMs.......................................................................................................... 60
Copyright.......................................................................................................................68
License......................................................................................................................................................................... 68
Conventions............................................................................................................................................................... 68
Default Cluster Credentials................................................................................................................................. 68
Version......................................................................................................................................................................... 69
1
VIRTUALIZATION MANAGEMENT
Nutanix nodes with AHV include a distributed VM management service responsible for storing
VM configuration, making scheduling decisions, and exposing a management interface.
Snapshots
Snapshots are crash-consistent: they do not include the VM's current memory image, only
the VM configuration and its disk contents. The snapshot is taken atomically across the VM
configuration and disks to ensure consistency.
If multiple VMs are specified when creating a snapshot, all of their configurations and disks are
placed into the same consistency group. Do not specify more than 8 VMs at a time.
If no snapshot name is provided, the snapshot is named "vm_name-timestamp", where
the timestamp is in ISO-8601 format (YYYY-MM-DDTHH:MM:SS.mmmmmm). For example, an unnamed
snapshot of a VM named vm1 might be named vm1-2019-03-15T16:56:19.123456.
VM Disks
A disk drive may either be a regular disk drive or a CD-ROM drive.
By default, regular disk drives are configured on the SCSI bus, and CD-ROM drives are
configured on the IDE bus. The IDE bus supports CD-ROM drives only; regular disk drives are
not supported on the IDE bus. You can also configure CD-ROM drives to use the SCSI bus. By
default, a disk drive is placed on the first available bus slot.
Disks on the SCSI bus may optionally be configured for passthrough on platforms that support
iSCSI. When in passthrough mode, SCSI commands are passed directly to DSF over iSCSI.
When SCSI passthrough is disabled, the hypervisor provides a SCSI emulation layer and treats
the underlying iSCSI target as a block device. By default, SCSI passthrough is enabled for SCSI
devices on supported platforms.
If you do not specify a storage container when creating a virtual disk, it is placed in the storage
container named "default". You do not need to create the default storage container.
Host Maintenance
When a host is in maintenance mode, it is marked as unschedulable so that no new VM
instances are created on it. Subsequently, an attempt is made to evacuate VMs from the host.
If the evacuation attempt fails (for example, because there are insufficient resources available
elsewhere in the cluster), the host remains in the "entering maintenance mode" state, where it
is marked unschedulable, waiting for user remediation. You can shut down VMs on the host or
move them to other nodes. Once the host has no more running VMs it is in maintenance mode.
When a host is in maintenance mode, VMs are moved from that host to other hosts in the
cluster. After exiting maintenance mode, those VMs are automatically returned to the original
host, eliminating the need to manually move them.
Storage Overview
Acropolis uses iSCSI and NFS for storing VM files.
Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment
variables are set to anything other than en_US.UTF-8, reconnect with an SSH
configuration that does not import or change any locale settings.
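For example, a safe session reports en_US.UTF-8 (or an empty value) for every variable;
abbreviated output similar to the following indicates that no conflicting locale was imported:
nutanix@cvm$ /usr/bin/locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
...
LC_ALL=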
Note:
By default, the admin user password does not have an expiry date, but you can change the
password at any time.
When you change the admin user password, you must update any applications and scripts
using the admin user credentials for authentication. Nutanix recommends that you create a user
assigned with the admin role instead of using the admin user for authentication. The Prism Web
Console Guide describes authentication and roles.
Following are the default credentials to access a Controller VM.
Procedure
1. Log on to the Controller VM with SSH by using the management IP address of the Controller
VM and the following credentials.
2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:
CAUTION: Verify the data resiliency status of your cluster. If the cluster only has replication
factor 2 (RF2), you can only shut down one node for each cluster. If an RF2 cluster would have
more than one node shut down, shut down the entire cluster.
You must shut down the Controller VM to shut down a node. When you shut down the
Controller VM, you must put the node in maintenance mode.
When a host is in maintenance mode, VMs are moved from that host to other hosts in the
cluster. After exiting maintenance mode, those VMs are automatically returned to the original
host, eliminating the need to manually move them.
Note the value of Hypervisor address for the node you want to shut down.
c. Put the node into maintenance mode.
nutanix@cvm$ acli host.enter_maintenance_mode Hypervisor address [wait="{ true | false }"]
Replace Hypervisor address with the value of Hypervisor address for the node you want
to shut down. Value of Hypervisor address is either the IP address of the AHV host or the
host name.
Specify wait=true to wait for the host evacuation attempt to finish.
d. Shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now
Procedure
root@ahv# virsh list --all | grep CVM
root@ahv# virsh start cvm_name
Replace cvm_name with the name of the Controller VM that you found from the preceding
command.
5. If the node is in maintenance mode, log on to the Controller VM and take the node out of
maintenance mode.
nutanix@cvm$ acli host.exit_maintenance_mode Hypervisor address
Replace Hypervisor address with the IP address or host name of the AHV host.
6. Verify that all cluster services are up.
nutanix@cvm$ cluster status
If the cluster is running properly, output similar to the following is displayed for each node in
the cluster:
CVM: 10.1.64.60 Up
    Zeus                   UP  [5362, 5391, 5392, 10848, 10977, 10992]
    Scavenger              UP  [6174, 6215, 6216, 6217]
    SSLTerminator          UP  [7705, 7742, 7743, 7744]
    SecureFileSync         UP  [7710, 7761, 7762, 7763]
    Medusa                 UP  [8029, 8073, 8074, 8176, 8221]
    DynamicRingChanger     UP  [8324, 8366, 8367, 8426]
    Pithos                 UP  [8328, 8399, 8400, 8418]
    Hera                   UP  [8347, 8408, 8409, 8410]
    Stargate               UP  [8742, 8771, 8772, 9037, 9045]
    InsightsDB             UP  [8774, 8805, 8806, 8939]
    InsightsDataTransfer   UP  [8785, 8840, 8841, 8886, 8888, 8889, 8890]
    Ergon                  UP  [8814, 8862, 8863, 8864]
    Cerebro                UP  [8850, 8914, 8915, 9288]
    Chronos                UP  [8870, 8975, 8976, 9031]
    Curator                UP  [8885, 8931, 8932, 9243]
    Prism                  UP  [3545, 3572, 3573, 3627, 4004, 4076]
    CIM                    UP  [8990, 9042, 9043, 9084]
    AlertManager           UP  [9017, 9081, 9082, 9324]
    Arithmos               UP  [9055, 9217, 9218, 9353]
    Catalog                UP  [9110, 9178, 9179, 9180]
    Acropolis              UP  [9201, 9321, 9322, 9323]
    Atlas                  UP  [9221, 9316, 9317, 9318]
    Uhura                  UP  [9390, 9447, 9448, 9449]
    Snmp                   UP  [9418, 9513, 9514, 9516]
    SysStatCollector       UP  [9451, 9510, 9511, 9518]
    Tunnel                 UP  [9480, 9543, 9544]
    ClusterHealth          UP  [9521, 9619, 9620, 9947, 9976, 9977, 10301]
    Janus                  UP  [9532, 9624, 9625]
    NutanixGuestTools      UP  [9572, 9650, 9651, 9674]
Note:
• You must ensure that the cluster has a minimum of three functioning nodes (never-
schedulable or otherwise) at any given time. Note that to add your first never-schedulable
node to your Nutanix cluster, the cluster must already comprise at least three schedulable
nodes.
• You can add any number of never-schedulable nodes to your Nutanix cluster.
• If you want a node that is already a part of the cluster to work as a never-
schedulable node, you must first remove that node from the cluster and then add
that node as a never-schedulable node.
• If you no longer need a node to work as a never-schedulable node, remove the node
from the cluster.
Procedure
Note: Perform this step only if you want a node that is already a part of the cluster to work as
a never-schedulable node.
For information about how to remove a node from a cluster, see the Modifying a Cluster
topic in the Prism Web Console Guide.
Replace uuid-of-the-node with the UUID of the node you want to add as a never-schedulable
node.
The never-schedulable-node parameter is optional and is required only if you want to add
a never-schedulable node.
If you no longer need a node to work as a never-schedulable node, remove the node from
the cluster.
If you want the never-schedulable node to now work as a schedulable node, remove the
node from the cluster and add the node back to the cluster without using the never-
schedulable-node parameter as follows.
nutanix@cvm$ cluster add-node node-uuid=uuid-of-the-node
Note: For information about how to add a node (other than a never-schedulable node) to a
cluster, see the Expanding a Cluster topic in the Prism Web Console Guide.
Procedure
1. Run the Nutanix Cluster Checks (NCC) in one of the following ways.
• From the Prism web console Health page, select Actions > Run Checks. Select All checks
and click Run.
• Log in to a Controller VM and use the ncc CLI.
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before proceeding.
If you are unable to resolve the issues, contact Nutanix support for assistance.
3. Open Configure CVM from the gear icon in the web console.
The Configure CVM dialog box is displayed.
Procedure
2. Use a text editor such as vi to set the value of the HOSTNAME parameter in the
/etc/sysconfig/network file.
HOSTNAME=my_hostname
3. Use the text editor to replace the host name in the /etc/hostname file.
The host name is updated in the Prism web console after a few minutes.
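If you prefer a non-interactive approach, the same two edits can be scripted from the AHV host
shell (run as root; my_hostname is a placeholder):
root@ahv# sed -i 's/^HOSTNAME=.*/HOSTNAME=my_hostname/' /etc/sysconfig/network
root@ahv# echo my_hostname > /etc/hostname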
Tip: Although it is not required for the root user to have the same password on all hosts, doing
so makes cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.
Procedure
The password you choose must meet the following complexity requirements:
• At least 15 characters.
• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde
(~), exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
• At least eight characters different from the previous password.
• At most three consecutive occurrences of any given character.
• The password cannot be the same as the last 24 passwords.
• In configurations without high-security requirements, the password must contain:
Warning: Modifying any of the settings listed here may render your cluster inoperable.
Warning: You must not run any commands on a Controller VM that are not covered in the
Nutanix documentation.
• Settings and contents of any Controller VM, including the name and the virtual hardware
configuration (except memory when required to enable certain features).
AHV Settings
Note: G6/Skylake platforms do not have workload memory requirements for Controller VM and
vCPU configurations, unlike the G4/G5 platforms. G6/Skylake platforms do have Controller VM
memory configuration requirements and recommendations for features. See CVM Memory
Configurations for Features on page 21.
Note: If the AOS upgrade process detects that any node hypervisor host has total physical
memory of 64 GB or greater, it automatically increases the memory of any Controller VM in that
node that has less than 32 GB of memory by 4 GB, up to a maximum of 32 GB.
If the AOS upgrade process detects any node with less than 64 GB total physical memory,
no memory changes occur.
For nodes with ESXi hypervisor hosts with total physical memory of exactly 64 GB, the
Controller VM is increased to a maximum of 28 GB. With total physical memory greater
than 64 GB, the existing Controller VM memory is increased by 4 GB.
• The Foundation imaging process sets the number of vCPUs allocated to each Controller VM
according to your platform model.
Workload Exceptions
Note: Upgrading to 5.1 requires a 4 GB memory increase, unless the Controller VM already has
32 GB of memory.
If all the data disks in a platform are SSDs, the node is assigned the High Performance workload,
with the following exceptions.
• Klas Voyager 2 uses SSDs, but for workload balance this platform's default workload is
VDI.
• Cisco B-series is expected to have large remote storage and two SSDs as a local cache for
the hot tier, so this platform workload is VDI.
[Table: platforms with non-default workload assignments, including HX1310, C220-M4L,
HX2710-E, Hyperflex HX220C-M4S, HX3510-FG, HX3710-F, HX5510-C, NX-8035-G5, NX-6035-G5,
NX-6155-G5, and NX-8150-G5.]
The following tables show the minimum amount of memory and vCPU requirements and
recommendations for the Controller VM on each node for platforms that do not follow the
default.
XC730xd-24, XC6320-6AF, XC630-10AF: 32 GB memory (20 GB minimum), 8 vCPUs
HX-3500, HX-5500, HX-7500: 28 GB memory, 8 vCPUs
If each Controller VM in your cluster includes 32 GB of memory, you can enable and use all AOS
features listed here (deduplication, redundancy factor 3, and so on) for each platform type
(high performance, all flash, storage heavy, and so on).
The table shows the extra memory needed plus the minimum Controller VM memory if you are
using or enabling a listed feature. Controller VM memory required = (minimum CVM memory for
the node + memory required to enable features) or 32 GB CVM memory per node, whichever is
less.
For example, to use capacity tier deduplication, each Controller VM would need at least 32 GB
(20 GB default + 12 GB for the feature).
To use performance tier deduplication and redundancy factor 3, each Controller VM
would need a minimum 28 GB (20 GB default + 8 GB for the features). However, 32 GB is
recommended in this case.
• Configuring Layer 2 switching through Open vSwitch. When configuring Open vSwitch, you
configure bridges, bonds, and VLANs.
• Optionally changing the IP address, netmask, and default gateway that were specified for the
hosts during the imaging process.
Open vSwitch: Do not modify the OpenFlow tables that are associated with the default OVS
bridge br0.

VLANs: Add the Controller VM and the AHV host to the same VLAN. By default, the Controller
VM and the hypervisor are assigned to VLAN 0, which effectively places them on the native
VLAN configured on the upstream physical switch.
Do not add any other device, including guest VMs, to the VLAN to which the Controller VM and
hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

OVS bonded port (bond0): Aggregate the 10 GbE interfaces on the physical host to an OVS
bond on the default OVS bridge br0 and trunk these interfaces on the physical switch.
By default, the 10 GbE interfaces in the OVS bond operate in the recommended active-backup
mode.
Note: Mixing bond modes across AHV hosts in the same cluster is not recommended and not
supported.

1 GbE and 10 GbE interfaces (physical host): If you want to use the 10 GbE interfaces for guest
VM traffic, make sure that the guest VMs do not use the VLAN over which the Controller VM
and hypervisor communicate.
If you want to use the 1 GbE interfaces for guest VM connectivity, follow the hypervisor
manufacturer's switch port and networking configuration guidelines.
Do not include the 1 GbE interfaces in the same bond as the 10 GbE interfaces. Also, to avoid
loops, do not add the 1 GbE interfaces to bridge br0, either individually or in a second bond.
Use them on other bridges.

IPMI port on the hypervisor host: Do not trunk switch ports that connect to the IPMI interface.
Configure the switch ports as access ports for management simplicity.

Upstream physical switch: Nutanix does not recommend the use of Fabric Extenders (FEX)
or similar technologies for production use cases. While initial, low-load implementations
might run smoothly with such technologies, poor performance, VM lockups, and other issues
might occur as implementations scale upward (see Knowledge Base article KB1612). Nutanix
recommends the use of 10 Gbps, line-rate, non-blocking switches with larger buffers for
production workloads.
Use an 802.3-2012 standards-compliant switch that has a low-latency, cut-through design
and provides predictable, consistent traffic latency regardless of packet size, traffic pattern, or
the features enabled on the 10 GbE interfaces. Port-to-port latency should be no higher than
2 microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are connected
to the hypervisor host.
Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for each port.

Controller VM: Do not remove the Controller VM from either the OVS bridge br0 or the native
Linux bridge virbr0.
This diagram shows the recommended network configuration for an Acropolis cluster. The
interfaces in the diagram are connected with colored lines to indicate membership to different
VLANs:
Figure 3: Recommended network configuration for an Acropolis cluster
• An internal port with the same name as the default bridge; that is, an internal port named
br0. This is the access port for the hypervisor host.
• A bonded port named bond0. The bonded port aggregates all the physical interfaces
available on the node. For example, if the node has two 10 GbE interfaces and two 1 GbE
interfaces, all four interfaces are aggregated on bond0. This configuration is necessary for
Foundation to successfully image the node regardless of which interfaces are connected to
the network.
Note: Before you begin configuring a virtual network on a node, you must disassociate the
1 GbE interfaces from the bond0 port. See Configuring an Open vSwitch Bond with
Desired Interfaces on page 29.
The following diagram illustrates the default factory configuration of OVS on an Acropolis node:
The Controller VM has two network interfaces. As shown in the diagram, one network interface
connects to bridge br0. The other network interface connects to a port on virbr0. The
Controller VM uses this bridge to communicate with the hypervisor host.
Procedure
• To show interface properties such as link speed and status, log on to the Controller VM, and
then list the physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces
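Output similar to the following is displayed (interface names, modes, and speeds vary by
platform; shown here as an illustrative sample):
name  mode   link  speed
eth0  1000   True  1000
eth1  1000   True  1000
eth2  10000  True  10000
eth3  10000  True  10000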
• To show the uplink configuration of a bridge, list the uplinks.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks
Replace bridge with the name of the bridge for which you want to view uplink information.
Omit the --bridge_name parameter if you want to view uplink information for the default
OVS bridge br0.
Output similar to the following is displayed:
Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2 eth1 eth0
lacp: off
lacp-fallback: false
lacp_speed: slow
• To show the bridges on the host, log on to any Controller VM with SSH and list the bridges:
nutanix@cvm$ manage_ovs show_bridges
• To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then
list the configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name
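Output similar to the following is displayed (abbreviated; the exact fields depend on the OVS
version and the configured bond mode):
---- br0-up ----
bond_mode: active-backup
updelay: 0 ms
downdelay: 0 ms
lacp_status: off

slave eth0: enabled
    active slave
    may_enable: true

slave eth1: enabled
    may_enable: true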
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
nutanix@cvm$ allssh 'manage_ovs --bridge_name bridge create_single_bridge'
Replace bridge with a name for the bridge. Bridge names must not exceed six characters.
The output does not indicate success explicitly, so you can append && echo success to the
command. If the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 create_single_bridge &&
echo success'
Note: Perform this procedure on factory-configured nodes to remove the 1 GbE interfaces from
the bonded port bond0. You cannot configure failover priority for the interfaces in an OVS bond,
so the disassociation is necessary to help prevent any unpredictable performance issues that
might result from a 10 GbE interface failing over to a 1 GbE interface. Nutanix recommends that
you aggregate only the 10 GbE interfaces on bond0 and use the 1 GbE interfaces on a separate
OVS bridge.
Procedure
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
nutanix@cvm$ manage_ovs --bridge_name bridge --bond_name bond_name --interfaces interfaces update_uplinks
Replace bridge with the name of the bridge on which you want to create the bond. Omit the
--bridge_name parameter if you want to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:
• A comma-separated list of the interfaces that you want to include in the bond. For
example, eth0,eth1.
• A keyword that indicates which interfaces you want to include: 10g (all 10 GbE
interfaces), 1g (all 1 GbE interfaces), or all (all interfaces).
Note: If the bridge on which you want to create the bond does not exist, you must first create
the bridge. For information about creating an OVS bridge, see Creating an Open vSwitch
Bridge on page 28. The following example assumes that a bridge named br1 exists on
every host in the cluster.
VLAN Configuration
You can set up a segmented virtual network on an Acropolis node by assigning the ports on
Open vSwitch bridges to different VLANs. VLAN port assignments are configured from the
Controller VM that runs on each node.
For best practices associated with VLAN assignments, see AHV Networking Recommendations
on page 23. For information about assigning guest VMs to a VLAN, see the Web Console
Guide.
2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you
want the host to be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag
5. Verify connectivity to the IP address of the AHV host by performing a ping test.
Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you
are logged on to the Controller VM through its public interface. To change the VLAN ID, log on to
the internal interface that has IP address 192.168.5.254.
Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a
VLAN, do the following:
Procedure
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10
new XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<model type="virtio" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03"
type="pci" />
<source bridge="br0" />
<virtualport type="openvswitch" />
</interface>
CVM external NIC successfully updated.
Procedure
a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess |
kTrunked}] [trunked_networks=networks]
Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see
the "VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App
Mobility Fabric Guide.
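For example, a trunked NIC for a hypothetical VM named vm1 that carries VLANs 10 and 20 in
addition to its own network might be created as follows (all values are illustrative, and
trunked_networks is assumed to take a comma-separated list of VLAN IDs):
nutanix@cvm$ acli vm.nic_create vm1 network=network1 vlan_mode=kTrunked trunked_networks=10,20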
CAUTION: All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor
can be multihomed provided that one interface is on the same subnet as the Controller VM.
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
For information about how to log on to a Controller VM, see Controller VM Access on
page 8.
3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see
Assigning an Acropolis Host to a VLAN on page 30.
5
VIRTUAL MACHINE MANAGEMENT
The following topics describe various aspects of virtual machine management in an AHV
cluster.
A VM supports the following maximum number of disk devices per bus type:
• SCSI: 256
• PCI: 6
• IDE: 4
Procedure
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample
output in the previous step, dissociate the 1 GbE interfaces from the bond. Assume that the
bridge name and bond name are br0 and br0-up, respectively.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --
bond_name br0-up update_uplinks'
The command removes the bond and then re-creates the bond with only the 10 GbE
interfaces.
5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge
called br1 (bridge names must not exceed six characters).
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 create_single_bridge'
6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example,
aggregate them to a bond named br1-up.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name
br1-up update_uplinks'
7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the
guest VMs, and associate the new bridge with the network. For example, create a network
named vlan10.br1 on VLAN 10.
nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign
interfaces on the guest VMs to the network.
For information about assigning guest VM interfaces to a network, see "Creating a VM" in the
Prism Web Console Guide.
Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not bring the hot-plugged memory
online. If the memory is not online, you cannot use the new memory. Perform the following
procedure to bring the memory online.
1. Identify the memory blocks that are offline by displaying the status of the memory blocks.
$ cat /sys/devices/system/memory/memoryXXX/state
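If a block reports offline, you can typically bring it online through the same sysfs node (the
standard Linux memory hot-plug interface; replace XXX with the block number):
$ echo online > /sys/devices/system/memory/memoryXXX/state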
2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot-plugging
memory so that the final amount is greater than 3 GB results in a memory-overflow
condition. To resolve the issue, restart the guest OS (CentOS 7.2) with the following kernel
command-line parameter:
swiotlb=force
CPU OS Limitations
1. On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo,
you might have to bring the CPUs online. For each hot-plugged CPU, run the following
command to bring the CPU online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online
Procedure
Replace vm-name with the name of the VM and new_memory_size with the memory size.
Replace vm-name with the name of the VM and n with the number of CPUs.
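For example, a combined hot-add of memory and vCPUs with aCLI might look like the
following (this assumes the acli vm.update syntax; vm1, 8G, and 4 are illustrative values):
nutanix@cvm$ acli vm.update vm1 memory=8G num_vcpus=4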
Note: After you upgrade from a version without hot-plug support to a version that supports
it, you must power cycle any VM that was instantiated and powered on before the upgrade so
that it becomes compatible with the memory and CPU hot-plug feature. This power cycle is
needed only once after the upgrade. New VMs created on the supported version have hot-plug
compatibility by default.
Procedure
Replace vm_name with the name of the VM on which you want to enable vNUMA or vUMA.
Replace x with the values for the following indicated parameters:
Note: You can configure either pass-through or a vGPU for a guest VM but not both.
This guide describes the concepts and driver installation information. For the configuration
procedures, see the Prism Web Console Guide.
Supported GPUs
The following GPUs are supported:
Limitations
GPU pass-through support has the following limitations:
• HA is not supported for VMs with GPU pass-through. If the host fails, VMs that have a GPU
configuration are powered off and then powered on automatically when the node is back up.
• Live migration of VMs with a GPU configuration is not supported. Live migration of VMs is
necessary when the BIOS, BMC, and the hypervisor on the host are being upgraded. During
these upgrades, VMs that have a GPU configuration are powered off and then powered on
automatically when the node is back up.
• VM pause and resume are not supported.
• You cannot hot add VM memory if the VM is using a GPU.
• Hot add and hot remove support is not available for GPUs.
• You can change the GPU configuration of a VM only when the VM is turned off.
• The Prism web console does not support console access for VMs that are configured with
GPU pass-through. Before you configure GPU pass-through for a VM, set up an alternative
means to access the VM. For example, enable remote access over RDP.
Removing GPU pass-through from a VM restores console access to the VM through the
Prism web console.
• Make sure that NVIDIA GRID Virtual GPU Manager (the host driver) and the NVIDIA GRID
guest operating system driver are at the same version.
• The GPUs must run in graphics mode. If any M60 GPUs are running in compute mode, switch
the mode to graphics before you begin. See the gpumodeswitch User Guide.
• If you are using NVIDIA vGPU drivers on a guest VM and you modify the vGPU profile
assigned to the VM (in the Prism web console), you might need to reinstall the NVIDIA guest
drivers on the guest VM.
Procedure
1. If you do not have an NVIDIA GRID licensing server, set up the licensing server.
See the Virtual GPU License Server User Guide.
2. Download the guest and host drivers (both drivers are included in a single bundle) from the
NVIDIA Driver Downloads page. For information about the supported driver versions, see
Virtual GPU Software R384 for Nutanix AHV Release Notes.
3. Install the host driver (NVIDIA GRID Virtual GPU Manager) on the AHV hosts. See Installing
NVIDIA GRID Virtual GPU Manager (Host Driver) on page 44.
4. In the Prism web console, configure a vGPU profile for the VM.
To create a VM, see Creating a VM (AHV) in the "Virtual Machine Management" chapter of
the Prism Web Console Guide. To allocate vGPUs to an existing VM, see the "Managing a VM
(AHV)" topic in that Prism Web Console Guide chapter.
a. Download the NVIDIA GRID guest operating system driver from the NVIDIA portal, install
the driver on the guest VM, and then restart the VM.
b. Configure vGPU licensing on the guest VM.
This step involves configuring the license server on the VM so that the VM can request
the appropriate license from the license server. For information about configuring vGPU
licensing on the guest VM, see the NVIDIA GRID vGPU User Guide.
Note: VMs using a GPU must be powered off if their parent host is affected by this install. If
left running, the VMs are automatically powered off when the driver installation begins on their
parent host, and then powered on after the installation is complete.
Procedure
a. Copy the RPM package to a web server to which you can connect from a Controller VM
on the AHV cluster.
b. Copy the RPM package to any Controller VM in the cluster on which you want to install
the driver.
3. If the RPM package is available on a web server, install the driver from the server location.
nutanix@cvm$ install_host_package -u url
4. If the RPM package is available on the Controller VM, install the driver from the location to
which you uploaded the driver.
nutanix@cvm$ install_host_package -r rpm
Replace rpm with the path to the driver on the Controller VM.
5. At the confirmation prompt, type yes to confirm that you want to install the driver.
If some of the hosts in the cluster do not have GPUs installed on them, you are prompted,
once for each such host, to choose whether or not you want to install the driver on those
hosts. Specify whether or not you want to install the host driver by typing yes or no.
Windows VM Provisioning
Nutanix VirtIO for Windows
Nutanix VirtIO is a collection of drivers for paravirtual devices that enhance stability and
performance of VMs on AHV.
Nutanix VirtIO is available in two formats:
• Operating system:
Procedure
1. Go to the Nutanix Support Portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.
2. Use the filter search to find the latest Nutanix VirtIO package.
• Choose the ISO if you are creating a new Windows VM. The installer is available
on the ISO if your VM does not have internet access.
• Choose the MSI if you are updating drivers in an existing Windows VM.
The Nutanix VirtIO Setup Wizard shows a status bar and completes installation.
Note: To automatically install Nutanix VirtIO, see Installing Nutanix VirtIO for Windows on
page 46.
Procedure
1. Go to the Nutanix Support Portal and navigate to Downloads > Tools & Firmware.
2. Use the filter search to find the latest Nutanix VirtIO ISO.
3. Download the latest VirtIO for Windows ISO to your local machine.
Note: Nutanix recommends extracting the VirtIO ISO into the same VM where you will load
Nutanix VirtIO for easier installation.
5. Locate the VM where you want to install the Nutanix VirtIO ISO and update the VM.
a. TYPE: CD-ROM
b. OPERATION: CLONE FROM IMAGE SERVICE
c. BUS TYPE: IDE
d. IMAGE: Select the Nutanix VirtIO ISO
e. Click Add.
7. Log on to the VM and go to Control Panel > Device Manager.
Expand the device categories and locate the Nutanix devices. For each device, right-click
it, select Update Driver Software, and browse to the drive containing the VirtIO ISO.
Follow the wizard instructions until you receive installation confirmation.
Procedure
1. Go to the Nutanix Support Portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.
Note: The installer is available on the ISO if your VM does not have internet access.
a. Upload the ISO to the cluster as described in Configuring Images in the Web Console
Guide.
b. Mount the ISO image on the CD-ROM drive of each VM in the cluster that you want to upgrade.
3. If you are updating drivers in a Windows VM, select the appropriate 32-bit or 64-bit MSI.
4. Upgrade drivers.
• For SCSI drivers for SCSI boot disks, manually upgrade the drivers with the vendor's
instructions.
• For all other drivers, run the Nutanix VirtIO MSI installer (the preferred installation
method) and follow the wizard instructions.
Note: Running the Nutanix VirtIO MSI installer upgrades all drivers.
The Nutanix VirtIO Setup Wizard shows a status bar and completes installation.
Creating a Windows VM on AHV with Nutanix VirtIO (New and Migrated VMs)
• Upload your Windows Installer ISO to your cluster as described in the Configuring Images
section of the Web Console Guide document.
• Upload the Nutanix VirtIO ISO to your cluster as described in the Configuring Images section
of the Web Console Guide document.
Procedure
a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated
fields.
The current CD-ROM opens in a new window.
b. OPERATION: CLONE FROM IMAGE SERVICE
c. BUS TYPE: IDE
d. IMAGE: Select the Windows OS Install ISO.
e. Click Update.
6. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.
a. TYPE: CD-ROM
b. OPERATION:CLONE FROM IMAGE SERVICE
c. BUS TYPE: IDE
d. IMAGE: Select the Nutanix VirtIO ISO.
e. Click Add.
a. TYPE: DISK
b. OPERATION: ALLOCATE ON STORAGE CONTAINER
c. BUS TYPE: SCSI
d. STORAGE CONTAINER: Select the appropriate storage container.
e. SIZE: Enter the number for the size of the hard drive (in GiB).
f. Click Add to add the disk drive.
8. If you are migrating a VM, create a disk from the disk image. Click Add New Disk and
complete the indicated fields.
a. TYPE: DISK
b. OPERATION: CLONE FROM IMAGE
c. BUS TYPE: SCSI
d. CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image you
created previously.
e. Click Add to add the disk drive.
9. (Optional) After you have migrated or created a VM, add a network interface card (NIC)
by clicking Add New NIC and completing the indicated fields.
a. VLAN ID: Choose the VLAN ID according to network requirements and enter the IP
address if required.
b. Click Add.
Installing Windows on a VM
Procedure
6. Select the desired language, time and currency format, and keyboard information.
10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.
13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress, which can take several minutes.
14. Enter your user name and password information and click Finish.
Installation can take several minutes.
Once you complete the logon information, Windows Setup completes installation.
• An unmanaged network does not perform IPAM functions and gives VMs direct access to an
external Ethernet network. Therefore, the procedure for configuring the PXE environment
for AHV VMs is the same as for a physical machine or a VM that is running on any other
hypervisor. VMs obtain boot file information from the DHCP or PXE server on the external
network.
• A managed network intercepts DHCP requests from AHV VMs and performs IP address
management (IPAM) functions for the VMs. Therefore, you must add a TFTP server and the
required boot file information to the configuration of the managed network. VMs obtain boot
file information from this configuration.
A VM that is configured to use PXE boot boots over the network on subsequent restarts until
the boot order of the VM is changed.
Procedure
1. Log on to the Prism web console, click the gear icon, and then click Network Configuration in
the menu.
The Network Configuration dialog box is displayed.
2. On the Virtual Networks tab, click the pencil icon shown for the network for which you want
to configure a PXE environment.
The VMs that require the PXE boot information must be on this network.
a. Select the Configure Domain Settings check box and do the following in the fields shown
in the domain settings sections:
• In the TFTP Server Name field, specify the host name or IP address of the TFTP server.
If you specify a host name in this field, make sure to also specify DNS settings in the
Domain Name Servers (comma separated), Domain Search (comma separated), and
Domain Name fields.
• In the Boot File Name field, specify the boot file that the VMs must use.
b. Click Save.
Procedure
3. Create a VM.
<acropolis> vm.create vm num_vcpus=num_vcpus memory=memory
Replace vm with a name for the VM, and replace num_vcpus and memory with the number
of vCPUs and amount of memory that you want to assign to the VM, respectively.
For example, create a VM named nw-boot-vm.
<acropolis> vm.create nw-boot-vm
4. Create a virtual interface for the VM.
<acropolis> vm.nic_create vm network=network
Replace vm with the name of the VM and replace network with the name of the network.
If the network is an unmanaged network, make sure that a DHCP server and the boot file
that the VM requires are available on the network. If the network is a managed network,
configure the DHCP server to provide TFTP server and boot file information to the VM. See
Configuring the PXE Environment for AHV VMs on page 57.
For example, create a virtual interface for VM nw-boot-vm and place it on a network named
network1.
<acropolis> vm.nic_create nw-boot-vm network=network1
5. Set the virtual interface as the boot device.
<acropolis> vm.update_boot_device vm mac_addr=mac_addr
Replace vm with the name of the VM and mac_addr with the MAC address of the virtual
interface that the VM must use to boot over the network.
For example, update the boot device setting of the VM named nw-boot-vm so that the VM
uses the virtual interface with MAC address 00-00-5E-00-53-FF.
<acropolis> vm.update_boot_device nw-boot-vm mac_addr=00-00-5E-00-53-FF
6. Power on the VM.
<acropolis> vm.on vm_list [host="host"]
Replace vm_list with the name of the VM. Replace host with the name of the host on which
you want to start the VM.
For example, start the VM named nw-boot-vm on a host named host-1.
<acropolis> vm.on nw-boot-vm host="host-1"
Procedure
1. Authenticate by using Prism username and password or, for advanced users, use the public
key that is managed through the Prism cluster lockdown user interface.
2. Use WinSCP, with SFTP selected, to connect to the Controller VM through port 2222 and
start browsing the DSF data store.
Note: The root directory displays storage containers, and you cannot change it. You can
upload files only to one of the storage containers, not directly to the root directory. To
create or delete storage containers, use the Prism user interface.
Perform the following procedure to enable load balancing of vDisks by using aCLI.
Procedure
• Enable vDisk load balancing if you are creating a new volume group.
<acropolis> vg.create vg_name load_balance_vm_attachments=true
Note: If you are modifying an existing volume group, you must first detach all the VMs that
are attached to that volume group and then enable vDisk load balancing.
Note: You can also perform these power operations by using the V3 API calls. For more
information, see developer.nutanix.com.
Procedure
Generated Events
The following events are generated by an AHV cluster.
VM.CREATE: A VM is created.
VM.DELETE: A VM is deleted. When a VM that is powered on is deleted, a VM.OFF event is
generated in addition to the VM.DELETE notification.
VM.UPDATE: A VM is updated.
VM.MIGRATE: A VM is migrated from one host to another. When a VM is migrated, a VM.UPDATE
event is generated in addition to the VM.MIGRATE notification.
SUBNET.CREATE: A virtual network is created.
SUBNET.DELETE: A virtual network is deleted.
SUBNET.UPDATE: A virtual network is updated.
Creating a Webhook
Send the Nutanix cluster an HTTP POST request whose body contains the information essential
to creating a webhook (the events for which you want the listener to receive notifications, the
listener URL, and other information such as a name and description of the listener).
Note: Each POST request creates a separate webhook with a unique UUID, even if the data
in the body is identical. Because each webhook generates a notification when an event occurs,
duplicate webhooks result in multiple notifications for the same event. To change a webhook,
do not send another create request with the changes; instead, update the existing webhook.
See Updating a Webhook on page 65.
To create a webhook, send the Nutanix cluster an API request of the following form:
Procedure
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks
{
    "metadata": {
        "kind": "webhook"
    },
    "spec": {
        "name": "string",
        "resources": {
            "post_url": "string",
            "credentials": {
                "username": "string",
                "password": "string"
            },
            "events_filter_list": [
                string
            ]
        },
        "description": "string"
    },
    "api_version": "string"
}
Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate
values for the following parameters:
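As an illustration, the request can be sent with curl (the admin credentials, webhook name, and
listener URL are placeholders; -k skips certificate verification on clusters with self-signed
certificates):
curl -k -u admin:password -X POST -H 'Content-Type: application/json' \
  -d '{"metadata":{"kind":"webhook"},"spec":{"name":"vm-events","resources":{"post_url":"http://listener.example.com:8080/notify","events_filter_list":["VM.CREATE","VM.DELETE"]},"description":"VM lifecycle listener"}}' \
  https://cluster_IP_address:9440/api/nutanix/v3/webhooks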
The Nutanix cluster responds to the API request with a 200 OK HTTP response that contains the
UUID of the webhook that is created. The following response is an example:
{
    "status": {
        "state": "PENDING"
    },
    "spec": {
        . . .
        "uuid": "003f8c42-748d-4c0b-b23d-ab594c087399"
    }
}
The notification contains metadata about the entity along with information about the type of
event that occurred. The event type is specified by the event_type parameter.
Listing Webhooks
You can list webhooks to view their specifications or to verify that they were created
successfully.
Procedure
• To show a single webhook, send the Nutanix cluster an API request of the following form:
GET https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address with the IP address of the Nutanix cluster and webhook_uuid with
the UUID of the webhook that you want to show.
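For example, using curl with the UUID returned at creation time (the admin credentials are
placeholders):
curl -k -u admin:password https://cluster_IP_address:9440/api/nutanix/v3/webhooks/003f8c42-748d-4c0b-b23d-ab594c087399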
Updating a Webhook
You can update a webhook by sending a PUT request to the Nutanix cluster. You can update
the name, listener URL, event list, and description.
PUT https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
"password":"string"
},
"events_filter_list": [
string
]
},
"description": "string"
},
"api_version": "string"
}
Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID
of the webhook you want to update, respectively. For a description of the parameters, see
Creating a Webhook on page 63.
Deleting a Webhook
Procedure
DELETE https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID
of the webhook you want to delete, respectively.
Notification Format
An event notification has the same content and format as the response to the version 3.0
REST API call associated with that event. For example, the notification generated when a VM is
powered on has the same format and content as the response to a REST API call that powers
on a VM. However, the notification also contains a notification version, an event type, and an
entity reference, as shown:
{
    "version": "1.0",
    "data": {
        "metadata": {
            "status": {
                "name": "string",
                "providers": {},
                . . .
For VM.DELETE and SUBNET.DELETE, the UUID of the entity is included but not the metadata.
License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.
Conventions
root@host# command: The command is executed as the root user in the vSphere or Acropolis
host shell.
> command: The command is executed in the Hyper-V host shell.
Version
Last modified: March 15, 2019 (2019-03-15T16:56:19-07:00)