
AHV 5.10

AHV Administration Guide


March 15, 2019
Contents

1. Virtualization Management.................................................................................. 4
Storage Overview...................................................................................................................................................... 6
Virtualization Management Web Console Interface.................................................................................... 7

2.  Node Management................................................................................................. 8


Controller VM Access...............................................................................................................................................8
Admin Access to Controller VM..............................................................................................................8
Shutting Down a Node in a Cluster (AHV)...................................................................................................10
Starting a Node in a Cluster (AHV).................................................................................................................. 11
Adding a Never-Schedulable Node.................................................................................................................. 13
Changing CVM Memory Configuration (AHV)............................................................................................. 14
Changing the Acropolis Host Name.................................................................................................................15
Changing the Acropolis Host Password......................................................................................................... 15
Nonconfigurable AHV Components................................................................................................................. 16

3. Controller VM Memory Configurations..........................................................18


CVM Memory Configurations (G5/Broadwell)..............................................................................................18
Platform Workload Translation (G5/Broadwell).............................................................................. 19
CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge)..................................................... 20
CVM Memory Configurations for Features.................................................................................................... 21

4. Host Network Management..............................................................................23


Prerequisites for Configuring Networking.....................................................................................................23
AHV Networking Recommendations...............................................................................................................23
Layer 2 Network Management with Open vSwitch.................................................................................. 25
About Open vSwitch................................................................................................................................. 25
Default Factory Configuration............................................................................................................... 26
Viewing the Network Configuration.................................................................................................... 27
Creating an Open vSwitch Bridge....................................................................................................... 28
Configuring an Open vSwitch Bond with Desired Interfaces....................................................29
VLAN Configuration.................................................................................................................................. 30
Changing the IP Address of an Acropolis Host..........................................................................................33

5. Virtual Machine Management.......................................................................... 35


Supported Guest VM Types for AHV..............................................................................................................35
Virtual Machine Network Management.......................................................................................................... 35
Configuring 1 GbE Connectivity for Guest VMs..............................................................................35
Configuring a Virtual NIC to Operate in Access or Trunk Mode.............................................. 36
Virtual Machine Memory and CPU Hot-Plug Configurations................................................................. 38
Hot-Plugging the Memory and CPUs on Virtual Machines (AHV).......................................... 39
Virtual Machine Memory Management (vNUMA)......................................................................................40
Enabling vNUMA on Virtual Machines............................................................................................... 40
GPU and vGPU Support........................................................................................................................................41
Supported GPUs...........................................................................................................................................41
GPU Pass-Through for Guest VMs........................................................................................................41

NVIDIA GRID Virtual GPU Support on AHV.................................................................................... 43
Windows VM Provisioning...................................................................................................................................45
Nutanix VirtIO for Windows................................................................................................................... 45
Installing Windows on a VM...................................................................................................................55
PXE Configuration for AHV VMs...................................................................................................................... 56
Configuring the PXE Environment for AHV VMs........................................................................... 57
Configuring a VM to Boot over a Network...................................................................................... 58
Uploading Files to DSF for Microsoft Windows Users............................................................................ 59
Enabling Load Balancing of vDisks in a Volume Group..........................................................................59
Performing Power Operations on VMs.......................................................................................................... 60

6.  Event Notifications...............................................................................................62


Generated Events.................................................................................................................................................... 62
Creating a Webhook..............................................................................................................................................63
Listing Webhooks....................................................................................................................................................65
Updating a Webhook............................................................................................................................................ 65
Deleting a Webhook.............................................................................................................................................. 66
Notification Format................................................................................................................................................ 66

Copyright.......................................................................................................................68
License......................................................................................................................................................................... 68
Conventions............................................................................................................................................................... 68
Default Cluster Credentials................................................................................................................................. 68
Version......................................................................................................................................................................... 69

1. VIRTUALIZATION MANAGEMENT
Nutanix nodes with AHV include a distributed VM management service responsible for storing
VM configuration, making scheduling decisions, and exposing a management interface.

Snapshots
Snapshots are crash-consistent. They do not include the VM's current memory image, only
the VM configuration and its disk contents. The snapshot is taken atomically across the VM
configuration and disks to ensure consistency.
If multiple VMs are specified when creating a snapshot, all of their configurations and disks are
placed into the same consistency group. Do not specify more than 8 VMs at a time.
If no snapshot name is provided, the snapshot is referred to as "vm_name-timestamp", where
the timestamp is in ISO-8601 format (YYYY-MM-DDTHH:MM:SS.mmmmmm).
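
For illustration, the following sketch takes a crash-consistent snapshot of two VMs in a single
consistency group from aCLI. The VM names are placeholders, and the exact vm.snapshot_create
syntax (including any snapshot-name parameter) is an assumption based on typical aCLI
conventions; verify it with acli help vm.snapshot_create on your AOS version.
nutanix@cvm$ acli vm.snapshot_create vm01,vm02
# Both VMs are placed in the same consistency group (up to 8 VMs per call).
# With no name supplied, each snapshot is named vm_name-timestamp.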

VM Disks
A disk drive may either be a regular disk drive or a CD-ROM drive.
By default, regular disk drives are configured on the SCSI bus, and CD-ROM drives are
configured on the IDE bus. The IDE bus supports CD-ROM drives only; regular disk drives are
not supported on the IDE bus. You can also configure CD-ROM drives to use the SCSI bus. By
default, a disk drive is placed on the first available bus slot.
Disks on the SCSI bus may optionally be configured for passthrough on platforms that support
iSCSI. When in passthrough mode, SCSI commands are passed directly to DSF over iSCSI.
When SCSI passthrough is disabled, the hypervisor provides a SCSI emulation layer and treats
the underlying iSCSI target as a block device. By default, SCSI passthrough is enabled for SCSI
devices on supported platforms.
If you do not specify a storage container when creating a virtual disk, it is placed in the storage
container named "default". You do not need to create the default storage container.
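
As a sketch of these defaults, the following aCLI commands add a SCSI virtual disk and a SCSI
CD-ROM drive to a VM. The VM name, size, and container are placeholders, and the parameter
names (create_size, container, cdrom, empty, bus) are assumptions to confirm with
acli help vm.disk_create.
nutanix@cvm$ acli vm.disk_create vm01 create_size=100G container=default
nutanix@cvm$ acli vm.disk_create vm01 cdrom=true empty=true bus=scsi
# The first command creates a 100 GB disk on the SCSI bus (the default bus for regular disks).
# The second command adds an empty CD-ROM drive and overrides the default IDE bus with SCSI.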

Virtual Networks (Layer 2)


Each VM network interface is bound to a virtual network. Each virtual network is bound to a
single VLAN; trunking VLANs to a virtual network is not supported. Networks are designated by
the L2 type (vlan) and the VLAN number. For example, a network bound to VLAN 66 would be
named vlan.66.
Each virtual network maps to virtual switch br0. The user is responsible for ensuring that the
specified virtual switch exists on all hosts, and that the physical switch ports for the virtual
switch uplinks are properly configured to receive VLAN-tagged traffic.
A VM NIC must be associated with a virtual network. It is not possible to change this
association. To connect a VM to a different virtual network, it is necessary to create a new NIC.
While a virtual network is in use by a VM, it cannot be modified or deleted.
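
For example, a minimal sketch of creating the vlan.66 network mentioned above from aCLI; the
net.create syntax shown here is an assumption to verify with acli help net.create.
nutanix@cvm$ acli net.create vlan.66 vlan=66
# Creates an unmanaged (Layer 2 only) virtual network bound to VLAN 66.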



Managed Networks (Layer 3)
A virtual network can have an IPv4 configuration, but it is not required. A virtual network
with an IPv4 configuration is a managed network; one without an IPv4 configuration is an
unmanaged network. A VLAN can have at most one managed network defined. If a virtual
network is managed, every NIC must be assigned an IPv4 address at creation time.
A managed network can optionally have one or more non-overlapping DHCP pools. Each pool
must be entirely contained within the network's managed subnet.
If the managed network has a DHCP pool, the NIC automatically gets assigned an IPv4 address
from one of the pools at creation time, provided at least one address is available. Addresses in
the DHCP pool are not reserved. That is, you can manually specify an address belonging to the
pool when creating a virtual adapter. If the network has no DHCP pool, you must specify the
IPv4 address manually.
All DHCP traffic on the network is rerouted to an internal DHCP server, which allocates IPv4
addresses. DHCP traffic on the virtual network (that is, between the guest VMs and the
Controller VM) does not reach the physical network, and vice versa.
A network must be configured as managed or unmanaged when it is created. It is not possible
to convert one to the other.
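
A minimal sketch of creating a managed network with a DHCP pool follows. The subnet, gateway,
and pool range are placeholders, and the ip_config and net.add_dhcp_pool parameters are
assumptions; confirm them with acli help net.create and acli help net.add_dhcp_pool.
nutanix@cvm$ acli net.create vlan.200 vlan=200 ip_config=10.10.200.1/24
nutanix@cvm$ acli net.add_dhcp_pool vlan.200 start=10.10.200.100 end=10.10.200.199
# The pool must fall entirely within the 10.10.200.0/24 managed subnet.
# NICs created on vlan.200 get addresses from this pool through the internal DHCP server.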

Figure 1: Acropolis Networking Architecture

Host Maintenance
When a host is in maintenance mode, it is marked as unschedulable so that no new VM
instances are created on it. Subsequently, an attempt is made to evacuate VMs from the host.
If the evacuation attempt fails (for example, because there are insufficient resources available
elsewhere in the cluster), the host remains in the "entering maintenance mode" state, where it
is marked unschedulable, waiting for user remediation. You can shut down VMs on the host or
move them to other nodes. Once the host has no more running VMs it is in maintenance mode.
When a host is in maintenance mode, VMs are moved from that host to other hosts in the
cluster. After exiting maintenance mode, those VMs are automatically returned to the original
host, eliminating the need to manually move them.
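
For reference, maintenance mode is controlled with the aCLI commands used elsewhere in this
guide; replace the address placeholder with the IP address or host name of the AHV host.
nutanix@cvm$ acli host.enter_maintenance_mode hypervisor-address wait=true
nutanix@cvm$ acli host.exit_maintenance_mode hypervisor-address
# wait=true blocks until the VM evacuation attempt finishes.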



Limitations

Number of online VMs per host: 128
Number of online VM virtual disks per host: 256
Number of VMs per consistency group (with snapshot.create): 8
Number of VMs to edit concurrently (for example, with vm.create/delete and power operations): 64

Storage Overview
Acropolis uses iSCSI and NFS for storing VM files.

Figure 2: Acropolis Storage Example

iSCSI for VMs


Each disk that maps to a VM is defined as a separate iSCSI target. The Nutanix scripts
work with libvirt to create the necessary iSCSI structures in Acropolis.
These structures map to vDisks created in the Nutanix storage container specified by the
administrator. If no storage container is specified, the script uses the default storage container
name.

Storage High Availability with I/O Path Optimization


Unlike with Microsoft Hyper-V and VMware ESXi clusters, in which the entire traffic on a node is
rerouted to a randomly selected healthy Controller VM when the local Controller VM becomes
unavailable, in an Acropolis cluster, a rerouting decision is taken on a per-vDisk basis. When
the local Controller VM becomes unavailable, iSCSI connections are individually redirected to a
randomly selected healthy Controller VM, resulting in distribution of load across the cluster.
Instead of maintaining live, redundant connections to other Controller VMs, as is the case with
the Device Mapper Multipath feature, AHV initiates an iSCSI connection to a healthy Controller
VM only when the connection is required. When the local Controller VM becomes available,



connections to other Controller VMs are terminated and the guest VMs reconnect to the local
Controller VM.

NFS Datastores for Images


Nutanix storage containers can be accessed by the Acropolis host as NFS datastores. NFS
datastores are used to manage images which may be used by multiple VMs, such as ISO files.
When mapped to a VM, the script maps the file in the NFS datastore to the VM as an iSCSI
device, just as it does for virtual disk files.
Images must be specified by an absolute path, as seen from the NFS server. For example, if a
datastore named ImageStore exists with a subdirectory called linux, the path required to
access this set of files would be /ImageStore/linux. Use the nfs_ls script to browse the
datastore from the Controller VM:
nutanix@cvm$ nfs_ls --long --human_readable /ImageStore/linux
-rw-rw-r-- 1 1000 1000 Dec 7 2012 1.6G CentOS-6.3-x86_64-LiveDVD.iso
-rw-r--r-- 1 1000 1000 Jun 19 08:56 523.0M archlinux-2013.06.01-dual.iso
-rw-rw-r-- 1 1000 1000 Jun 3 19:22 373.0M grml64-full_2013.02.iso
-rw-rw-r-- 1 1000 1000 Nov 29 2012 694.3M ubuntu-12.04.1-amd64.iso

Virtualization Management Web Console Interface


Many of the virtualization management features can be managed from the Prism GUI.
In virtualization management-enabled clusters, you can do the following through the web
console:

• Configure network connections


• Create virtual machines
• Manage virtual machines (launch console, start/shut down, take snapshots, migrate, clone,
update, and delete)
• Monitor virtual machines
• Enable VM high availability
For more information about these features, see the Web Console Guide.



2. NODE MANAGEMENT
Controller VM Access
Most administrative functions of a Nutanix cluster can be performed through the web console
or nCLI. Nutanix recommends using these interfaces whenever possible and disabling Controller
VM SSH access with password or key authentication. Some functions, however, require logging
on to a Controller VM with SSH. Exercise caution whenever connecting directly to a Controller
VM as the risk of causing cluster issues is increased.

Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment
variables are set to anything other than en_US.UTF-8, reconnect with an SSH
configuration that does not import or change any locale settings.
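
For example, a quick check from within the SSH session (the output shown here is illustrative;
every LANG and LC_* variable should report en_US.UTF-8 or be unset):
nutanix@cvm$ /usr/bin/locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
...
LC_ALL=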

Admin Access to Controller VM


You can access the Controller VM as the admin user (admin user name and password) with
SSH. For security reasons, the password of the admin user must meet complexity requirements.
When you log on to the Controller VM as the admin user for the first time, you are prompted to
change the default password. Following are the default credentials of the admin user:

• User name: admin


• Password: Nutanix/4u
The password must meet the following complexity requirements:

• At least 8 characters long


• At least 1 lowercase letter
• At least 1 uppercase letter
• At least 1 number
• At least 1 special character
• At least 4 characters difference from the old password
• Must not be among the last 5 passwords
• Must not have more than 2 consecutive occurrences of a character
• Must not be longer than 199 characters
After you have successfully changed the password, the new password is synchronized across
all Controller VMs and interfaces (Prism web console, nCLI, and SSH).

Note:



• As an admin user, you cannot access nCLI by using the default credentials. If you
are logging in as the admin user for the first time, you must SSH to the Controller
VM or log on through the Prism web console. Also, you cannot change the default
password of the admin user through nCLI. To change the default password of the
admin user, you must SSH to the Controller VM or log on through the Prism web
console.
• When you log in to the Prism web console for the first time after you upgrade to
AOS 5.1 from an earlier AOS version, you can use your existing
admin user password to log in and then change the existing password (you are
prompted) to adhere to the password complexity requirements. However, if you
are logging in to the Controller VM with SSH for the first time after the upgrade as
the admin user, you must use the default admin user password (Nutanix/4u) and
then change the default password (you are prompted) to adhere to the password
complexity requirements.

By default, the admin user password does not have an expiry date, but you can change the
password at any time.
When you change the admin user password, you must update any applications and scripts
using the admin user credentials for authentication. Nutanix recommends that you create a user
assigned with the admin role instead of using the admin user for authentication. The Prism Web
Console Guide describes authentication and roles.
Following are the default credentials to access a Controller VM.

Table 1: Controller VM Credentials

Interface | Target | User Name | Password
SSH client | Nutanix Controller VM | admin | Nutanix/4u
SSH client | Nutanix Controller VM | nutanix | nutanix/4u
Prism web console | Nutanix Controller VM | admin | Nutanix/4u

Accessing the Controller VM Using the Admin Account

About this task


Perform the following procedure to log on to the Controller VM by using the admin user with
SSH for the first time.

Procedure

1. Log on to the Controller VM with SSH by using the management IP address of the Controller
VM and the following credentials.

• User name: admin


• Password: Nutanix/4u
You are now prompted to change the default password.

2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:



New password:
Retype new password:
Password changed.

The password must meet the following complexity requirements:

• At least 8 characters long


• At least 1 lowercase letter
• At least 1 uppercase letter
• At least 1 number
• At least 1 special character
• At least 4 characters difference from the old password
• Must not be among the last 5 passwords
• Must not have more than 2 consecutive occurrences of a character
• Must not be longer than 199 characters
For information about logging on to a Controller VM by using the admin user account
through the Prism web console, see Logging Into The Web Console in the Prism Web
Console guide.

Shutting Down a Node in a Cluster (AHV)


Before you begin
Shut down guest VMs that are running on the node, or move them to other nodes in the cluster.

About this task

CAUTION: Verify the data resiliency status of your cluster. If the cluster has only replication
factor 2 (RF2), you can shut down only one node at a time. If more than one node in an RF2
cluster must be shut down, shut down the entire cluster instead.

To shut down a node, you must shut down the Controller VM. Before you shut down the
Controller VM, put the node in maintenance mode.
When a host is in maintenance mode, VMs are moved from that host to other hosts in the
cluster. After exiting maintenance mode, those VMs are automatically returned to the original
host, eliminating the need to manually move them.



Procedure

1. If the Controller VM is running, shut down the Controller VM.

a. Log on to the Controller VM with SSH.


b. List all the hosts in the cluster.
nutanix@cvm$ acli host.list

Note the value of Hypervisor address for the node you want to shut down.
c. Put the node into maintenance mode.
nutanix@cvm$ acli host.enter_maintenance_mode Hypervisor address
[wait="{ true | false }" ]

Replace Hypervisor address with the value of Hypervisor address for the node you want
to shut down. The value of Hypervisor address is either the IP address of the AHV host or the
host name.
Specify wait=true to wait for the host evacuation attempt to finish.
d. Shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now

2. Log on to the AHV host with SSH.

3. Shut down the host.


root@ahv# shutdown -h now

Starting a Node in a Cluster (AHV)


About this task

Procedure

1. Log on to the AHV host with SSH.

2. Find the name of the Controller VM.


root@ahv# virsh list --all | grep CVM

Make a note of the Controller VM name in the second column.

3. Determine if the Controller VM is running.

• If the Controller VM is off, a line similar to the following should be returned:


- NTNX-12AM2K470031-D-CVM shut off

Make a note of the Controller VM name in the second column.


• If the Controller VM is on, a line similar to the following should be returned:
- NTNX-12AM2K470031-D-CVM running



4. If the Controller VM is shut off, start it.
root@ahv# virsh start cvm_name

Replace cvm_name with the name of the Controller VM that you found from the preceding
command.

5. If the node is in maintenance mode, log on to the Controller VM and take the node out of
maintenance mode.
nutanix@cvm$ acli

<acropolis> host.exit_maintenance_mode AHV-hypervisor-IP-address

Replace AHV-hypervisor-IP-address with the IP address of the AHV hypervisor.


<acropolis> exit

6. Log on to another Controller VM in the cluster with SSH.

7. Verify that all services are up on all Controller VMs.


nutanix@cvm$ cluster status

If the cluster is running properly, output similar to the following is displayed for each node in
the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848,
10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176,
8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037,
9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886,
8888, 8889, 8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627,
4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Catalog UP [9110, 9178, 9179, 9180]
Acropolis UP [9201, 9321, 9322, 9323]
Atlas UP [9221, 9316, 9317, 9318]
Uhura UP [9390, 9447, 9448, 9449]
Snmp UP [9418, 9513, 9514, 9516]
SysStatCollector UP [9451, 9510, 9511, 9518]
Tunnel UP [9480, 9543, 9544]
ClusterHealth UP [9521, 9619, 9620, 9947,
9976, 9977, 10301]
Janus UP [9532, 9624, 9625]
NutanixGuestTools UP [9572, 9650, 9651, 9674]



MinervaCVM UP [10174, 10200, 10201, 10202,
10371]
ClusterConfig UP [10205, 10233, 10234, 10236]
APLOSEngine UP [10231, 10261, 10262, 10263]
APLOS UP [10343, 10368, 10369, 10370,
10502, 10503]
Lazan UP [10377, 10402, 10403, 10404]
Orion UP [10409, 10449, 10450, 10474]
Delphi UP [10418, 10466, 10467, 10468]

Adding a Never-Schedulable Node


Add a never-schedulable node if you want to add a node to increase data storage on your
Nutanix cluster, but do not want any AHV VMs to run on that node.

About this task


You can add a never-schedulable node if you want to add a node to increase data storage on
your Nutanix cluster, but do not want any AHV VMs to run on that node. AOS never schedules
any VMs on a never-schedulable node, whether at the time of deployment of new VMs, during
the migration of VMs from one host to another (in the event of a host failure), or during any
other VM operations. Therefore, a never-schedulable node configuration ensures that no
additional compute resources such as CPUs are consumed from the Nutanix cluster. In this
way, you can meet the compliance and licensing requirements of your virtual applications. For
example, if you have added a never-schedulable node and you are paying for licenses of user
VMs only on a few sockets on a set of hosts within a cluster, the user VMs never get scheduled
on a never-schedulable node and the never-schedulable node functions as a storage-only node,
thereby ensuring that you are not in violation of your user VM licensing agreements.
Note the following points about a never-schedulable node configuration.

Note:

• You must ensure that, at any given time, the cluster has a minimum of three functioning
nodes (never-schedulable or otherwise). Note that to add your first never-schedulable
node to your Nutanix cluster, the cluster must already contain at least three
schedulable nodes.
• You can add any number of never-schedulable nodes to your Nutanix cluster.
• If you want a node that is already a part of the cluster to work as a never-
schedulable node, you must first remove that node from the cluster and then add
that node as a never-schedulable node.
• If you no longer need a node to work as a never-schedulable node, remove the node
from the cluster.

Perform the following procedure to add a never-schedulable node.

Procedure

1. (Optional) Remove the node from the cluster.

Note: Perform this step only if you want a node that is already a part of the cluster to work as
a never-schedulable node.

For information about how to remove a node from a cluster, see the Modifying a Cluster
topic in the Prism Web Console Guide.



2. Log on to a Controller VM in the cluster with SSH.

3. Add a node as a never-schedulable node.


nutanix@cvm$ ncli -h true cluster add-node node-uuid=uuid-of-the-node never-schedulable-node=true

Replace uuid-of-the-node with the UUID of the node you want to add as a never-schedulable
node.
The never-schedulable-node parameter is optional and is required only if you want to add
a never-schedulable node.
If you no longer need a node to work as a never-schedulable node, remove the node from
the cluster.
If you want the never-schedulable node to now work as a schedulable node, remove the
node from the cluster and add the node back to the cluster without using the
never-schedulable-node parameter, as follows.
nutanix@cvm$ cluster add-node node-uuid=uuid-of-the-node

Note: For information about how to add a node (other than a never-schedulable node) to a
cluster, see the Expanding a Cluster topic in the Prism Web Console Guide.
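
After the add-node command completes, you can confirm that the node joined the cluster by
listing the hosts with nCLI. This is also a convenient way to look up the UUID of a node that is
already in the cluster, as in the remove-and-re-add case above. The output below is abbreviated
and illustrative; the exact fields shown can vary between AOS versions.
nutanix@cvm$ ncli host list
    Id                        : 00055a...::5
    Uuid                      : uuid-of-the-node
    Name                      : NTNX-node-D
    ...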

Changing CVM Memory Configuration (AHV)


About this task
You can increase memory reserved for each Controller VM in your cluster by using the 1-click
Controller VM Memory Upgrade available from the Prism web console. Increase memory size
depending on the workload type or to enable certain AOS features. See the Controller VM
Memory Configurations topic in the Acropolis Advanced Administration Guide.

Procedure

1. Run Nutanix Cluster Check (NCC) in one of the following ways.

• From the Prism web console Health page, select Actions > Run Checks. Select All checks
and click Run.
• Log in to a Controller VM and use the ncc CLI.
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than PASS, resolve the reported issues before proceeding.
If you are unable to resolve the issues, contact Nutanix support for assistance.

2. Log on to the web console for any node in the cluster.

3. Open Configure CVM from the gear icon in the web console.
The Configure CVM dialog box is displayed.



4. Select the Target CVM Memory Allocation memory size and click Apply.
The values available from the drop-down menu can range from 16 GB to the maximum
available memory in GB.
AOS increases the memory of any Controller VM that is below the amount you choose.
If a Controller VM was already allocated more memory than your choice, it remains at that
amount. For example, selecting 28 GB upgrades any Controller VM currently at 20
GB. A Controller VM with a 48 GB memory allocation remains unmodified.

Changing the Acropolis Host Name


About this task
In the examples in this procedure, replace the variable my_hostname with the name that you
want to assign to the AHV host.
To change the name of an Acropolis host, do the following:

Procedure

1. Log on to the AHV host with SSH.

2. Use a text editor such as vi to set the value of the HOSTNAME parameter in the /etc/
sysconfig/network file.
HOSTNAME=my_hostname

3. Use the text editor to replace the host name in the /etc/hostname file.

4. Change the host name displayed by the hostname command:


root@ahv# hostname my_hostname

5. Log on to the Controller VM with SSH.

6. Restart the Acropolis service on the Controller VM.


nutanix@CVM$ genesis stop acropolis; cluster start

The host name is updated in the Prism web console after a few minutes.

Changing the Acropolis Host Password


About this task

Tip: Although it is not required for the root user to have the same password on all hosts, doing
so makes cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.

Perform these steps on every Acropolis host in the cluster.

Procedure

1. Log on to the AHV host with SSH.

2. Change the root password.


root@ahv# passwd root



3. Respond to the prompts, providing the current and new root password.
Changing password for root.
New password:
Retype new password:
Password changed.

The password you choose must meet the following complexity requirements:

• In configurations with high-security requirements, the password must contain:

• At least 15 characters.
• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde
(~), exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
• At least eight characters different from the previous password.
• At most three consecutive occurrences of any given character.
The password cannot be the same as the last 24 passwords.
• In configurations without high-security requirements, the password must contain:

• At least eight characters.


• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde
(~), exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
• At least three characters different from the previous password.
• At most three consecutive occurrences of any given character.
The password cannot be the same as the last 10 passwords.
In both types of configuration, if a password for an account is entered three times
unsuccessfully within a 15-minute period, the account is locked for 15 minutes.

Nonconfigurable AHV Components


The components listed here are configured by the Nutanix manufacturing and installation
processes. Do not modify any of these components except under the direction of Nutanix
Support.

Warning: Modifying any of the settings listed here may render your cluster inoperable.

Warning: You must not run any commands on a Controller VM that are not covered in the
Nutanix documentation.



Nutanix Software

• Settings and contents of any Controller VM, including the name and the virtual hardware
configuration (except memory when required to enable certain features).

Note: Controller VM users - Do not create additional Controller VM user accounts. It is
recommended to use the default accounts: 'admin' or 'nutanix'; or use sudo to elevate to the
'root' account if required.

AHV Settings

• Hypervisor configuration, including installed packages


• iSCSI settings
• Open vSwitch settings
• Taking snapshots of the Controller VM
• Creating user accounts on AHV hosts



3. CONTROLLER VM MEMORY CONFIGURATIONS
Controller VM memory allocation requirements differ depending on the models and the features
that are being used.

Note: G6/Skylake platforms do not have workload memory requirements for Controller VM and
vCPU configurations, unlike the G4/G5 platforms. G6/Skylake platforms do have Controller VM
memory configuration requirements and recommendations for features. See CVM Memory
Configurations for Features on page 21.

CVM Memory Configurations (G5/Broadwell)


This topic lists the recommended Controller VM memory allocations for workload categories.

Controller VM Memory Configurations for Base Models

Note: If the AOS upgrade process detects that any node hypervisor host has total physical
memory of 64 GB or greater, it automatically upgrades any Controller VM in that node with less
than 32 GB memory by 4 GB. The Controller VM is upgraded to a maximum 32 GB.
If the AOS upgrade process detects any node with less than 64 GB memory size, no
memory changes occur.
For nodes with ESXi hypervisor hosts with total physical memory of 64 GB, the
Controller VM is upgraded to a maximum 28 GB. With total physical memory greater
than 64 GB, the existing Controller VM memory is increased by 4 GB.

Note: G6/Skylake platforms do not have workload memory requirements for Controller VM and
vCPU configurations, unlike the G4/G5 platforms. G6/Skylake platforms do have Controller VM
memory configuration requirements and recommendations for features. See CVM Memory
Configurations for Features on page 21.

• The Foundation imaging process sets the number of vCPUs allocated to each Controller VM
according to your platform model.

Platform | Recommended/Default Memory (GB) | vCPUs
Default configuration for all platforms | 20 | 8

Nutanix Broadwell Models


The following table shows the minimum amount of memory required for the Controller VM
on each node for platforms that do not follow the default. For the workload translation into
models, see Platform Workload Translation (G5/Broadwell) on page 19.



Platform | Default Memory (GB)
VDI, server virtualization | 20
Storage Heavy | 28
Storage Only | 28
Large server, high-performance, all-flash | 32

Platform Workload Translation (G5/Broadwell)


The following table maps workload types to the corresponding Nutanix and Lenovo models.

Workload Exceptions

Note: Upgrading to 5.1 requires a 4 GB memory increase, unless the CVM already has 32 GB of
memory.

If all the data disks in a platform are SSDs, the node is assigned the High Performance workload,
with the following exceptions.

• Klas Voyager 2 uses SSDs but due to workload balance, this platform workload default is
VDI.
• Cisco B-series is expected to have large remote storage and two SSDs as a local cache for
the hot tier, so this platform workload is VDI.

Workload Features | Nutanix NX Model | Nutanix SX Model | Lenovo HX Model | Cisco UCS | Dell XC | Additional Platforms
VDI | NX-1065S-G5 | SX-1065-G5 | HX3310 | B200-M4 | XC430-Xpress | Klas Telecom VOYAGER2
VDI | NX-1065-G5 | - | HX3310-F | C240-M4L | - | Crystal RS2616PS18
VDI | NX-3060-G5 | - | HX2310-E | C240-M4S | - | -
VDI | NX-3155G-G5 | - | HX3510-G | C240-M4S2 | - | -
VDI | NX-3175-G5 | - | HX3710 | C220-M4S | - | -
VDI | - | - | HX1310 | C220-M4L | - | -
VDI | - | - | HX2710-E | Hyperflex HX220C-M4S | - | -
VDI | - | - | HX3510-FG | - | - | -
VDI | - | - | HX3710-F | - | - | -
Storage Heavy | NX-6155-G5 | - | HX5510 | - | - | -
Storage Heavy | NX-8035-G5 | - | HX5510-C | - | - | -
Storage Heavy | NX-6035-G5 | - | - | - | - | -
Storage Node | NX-6035C-G5 | - | HX5510-C | - | XC730xd-12C | -
High Performance and All-Flash | NX-8150-G5 | - | HX7510 | C240-M4SX | XC630-10P | -
High Performance and All-Flash | NX-1155-G5 | - | HX7510-F | Hyperflex HX240C-M4SX | XC730xd-12R | -
High Performance and All-Flash | NX-6155-G5 | - | - | - | - | -
High Performance and All-Flash | NX-8150-G5 | - | - | - | - | -

CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge)


This topic lists the recommended Controller VM memory allocations for models and features.

Controller VM Memory Configurations for Base Models

Table 2: Platform Default

Platform | Recommended/Default Memory (GB) | vCPUs
Default configuration for all platforms unless otherwise noted | 20 | 8

The following tables show the minimum amount of memory and vCPU requirements and
recommendations for the Controller VM on each node for platforms that do not follow the
default.

Table 3: Nutanix Platforms

Platform | Recommended Memory (GB) | Default Memory (GB) | vCPUs
NX-1020 | 16 | 16 | 4
NX-6035C | 28 | 28 | 8
NX-6035-G4 | 28 | 20 | 8
NX-8150 | 32 | 32 | 8
NX-8150-G4 | 32 | 32 | 8
NX-9040 | 32 | 20 | 8
NX-9060-G4 | 32 | 32 | 8

Table 4: Dell Platforms

Platform | Recommended Memory (GB) | Default Memory (GB) | vCPUs
XC730xd-24, XC6320-6AF, XC630-10AF | 32 | 20 | 8

Table 5: Lenovo Platforms

Platform | Recommended/Default Memory (GB) | vCPUs
HX-3500, HX-5500, HX-7500 | 28 | 8

CVM Memory Configurations for Features


Note: If the AOS upgrade process detects that any node hypervisor host has total physical
memory of 64 GB or greater, it automatically upgrades any Controller VM in that node with less
than 32 GB memory by 4 GB. The Controller VM is upgraded to a maximum 32 GB.
If the AOS upgrade process detects any node with less than 64 GB memory size, no
memory changes occur.
For nodes with ESXi hypervisor hosts with total physical memory of 64 GB, the
Controller VM is upgraded to a maximum 28 GB. With total physical memory greater
than 64 GB, the existing Controller VM memory is increased by 4 GB.

If each Controller VM in your cluster includes 32 GB of memory, you can enable and use all AOS
features listed here (deduplication, redundancy factor 3, and so on) for each platform type
(high performance, all flash, storage heavy, and so on).
The table shows the extra memory needed plus the minimum Controller VM memory if you are
using or enabling a listed feature. Controller VM memory required = (minimum CVM memory for
the node + memory required to enable features) or 32 GB CVM memory per node, whichever is
less.
For example, to use capacity tier deduplication, each Controller VM would need at least 32 GB
(20 GB default + 12 GB for the feature).
To use performance tier deduplication and redundancy factor 3, each Controller VM
would need a minimum 28 GB (20 GB default + 8 GB for the features). However, 32 GB is
recommended in this case.



Features | Memory (GB)
Capacity tier deduplication (includes performance tier deduplication) | 12
Redundancy factor 3 | 8
Performance tier deduplication | 8
Cold-tier nodes + capacity tier deduplication | 4
Capacity tier deduplication + redundancy factor 3 | 12
4. HOST NETWORK MANAGEMENT
Network management in an Acropolis cluster consists of the following tasks:

• Configuring Layer 2 switching through Open vSwitch. When configuring Open vSwitch, you
configure bridges, bonds, and VLANs.
• Optionally changing the IP address, netmask, and default gateway that were specified for the
hosts during the imaging process.

Prerequisites for Configuring Networking


Change the configuration from the factory default to the recommended configuration. See
Default Factory Configuration on page 26 and AHV Networking Recommendations on
page 23.

AHV Networking Recommendations


Nutanix recommends that you perform the following OVS configuration tasks from the
Controller VM, as described in this documentation:

• Viewing the network configuration


• Configuring an Open vSwitch bond with desired interfaces
• Assigning the Controller VM to a VLAN
For performing other OVS configuration tasks, such as adding an interface to a bridge and
configuring LACP for the interfaces in an OVS bond, log on to the AHV host, and then follow
the procedures described in the OVS documentation at http://openvswitch.org/.
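
For example, if you decide to use LACP (noting the support caveat in the table below), a hedged
sketch of the standard Open vSwitch commands run on each AHV host is shown here. The bond
port name bond0 is the factory default and may be br0-up on your hosts, and the upstream
switch ports must be configured for LACP before you make this change.
root@ahv# ovs-vsctl set port bond0 lacp=active
root@ahv# ovs-vsctl set port bond0 bond_mode=balance-tcp
root@ahv# ovs-vsctl set port bond0 other_config:lacp-fallback-ab=true
# lacp-fallback-ab falls back to active-backup if LACP negotiation with the switch fails.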
Nutanix recommends that you configure the network as follows:

Table 6: Recommended Network Configuration

Open vSwitch: Do not modify the OpenFlow tables that are associated with the default OVS
bridge br0.

VLANs: Add the Controller VM and the AHV host to the same VLAN. By default, the Controller
VM and the hypervisor are assigned to VLAN 0, which effectively places them on the native
VLAN configured on the upstream physical switch. Do not add any other device, including guest
VMs, to the VLAN to which the Controller VM and hypervisor host are assigned. Isolate guest
VMs on one or more separate VLANs.

Virtual bridges: Do not delete or rename OVS bridge br0. Do not modify the native Linux bridge
virbr0.

OVS bonded port (bond0): Aggregate the 10 GbE interfaces on the physical host to an OVS
bond on the default OVS bridge br0 and trunk these interfaces on the physical switch. By
default, the 10 GbE interfaces in the OVS bond operate in the recommended active-backup
mode.
Note: The mixing of bond modes across AHV hosts in the same cluster is not recommended and
not supported.
LACP configurations are known to work, but support might be limited.

1 GbE and 10 GbE interfaces (physical host): If you want to use the 10 GbE interfaces for guest
VM traffic, make sure that the guest VMs do not use the VLAN over which the Controller VM and
hypervisor communicate. If you want to use the 1 GbE interfaces for guest VM connectivity,
follow the hypervisor manufacturer's switch port and networking configuration guidelines. Do
not include the 1 GbE interfaces in the same bond as the 10 GbE interfaces. Also, to avoid loops,
do not add the 1 GbE interfaces to bridge br0, either individually or in a second bond. Use them
on other bridges.

IPMI port on the hypervisor host: Do not trunk switch ports that connect to the IPMI interface.
Configure the switch ports as access ports for management simplicity.

Upstream physical switch: Nutanix does not recommend the use of Fabric Extenders (FEX)
or similar technologies for production use cases. While initial, low-load implementations might
run smoothly with such technologies, poor performance, VM lockups, and other issues might
occur as implementations scale upward (see Knowledge Base article KB1612). Nutanix
recommends the use of 10 Gbps, line-rate, non-blocking switches with larger buffers for
production workloads. Use an 802.3-2012 standards-compliant switch that has a low-latency,
cut-through design and provides predictable, consistent traffic latency regardless of packet size,
traffic pattern, or the features enabled on the 10 GbE interfaces. Port-to-port latency should be
no higher than 2 microseconds. Use fast-convergence technologies (such as Cisco PortFast) on
switch ports that are connected to the hypervisor host. Avoid using shared buffers for the 10
GbE ports. Use a dedicated buffer for each port.

Physical Network Layout: Use redundant top-of-rack switches in a traditional leaf-spine
architecture. This simple, flat network design is well suited for a highly distributed,
shared-nothing compute and storage architecture. Add all the nodes that belong to a given
cluster to the same Layer-2 network segment. Other network layouts are supported as long as
all other Nutanix recommendations are followed.

Controller VM: Do not remove the Controller VM from either the OVS bridge br0 or the native
Linux bridge virbr0.

This diagram shows the recommended network configuration for an Acropolis cluster. The
interfaces in the diagram are connected with colored lines to indicate membership to different
VLANs:

Figure 3:

Layer 2 Network Management with Open vSwitch


AHV uses Open vSwitch to connect the Controller VM, the hypervisor, and the guest VMs
to each other and to the physical network. The OVS package is installed by default on each
Acropolis node and the OVS services start automatically when you start a node.
To configure virtual networking in an Acropolis cluster, you need to be familiar with OVS. This
documentation gives you a brief overview of OVS and the networking components that you
need to configure to enable the hypervisor, Controller VM, and guest VMs to connect to each
other and to the physical network.

About Open vSwitch


Open vSwitch (OVS) is an open-source software switch implemented in the Linux kernel and
designed to work in a multiserver virtualization environment. By default, OVS behaves like a



Layer 2 learning switch that maintains a MAC address learning table. The hypervisor host and
VMs connect to virtual ports on the switch. Nutanix uses the OpenFlow protocol to configure
and communicate with Open vSwitch.
Each hypervisor hosts an OVS instance, and all OVS instances combine to form a single switch.
As an example, the following diagram shows OVS instances running on two hypervisor hosts.

Figure 4: Open vSwitch

Default Factory Configuration


The factory configuration of an Acropolis host includes a default OVS bridge named br0 and a
native Linux bridge called virbr0.
Bridge br0 includes the following ports by default:

• An internal port with the same name as the default bridge; that is, an internal port named
br0. This is the access port for the hypervisor host.
• A bonded port named bond0. The bonded port aggregates all the physical interfaces
available on the node. For example, if the node has two 10 GbE interfaces and two 1 GbE
interfaces, all four interfaces are aggregated on bond0. This configuration is necessary for
Foundation to successfully image the node regardless of which interfaces are connected to
the network.

Note: Before you begin configuring a virtual network on a node, you must disassociate the
1 GbE interfaces from the bond0 port. See Configuring an Open vSwitch Bond with
Desired Interfaces on page 29.

The following diagram illustrates the default factory configuration of OVS on an Acropolis node:



Figure 5: Default factory configuration of Open vSwitch in AHV

The Controller VM has two network interfaces. As shown in the diagram, one network interface
connects to bridge br0. The other network interface connects to a port on virbr0. The
Controller VM uses this bridge to communicate with the hypervisor host.
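
You can inspect this default layout directly on the host with the standard ovs-vsctl show
command. The output below is abbreviated and illustrative, but it should show bridge br0 with
its internal port and the bonded port that aggregates the physical interfaces.
root@ahv# ovs-vsctl show
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
            Interface "eth2"
            Interface "eth3"
    ...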

Viewing the Network Configuration


Use the following commands to view the configuration of the network elements.

Before you begin


Log on to the Acropolis host with SSH.

Procedure

• To show interface properties such as link speed and status, log on to the Controller VM, and
then list the physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces

Output similar to the following is displayed:


name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000



• To show the ports and interfaces that are configured as uplinks, log on to the Controller VM,
and then list the uplink configuration.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks

Replace bridge with the name of the bridge for which you want to view uplink information.
Omit the --bridge_name parameter if you want to view uplink information for the default
OVS bridge br0.
Output similar to the following is displayed:
Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2 eth1 eth0
lacp: off
lacp-fallback: false
lacp_speed: slow

• To show the bridges on the host, log on to any Controller VM with SSH and list the bridges:
nutanix@cvm$ manage_ovs show_bridges

Output similar to the following is displayed:


Bridges:
br0

• To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then
list the configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name

For example, show the configuration of bond0.


root@ahv# ovs-appctl bond/show bond0

Output similar to the following is displayed:


---- bond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
active slave mac: 0c:c4:7a:48:b2:68(eth0)

slave eth0: enabled


active slave
may_enable: true

slave eth1: disabled


may_enable: false

Creating an Open vSwitch Bridge

About this task


To create an OVS bridge, do the following:



Procedure

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.

3. Create an OVS bridge on each host in the cluster.


nutanix@cvm$ allssh 'manage_ovs --bridge_name bridge create_single_bridge'

Replace bridge with a name for the bridge. Bridge names must not exceed six (6) characters.
The output does not indicate success explicitly, so you can append && echo success to the
command. If the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 create_single_bridge && echo success'

Output similar to the following is displayed:


================== 192.0.2.10 =================
success
...

Configuring an Open vSwitch Bond with Desired Interfaces


When creating an OVS bond, you can specify the interfaces that you want to include in the
bond.

About this task


Use this procedure to create a bond that includes a desired set of interfaces or to specify a new
set of interfaces for an existing bond. If you are modifying an existing bond, AHV removes the
bond and then re-creates the bond with the specified interfaces.

Note: Perform this procedure on factory-configured nodes to remove the 1 GbE interfaces from
the bonded port bond0. You cannot configure failover priority for the interfaces in an OVS bond,
so the disassociation is necessary to help prevent any unpredictable performance issues that
might result from a 10 GbE interface failing over to a 1 GbE interface. Nutanix recommends that
you aggregate only the 10 GbE interfaces on bond0 and use the 1 GbE interfaces on a separate
OVS bridge.

To create an OVS bond with the desired interfaces, do the following:

Procedure

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.



3. Create a bond with the desired set of interfaces.
nutanix@cvm$ manage_ovs --bridge_name bridge --interfaces interfaces --bond_name bond_name update_uplinks

Replace bridge with the name of the bridge on which you want to create the bond. Omit the
--bridge_name parameter if you want to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:

• A comma-separated list of the interfaces that you want to include in the bond. For
example, eth0,eth1.
• A keyword that indicates which interfaces you want to include. Possible keywords:

• 10g. Include all available 10 GbE interfaces


• 1g. Include all available 1 GbE interfaces
• all. Include all available interfaces
For example, create a bond with interfaces eth0 and eth1 on a bridge named br1. Using allssh
enables you to use a single command to effect the change on every host in the cluster.

Note: If the bridge on which you want to create the bond does not exist, you must first create
the bridge. For information about creating an OVS bridge, see Creating an Open vSwitch
Bridge on page 28. The following example assumes that a bridge named br1 exists on
every host in the cluster.

nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces eth0,eth1 --bond_name bond1 update_uplinks'

Example output similar to the following is displayed:


2015-03-05 11:17:17 WARNING manage_ovs:291 Interface eth1 does not have link
state
2015-03-05 11:17:17 INFO manage_ovs:325 Deleting OVS ports: bond1
2015-03-05 11:17:18 INFO manage_ovs:333 Adding bonded OVS ports: eth0 eth1
2015-03-05 11:17:22 INFO manage_ovs:364 Sending gratuitous ARPs for
192.0.2.21

VLAN Configuration
You can set up a segmented virtual network on an Acropolis node by assigning the ports on
Open vSwitch bridges to different VLANs. VLAN port assignments are configured from the
Controller VM that runs on each node.
For best practices associated with VLAN assignments, see AHV Networking Recommendations
on page 23. For information about assigning guest VMs to a VLAN, see the Web Console
Guide.

Assigning an Acropolis Host to a VLAN

About this task


To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:



Procedure

1. Log on to the AHV host with SSH.

2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you
want the host to be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag

Replace host_vlan_tag with the VLAN tag for hosts.

3. Confirm VLAN tagging on port br0.


root@ahv# ovs-vsctl list port br0

4. Check the value of the tag parameter that is shown.

5. Verify connectivity to the IP address of the AHV host by performing a ping test.

Assigning the Controller VM to a VLAN


By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the
Controller VM to a different VLAN, change the VLAN ID of its public interface. After the change,
you can access the public interface from a device that is on the new VLAN.

About this task

Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you
are logged on to the Controller VM through its public interface. To change the VLAN ID, log on to
the internal interface that has IP address 192.168.5.254.

Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a
VLAN, do the following:

Procedure

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.

3. Assign the public interface of the Controller VM to a VLAN.


nutanix@cvm$ change_cvm_vlan vlan_id

Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10

Output similar to the following is displayed:


Replacing external NIC in CVM, old XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<source bridge="br0" />
<vlan>
<tag id="10" />
</vlan>
<virtualport type="openvswitch">
<parameters interfaceid="95ce24f9-fb89-4760-98c5-01217305060d" />
</virtualport>
<target dev="vnet0" />
<model type="virtio" />
<alias name="net2" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03"
type="pci" />
</interface>

new XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<model type="virtio" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03"
type="pci" />
<source bridge="br0" />
<virtualport type="openvswitch" />
</interface>
CVM external NIC successfully updated.

4. Restart the network service.


nutanix@cvm$ sudo service network restart

Configuring a Virtual NIC to Operate in Access or Trunk Mode


By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual
NIC can send and receive traffic only over its own VLAN, which is the VLAN of the virtual
network to which it is connected. If restricted to using access mode interfaces, a VM running
an application on multiple VLANs (such as a firewall application) must use multiple virtual NICs
—one for each VLAN. Instead of configuring multiple virtual NICs in access mode, you can
configure a single virtual NIC on the VM to operate in trunk mode. A virtual NIC in trunk mode
can send and receive traffic over any number of VLANs in addition to its own VLAN. You can
trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from the trunk
mode to the access mode, in which case the virtual NIC reverts to sending and receiving traffic
only over its own VLAN.

About this task


To configure a virtual NIC as an access port or trunk port, do the following:

Procedure

1. Log on to the Controller VM with SSH.

2. Do one of the following:

a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]

Specify appropriate values for the following parameters:

• vm. Name of the VM.


• network. Name of the virtual network to which you want to connect the virtual NIC.
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk.
The parameter is processed only if vlan_mode is set to kTrunked and is ignored if

vlan_mode is set to kAccess. To include the default VLAN, VLAN 0, include it in the
list of trunked networks. To trunk all VLANs, set vlan_mode to kTrunked and skip this
parameter.
• vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess
for access mode and to kTrunked for trunk mode. Default: kAccess.
b. Configure an existing virtual NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_update vm mac_addr [update_vlan_trunk_info={true | false}] [vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]

Specify appropriate values for the following parameters:

• vm. Name of the VM.


• mac_addr. MAC address of the virtual NIC to update (the MAC address is used to
identify the virtual NIC). Required to update a virtual NIC.
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk.
The parameter is processed only if vlan_mode is set to kTrunked and is ignored if
vlan_mode is set to kAccess. To include the default VLAN, VLAN 0, include it in the
list of trunked networks. To trunk all VLANs, set vlan_mode to kTrunked and skip this
parameter.
• update_vlan_trunk_info. Update the VLAN type and list of trunked VLANs. If not
specified, the parameter defaults to false and the vlan_mode and trunked_networks
parameters are ignored.
• vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess
for access mode and to kTrunked for trunk mode.

Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see
the "VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App
Mobility Fabric Guide.
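For example, the following commands illustrate both options with hypothetical values: the first creates a trunked virtual NIC on a VM named fw-vm that carries VLANs 20 and 30 in addition to the VLAN of the network vlan10.br1, and the second converts an existing virtual NIC (identified by its MAC address) back to access mode. Substitute the VM name, network name, VLAN IDs, and MAC address that apply to your environment.
nutanix@cvm$ acli vm.nic_create fw-vm network=vlan10.br1 vlan_mode=kTrunked trunked_networks=20,30
nutanix@cvm$ acli vm.nic_update fw-vm 50:6b:8d:f1:23:45 update_vlan_trunk_info=true vlan_mode=kAccess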

Changing the IP Address of an Acropolis Host

About this task


Perform the following procedure to change the IP address of an Acropolis host.

CAUTION: All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor
can be multihomed provided that one interface is on the same subnet as the Controller VM.



Procedure

1. Edit the settings of port br0, which is the internal port on the default bridge br0.

a. Log on to the host console as root.


You can access the hypervisor host console either through IPMI or by attaching a
keyboard and monitor to the node.
b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0

c. Update entries for host IP address, netmask, and gateway.


The block of configuration information that includes these entries is similar to the
following:
ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"

• Replace host_ip_addr with the IP address for the hypervisor host.


• Replace subnet_mask with the subnet mask for host_ip_addr.
• Replace gateway_ip_addr with the gateway address for host_ip_addr.
d. Save your changes.
e. Restart network services.
root@ahv# /etc/init.d/network restart

2. Log on to the Controller VM and restart genesis.


nutanix@cvm$ genesis restart

If the restart is successful, output similar to the following is displayed:


Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]

For information about how to log on to a Controller VM, see Controller VM Access on
page 8.

3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see
Assigning an Acropolis Host to a VLAN on page 30.
5
VIRTUAL MACHINE MANAGEMENT
The following topics describe various aspects of virtual machine management in an AHV
cluster.

Supported Guest VM Types for AHV


The compatibility matrix available on the Nutanix support portal includes the latest supported
AHV guest VM OSes.

Maximum vDisks per bus type

• SCSI: 256
• PCI: 6
• IDE: 4

Unified Extensible Firmware Interface (UEFI) Support for Guest VMs


AHV does not support VMs created in UEFI mode.

Virtual Machine Network Management


Virtual machine network management involves configuring connectivity for guest VMs through
Open vSwitch bridges.

Configuring 1 GbE Connectivity for Guest VMs


If you want to configure 1 GbE connectivity for guest VMs, you can aggregate the 1 GbE
interfaces (eth0 and eth1) to a bond on a separate OVS bridge, create a VLAN network on the
bridge, and then assign guest VM interfaces to the network.

About this task


To configure 1 GbE connectivity for guest VMs, do the following:

Procedure

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.



3. Determine the uplinks configured on the host.
nutanix@cvm$ allssh manage_ovs show_uplinks

Output similar to the following is displayed:


Executing manage_ovs show_uplinks on the cluster
================== 192.0.2.49 =================
Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

================== 192.0.2.50 =================


Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

================== 192.0.2.51 =================


Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample
output in the previous step, dissociate the 1 GbE interfaces from the bond. Assume that the
bridge name and bond name are br0 and br0-up, respectively.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up update_uplinks'

The command removes the bond and then re-creates the bond with only the 10 GbE
interfaces.

5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge
called br1 (bridge names must not exceed 6 characters).
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 create_single_bridge'

6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example,
aggregate them to a bond named br1-up.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up update_uplinks'

7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the
guest VMs, and associate the new bridge with the network. For example, create a network
named vlan10.br1 on VLAN 10.
nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1

8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign
interfaces on the guest VMs to the network.
For information about assigning guest VM interfaces to a network, see "Creating a VM" in the
Prism Web Console Guide.

Configuring a Virtual NIC to Operate in Access or Trunk Mode


By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual
NIC can send and receive traffic only over its own VLAN, which is the VLAN of the virtual
network to which it is connected. If restricted to using access mode interfaces, a VM running
an application on multiple VLANs (such as a firewall application) must use multiple virtual NICs
—one for each VLAN. Instead of configuring multiple virtual NICs in access mode, you can
configure a single virtual NIC on the VM to operate in trunk mode. A virtual NIC in trunk mode
can send and receive traffic over any number of VLANs in addition to its own VLAN. You can
trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from the trunk
mode to the access mode, in which case the virtual NIC reverts to sending and receiving traffic
only over its own VLAN.

About this task


To configure a virtual NIC as an access port or trunk port, do the following:

Procedure

1. Log on to the Controller VM with SSH.

2. Do one of the following:

a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]

Specify appropriate values for the following parameters:

• vm. Name of the VM.


• network. Name of the virtual network to which you want to connect the virtual NIC.
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk.
The parameter is processed only if vlan_mode is set to kTrunked and is ignored if
vlan_mode is set to kAccess. To include the default VLAN, VLAN 0, include it in the
list of trunked networks. To trunk all VLANs, set vlan_mode to kTrunked and skip this
parameter.
• vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess
for access mode and to kTrunked for trunk mode. Default: kAccess.
b. Configure an existing virtual NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_update vm mac_addr [update_vlan_trunk_info={true | false}] [vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]

Specify appropriate values for the following parameters:

• vm. Name of the VM.


• mac_addr. MAC address of the virtual NIC to update (the MAC address is used to
identify the virtual NIC). Required to update a virtual NIC.
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk.
The parameter is processed only if vlan_mode is set to kTrunked and is ignored if
vlan_mode is set to kAccess. To include the default VLAN, VLAN 0, include it in the

list of trunked networks. To trunk all VLANs, set vlan_mode to kTrunked and skip this
parameter.
• update_vlan_trunk_info. Update the VLAN type and list of trunked VLANs. If not
specified, the parameter defaults to false and the vlan_mode and trunked_networks
parameters are ignored.
• vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess
for access mode and to kTrunked for trunk mode.

Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see
the "VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App
Mobility Fabric Guide.

Virtual Machine Memory and CPU Hot-Plug Configurations


Memory and CPUs are hot-pluggable on guest VMs running on AHV. While a VM is powered on,
you can increase its memory allocation and the number of vCPUs (sockets). However, you
cannot change the number of cores per socket while the VM is powered on.
You can change the memory and CPU configuration of your VMs by using the Acropolis CLI
(aCLI), the Prism web console (see Managing a VM (AHV) in the Prism Web Console Guide), or
Prism Central (see Managing a VM (AHV and Self Service) in the Prism Central Guide).
The following table lists the operating systems on which memory and CPUs are hot-pluggable.

Operating Systems              Edition     Bits          Hot-pluggable Memory  Hot-pluggable CPU

Windows Server 2008            Datacenter  x86           Yes                   No
Windows Server 2008 R2         Standard    x86_64        No                    No
Windows Server 2008 R2         Datacenter  x86_64        Yes                   Yes
Windows Server 2012 R2         Standard    x86_64        Yes                   No
Windows Server 2012 R2         Datacenter  x86_64        Yes                   No
CentOS                         6.3+        x86           No                    Yes
CentOS                         6.3+        x86_64        Yes                   Yes
CentOS                         6.8                       No                    Yes
CentOS                         6.8         x86_64        Yes                   Yes
CentOS                         7.2         x86_64        Yes                   Yes
Red Hat Enterprise Linux       6.3+        x86, x86_64   Yes                   Yes
Red Hat Enterprise Linux       7.0+        x86, x86_64   Yes                   Yes
Suse Linux Enterprise Edition  11-SP3+     x86_64        No                    Yes
Suse Linux Enterprise Edition  12          x86_64        Yes                   Yes

Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not make the hot-plugged memory
online. If the memory is not online, you cannot use the new memory. Perform the following
procedure to make the memory online.
1. Identify the memory block that is offline.
Display the state of a specific memory block.
$ cat /sys/devices/system/memory/memoryXXX/state

Display the state of all of the memory blocks.

$ grep line /sys/devices/system/memory/*/state

2. Bring the memory block online.

$ echo online > /sys/devices/system/memory/memoryXXX/state

2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot plugging more
memory into that VM so that the final memory is greater than 3 GB results in a memory-overflow
condition. To resolve the issue, restart the guest OS (CentOS 7.2) with the following
setting:
swiotlb=force

CPU OS Limitations
1. On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo,
you might have to bring the CPUs online. For each hot-plugged CPU, run the following
command to bring the CPU online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online

Replace <n> with the number of the hot plugged CPU.


2. Device Manager on some versions of Windows such as Windows Server 2012 R2 displays the
hot-plugged CPUs as new hardware, but the hot-plugged CPUs are not displayed under Task
Manager.

Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)

About this task


Perform the following procedure to hot plug the memory and CPUs on the AHV VMs.

Procedure

1. Log on to the Controller VM with SSH.



2. Update the memory allocation for the VM.
nutanix@cvm$ acli vm.update vm-name memory=new_memory_size

Replace vm-name with the name of the VM and new_memory_size with the memory size.

3. Update the number of CPUs on the VM.


nutanix@cvm$ acli vm.update vm-name num_vcpus=n

Replace vm-name with the name of the VM and n with the number of CPUs.
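For example, the following commands (with a hypothetical VM name and hypothetical sizes) set the memory of a VM named testvm to 8 GB and its vCPU count to 4 while the VM is powered on:
nutanix@cvm$ acli vm.update testvm memory=8G
nutanix@cvm$ acli vm.update testvm num_vcpus=4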

Note: After you upgrade from a version that does not support hot-plug to a version that does,
you must power cycle any VM that was created and powered on before the upgrade so that it
becomes compatible with the memory and CPU hot-plug feature. This power cycle is required
only once after the upgrade. New VMs created on the supported version are hot-plug compatible
by default.

Virtual Machine Memory Management (vNUMA)


AHV hosts support Virtual Non-uniform Memory Access (vNUMA) on virtual machines. You can
enable vNUMA on VMs when you create or modify the VMs to optimize memory performance.

Non-uniform Memory Access (NUMA)


In a NUMA topology, the memory access times of a VM depend on the location of the memory
relative to a processor. A VM accesses memory that is local to a processor faster than non-local
memory. You achieve optimal resource utilization when both the CPU and the memory are used
from the same physical NUMA node. Memory latency is introduced if the CPU runs on one
NUMA node (for example, node 0) and the VM accesses memory from another node (node 1).
Ensure that the virtual hardware topology of VMs matches the physical hardware topology
to achieve minimum memory latency.

Virtual Non-uniform Memory Access (vNUMA)


vNUMA optimizes memory performance of virtual machines that require more vCPUs or
memory than the capacity of a single physical NUMA node. In a vNUMA topology, you can
create multiple vNUMA nodes where each vNUMA node includes vCPUs and virtual RAM. When
you assign a vNUMA node to a physical NUMA node, the vCPUs can intelligently determine the
memory latency (high or low). Low memory latency within a vNUMA node results in low latency
within a physical NUMA node.

Enabling vNUMA on Virtual Machines

Before you begin


Before you enable vNUMA, see the AHV Best Practices Guide under Solutions Documentation.

About this task


Perform the following procedure to enable vNUMA on your VMs running on the AHV hosts.

Procedure

1. Log on to a Controller VM with SSH.

2. Access the Acropolis command line.


nutanix@cvm$ acli
<acropolis>



3. Do one of the following:

• Enable vNUMA if you are creating a new VM.


<acropolis> vm.create vm_name num_vcpus=x \
num_cores_per_vcpu=x memory=xG \
num_vnuma_nodes=x

• Enable vNUMA if you are modifying an existing VM.


<acropolis> vm.update vm_name \
num_vnuma_nodes=x

Replace vm_name with the name of the VM on which you want to enable vNUMA or vUMA.
Replace x with the values for the following indicated parameters:

• num_vcpus: Type the number of vCPUs for the VM.


• num_cores_per_vcpu: Type the number of cores per vCPU.
• memory: Type the memory in GB for the VM.
• num_vnuma_nodes: Type the number of vNUMA nodes for the VM.
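For example, the following command (with hypothetical values) creates a VM named vnuma-vm with 16 vCPUs, 64 GB of memory, and two vNUMA nodes:
<acropolis> vm.create vnuma-vm num_vcpus=16 num_cores_per_vcpu=1 memory=64G num_vnuma_nodes=2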

GPU and vGPU Support


AHV supports GPU-accelerated computing for guest VMs. You can configure either GPU pass-
through or a virtual GPU.

Note: You can configure either pass-through or a vGPU for a guest VM but not both.

This guide describes the concepts and driver installation information. For the configuration
procedures, see the Prism Web Console Guide.

Supported GPUs
The following GPUs are supported:

• NVIDIA® Tesla® M10


• NVIDIA® Tesla® M60
• NVIDIA® Tesla® P40

GPU Pass-Through for Guest VMs


AHV hosts support GPU pass-through for guest VMs, allowing applications on VMs direct
access to GPU resources. The Nutanix user interfaces provide a cluster-wide view of GPUs,
allowing you to allocate any available GPU to a VM. You can also allocate multiple GPUs to a
VM. However, in a pass-through configuration, only one VM can use a GPU at any given time.

Host Selection Criteria for VMs with GPU Pass-Through


When you power on a VM with GPU pass-through, the VM is started on the host that has the
specified GPU, provided that the Acropolis Dynamic Scheduler determines that the host has
sufficient resources to run the VM. If the specified GPU is available on more than one host,
the Acropolis Dynamic Scheduler ensures that a host with sufficient resources is selected. If
sufficient resources are not available on any host with the specified GPU, the VM is not powered
on.



If you allocate multiple GPUs to a VM, the VM is started on a host if, in addition to satisfying
Acropolis Dynamic Scheduler requirements, the host has all of the GPUs that are specified for
the VM.
If you want a VM to always use a GPU on a specific host, configure host affinity for the VM.

Support for Graphics and Compute Modes


AHV supports running GPU cards in either graphics mode or compute mode. If a GPU is running
in compute mode, Nutanix user interfaces indicate the mode by appending the string compute
to the model name. No string is appended if a GPU is running in the default graphics mode.

Switching Between Graphics and Compute Modes


If you want to change the mode of the firmware on a GPU, put the host in maintenance mode,
and then flash the GPU manually by logging on to the AHV host and performing standard
procedures as documented for Linux VMs by the vendor of the GPU card.
Typically, you restart the host immediately after you flash the GPU. After restarting the host,
redo the GPU configuration on the affected VM, and then start the VM. For example, consider
that you want to re-flash an NVIDIA Tesla® M60 GPU that is running in graphics mode. The
Prism web console identifies the card as an NVIDIA Tesla M60 GPU. After you re-flash the GPU
to run in compute mode and restart the host, redo the GPU configuration on the affected VMs
by adding back the GPU, which is now identified as an NVIDIA Tesla M60.compute GPU, and
then start the VM.

Supported GPU Cards


For a list of supported GPUs, see Supported GPUs on page 41.

Limitations
GPU pass-through support has the following limitations:

• HA is not supported for VMs with GPU pass-through. If the host fails, VMs that have a GPU
configuration are powered off and then powered on automatically when the node is back up.
• Live migration of VMs with a GPU configuration is not supported. Live migration of VMs is
necessary when the BIOS, BMC, and the hypervisor on the host are being upgraded. During
these upgrades, VMs that have a GPU configuration are powered off and then powered on
automatically when the node is back up.
• VM pause and resume are not supported.
• You cannot hot add VM memory if the VM is using a GPU.
• Hot add and hot remove support is not available for GPUs.
• You can change the GPU configuration of a VM only when the VM is turned off.
• The Prism web console does not support console access for VMs that are configured with
GPU pass-through. Before you configure GPU pass-through for a VM, set up an alternative
means to access the VM. For example, enable remote access over RDP.
Removing GPU pass-through from a VM restores console access to the VM through the
Prism web console.

Configuring GPU Pass-Through


For information about configuring GPU pass-through for guest VMs, see Creating a VM (AHV)
in the "Virtual Machine Management" chapter of the Prism Web Console Guide.



NVIDIA GRID Virtual GPU Support on AHV
AHV supports NVIDIA GRID technology, which enables multiple guest VMs to use the same
physical GPU concurrently. Concurrent use is made possible by dividing a physical GPU into
discrete virtual GPUs (vGPUs) and allocating those vGPUs to guest VMs. Each vGPU is allocated
a fixed range of the physical GPU’s framebuffer and uses all the GPU processing cores in a time-
sliced manner.
Virtual GPUs are of different types (vGPU types are also called vGPU profiles) and differ by the
amount of physical GPU resources allocated to them and the class of workload that they target.
The number of vGPUs into which a single physical GPU can be divided therefore depends on
the vGPU profile that is used on a physical GPU.
vGPU profiles are licensed through an NVIDIA GRID license server. Guest VMs check out a
license over the network when starting up and return the license when shutting down. The
choice of license depends on the type of vGPU that the applications running on the VM
require. Licenses are available in various editions, and the vGPU profile that you want might
be supported by more than one license edition. You must determine the vGPU profile that the
VM requires, install an appropriate license on the licensing server, and configure the VM to use
that license and vGPU type. For information about licensing for different vGPU types, see the
NVIDIA GRID licensing documentation.
When the VM is powering on, it checks out the license from the licensing server. If the specified
license is not available on the licensing server, the VM starts up and functions normally, but the
vGPU runs with reduced capability. When a license is checked back in, the vGPU is returned to
the vGPU resource pool. When powered on, guest VMs use a vGPU in the same way that they
use a physical GPU that is passed through.
Each physical GPU supports more than one vGPU profile, but a physical GPU cannot run
multiple vGPU profiles concurrently. After a vGPU of a given profile is created on a physical
GPU (that is, after a vGPU is allocated to a VM that is powered on), the GPU is restricted to
that vGPU profile until it is freed up completely. To understand this behavior, consider that you
configure a VM to use an M60-1Q vGPU. When the VM is powering on, it is allocated an M60-1Q
vGPU instance only if a physical GPU that supports M60-1Q is either unused or already running
the M60-1Q profile and can accommodate the requested vGPU. If an entire physical GPU that
supports M60-1Q is free at the time the VM is powering on, an M60-1Q vGPU instance is created
for the VM on the GPU, and that profile is locked on the GPU. In other words, until the physical
GPU is completely freed up again, only M60-1Q vGPU instances can be created on that physical
GPU (that is, only VMs configured with M60-1Q vGPUs can use that physical GPU).

Note: On AHV, you can assign only one vGPU to a VM.

Supported GPU Cards


For a list of supported GPUs, see Supported GPUs on page 41.

Limitations for vGPU Support


vGPU support on AHV has the following limitations:

• You cannot hot-add memory to VMs that have a vGPU.


• VMs with a vGPU cannot be live-migrated.
• VMs with a vGPU are not protected by the disaster recovery (DR) feature.



• The Prism web console does not support console access for VMs that are configured with
a vGPU. Before you add a vGPU to a VM, set up an alternative means to access the VM. For
example, enable remote access over RDP.
Removing a vGPU from a VM restores console access to the VM through the Prism web
console.

NVIDIA GRID vGPU Driver Installation and Configuration Workflow


To enable guest VMs to use vGPUs on AHV, you must install NVIDIA drivers on the guest VMs,
install the NVIDIA GRID host driver on the hypervisor, and set up an NVIDIA GRID License
Server.

Before you begin

• Make sure that NVIDIA GRID Virtual GPU Manager (the host driver) and the NVIDIA GRID
guest operating system driver are at the same version.
• The GPUs must run in graphics mode. If any M60 GPUs are running in compute mode, switch
the mode to graphics before you begin. See the gpumodeswitch User Guide.
• If you are using NVIDIA vGPU drivers on a guest VM and you modify the vGPU profile
assigned to the VM (in the Prism web console), you might need to reinstall the NVIDIA guest
drivers on the guest VM.

About this task


To enable guest VMs to use vGPUs, do the following:

Procedure

1. If you do not have an NVIDIA GRID licensing server, set up the licensing server.
See the Virtual GPU License Server User Guide.

2. Download the guest and host drivers (both drivers are included in a single bundle) from the
NVIDIA Driver Downloads page. For information about the supported driver versions, see
Virtual GPU Software R384 for Nutanix AHV Release Notes.

3. Install the host driver (NVIDIA GRID Virtual GPU Manager) on the AHV hosts. See Installing
NVIDIA GRID Virtual GPU Manager (Host Driver) on page 44.

4. In the Prism web console, configure a vGPU profile for the VM.
To create a VM, see Creating a VM (AHV) in the "Virtual Machine Management" chapter of
the Prism Web Console Guide. To allocate vGPUs to an existing VM, see the "Managing a VM
(AHV)" topic in that Prism Web Console Guide chapter.

5. Do the following on the guest VM:

a. Download the NVIDIA GRID guest operating system driver from the NVIDIA portal, install
the driver on the guest VM, and then restart the VM.
b. Configure vGPU licensing on the guest VM.
This step involves configuring the license server on the VM so that the VM can request
the appropriate license from the license server. For information about configuring vGPU
licensing on the guest VM, see the NVIDIA GRID vGPU User Guide.

Installing NVIDIA GRID Virtual GPU Manager (Host Driver)


NVIDIA GRID Virtual GPU Manager for AHV can be installed from any Controller VM by the use
of the install_host_package script. The script, when run on a Controller VM, installs the driver on
all the hosts in the cluster. If one or more hosts in the cluster do not have a GPU installed, the
script prompts you to choose whether or not you want to install the driver on those hosts. In a
rolling fashion, the script places each GPU host in maintenance mode, installs the driver from
the RPM package, and restarts the host after the installation is complete. You can copy the RPM
package to the Controller VM and pass the path of the package to the install_host_package
script as an argument. Alternately, you can make the RPM package available on a web server
and pass the URL to the install_host_package script.

About this task

Note: VMs using a GPU must be powered off if their parent host is affected by this install. If
left running, the VMs are automatically powered off when the driver installation begins on their
parent host, and then powered on after the installation is complete.

To install the host driver, do the following:

Procedure

1. To make the driver available to the script, do one of the following:

a. Copy the RPM package to a web server to which you can connect from a Controller VM
on the AHV cluster.
b. Copy the RPM package to any Controller VM in the cluster on which you want to install
the driver.

2. Log on to any Controller VM in the cluster with SSH.

3. If the RPM package is available on a web server, install the driver from the server location.
nutanix@cvm$ install_host_package -u url

Replace url with the URL to the driver on the server.

4. If the RPM package is available on the Controller VM, install the driver from the location to
which you uploaded the driver.
nutanix@cvm$ install_host_package -r rpm

Replace rpm with the path to the driver on the Controller VM.

5. At the confirmation prompt, type yes to confirm that you want to install the driver.
If some of the hosts in the cluster do not have GPUs installed on them, you are prompted,
once for each such host, to choose whether or not you want to install the driver on those
hosts. Specify whether or not you want to install the host driver by typing yes or no.

Windows VM Provisioning
Nutanix VirtIO for Windows
Nutanix VirtIO is a collection of drivers for paravirtual devices that enhance stability and
performance of VMs on AHV.
Nutanix VirtIO is available in two formats:

• An ISO used when installing Windows in a VM on AHV.


• An installer used to update VirtIO for Windows.



VirtIO Requirements
The following are requirements for Nutanix VirtIO for Windows.

• Operating system:

• Microsoft Windows Server Version: Windows 2008 R2 or later versions.


• Microsoft Windows Client Version: Windows 7 or later versions.
• AHV version 20160925.30 (at minimum)

Installing Nutanix VirtIO for Windows


This topic describes how to download the Nutanix VirtIO and Nutanix VirtIO Microsoft Installer
(MSI). The MSI installs and upgrades the Nutanix VirtIO drivers.

Before you begin


Make sure that you meet the VirtIO requirements. See VirtIO Requirements on page 46.

About this task


To download the Nutanix VirtIO, perform the following.

Procedure

1. Go to the Nutanix Support Portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.

2. Use the filter search to find the latest Nutanix VirtIO package.

3. Click and download the Nutanix VirtIO package.

• Choose the ISO if you are creating a new Windows VM. The installer is available
on the ISO if your VM does not have internet access.
• Choose the MSI if you are updating drivers in an existing Windows VM.

Figure 6: Search filter and VirtIO options

4. Upload the ISO to the cluster.


This task is described in Configuring Images in the Web Console Guide.

5. Run the Nutanix VirtIO MSI by opening the download.



6. Read and accept the Nutanix VirtIO License Agreement. Click Install.

Figure 7: Nutanix VirtIO Windows Setup Wizard

The Nutanix VirtIO Setup Wizard shows a status bar and completes installation.

Manually Installing Nutanix VirtIO


Manual installation of Nutanix VirtIO is required for 32-bit operating systems.

Before you begin


Make sure that you meet the VirtIO requirements. See VirtIO Requirements on page 46.

About this task

Note: To automatically install Nutanix VirtIO, see Installing Nutanix VirtIO for Windows on
page 46.

Procedure

1. Go to the Nutanix Support Portal and navigate to Downloads > Tools & Firmware.

2. Use the filter search to find the latest Nutanix VirtIO ISO.

3. Download the latest VirtIO for Windows ISO to your local machine.

Note: Nutanix recommends extracting the VirtIO ISO into the same VM where you will load
Nutanix VirtIO for easier installation.

4. Upload the Nutanix VirtIO ISO to your cluster.


This procedure is described in Configuring Images in the Web Console Guide.

5. Locate the VM where you want to install the Nutanix VirtIO ISO and update the VM.



6. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.

a. TYPE: CD-ROM
b. OPERATION: CLONE FROM IMAGE SERVICE
c. BUS TYPE: IDE
d. IMAGE: Select the Nutanix VirtIO ISO
e. Click Add.

7. Log into the VM and navigate to Control Panel > Device Manager.



8. Note: Ensure you select the x86 subdirectory for 32-bit Windows or the amd64 for 64-bit
Windows.

Expand the following device categories and update the Nutanix drivers. For each device,
right-click it, select Update Driver Software, and browse to the drive that contains the
VirtIO ISO. Follow the wizard instructions until you receive installation confirmation.

a. System Devices > Nutanix VirtIO Balloon Drivers


b. Network Adapter > Nutanix VirtIO Ethernet Adapter.
c. Storage Controllers > Nutanix VirtIO SCSI pass through Controller
The Nutanix VirtIO SCSI pass through Controller prompts you to restart your system. You
may restart at any time to successfully install the controller.



Figure 8: List of Nutanix VirtIO downloads



Upgrading Nutanix VirtIO for Windows
This topic describes how to upload and upgrade Nutanix VirtIO and Nutanix VirtIO Microsoft
Installer (MSI). The MSI installs and upgrades the Nutanix VirtIO drivers.

Before you begin


Make sure that you meet the VirtIO requirements. See VirtIO Requirements on page 46.

Procedure

1. Go to the Nutanix Support Portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.

2. Select the ISO if you are creating a new Windows VM.

Note: The installer is available on the ISO if your VM does not have internet access.

a. Upload the ISO to the cluster as described in Configuring Images in the Web Console
Guide.
b. Mount the ISO image in the CD-ROM drive of each VM in the cluster that you want to upgrade.

3. If you are updating drivers in a Windows VM, select the appropriate 32-bit or 64-bit MSI.

4. Upgrade drivers.

Note: The following options might prompt a system restart.

• For the SCSI drivers of SCSI boot disks, upgrade the drivers manually by following the
vendor's instructions.
• For all other drivers, run the Nutanix VirtIO MSI installer (the preferred installation
method) and follow the wizard instructions.

Note: Running the Nutanix VirtIO MSI installer upgrades all drivers.

Upgrading drivers may cause VMs to restart automatically.



5. Read and accept the Nutanix VirtIO License Agreement. Click Install.

Figure 9: Nutanix VirtIO Windows Setup Wizard

The Nutanix VirtIO Setup Wizard shows a status bar and completes installation.

Creating a Windows VM on AHV with Nutanix VirtIO (New and Migrated VMs)

Before you begin

• Upload your Windows Installer ISO to your cluster as described in the Configuring Images
section of the Web Console Guide document.
• Upload the Nutanix VirtIO ISO to your cluster as described in the Configuring Images section
of the Web Console Guide document.

About this task


The following task describes how to create a Windows VM in AHV or migrate a Windows VM
from a non-Nutanix source to AHV with the Nutanix VirtIO drivers. To install a new or migrated
Windows VM with Nutanix VirtIO, complete the following.

Procedure

1. Log on to the Prism web console using your Nutanix credentials.

2. At the top-left corner, click Home > VM.


The VM page appears.



3. Click + Create VM in the corner of the page.
The Create VM dialog box appears.

Figure 10: The Create VM dialog box

4. Complete the indicated fields.

a. NAME: Enter a name for the VM.


b. vCPU(s): Enter the number of vCPUs.
c. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
d. MEMORY: Enter the amount of memory for the VM (in GiBs).



5. If you are creating a Windows VM, add a Windows CD-ROM to the VM.

a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated
fields.
The current CD-ROM opens in a new window.
b. OPERATION: CLONE FROM IMAGE SERVICE
c. BUS TYPE: IDE
d. IMAGE: Select the Windows OS Install ISO.
e. Click Update.

6. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.

a. TYPE: CD-ROM
b. OPERATION: CLONE FROM IMAGE SERVICE
c. BUS TYPE: IDE
d. IMAGE: Select the Nutanix VirtIO ISO.
e. Click Add.

7. Add a new disk for the hard drive.

a. TYPE: DISK
b. OPERATION: ALLOCATE ON STORAGE CONTAINER
c. BUS TYPE: SCSI
d. STORAGE CONTAINER: Select the appropriate storage container.
e. SIZE: Enter the number for the size of the hard drive (in GiB).
f. Click Add to add the disk.

8. If you are migrating a VM, create a disk from the disk image. Click Add New Disk and
complete the indicated fields.

a. TYPE: DISK
b. OPERATION: CLONE FROM IMAGE
c. BUS TYPE: SCSI
d. CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image you
created previously.
e. Click Add to add the disk.

9. (Optional) After you have migrated or created a VM, add a network interface card (NIC)
by clicking Add New NIC and completing the indicated fields.

a. VLAN ID: Choose the VLAN ID according to network requirements and enter the IP
address if required.
b. Click Add.

10. Once you complete the indicated fields, click Save.



What to do next
Install Windows by following Installing Windows on a VM on page 55.

Installing Windows on a VM

Before you begin


Create a Windows VM. The "Creating a Windows VM on AHV after Migration" topic in the
Migration Guide describes how to create this VM.

About this task


To install a Windows VM, do the following.

Procedure

1. Log on to the web console.

2. Click Home > VM to open the VM dashboard.

3. Select the Windows VM.

4. In the center of the VM page, click Power On.

5. Click Launch Console.


The Windows console opens in a new window.

6. Select the desired language, time and currency format, and keyboard information.

7. Click Next > Install Now.


The Windows Setup window displays the operating systems to install.

8. Select the Windows OS you want to install.

9. Click Next and accept the license terms.

10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.



11. Choose the Nutanix VirtIO driver.

a. Select the Nutanix VirtIO CD drive.


b. Expand the Windows OS folder and click OK.

Figure 11: Select the Nutanix VirtIO drivers for your OS

The Select the driver to install window appears.

12. Select the correct driver and click Next.


The amd64 folder contains drivers for 64-bit operating systems. The x86 folder contains
drivers for 32-bit operating systems.

13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress, which can take several minutes.

14. Enter your user name and password information and click Finish.
Installation can take several minutes.
Once you complete the logon information, Windows Setup completes installation.

PXE Configuration for AHV VMs


You can configure a VM to boot over the network in a Preboot eXecution Environment (PXE).
Booting over the network is called PXE booting and does not require the use of installation
media. When starting up, a PXE-enabled VM communicates with a DHCP server to obtain
information about the boot file it requires.



Configuring PXE boot for an AHV VM involves performing the following steps:

• Configuring the VM to boot over the network.


• Configuring the PXE environment.
The procedure for configuring a VM to boot over the network is the same for managed and
unmanaged networks. The procedure for configuring the PXE environment differs for the two
network types, as follows:

• An unmanaged network does not perform IPAM functions and gives VMs direct access to an
external Ethernet network. Therefore, the procedure for configuring the PXE environment
for AHV VMs is the same as for a physical machine or a VM that is running on any other
hypervisor. VMs obtain boot file information from the DHCP or PXE server on the external
network.
• A managed network intercepts DHCP requests from AHV VMs and performs IP address
management (IPAM) functions for the VMs. Therefore, you must add a TFTP server and the
required boot file information to the configuration of the managed network. VMs obtain boot
file information from this configuration.
A VM that is configured to use PXE boot boots over the network on subsequent restarts until
the boot order of the VM is changed.

Configuring the PXE Environment for AHV VMs


The procedure for configuring the PXE environment for a VM on an unmanaged network is
similar to the procedure for configuring a PXE environment for a physical machine on the
external network and is beyond the scope of this document. This procedure configures a PXE
environment for a VM in a managed network on an AHV host.

About this task


To configure a PXE environment for a VM on a managed network on an AHV host, do the
following:

Procedure

1. Log on to the Prism web console, click the gear icon, and then click Network Configuration in
the menu.
The Network Configuration dialog box is displayed.

2. On the Virtual Networks tab, click the pencil icon shown for the network for which you want
to configure a PXE environment.
The VMs that require the PXE boot information must be on this network.

3. In the Update Network dialog box, do the following:

a. Select the Configure Domain Settings check box and do the following in the fields shown
in the domain settings sections:

• In the TFTP Server Name field, specify the host name or IP address of the TFTP server.
If you specify a host name in this field, make sure to also specify DNS settings in the
Domain Name Servers (comma separated), Domain Search (comma separated), and
Domain Name fields.
• In the Boot File Name field, specify the boot file that the VMs must use.
b. Click Save.



4. Click Close.

Configuring a VM to Boot over a Network


To enable a VM to boot over the network, update the VM's boot device setting. Currently, the
only user interface that enables you to perform this task is the Acropolis CLI (aCLI).

About this task


To configure a VM to boot from the network, do the following:

Procedure

1. Log on to any Controller VM in the cluster with SSH.

2. Access the aCLI.


nutanix@cvm$ acli
<acropolis>

3. Create a VM.

<acropolis> vm.create vm num_vcpus=num_vcpus memory=memory

Replace vm with a name for the VM, and replace num_vcpus and memory with the number
of vCPUs and amount of memory that you want to assign to the VM, respectively.
For example, create a VM named nw-boot-vm.

<acropolis> vm.create nw-boot-vm num_vcpus=1 memory=512

4. Create a virtual interface for the VM and place it on a network.


<acropolis> vm.nic_create vm network=network

Replace vm with the name of the VM and replace network with the name of the network.
If the network is an unmanaged network, make sure that a DHCP server and the boot file
that the VM requires are available on the network. If the network is a managed network,
configure the DHCP server to provide TFTP server and boot file information to the VM. See
Configuring the PXE Environment for AHV VMs on page 57.
For example, create a virtual interface for VM nw-boot-vm and place it on a network named
network1.
<acropolis> vm.nic_create nw-boot-vm network=network1

5. Obtain the MAC address of the virtual interface.


<acropolis> vm.nic_list vm

Replace vm with the name of the VM.


For example, obtain the MAC address of VM nw-boot-vm.
<acropolis> vm.nic_list nw-boot-vm
00-00-5E-00-53-FF



6. Update the boot device setting so that the VM boots over the network.
<acropolis> vm.update_boot_device vm mac_addr=mac_addr

Replace vm with the name of the VM and mac_addr with the MAC address of the virtual
interface that the VM must use to boot over the network.
For example, update the boot device setting of the VM named nw-boot-vm so that the VM
uses the virtual interface with MAC address 00-00-5E-00-53-FF.
<acropolis> vm.update_boot_device nw-boot-vm mac_addr=00-00-5E-00-53-FF

7. Power on the VM.


<acropolis> vm.on vm_list [host="host"]

Replace vm_list with the name of the VM. Replace host with the name of the host on which
you want to start the VM.
For example, start the VM named nw-boot-vm on a host named host-1.
<acropolis> vm.on nw-boot-vm host="host-1"

Uploading Files to DSF for Microsoft Windows Users


If you are a Microsoft Windows user, you can securely upload files to DSF by using the
following procedure.

Procedure

1. Authenticate by using Prism username and password or, for advanced users, use the public
key that is managed through the Prism cluster lockdown user interface.

2. Use WinSCP, with SFTP selected, to connect to the Controller VM through port 2222 and start
browsing the DSF data store.

Note: The root directory displays the storage containers, and you cannot change it. You can
upload files only to one of the storage containers, not directly to the root directory. To create
or delete storage containers, use the Prism user interface.

Enabling Load Balancing of vDisks in a Volume Group


About this task
AHV hosts support load balancing of vDisks in a volume group for user VMs. Load balancing
of vDisks in a volume group enables IO-intensive VMs to utilize resources such as the CPU and
memory of multiple Controller VMs (CVMs). vDisks belonging to a volume group are distributed
across the CVMs in a cluster thereby improving the performance and preventing bottlenecks.
However, each vDisk still utilizes the resources of a single CVM.

Note:

• vDisk load balancing is disabled by default.


• You can attach a maximum of 10 load-balanced volume groups per user VM.
• For Linux VMs, ensure that the SCSI device timeout is 60 seconds. For more
information about checking and modifying the SCSI device timeout, see the
Red Hat documentation at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/task_controlling-scsi-command-timer-onlining-devices.

Perform the following procedure to enable load balancing of vDisks by using aCLI.

Procedure

1. Log on to a Controller VM with SSH.

2. Access the Acropolis command line.


nutanix@cvm$ acli
<acropolis>

3. Do one of the following:

• Enable vDisk load balancing if you are creating a new volume group.
<acropolis> vg.create vg_name load_balance_vm_attachments=true

Replace vg_name with the name of the volume group.


• Enable vDisk load balancing if you are updating an existing volume group.
<acropolis> vg.update vg_name load_balance_vm_attachments=true

Replace vg_name with the name of the volume group.

Note: If you are modifying an existing volume group, you must first detach all the VMs that
are attached to that volume group and then enable vDisk load balancing.

4. (Optional) Disable vDisk load balancing.


<acropolis> vg.update vg_name load_balance_vm_attachments=false

Replace vg_name with the name of the volume group.
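For example, the following commands (using a hypothetical volume group named vg1) enable vDisk load balancing at creation time or on an existing volume group, respectively:
<acropolis> vg.create vg1 load_balance_vm_attachments=true
<acropolis> vg.update vg1 load_balance_vm_attachments=true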

Performing Power Operations on VMs


You can initiate safe and graceful power operations such as soft shutdown and restart of the
VMs running on the AHV hosts by using the aCLI. The soft shutdown and restart operations
are initiated and performed by Nutanix Guest Tools (NGT) within the VM thereby ensuring a
safe and graceful shutdown or restart of the VM. You can create a pre-shutdown script that
you can choose to run before a shutdown or restart of the VM. In the pre-shutdown script,
include any tasks or checks that you want to run before a VM is shut down or restarted. You
can choose to abort the power operation if the pre-shutdown script fails. If the script fails, an
alert (guest_agent_alert) is generated in the Prism web console.

Before you begin


Ensure that you have met the following prerequisites before you initiate the power operations:
1. NGT is enabled on the VM. All operating systems that NGT supports are supported for this
feature.
2. The NGT version running on the Controller VM and the user VM is a version supported by
AOS 5.6 or later.
3. The same NGT version is running on the Controller VM and the user VM.



4. (Optional) If you want to run a pre-shutdown script, place the script in the following
locations depending on your VMs:

• Windows VMs: installed_dir\scripts\power_off.bat


Note that the file name of the script must be power_off.bat.
• Linux VMs: installed_dir/scripts/power_off
Note that the file name of the script must be power_off.

About this task

Note: You can also perform these power operations by using the V3 API calls. For more
information, see developer.nutanix.com.

Perform the following steps to initiate the power operations:

Procedure

1. Log on to a Controller VM with SSH.

2. Access the Acropolis command line.


nutanix@cvm$ acli
<acropolis>

3. Do one of the following:

• Soft shut down the VM.


<acropolis> vm.guest_shutdown vm_name enable_script_exec=[true or false] fail_on_script_failure=[true or false]

Replace vm_name with the name of the VM.


• Restart the VM.
<acropolis> vm.guest_reboot vm_name enable_script_exec=[true or false] fail_on_script_failure=[true or false]

Replace vm_name with the name of the VM.


Set the value of enable_script_exec to true to run your pre-shutdown script and set the
value of fail_on_script_failure to true to abort the power operation if the pre-shutdown
script fails.
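For example, the following command soft shuts down a hypothetical VM named testvm, runs the pre-shutdown script, and aborts the shutdown if the script fails:
<acropolis> vm.guest_shutdown testvm enable_script_exec=true fail_on_script_failure=true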



6
EVENT NOTIFICATIONS
You can register webhook listeners with the Nutanix event notification system by creating
webhooks on the Nutanix cluster. For each webhook listener, you can specify the events for
which you want notifications to be generated. Multiple webhook listeners can be notified for
any given event. The webhook listeners can use the notifications to configure services such as
load balancers, firewalls, TOR switches, and routers. Notifications are sent in the form of a JSON
payload in an HTTP POST request, enabling you to send them to any endpoint device that can
accept an HTTP POST payload at a URL. Notifications can also be sent over an SSL connection.
For example, if you register a webhook listener and include VM migration as an event of
interest, the Nutanix cluster sends the specified URL a notification whenever a VM migrates to
another host.
You register webhook listeners by using the Nutanix REST API, version 3.0. In the API request,
you specify the events for which you want the webhook listener to receive notifications, the
listener URL, and other information such as a name and description for the webhook.

Generated Events
The following events are generated by an AHV cluster.

Table 7: Virtual Machine Events

Event           Description

VM.CREATE       A VM is created.

VM.DELETE       A VM is deleted. When a VM that is powered on is deleted, in addition to the
                VM.DELETE notification, a VM.OFF event is generated.

VM.UPDATE       A VM is updated.

VM.MIGRATE      A VM is migrated from one host to another. When a VM is migrated, in addition
                to the VM.MIGRATE notification, a VM.UPDATE event is generated.

VM.ON           A VM is powered on. When a VM is powered on, in addition to the VM.ON
                notification, a VM.UPDATE event is generated.

VM.OFF          A VM is powered off. When a VM is powered off, in addition to the VM.OFF
                notification, a VM.UPDATE event is generated.

VM.NIC_PLUG     A virtual NIC is plugged into a network. When a virtual NIC is plugged in, in
                addition to the VM.NIC_PLUG notification, a VM.UPDATE event is generated.

VM.NIC_UNPLUG   A virtual NIC is unplugged from a network. When a virtual NIC is unplugged, in
                addition to the VM.NIC_UNPLUG notification, a VM.UPDATE event is generated.

Table 8: Virtual Network Events

Event           Description

SUBNET.CREATE   A virtual network is created.
SUBNET.DELETE   A virtual network is deleted.
SUBNET.UPDATE   A virtual network is updated.

Creating a Webhook
Send the Nutanix cluster an HTTP POST request whose body contains the information essential
to creating a webhook (the events for which you want the listener to receive notifications, the
listener URL, and other information such as a name and description of the listener).

About this task

Note: Each POST request creates a separate webhook with a unique UUID, even if the data in
the body is identical. Each of those webhooks then generates a notification when an event
occurs, which results in multiple notifications for the same event. If you want to change a
webhook, do not send another creation request with the changes. Instead, update the existing
webhook. See Updating a Webhook on page 65.

To create a webhook, send the Nutanix cluster an API request of the following form:

Procedure

POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks
{
  "metadata": {
    "kind": "webhook"
  },
  "spec": {
    "name": "string",
    "resources": {
      "post_url": "string",
      "credentials": {
        "username": "string",
        "password": "string"
      },
      "events_filter_list": [
        string
      ]
    },
    "description": "string"
  },
  "api_version": "string"
}

Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate
values for the following parameters:

• name. Name for the webhook.
• post_url. URL at which the webhook listener receives notifications.
• username and password. User name and password to use for authenticating to the listener.
  Include these parameters if the listener requires them.
• events_filter_list. Comma-separated list of events for which notifications must be generated.
• description. Description of the webhook.
• api_version. Version of the Nutanix REST API in use.
The following sample API request creates a webhook that generates notifications when VMs are
created, powered on, powered off, or updated, and when a virtual network is created:
POST https://192.0.2.3:9440/api/nutanix/v3/webhooks
{
    "metadata": {
        "kind": "webhook"
    },
    "spec": {
        "name": "vm_notifications_webhook",
        "resources": {
            "post_url": "http://192.0.2.10:8080/",
            "credentials": {
                "username": "admin",
                "password": "Nutanix/4u"
            },
            "events_filter_list": [
                "VM.ON", "VM.OFF", "VM.UPDATE", "VM.CREATE", "NETWORK.CREATE"
            ]
        },
        "description": "Notifications for VM events."
    },
    "api_version": "3.0"
}
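
For reference, the following is a minimal sketch of sending an equivalent request with Python's
requests library. The use of Prism admin credentials for HTTP basic authentication and
verify=False (to accept a self-signed cluster certificate) are assumptions made for this example.

import requests

body = {
    "metadata": {"kind": "webhook"},
    "spec": {
        "name": "vm_notifications_webhook",
        "resources": {
            "post_url": "http://192.0.2.10:8080/",
            "events_filter_list": ["VM.ON", "VM.OFF"]
        },
        "description": "Notifications for VM power events."
    },
    "api_version": "3.0"
}

# Create the webhook and print the HTTP status code and response body.
response = requests.post(
    "https://192.0.2.3:9440/api/nutanix/v3/webhooks",
    json=body,
    auth=("admin", "Nutanix/4u"),  # Assumed Prism credentials.
    verify=False                   # Assumes a self-signed cluster certificate.
)
print(response.status_code, response.text)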

The Nutanix cluster responds to the API request with a 200 OK HTTP response that contains the
UUID of the webhook that is created. The following response is an example:
{
    "status": {
        "state": "PENDING"
    },
    "spec": {
        . . .
        "uuid": "003f8c42-748d-4c0b-b23d-ab594c087399"
    }
}

The notification contains metadata about the entity along with information about the type of
event that occurred. The event type is specified by the event_type parameter.

Listing Webhooks
You can list webhooks to view their specifications or to verify that they were created
successfully.

About this task


To list webhooks, do the following:

Procedure

• To show a single webhook, send the Nutanix cluster an API request of the following form:
  GET https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid

  Replace cluster_IP_address with the IP address of the Nutanix cluster. Replace webhook_uuid
  with the UUID of the webhook that you want to show.
• To list all the webhooks configured on the Nutanix cluster, send the Nutanix cluster an API
request of the following form:
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks/list
{
    "filter": "string",
    "kind": "webhook",
    "sort_order": "ASCENDING",
    "offset": 0,
    "total_matches": 0,
    "sort_column": "string",
    "length": 0
}

Replace cluster_IP_address with the IP address of the Nutanix cluster and specify
appropriate values for the following parameters:

• filter. Filter to apply to the list of webhooks.
• sort_order. Order in which to sort the list of webhooks. Ordering is performed on
  webhook names.
• offset. Offset from the start of the list at which to begin returning webhooks.
• total_matches. Number of matches to list.
• sort_column. Parameter on which to sort the list.
• length. Maximum number of webhooks to return in the list.
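
For reference, the following is a minimal sketch of issuing both requests with Python's requests
library. The cluster IP address, webhook UUID, credentials, pagination values, and certificate
handling are placeholder assumptions for this example.

import requests

BASE = "https://192.0.2.3:9440/api/nutanix/v3/webhooks"
AUTH = ("admin", "Nutanix/4u")  # Assumed Prism credentials.

# Show a single webhook by UUID (placeholder UUID).
one = requests.get(
    BASE + "/003f8c42-748d-4c0b-b23d-ab594c087399",
    auth=AUTH,
    verify=False  # Assumes a self-signed cluster certificate.
)

# List all webhooks, returning up to 20 entries sorted in ascending order.
all_hooks = requests.post(
    BASE + "/list",
    json={"kind": "webhook", "sort_order": "ASCENDING", "offset": 0, "length": 20},
    auth=AUTH,
    verify=False
)
print(one.status_code, all_hooks.status_code, all_hooks.text)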

Updating a Webhook
You can update a webhook by sending a PUT request to the Nutanix cluster. You can update
the name, listener URL, event list, and description.

About this task


To update a webhook, send the Nutanix cluster an API request of the following form:

Procedure

PUT https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
{
    "metadata": {
        "kind": "webhook"
    },
    "spec": {
        "name": "string",
        "resources": {
            "post_url": "string",
            "credentials": {
                "username": "string",
                "password": "string"
            },
            "events_filter_list": [
                string
            ]
        },
        "description": "string"
    },
    "api_version": "string"
}

Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID
of the webhook you want to update, respectively. For a description of the parameters, see
Creating a Webhook on page 63.
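
For reference, the following is a minimal sketch of sending such a PUT request with Python's
requests library. The webhook UUID, event list, credentials, and certificate handling are
placeholder assumptions for this example.

import requests

uuid = "003f8c42-748d-4c0b-b23d-ab594c087399"  # Placeholder UUID.
body = {
    "metadata": {"kind": "webhook"},
    "spec": {
        "name": "vm_notifications_webhook",
        "resources": {
            "post_url": "http://192.0.2.10:8080/",
            "events_filter_list": ["VM.ON", "VM.OFF", "VM.MIGRATE"]
        },
        "description": "Notifications for VM power and migration events."
    },
    "api_version": "3.0"
}

# Replace the webhook definition with the updated specification.
response = requests.put(
    "https://192.0.2.3:9440/api/nutanix/v3/webhooks/" + uuid,
    json=body,
    auth=("admin", "Nutanix/4u"),  # Assumed Prism credentials.
    verify=False                   # Assumes a self-signed cluster certificate.
)
print(response.status_code, response.text)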

Deleting a Webhook

About this task


To delete a webhook, send the Nutanix cluster an API request of the following form:

Procedure

DELETE https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid

Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID
of the webhook that you want to delete, respectively.
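
For reference, the following is a minimal sketch of sending the DELETE request with Python's
requests library. The webhook UUID, credentials, and certificate handling are placeholder
assumptions for this example.

import requests

# Delete the webhook identified by the placeholder UUID.
response = requests.delete(
    "https://192.0.2.3:9440/api/nutanix/v3/webhooks/003f8c42-748d-4c0b-b23d-ab594c087399",
    auth=("admin", "Nutanix/4u"),  # Assumed Prism credentials.
    verify=False                   # Assumes a self-signed cluster certificate.
)
print(response.status_code)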

Notification Format
An event notification has the same content and format as the response to the version 3.0
REST API call associated with that event. For example, the notification generated when a VM is
powered on has the same format and content as the response to a REST API call that powers
on a VM. However, the notification also contains a notification version, an event type, and an
entity reference, as shown:
{
    "version":"1.0",
    "data":{
        "metadata":{
            "status": {
                "name": "string",
                "providers": {},
                .
                .
                .
            "event_type":"VM.ON",
            "entity_reference":{
                "kind":"vm",
                "uuid":"63a942ac-d0ee-4dc8-b92e-8e009b703d84"
            }
        }

For VM.DELETE and SUBNET.DELETE events, the notification includes the UUID of the entity but not its metadata.

COPYRIGHT
Copyright 2019 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.

Conventions
Convention             Description
variable_value         The action depends on a value that is unique to your environment.
ncli> command          The commands are executed in the Nutanix nCLI.
user@host$ command     The commands are executed as a non-privileged user (such as
                       nutanix) in the system shell.
root@host# command     The commands are executed as the root user in the vSphere or
                       Acropolis host shell.
> command              The commands are executed in the Hyper-V host shell.
output                 The information is displayed as output from a command or in a
                       log file.

Default Cluster Credentials


Interface                Target                      Username        Password
Nutanix web console      Nutanix Controller VM       admin           Nutanix/4u
vSphere Web Client       ESXi host                   root            nutanix/4u
vSphere client           ESXi host                   root            nutanix/4u
SSH client or console    ESXi host                   root            nutanix/4u
SSH client or console    AHV host                    root            nutanix/4u
SSH client or console    Hyper-V host                Administrator   nutanix/4u
SSH client               Nutanix Controller VM       nutanix         nutanix/4u
SSH client               Nutanix Controller VM       admin           Nutanix/4u
SSH client or console    Acropolis OpenStack         root            admin
                         Services VM (Nutanix OVM)

Version
Last modified: March 15, 2019 (2019-03-15T16:56:19-07:00)

