
ESXi Host Lifecycle

First Published On: 03-02-2017


Last Updated On: 05-07-2018


Table of Contents

1. Standalone ESXi Installation and Configuration
1.1. ESXi Console Overview
1.2. ESXi Install
1.3. vSphere Hosts Upgrade - ESXCLI Command
2. Host Profiles
2.1. Checking host compliance
2.2. Understanding Host Customizations
2.3. Batch host customization
3. Auto Deploy
3.1. Enabling the GUI for Image Builder
3.2. Using the GUI for Auto Deploy
3.3. New Discovered Hosts Workflow
3.4. Adding Reverse Proxy Caching
4. vSphere Update Manager
4.1. Using the Update Manager Interface to Upgrade from ESXi 6.5 to 6.7
4.2. Using the Update Manager 6.7 Interface to Patch VMware ESXi 6.5 Hosts
4.3. vSphere Quick Boot Demo
4.4. Faster Host Upgrades to vSphere 6.7
4.5. Upgrading a cluster with VUM
4.6. Terminology Overview


1. Standalone ESXi Installation and Configuration
Learn about installing, configuring and updating individual ESXi hosts.


1.1 ESXi Console Overview

This walkthrough provides a step-by-step overview of managing an ESXi host using the Direct Console User Interface (DCUI). Use the arrow keys to navigate through the screens.

Connect to the host console. Select [Customize System/View Logs] to customize the system.


Login using the root user ID and the password that were set during the installation. Select [OK] to access the System
Customization screen.

From the list of customization options, select [Configure Password] to modify the password. Select [Change] to configure the password.


Update the password. Select [OK] to save and go back to the home screen.

On the System Customization screen, select [Configure Lockdown Mode]. Use the Spacebar to Enable/Disable lockdown
mode and select [OK] to save.


From the System Customization screen, select [Configure Management Network]. Go to [Network Adapters] and select [Change] to update network adapters.

Assign multiple adapters to provide redundancy. Select [OK] to save and go back.


Choose [VLAN] from the list and select [Change] to set the VLAN ID.

Setting the VLAN ID is optional. Once complete, select [OK] to save and continue.


Select [IP Configuration] from the Configure Management Network list. Set up the IP, the subnet mask and the default
gateway. Select [OK] to save.

Select [IPv6 Configuration] from the Configure Management Network list to reach this screen. Enable/disable IPv6. Any
changes here will restart the host without a warning. Select [OK] to save.


Select [DNS Configuration] from the list. Specify the IP addresses of the primary and alternate DNS servers, and the
hostname of the vSphere host. Select [OK] to save and proceed.

Select [Custom DNS Suffixes] to configure additional DNS suffixes. Specify the desired DNS suffix and select [OK]. Go back
to the main screen using the [Esc] key.


Choose [Restart the Management Network] from the list and select [Restart] to confirm.

Select [OK] to close the Restart Management Network window and proceed.


Choose [Test Management Network] from the list. The test pings the default gateway along with the IP addresses of the DNS servers. Select [OK] to continue.

It also automatically resolves the configured hostname. Select [OK] to close this window and go back to the home screen to view options to restore the network.


Choose [Network Restore Options]. It helps restore connectivity when a host gets disconnected from the network. Select
[Change] to customize settings and [Exit] to go back.

Select [Configure Keyboard]. Choose the desired layout and select [OK].


Select [Troubleshooting Options] from the list. On this screen, you can enable or disable the ESXi Shell and the SSH service on the host.

Select [Modify ESXi Shell and SSH Timeouts]. Set timeouts to ensure that the services are not left on indefinitely, and to
automatically terminate unattended shell sessions. Select [OK] to continue.
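For reference, the same timeouts can also be set from the ESXi Shell through advanced system settings. The sketch below assumes an ESXi host and uses example values; the option units differ between releases (minutes in early versions, seconds in later ones), so check the option description first.

```shell
# Sketch only; run in the ESXi Shell on a host. Values are examples.
# Check the current value and units of the availability timeout first.
esxcli system settings advanced list -o /UserVars/ESXiShellTimeOut

# Availability timeout: disable the ESXi Shell and SSH services after
# this period if they are left enabled.
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 600

# Interactive timeout: log out idle shell sessions after this period.
esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 600
```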


Select [Restart Management Agents]. Select [OK] to confirm or [Cancel] to abort and go back to the home screen.

On the System Customization screen, select [Host Logs] to view log details on the right window pane.


Select [View Support Information] from the list. This option presents details about the host serial number, license key, SSL
and SSH keys on the right window pane.

Select [Reset System Configuration] from the list to reset all the changes made. Select [Log Out] to go back.


Select [Shut Down/Restart].

Login using the root account. Select [OK].


This concludes the walkthrough of managing the ESXi host using the Direct Console User Interface. Select the next walkthrough of your choice using the navigation panel.

1.2 ESXi Install

This walkthrough provides a step-by-step overview of how to install ESXi on a vSphere host. Use the arrow keys to navigate through the screens.


Begin by downloading the VMware ESXi installation media and inserting or mounting the ISO image in the server's CD-ROM/DVD drive. Configure the host's BIOS to boot from CD-ROM/DVD and boot the host. After the boot, the ESXi Installer will load automatically.

Begin the installation by pressing [Enter] at the ESXi Installer welcome screen.


Press [F11] to accept the End User License Agreement. The installer will proceed to scan for available hard disks.

Use the arrow keys to highlight the boot disk where you will install ESXi and press [Enter].


If an existing ESXi image is found on the disk, the installer asks whether you want to upgrade or perform a fresh install. Use the arrow keys to choose the type of install, press the [spacebar] to select, and press [Enter] to continue.

Use the arrow keys to highlight the desired keyboard layout and press [Enter] to continue.


Set the root password and press [Enter] to continue.

Press [F11] to confirm the install. The hard disk will be partitioned and ESXi will be installed on the host.


After the installation completes, eject/unmount the ESXi ISO from the server's CD-ROM/DVD drive and press [Enter] to
reboot.

Following the reboot, configure the host's management network. From the console, press [F2] to customize the system and log in as "root" with the password you set during the install.


Use the arrow keys to highlight "Configure Management Network" and press [Enter].

Use the arrow keys to highlight "Network Adapters" and press [Enter].


Use the arrow keys to highlight the network adapters that will be used for the management network and press the
[spacebar] to select each adapter. After all the adapters have been selected press [Enter] to continue.

If you use VLAN tags, set the VLAN ID for the management network. Use the arrow keys to select "VLAN (optional)", set the VLAN ID, and press [Enter]. If a VLAN ID is not required, skip this step.


Next, select "IP Configuration". Specify whether the host will use a dynamic or static IP address. If using a static IP address, provide a unique IP address along with the appropriate subnet mask and default gateway. Note that static IP addresses are recommended for vSphere hosts. Press [Enter] to continue.

Next, select "DNS Configuration". Specify whether the host will use dynamic or static DNS settings. If using static DNS settings, provide the IP addresses of the primary and alternate DNS servers along with the host's hostname. Press [Enter] to continue.
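The same management network settings can also be applied from the ESXi Shell with esxcli; a minimal sketch, assuming example addresses and the default vmk0 management interface (on older releases the default gateway is set with esxcli network ip route ipv4 add rather than the --gateway option):

```shell
# Example addresses only; adjust for your environment.
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static \
  --ipv4=192.168.10.21 --netmask=255.255.255.0 --gateway=192.168.10.1

# Primary and alternate DNS servers.
esxcli network ip dns server add --server=192.168.10.5
esxcli network ip dns server add --server=192.168.10.6

# Hostname and custom DNS suffix.
esxcli system hostname set --fqdn=esxi01.lab.local
esxcli network ip dns search add --domain=lab.local
```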


Next, select "Custom DNS Suffixes". Enter the DNS suffix for the vSphere host and press [Enter] to continue.

Press [Esc] to exit the "Configure Management Network" menu. When prompted to apply the changes press [Y].


Next, verify the network settings. Select "Test Management Network" and press [Enter].

Press [Enter] to begin the test. The server will verify it has network connectivity by pinging its default gateway, the primary
and alternate DNS servers, and by resolving the hostname.


Verify all the tests complete with a status "OK". Press [Enter] to close the window and press [Esc] to exit out of the System
Customization menu.

This concludes the walkthrough on installing and configuring vSphere ESXi on a vSphere host. Continue to the next walkthrough in the series to see how to add the vSphere host to vCenter Server.

1.3 vSphere Hosts Upgrade - ESXCLI Command


This walkthrough is designed to show how to upgrade a vSphere host using the "esxcli" command from within the ESXi
Shell. Use the arrow keys to navigate through the screens.

We begin by accessing the vSphere host's console. Here we see the host is currently running ESXi 5.0. We will upgrade this
host to version 5.5 using the “esxcli” command.


At the host's console, press [F2] to log in as a fully privileged administrative user. In this example we are logging in as "root".

From the system customization menu, go to [Troubleshooting Options].


Select [Enable ESXi Shell] and press [Enter] to enable the shell. We need to enable the ESXi Shell before we can log in to it and perform the upgrade.

Next, select [Enable SSH] and press [Enter] to enable the SSH service. In this example we will use SSH to copy the ESXi 5.5 software depot onto the host prior to the upgrade.


Before we can upgrade the host, we need to copy the ESXi 5.5 upgrade image onto a datastore that is accessible from the host. Here we have saved a copy of the ESXi 5.5 software depot onto a Linux desktop and then used the secure copy command (scp) to copy it to the local datastore [local-ds-01] on our ESXi host.
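As a sketch, the copy step from the Linux desktop might look like the following; the depot filename and host address are hypothetical, and SSH must already be enabled on the host:

```shell
# Copy the offline software depot to the host's local datastore.
# Filename and IP address are examples.
scp VMware-ESXi-5.5.0-depot.zip root@192.168.10.21:/vmfs/volumes/local-ds-01/
```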

After copying the ESXi 5.5 software depot onto the host, we return to the host console and press Alt-F1 to access the ESXi
Shell.


Log in to the ESXi Shell as an administrative user; in this example we have logged in as "root".

Next, we run the “esxcli software sources profile list” command and pass in the location of the 5.5 software depot. This
command provides a list of the available 5.5 image profiles. Here we see that there are two image profiles: a “standard”
profile and a “no-tools” profile. We will use the “Standard” profile.


Next, we perform the upgrade by running the "esxcli software profile update" command, passing in the location of the offline depot along with the name of the image profile we want to use. Because this command generates a lot of output, we redirect it to the file /tmp/output.txt to make it easier to review after the upgrade.
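Put together, the two esxcli steps might look like this in the ESXi Shell. The depot path and profile name are examples; use the profile name reported by the list command for your depot.

```shell
DEPOT=/vmfs/volumes/local-ds-01/VMware-ESXi-5.5.0-depot.zip

# List the image profiles contained in the depot.
esxcli software sources profile list -d "$DEPOT"

# Upgrade using the "standard" profile; redirect the verbose output
# to a file for review after the upgrade.
esxcli software profile update -d "$DEPOT" \
  -p ESXi-5.5.0-1331820-standard > /tmp/output.txt

more /tmp/output.txt
# Reboot the host to complete the upgrade.
```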

Following the upgrade we run the command "more /tmp/output.txt" to review the results.


At the beginning of the output.txt file, we see that the upgrade completed successfully and that a system reboot is needed for the upgrade to take effect. The rest of the file is a summary of the VIBs that were installed as part of the upgrade. Reboot the host to complete the upgrade.

Following the reboot we see that the host is now running ESXi 5.5. This concludes the walkthrough on upgrading a vSphere
host using the "esxcli" command from within the ESXi Shell. Use the navigation menu on the left to select the next
walkthrough.


2. Host Profiles
VMware vSphere Host Profiles offers configuration management and compliance checking for clusters
of VMware ESXi hosts.


2.1 Checking host compliance

Host Profiles - Granular Compliance Results Walkthrough

Click to see topic media

2.2 Understanding Host Customizations

Understanding How Host Profiles Handles Host-Specific Configuration Settings Through Customizations

Host Profiles is an advanced capability of VMware vSphere that provides for configuration and
compliance checking of multiple VMware ESXi hosts. Although a profile can be attached directly to a
single host in vCenter Server, typically, a profile is attached to a vSphere cluster, where all the hosts
have the same hardware, storage, and networking configurations. The latest release of vSphere
includes several enhancements to Host Profiles. This article discusses two different sources of
configuration settings for a host.

While Host Profiles focuses on configuring identical settings across multiple hosts, certain items must be unique for each host. These unique items are known as customizations; in the past, they were called answer files.

Administrators initially configure a reference host to meet business requirements and then extract the
entire configuration into a new profile which can be subsequently edited or updated as requirements
change. These settings are applied to other hosts in the cluster through the process of remediation,
and hosts that are not able to meet all the profile requirements are flagged as non-compliant.

Profiles That Use Dynamic Addressing Require Little Customization


In a very basic scenario, it is possible to forego customizations that require administrator input. This is
the case if hosts are using DHCP for network identity – IP address and hostname – and there are no
specific business requirements for setting unique root passwords per host.


Typical vSphere Host Configurations Use Static IP Addresses


But for most customers, static IP addresses are desirable in the datacenter, at least for IP storage and
perhaps for vMotion or other VMkernel interfaces. Security guidelines may require all hosts to have
unique root credentials, and there are other configurable items in a profile that also need to be
specified per host. In general, when an attribute in Host Profiles is set to prompt for “user specified”
input, that item will need to be configured per-host through customizations.

The following image gives some examples of settings on a host that will require customization:


When these customizations are missing, the profile will not be compliant – for many reasons. For
example, shared datastores cannot be mounted if the appropriate VMkernel IP address is not
configured.


Host Customizations Supply the Necessary Static Elements


Host customizations can be provided by vSphere administrators through a wizard during the
remediation process, or they can be uploaded in bulk via CSV file – a new feature of vSphere 6.5.


Once the host customizations have been provided and stored on vCenter Server, the associated profile
can be remediated to become compliant.


Persistence of Host Customization Data


Host customization data is stored on vCenter Server, and will be deleted if a host is removed from
inventory. This is an important behavior to be aware of, as sometimes hosts are removed and re-added
to vCenter Server as part of troubleshooting or during a major rolling upgrade.

And finally, be aware that these host customizations apply both to stateful hosts using traditional on-disk installation and to stateless hosts that are booted from the network with Auto Deploy.


Takeaways
• Host Profiles is a feature of vSphere designed to apply identical configuration to multiple
VMware ESXi hosts
• Settings that are unique for individual hosts are provided through customizations
• vSphere Administrators enter or update customizations through graphical clients or via CSV file

2.3 Batch host customization

Host Profiles - Batch Host Customization Walkthrough

Click to see topic media


3. Auto Deploy
VMware vSphere Auto Deploy uses industry-standard PXE technologies to boot VMware ESXi hosts
directly from the network instead of local storage devices.


3.1 Enabling the GUI for Image Builder

Because the Image Builder and Auto Deploy features are tightly coupled, the UI is only visible when
both of these services are running. To enable the GUI, navigate to Administration > System
Configuration > Services in the vSphere Web Client. Start both services, and set them to start
automatically, if desired. Then log out and back in to the Web Client to verify the Auto Deploy object is
available.

Alternatively, these services can be enabled via command line. Simply SSH into the VCSA and run the
following commands:

/usr/lib/vmware-vmon/vmon-cli --update imagebuilder --starttype AUTOMATIC
/usr/lib/vmware-vmon/vmon-cli --update rbd --starttype AUTOMATIC
/usr/lib/vmware-vmon/vmon-cli --start rbd
/usr/lib/vmware-vmon/vmon-cli --start imagebuilder

Regardless of whether Auto Deploy is in use in an environment or not, the Image Builder GUI is a
convenient alternative to the PowerCLI cmdlets previously required for creating custom VMware ESXi
images. Administrators can upload zip depots of images and drivers, as well as create online depots
that connect to VMware or OEM partner image repositories.


The full URL of the VMware public depot is: https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

In addition to being available to Auto Deploy for deploy rule creation, the UI also allows administrators
to customize, compare, or export images to ISO or zip format for a variety of uses. The vSphere 6.5
product documentation describes the functionality in more detail.

Even though the PowerCLI Image Builder is still available, this new Image Builder GUI helps those
customers that prefer a more guided approach for these tasks.

3.2 Using the GUI for Auto Deploy

Auto Deploy GUI – Software Depots and Deploy Rules Walkthrough

Click to see topic media

3.3 New Discovered Hosts Workflow

Auto Deploy GUI – Discovered Hosts Walkthrough

Click to see topic media

VMware vSphere 6.5 Auto Deploy Discovered Hosts Workflow Demo

Click to see topic media

3.4 Adding Reverse Proxy Caching

The latest release of VMware vSphere contains improvements to Auto Deploy, including a new
graphical user interface, a new deployment workflow, and various manageability and operational
enhancements. One such enhancement is a dramatically-simplified caching capability.

There are several reasons why you might consider adding reverse proxy caching to your Auto Deploy
infrastructure. First, this design will reduce the load on the vCenter Server Appliance and Auto Deploy
service, freeing up resources for other processes. Second, the boot time of individual stateless
VMware ESXi hosts is modestly improved – saving about 30 seconds in a typical setup, possibly more
in a heavily-loaded environment. Finally, you can potentially boot far more stateless hosts concurrently
without overwhelming the VCSA.


Resiliency is a natural priority when changing critical infrastructure components. I’m glad to report
that the new reverse proxy design does not create a single point of failure, since you can deploy
multiple proxy servers that are tried in a round-robin sequence with no load balancers. Furthermore, if
all proxies happen to become unavailable, the stateless clients fail gracefully back to the default
behavior of directly accessing the Auto Deploy server. This is a welcome improvement over previous
releases. Just keep in mind that the caches are only for performance optimization, and not for
redundancy of the overall stateless infrastructure – the Auto Deploy server is still in charge and must
be online for successful host boot operations.

Instant Reverse Proxy Container


If you like the sound of these benefits, then it's easy enough to test this design by quickly deploying a Docker container configured for the task. Create one or two Linux VMs running Docker (I'm using PhotonOS in my lab) and fire up the Nginx container that I published on Docker Hub:

docker run --restart=always -p 5100:80 -d -e AUTO_DEPLOY=10.197.34.22:6501 egray/auto_deploy_nginx

In the above example, the proxy will listen on port 5100 and fetch any requested ESXi image files from
your existing Auto Deploy server located at 10.197.34.22. Run this container on each VM that will act
as a proxy, and make note of their IP addresses for the next part.

Connectivity Test
Before you configure Auto Deploy to use these new caches, it’s a good idea to verify connectivity. One
way to do this is to watch the Nginx log file while manually requesting a file from the cache.

To watch the Nginx log, get the ID of the container and use the docker logs -f command:

root@photon-a9f9d2d38769 [ ~ ]# docker ps
CONTAINER ID  IMAGE                    COMMAND                 CREATED        STATUS        PORTS                          NAMES
c73960b6cd13  egray/auto_deploy_nginx  "/bin/sh -c 'envsubst"  5 seconds ago  Up 4 seconds  443/tcp, 0.0.0.0:5100->80/tcp  determined_booth
root@photon-a9f9d2d38769 [ ~ ]# docker logs -f c739

Then, request the Auto Deploy tramp file from another system, like so:

$ curl http://10.197.34.172:5100/vmw/rbd/tramp
!gpxe
set max-retries 6
set retry-delay 20
post /vmw/rbd/host-register?bootmac=${mac}

Confirm that the proxy responds immediately with the above output. If it does not, go back and
double-check addresses, ports, and other potential connectivity problems. Also, observe the log file
that is being tailed for a corresponding hit.

Activate the Proxies


Now that you have one or more reverse proxies up and running, it’s time to configure Auto Deploy to
start using them. This is done by using PowerCLI, through a new cmdlet introduced in the 6.5 release.
Run the following command for each proxy you need to register, adjusting the address accordingly:

Add-ProxyServer -Address http://10.197.34.171:5100
Add-ProxyServer -Address http://10.197.34.172:5100


Check the configuration by running Get-ProxyServer and, if necessary, remove a proxy from rotation with the Remove-ProxyServer cmdlet. At this point, any stateless hosts that boot will use the cache. You can verify the configuration by accessing the Auto Deploy diagnostics web interface:

https://vcsa:6501/vmw/rbd/host/

Click on any listed host, then on the diagnostics page that appears, click Get iPXE Configuration. Check the resulting configuration for the multiple-uris directive and lines beginning with uris -a that point to your proxy caches, like so:

set watchdog-expiration 600
set multiple-uris 1
uris -a http://10.197.34.171:5100
uris -a http://10.197.34.172:5100
kernel -n mboot.c32 /vmw/cache/fe/be5d851efb91bf4c5484d99498a04c/mboot.c32.c0a4d96685fec0cd2eba007e904ff115
imgargs mboot.c32 -c /vmw/rbd/host/0c111f2628259084380c3bc56f091df6/boot.cfg -e
boot mboot.c32

Action!
Boot or reboot stateless hosts, and they will access the proxy caches. You can monitor requests
coming to the Auto Deploy server and to the caches to verify the changes have taken effect. Note that
the first time a host boots, the proxy will need to fetch all the files from Auto Deploy to cache them.
After that, everything but a small set of non-cacheable files will be served from the caches.

The caches are easy to monitor through the docker logs command, as described above. It’s also pretty
simple to watch key activity on the Auto Deploy (VCSA) system. Try the following command with and
without the caches enabled if you want to get a feel for the boot time reduction in your environment:

root@vcsa [ ~ ]# tail -f /var/log/vmware/rbd/ssl_request_log | egrep 'tramp|waiter|boot.cfg|/up'

From Concept to Production


The example above is a proof of concept, intended to help you understand how to configure and
monitor the effects of reverse proxies. For most production datacenters, it would be wise to create a
proxy server that is equipped with SSL certificates so that the traffic between hosts and the proxies
can be encrypted. The Nginx SSL configuration is straightforward, but beyond the scope of this article.
You can also read how I created the container if you want to use that as a reference.

Summary
The new reverse proxy cache feature in Auto Deploy 6.5 is very easy to set up, and will boost
performance without introducing additional failure points to your vSphere infrastructure. Docker
containers running Nginx offer a simple way to demonstrate the concept in your environment.


4. vSphere Update Manager


VMware vSphere Update Manager, or VUM, is the easiest way to patch and upgrade VMware ESXi
hosts at scale.


4.1 Using the Update Manager Interface to Upgrade from ESXi 6.5
to 6.7

Upgrade VMware ESXi Hosts with the New Update Manager Interface in vSphere 6.7
In VMware vSphere 6.7, the vSphere Update Manager (VUM) interface is part of the HTML5 vSphere Client. In this demo, we will walk through the workflow to perform a major version upgrade. Click the Update Manager icon to begin.

VMware ESXi Image Repository


Update Manager is capable of host patching as well as major version upgrades. Host upgrade software is delivered as an ISO image. To add an image to the VUM repository, click "Import".


Add ISO Image


An ESXi ISO image can be obtained from VMware or from a server hardware vendor. Either browse the local disk or enter a URL to have VUM download the file directly. Click "Import".

Initiate Baseline Creation


After adding an ESXi ISO image to the VUM repository, it is easy to create an upgrade baseline. Select the desired image and click "New Baseline".


Create Upgrade Baseline


In the new VUM interface, upgrade baselines require just a few clicks to create. After specifying a baseline name, verifying the ESXi image, and reviewing the details, click "Finish".

Confirm Upgrade Baseline


After creating the upgrade baseline, verify that it is listed on the Baselines tab. To begin the cluster upgrade procedure, click "Hosts and Clusters".


Attach Baseline
VUM is most effective when a baseline is attached to a cluster of ESXi hosts, although it is possible to attach to individual hosts if necessary. With the cluster selected, click "Attach".

Select Baseline to Attach


In the dialog box, we can choose one or more baselines to attach to this cluster. In this scenario, we want to choose just the ESXi 6.7 upgrade baseline we created earlier. Click OK.


Check Cluster Compliance


With the desired baseline now attached to the cluster, we will have Update Manager check each host to see whether it is currently compliant or needs to be remediated. Click "Check Compliance".

Verify Compliance and Check Remediation Status


Once Update Manager has finished checking each host in the cluster, the results are displayed in the center information card. Here we can see that all four of these hosts are not compliant with the baseline and will need to be remediated. Before we do that, let's run the cluster pre-check to ensure that remediation will be successful. Click "Pre-Check Remediation".


Remediation Pre-Check
The pre-check process verifies that DRS is enabled so that running VMs can be migrated across the cluster with zero downtime. The pre-check also displays the status of HA admission control and Enhanced vMotion Compatibility. Click "Done".

Verify Pre-Check Results


After running the pre-check, verify that the cluster is ready for upgrade. Click "Remediate" to begin.


Streamlined Remediation
In the new Update Manager interface, the remediation wizard from previous releases is gone. Instead, we have a chance to review the actions that will be taken in a very efficient way. Click OK.

Upgrade Without Downtime


During the cluster remediation process, hosts are put into maintenance mode after the running VMs are migrated to other
cluster nodes. This process is repeated, typically one host at a time, until the entire cluster is upgraded. Click the Refresh
link to see the final status.


Verify Cluster Upgrade


When Update Manager is finished upgrading the cluster, the status information cards will show that the cluster is now
compliant. This concludes the new Update Manager interface demo.

4.2 Using the Update Manager 6.7 Interface to Patch VMware ESXi
6.5 Hosts

Using Update Manager 6.7 to Keep a Cluster of VMware ESXi 6.5 Hosts Patched
VMware vSphere Update Manager is capable of performing major version upgrades, applying patches and updates to supported versions of ESXi hosts, and installing drivers and other third-party components. In this example, we will walk through the procedure to apply a patch to a cluster of hosts running VMware ESXi 6.5; the underlying application is not yet certified on VMware ESXi 6.7, so we cannot perform a major version upgrade at this time. Click the Update Manager icon to begin.

Empty Patch Repository


By default, Update Manager downloads VMware ESXi patches directly from VMware over the public Internet. For improved security, some environments do not allow Internet access from datacenter management components. In this demonstration, Update Manager does not have Internet access, so we will manually import the specific patches deemed necessary. These patches, sometimes called offline bundles or depots, can be downloaded by logging in to My VMware; they are distributed in zip format. Click Import to begin.


Import Patch Bundle


The VMware ESXi patch bundle can be uploaded either from a local drive or from an internal URL, as seen here. Click Import to complete the process.

View the Updates Repository


Once the ESXi patch has finished importing, the individual bulletins can be seen on the Repository tab. Everything looks good; click the Baselines tab to continue.

Review Baselines


Update Manager is able to perform major version upgrades, apply patches, or install extensions on managed ESXi hosts. Each of these tasks is enabled via baselines. In our patching scenario, we need to create a new baseline to act as a container for the patches we just imported. Click New.

New Baseline
On the Baselines tab, the "New" menu item has two sub-entries; choose "New Baseline".

Baseline Definition Wizard


To create a new baseline, we need to supply a name and an optional description. Since our goal is to apply a patch to VMware ESXi 6.5 hosts, select the Patch option and click Next.


Manual Patch Baseline


In this environment, there are tight controls for compliance reasons, so we will specify the exact patches to install instead of dynamically matching patterns through the automatic feature. Uncheck that option and click Next.

Select Patches
For this baseline, we will select the two patch bulletins that are part of the bundle we just uploaded. Since this environment does not have Internet access, only the patches that we import into the repository appear in this list. In a less-restrictive datacenter, this list would include all available patch releases and could be filtered as needed by clicking the column headings. Click Next.


Verify Baseline
One final check of the patch baseline... Everything looks good, so click Finish.

Confirm Patch Baseline


After creating the new baseline, it appears in the list. Click Hosts and Clusters.


Prepare to Patch the Cluster


With the target cluster selected, click Attach to select the patch baseline we just created.

Select Patch Baseline


We can attach the new patch baseline by checking the corresponding box. Click OK.


Check Baseline Compliance


Now that the baseline is attached to the cluster, Update Manager will check each host to see if action is required for that host to be considered compliant. Click Check Compliance.

Cluster Not Compliant


Once the compliance check is finished, Update Manager will indicate the status of each host in the cluster. In this case, all
of the hosts are out of compliance and need to have the patch installed, as expected. Before we begin, we will first check
the cluster for any potential blocking issues by running the pre-check. Click Pre-Check Remediation.
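Conceptually, the compliance check reduces to a set comparison per host: a host is compliant when every bulletin in the attached baseline is already installed on it. A minimal sketch, using hypothetical host names and bulletin IDs rather than the real Update Manager logic:

```python
def check_compliance(baseline, installed_by_host):
    """A host is compliant when every baseline bulletin is installed on it."""
    required = set(baseline)
    return {host: "compliant" if required <= installed else "non-compliant"
            for host, installed in installed_by_host.items()}

baseline = {"ESXi650-201811401-BG", "ESXi650-201811402-BG"}
status = check_compliance(baseline, {
    "esxi-01": set(),                                              # nothing installed yet
    "esxi-02": {"ESXi650-201811401-BG"},                           # one bulletin missing
    "esxi-03": {"ESXi650-201811401-BG", "ESXi650-201811402-BG"},   # fully patched
})
print(status)
```

A host with extra patches beyond the baseline is still compliant; only missing baseline bulletins cause a non-compliant status.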


Pre-Check Finished
The pre-check dialog box will show the status of individual items, such as confirming DRS is enabled. Everything is ready
for remediation, so click Done.
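The pre-check can be thought of as a list of named boolean checks that must all pass before remediation starts. A toy sketch; the check names below are examples in the spirit of the dialog, not the actual list Update Manager runs:

```python
def pre_check_remediation(cluster):
    """Run each readiness check; remediation may proceed only if all pass."""
    return {
        "DRS enabled": cluster["drs_enabled"],
        "All hosts connected": all(h["connected"] for h in cluster["hosts"]),
        "Enough hosts to evacuate VMs": len(cluster["hosts"]) >= 2,
    }

cluster = {
    "drs_enabled": True,
    "hosts": [{"name": "esxi-01", "connected": True},
              {"name": "esxi-02", "connected": True},
              {"name": "esxi-03", "connected": True}],
}
results = pre_check_remediation(cluster)
print(all(results.values()))   # → True
```

Surfacing each check by name, as the dialog does, makes it obvious which condition to fix when one fails.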

Begin Remediation
Now that the pre-check is finished, we can proceed with cluster remediation. Click Remediate.


New Remediate Interface


Update Manager 6.7 features a new interface with a streamlined flow, and no longer uses the multi-step wizard when
remediating. After reviewing the actions that will be taken, click OK.

Remediate With Zero Downtime


Update Manager evacuates hosts one at a time and places them into maintenance mode before applying the patches.
Running VMs are moved to other hosts with vMotion. Click Refresh to check the cluster status.
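The rolling, zero-downtime pattern described above can be sketched as a loop: evacuate a host's VMs to a neighbor, patch the now-empty host, and move on. This is a deliberately simplified model with hypothetical names; it ignores real DRS placement logic, which would rebalance VMs afterward:

```python
def remediate_cluster(vms_by_host, bulletin):
    """Patch one host at a time; VMs are never left without a running host."""
    patched = []
    hosts = list(vms_by_host)
    for i, host in enumerate(hosts):
        target = hosts[(i + 1) % len(hosts)]     # vMotion destination (simplified)
        vms_by_host[target].extend(vms_by_host[host])
        vms_by_host[host] = []                   # host is now in maintenance mode
        patched.append(host)                     # the patch bulletin is applied here
    return patched

cluster = {"esxi-01": ["vm-a", "vm-b"], "esxi-02": ["vm-c"], "esxi-03": []}
patched = remediate_cluster(cluster, "ESXi650-201811401-BG")
print(patched)                                  # every host received the patch
print(sum(len(v) for v in cluster.values()))    # → 3: all VMs kept running
```

The invariant worth noticing is that at every step of the loop, each VM resides on some host that is not in maintenance mode, which is what "zero downtime" means here.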


Patching Complete
After Update Manager is finished applying patches to all hosts in the cluster, the status will be updated to show that they
are compliant with our chosen patch baseline. Update Manager 6.7 can upgrade hosts to the latest release of VMware ESXi,
or it can keep hosts running older versions patched until the time comes to upgrade.

4.3 vSphere Quick Boot Demo

Click to see the topic media


VMware vSphere 6.7 Quick Boot


VMware vSphere 6.7 introduces a new technology that reduces the time required for hypervisor maintenance tasks. By
using vSphere Quick Boot, VMware ESXi restarts without rebooting the underlying physical server. This eliminates the
time-consuming device initialization and self-testing procedures, shortening the time required to patch or upgrade a host.

4.4 Faster Host Upgrades to vSphere 6.7

Click to see the topic media

Faster Upgrades to vSphere 6.7


VMware vSphere 6.7 incorporates optimizations that speed up major version upgrades, so customers moving from 6.5 to
6.7 will spend less time waiting for hosts to upgrade.

4.5 Upgrading a cluster with VUM

vSphere Update Manager Overview & Cluster Upgrade Walkthrough

Click to see topic media

VMware vSphere 6.5 Embedded Update Manager (VUM) Demo

Click to see topic media

4.6 Terminology Overview


Downloading Updates and Related Metadata

Downloading virtual appliance upgrades, host patches, extensions, and related metadata is a
predefined automatic process that you can modify. By default, at regular configurable intervals,
Update Manager contacts VMware or third-party sources to gather the latest information (metadata)
about available upgrades, patches, or extensions.

VMware provides information about patches for ESXi hosts and virtual appliance upgrades.

Update Manager downloads the following types of information:

• Metadata about all ESXi 5.5 and ESXi 6.x patches, regardless of whether you have hosts of such
versions in your environment or not.
• Metadata about ESXi 5.5 and ESXi 6.x patches as well as about extensions from third-party
vendor URL addresses.
• Notifications, alerts, and patch recalls for ESXi 5.5 and ESXi 6.x hosts.
• Metadata about upgrades for virtual appliances.

Downloading information about all updates is a relatively low-cost operation in terms of disk space
and network bandwidth. The availability of regularly updated metadata lets you add scanning tasks for
hosts or appliances at any time.

Update Manager supports the recall of patches for hosts that are running ESXi 5.0 or later. A patch is
recalled if the released patch has problems or potential issues. After you scan the hosts in your
environment, Update Manager alerts you if the recalled patch has been installed on a certain host.
Recalled patches cannot be installed on hosts with Update Manager. Update Manager also deletes all
the recalled patches from the Update Manager patch repository. After a patch fixing the problem is
released, Update Manager downloads the new patch to its patch repository. If you have already
installed the problematic patch, Update Manager notifies you that a fix was released and prompts you
to apply the new patch.
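The recall workflow just described amounts to three actions: purge recalled patches from the repository (which also prevents further installs), and raise an alert for any host where a recalled patch is already present. A minimal sketch with made-up IDs, not the actual Update Manager implementation:

```python
def process_recall(repository, installed_by_host, recalled):
    """Remove recalled patches from the repository and flag affected hosts."""
    repository = repository - recalled            # recalled patches are deleted
    alerts = {host: installed & recalled          # hosts that already installed one
              for host, installed in installed_by_host.items()
              if installed & recalled}
    return repository, alerts

repo = {"ESXi650-201811401-BG", "ESXi650-201811402-BG"}
installed = {"esxi-01": {"ESXi650-201811401-BG"}, "esxi-02": set()}
repo, alerts = process_recall(repo, installed,
                              recalled={"ESXi650-201811401-BG"})
print(sorted(repo))     # only the non-recalled patch remains in the repository
print(list(alerts))     # esxi-01 needs attention once a fixed patch ships
```

When VMware later releases a corrected patch, it simply arrives in the repository like any other bulletin, and the flagged hosts show up as non-compliant against a baseline that includes it.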

If Update Manager cannot download upgrades, patches, or extensions — for example, if it is deployed
on an internal network segment that does not have Internet access — you must use UMDS to
download and store the data on the machine on which UMDS is installed. The Update Manager server
can use the upgrades, patches, and extensions that UMDS downloaded after you export them.

For more information about UMDS, see Installing, Setting Up, and Using Update Manager Download
Service.

You can configure Update Manager to use an Internet proxy to download upgrades, patches,
extensions, and related metadata.

You can change the time intervals at which Update Manager downloads updates or checks for
notifications. For detailed descriptions of the procedures, see Configure Checking for Updates and
Configure Notifications Checks.

Types of Software Updates and Related Terms

Update Manager downloads software updates and metadata from Internet depots or UMDS-created
shared repositories. You can import offline bundles and host upgrade images from a local storage
device into the local Update Manager repository.

VIB: A VIB is a single software package.

Bulletin: A grouping of one or more VIBs. Bulletins are defined within metadata.

Depot: A logical grouping of VIBs and associated metadata that is published online.

Host upgrade image: An ESXi image that you can import into the Update Manager repository and use for upgrading ESXi 5.5 or ESXi 6.0 hosts to ESXi 6.5.

Extension: A bulletin that defines a group of VIBs for adding an optional component to an ESXi host. An extension is usually provided by a third party that is also responsible for patches or updates to the extension.

Metadata: Extra data that defines dependency information, textual descriptions, system requirements, and bulletins.

Offline bundle ZIP: An archive that encapsulates VIBs and corresponding metadata in a self-contained package that is useful for offline patching. You cannot use third-party offline bundles, or offline bundles that you generated from custom VIB sets, for host upgrade from ESXi 5.5 or ESXi 6.0 to ESXi 6.5.

Patch: A bulletin that groups one or more VIBs together to address a particular issue or enhancement.

Roll-up: A collection of patches that is grouped for ease of download and deployment.

VA upgrade: Updates for a virtual appliance, which the vendor considers an upgrade.
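The containment hierarchy behind these terms (a depot publishes bulletins; a bulletin, such as a patch or extension, groups VIBs) can be modeled with a few dataclasses. The package names and version strings below are illustrative only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VIB:                      # a single software package
    name: str
    version: str

@dataclass(frozen=True)
class Bulletin:                 # groups one or more VIBs (patch, extension, ...)
    bulletin_id: str
    vibs: tuple

@dataclass
class Depot:                    # publishes bulletins plus their metadata
    bulletins: list

    def all_vibs(self):
        return {vib for b in self.bulletins for vib in b.vibs}

esx_patch = Bulletin("ESXi650-201811401-BG",
                     (VIB("esx-base", "6.5.0-2.75"), VIB("vsan", "6.5.0-2.75")))
tools_patch = Bulletin("ESXi650-201811402-BG",
                       (VIB("tools-light", "6.5.0-2.75"),))
depot = Depot([esx_patch, tools_patch])
print(len(depot.bulletins), len(depot.all_vibs()))   # → 2 3
```

A roll-up would simply be a collection of such bulletins, and an offline bundle is the same structure serialized into a ZIP archive together with its metadata.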
