VMkernel is the core operating system that provides the means for running different
processes on the system, including management applications and agents as well as virtual
machines. It also controls all the hardware devices on the server and manages
resources for the applications.
The term “User world” refers to the set of processes running above the VMkernel operating
system.
HOSTD: The process that authenticates users and keeps track of which users and
groups have which privileges. It also allows you to create and manage local users. The
hostd process provides a programmatic interface to the VMkernel and is used by direct VI
Client connections as well as the VI API.
hostd is a daemon (service) that runs on every ESXi host and performs major
tasks such as VM power-on and local user management. But when an ESXi host
joins a vCenter Server, the vpxa agent is activated and talks to the vpxd
service that runs on the vCenter Server. So the conclusion is: vpxd talks to
vpxa, and vpxa talks to hostd. This is how vpxd (the vCenter daemon) talks to
the hostd daemon via the vpxa agent.
What initial tasks can be done by DCUI?
->Set administrative password
->Configure networking
->Perform simple network tests
->View logs
->Restart agents
->Restore defaults
When attempting to add an ESXi/ESX host to vCenter Server, you see an error
similar to:
Unable to access the specified host, either it doesn't exist, the server software is not
responding, or there is a network problem
ESXi
7. Verify whether the hostd process has stopped responding on the affected ESXi host.
8. Verify whether the vpxa agent has stopped responding on the affected ESXi host.
9. Verify whether the ESXi host has experienced a Purple Diagnostic Screen.
10. ESXi hosts can also disconnect from vCenter Server due to underlying storage issues.
Note: In ESXi 4.1 and ESXi 5.0, 5.1, 5.5, and 6.0 this option is available under
Troubleshooting Options.
5. Press Enter.
6. Press F11 to restart the services.
7. When the services have restarted, press Enter.
8. Press Esc to log out of the system.
From the Local Console or SSH:
1. Log in to SSH or the local console as root.
2. Run these commands:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
The following changes from vSphere 4.x affect vSphere installation and setup.
TCP/UDP-902 -Default port used by the vCenter Server system to send data to
managed hosts. Managed hosts also send a regular heartbeat
over UDP port 902 to the vCenter Server system.
TCP-8000 -Requests from vMotion
TCP-8100 , 8200 -Traffic between ESXi hosts for vSphere Fault Tolerance (FT)
TCP-8182 - Traffic between ESXi hosts for vSphere High Availability (HA)
Thick Provision Eager Zeroed:- All space is allocated at creation and pre-zeroed. Required
for Fault Tolerance (FT). A 40 GB virtual disk means a 40 GB flat VMDK file.
Thin Provision:- Space allocated on demand. The VMDK flat file will grow depending on
the amount of data actually stored in it, up to the maximum size specified for the
virtual hard disk.
What are the ways a VM can access optical media (CD/DVD)?
->Client Devices attached CD/DVD
->Host Devices attached CD/DVD
->Datastore ISO File
What is the maximum number of virtual CPUs available in VM hardware versions 7, 8, 9,
and 10?
VM hardware 10 - ESXi 5.5.x - 64
VM hardware 9 - ESXi 5.1.x - 32
VM hardware 8 - ESXi 5.0.x - 8
VM hardware 7 - ESXi 4.0.x - 4
What are the advantages of installing VMware tools?
-> Optimized SCSI driver
-> Enhanced video and mouse drivers
-> VM heartbeat (HA)
-> VM quiescing for snapshots and backups
-> Enhanced memory management (Memory Ballooning etc.)
-> VM focus- you are able to move into and out of VM consoles easily
without using the Ctrl+Alt.
Where are the VMware Tools ISOs found?
On an ESXi host, check the /vmimages/tools-isoimages directory (accessible via the ESXi Shell).
What are Virtual Machine Snapshots?
The ability to create point-in-time checkpoints of a VM. A snapshot preserves the captured
state of a VM at a specific point in time so that you can revert to it later.
Why is the snapshot VMDK file known as a delta disk?
Every time you take a snapshot, a new .vmdk file is created, and when you add data to
the VM it is this new .vmdk file, not the original, that grows. It is called a
'delta disk' or 'differencing disk' because it does not store the entire hard disk of the
VM; it stores only the changes made after the snapshot was taken.
What are common reasons snapshots fail?
If you configure mapped LUNs as physical mode RDMs, you need to remove them before
taking a snapshot: physical mode RDMs are not supported in snapshots.
Snapshots of powered-on or suspended virtual machines with independent disks are not
supported.
What are the settings that are taken in to consideration when we
initiate snapshot?
VM configuration (what hardware is attached to it)
State of the VM's hard disk files (to revert to if needed)
State of the VM's memory (if it is powered on)
vCenter Server:
What is vCenter server?
vCenter Server is a Windows- or Linux-based application that serves as a centralized
management tool for ESXi hosts and their respective VMs in a VMware vSphere
infrastructure.
Key features such as vMotion, Storage vMotion, vSphere DRS, vSphere HA, and
vSphere FT are all enabled and made possible by vCenter Server.
vCenter Server also provides scalable authentication service and role-based administration
based on integration with Active Directory.
vCenter server core services:
vCenter Server offers core services in the following areas:
->VM deployment
->VM management
->ESXi host management
->Resource management for ESXi hosts and VMs
->Template management
->Scheduled tasks
->Statistics and logging
->Alarms and event management
vCenter Server Heartbeat: A product available from VMware. Using vCenter
Server Heartbeat will automate both the process of keeping the active and passive
vCenter Server instances synchronized and the process of failing over from one to
another (and back again).
Can a local user defined on an ESXi host connect to vCenter Server
using the vSphere Client?
Although the vSphere Client supports authentication of both vCenter Server and ESXi
hosts, organizations should use a consistent method for provisioning user accounts to
manage their vSphere infrastructure because local user accounts created on an ESXi host
are not reconciled or synchronized with the Windows or Active Directory accounts that
vCenter Server uses.
Which version of vCenter Server will you use? What are the advantages
and disadvantages of each vCenter Server edition?
In vSphere 5, vCenter Server comes not only as a Windows-based application but
also as a SuSE Linux-based virtual appliance. There are advantages and disadvantages to
each version:-
Although vCenter Server is the application that performs the management of your ESXi
hosts and VMs, vCenter Server uses a database for storing all of its configuration,
permissions, statistics, and other data.
vCenter Server supports the following databases:-
->IBM DB2 9.5, 9.7
->Oracle 10g R2, 11g R1, 11g R2
->Microsoft SQL Server 2008 R2 Express (bundled with vCenter Server)
->Microsoft SQL Server 2005, 2008
->Microsoft SQL Server 2008 R2
vCenter server Linked Mode Group?
Multiple instances of vCenter Server that share information among them.
In what situation do you need a separate database server for vCenter?
The bundled SQL Server Express database is intended only for small deployments: a single
vCenter Server with fewer than five ESXi hosts or fewer than 50 VMs. Larger environments
need a separate database server.
What are the services installed to facilitate the operation of vCenter
Server?
1. vCenter Inventory Service.
2. VMware vCenter Orchestrator Configuration (supports the Orchestrator workflow
engine).
3. VMware VirtualCenter Management Web services.
4. VMware VirtualCenter Server is the core of vCenter Server and provides centralized
management of ESX/ESXi hosts and VMs.
5. VMware vSphere Profile-Driven Storage Service.
6. VMwareVCMSDS is the Microsoft ADAM instance that supports multiple vCenter
Server instances in a linked mode group and is used for storing roles and permissions.
Note that ADAM is used for storing roles and permissions both in stand-alone
installations as well as installations with a linked mode group.
What are the limitations of using SQL Server 2008 Express Edition?
SQL Server 2008 Express Edition is the minimum database available as a backend to
the Windows Server–based version of vCenter Server.
Microsoft SQL Server 2008 Express Edition has physical limitations that include the
following:
->Uses only one physical CPU
->Uses a maximum of 1 GB of RAM
->Maximum database size of 4 GB (10 GB for 2008 R2 Express)
Despite the fact that vCenter Server is accessible via a web browser, it is not necessary
to install Internet Information Services (IIS) on the vCenter Server computer. vCenter Server
is accessed via a browser that relies on the Apache Tomcat web service, which is
installed as part of the vCenter Server installation. IIS should be uninstalled because it
can cause conflicts with Apache Tomcat.
What are the memory requirements of vCenter Server?
Host profile:- A host profile is a collection of all the various configuration settings available
for an ESXi host. By attaching a host profile to an ESXi host, you can (i) check the
compliance or non-compliance of that host against the settings outlined in the host
profile. It gives administrators a way not only to verify consistent settings
across all the ESXi hosts but also to (ii) quickly and easily apply settings to new ESXi
hosts.
What is SSO:- Single Sign On is an authentication and identity management service. It
allows administrators and the various vSphere software components to communicate
with each other through a secure token exchange mechanism, instead of requiring each
component to authenticate a user separately with a directory service like Active
Directory.
VMware Lookup Service:- The vCenter Sign-On installer also deploys the "VMware Lookup
Service" on the same address and port. This Lookup Service enables different
components of vSphere to find one another in a secure way.
In Details:-
Multiple instances of vCenter Server that share information among them are referred to
as a "linked mode group".
If you need more ESXi hosts or more VMs than a single vCenter Server
instance can handle, or if for whatever other reason you need more than one instance
of vCenter Server, you can install multiple instances of vCenter Server and have those
instances share inventory and configuration information for a centralized view of all the
virtualized resources across the enterprise.
In a linked mode environment, there are multiple vCenter Server instances, and each of
the instances has its own set of hosts, clusters, and VMs. However, when a user logs
into a vCenter Server instance using the vSphere Client, that user sees all the vCenter
Server instances where he or she has permissions assigned. This allows a user to perform
actions on any ESXi host managed by any vCenter Server within the linked mode group.
vCenter Server linked mode uses Microsoft ADAM to replicate information between the
instances. The replicated information includes the following:
1 Connection information (IP addresses and ports)
2 Certificates and thumbprints
3 Licensing information
4 User roles and permissions
In a linked mode environment, the vSphere Client shows all the vCenter Server instances
for which a user has permission
Before you install additional vCenter Server instances, you must verify the following
prerequisites:-
1. Linked mode servers must be members of the same domain or a trusted domain
2. The DNS name must match the vCenter Server's server name
3. Linked mode servers cannot be domain controllers or terminal servers
4. They cannot be combined with earlier versions of vCenter Server
5. Each must have its own backend database
1. Member of same domain or a trusted domain:-All computers that will run vCenter
Server in a linked mode group must be members of a domain. The servers can exist in
different domains only if a two-way trust relationship exists between the domains.
2. DNS name must match the vCenter Server name:- DNS must be operational.
Also, the DNS name of the servers must match the server name.
3 Cannot be DC or terminal server:-The servers that will run vCenter Server cannot be
domain controllers or terminal servers.
4 Cannot combine with earlier versions:- You cannot combine vCenter Server 5 instances
in a linked mode group with earlier versions of vCenter Server.
5 Must have its own backend database:- Each vCenter Server instance must have its
own backend database, and each database must be configured as outlined earlier with
the correct permissions. The databases can all reside on the same database server, or
each database can reside on its own database server.
Connecting vCenter Server to a Microsoft SQL Server database, like the Oracle
implementation, requires a few specific configuration tasks, as follows:
1 Both Windows and mixed mode authentication are supported
2 A new database for each vCenter Server instance
3 A SQL login that has full access to the database
4 Appropriate permissions, by mapping the SQL login to the dbo user
5 The SQL login must also be set as the owner of the database
6 The SQL login must also have dbo (db_owner) privileges on the MSDB database during
installation
vCenter Server can export topology maps in a variety of graphics formats. The topology
maps, coupled with the data found on the Storage Views, Hardware Status, and
Summary tabs, should provide enough information for your manager.
Roles:-
For the first installation of vCenter Server with vCenter Single Sign-On, you must
install all three components, Single Sign-On Server, Inventory Service, and vCenter
Server, in the vSphere environment. In subsequent installations of vCenter Server in
your environment, you do not need to install Single Sign-On. One Single Sign-On server
can serve your entire vSphere environment. After you install vCenter Single Sign-On
once, you can connect all new vCenter Server instances to the same authentication
server. However, you must install an Inventory Service instance for each vCenter Server
instance.
The vCenter Sign-On installer also deploys the VMware Lookup Service on the same
address and port. The Lookup Service enables different components of vSphere to find
one another in a secure way. When you install vCenter Server components after vCenter
Single Sign-On, you must provide the Lookup Service URL. The Inventory Service and
the vCenter Server installers ask for the Lookup Service URL and then contact the
Lookup Service to find vCenter Single Sign-On. After installation, the Inventory Service
and vCenter Server are registered in Lookup Service so other vSphere components, like
the vSphere Web Client, can find them.
What is cloning?
A clone is an exact copy of an existing virtual machine.
What is a snapshot?
A snapshot is a point-in-time checkpoint of a VM, with all its configuration settings,
disk states, and memory contents. You can revert to that point-in-time checkpoint
or image at any time.
Is cloning a running VM possible? Yes.
Linked Clone- A linked clone is a copy of a virtual machine that shares virtual disks
with the parent virtual machine in an ongoing manner. This conserves disk space and
allows multiple virtual machines to use the same software installation. It is made from a
snapshot of the parent VM. It needs continuous access to its parent and is disabled if
access to the parent VM fails. A linked clone has somewhat lower performance than a full
clone VM.
What is Sysprep? This is a native Windows tool that can also be used with VMware.
The purpose of the tool is to allow a single Windows installation to be cloned many
times over, each time with a unique identity (new computer name, new IP
address, and new Security Identifier (SID)). From Windows Server 2008 onwards, Sysprep
is built into the operating system, so VMware does not need the tool to be supplied
separately.
OVF templates come in two formats: a folder of files, or a single OVA file.
The Folder of Files format:- This format keeps the integrity-checking manifest (.MF) file,
the structural definition (.OVF) file, and the virtual hard disk (.VMDK) file as separate
files in a folder.
The Single File (OVA) format:- This format combines those separate components
(.MF, .OVF, .VMDK) into a single file.
.mf or manifest file:- Contains SHA-1 digests of the other files. Used for the OVF's
integrity checking.
.ovf or OVF descriptor:- An XML document that contains information about the OVF
template, such as product details, virtual hardware, requirements, licensing, and a full list
of file references.
.vmdk or virtual hard disk file:- A template may contain multiple VMDK files, all of which
must be referenced in the OVF descriptor file.
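As a rough illustration, the per-file digest lines in a .mf manifest can be produced and checked with a few lines of Python (the file names here are hypothetical; OVF 1.x manifests use SHA-1 digests):

```python
import hashlib

def manifest_line(filename: str, data: bytes) -> str:
    """Build one OVF manifest entry of the form SHA1(<file>)= <hex digest>."""
    return f"SHA1({filename})= {hashlib.sha1(data).hexdigest()}"

def verify(manifest: dict, files: dict) -> bool:
    """Integrity check: every file's SHA-1 must match its manifest entry."""
    return all(
        hashlib.sha1(files[name]).hexdigest() == digest
        for name, digest in manifest.items()
    )
```

An importer performing integrity checking would reject the template if any digest fails to match.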
What is an OVA? OVF templates can also be distributed as a single file for easy
distribution. This single file ends in .ova and is in zipped TAR format. OVA means Open
Virtualization Appliance.
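Since an OVA is essentially a TAR archive of the OVF components, the packing can be sketched with Python's standard tarfile module (file names are hypothetical, and a real OVA also expects the .ovf descriptor to appear first in the archive):

```python
import io
import tarfile

def make_ova(files: dict) -> bytes:
    """Pack OVF component files into a single TAR stream, as an .ova does."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def list_ova(blob: bytes) -> list:
    """List the component files inside an .ova archive."""
    with tarfile.open(fileobj=io.BytesIO(blob)) as tar:
        return tar.getnames()
```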
What is a vApp? vSphere vApps leverage OVF as a way to combine multiple VMs into
a single administrative unit. When the vApp is powered on, all VMs in it are powered
on, in a sequence specified by the administrator. The same goes for shutting down a
vApp. vApps also act a bit like resource pools for the VMs contained within them.
Remember:-You can create a vApp inside other vApps, but you can’t create a vApp on
a cluster that does not have vSphere DRS enabled.
What is P2V and V2V migration- VMware offer a stand-alone product called
VMware Converter. VMware Converter provides both P2V functionality as well as
virtual-to-virtual (V2V) functionality.
Clones vs. Templates
1. A clone creates an exact copy of a running virtual machine at the time of the cloning
process. A template acts as a baseline image with a predefined configuration as per
organization standards.
5. VM clones are not suited for mass deployment of virtual machines. Templates are best
suited for mass deployment of virtual machines.
VMware vSphere Resource Allocation
Describe Reservations, Limits, and Shares.
Reservations:- Reservations serve to act as guarantees of a particular resource.
Limits:- Limits are, quite simply, a way to restrict the amount of a given resource
that a VM can use. Limits enforce an upper ceiling on the usage of resource.
Shares:- Shares serve to establish priority. Shares apply only during periods of host
resource contention and serve to establish prioritized access to host resource.
Reservation- guaranteed resources,
Limit- upper limit of a given resource,
Shares- prioritized resource access in times of contention
CPU Utilization
An ESXi host is running two idle VMs with the default Shares values. Will the Shares
values have any effect?
No. There's no competition between VMs for CPU time because both are idle. Shares
come into play only in times of resource contention.
The ESX host with dual, single-core, 3 GHz CPUs has two equally
busy VMs running (both requesting maximum CPU capacity). The
shares are set at the defaults for the running VMs. Will the Shares
values have any effect in this scenario?
No. Again, there’s no competition between VMs for CPU time, this time because each
VM is serviced by a different core in the host.
Remember:- CPU affinity is not available in clusters with fully automated DRS enabled.
Reservations set on CPU cycles provide guaranteed processing power for VMs. Unlike
memory, reserved CPU cycles can and will be used by ESXi to service other requests
when needed. As with memory, the ESXi host must have enough real, physical CPU
capacity to satisfy a reservation in order to power on a VM.
Limits on CPU usage simply prevent a VM from gaining access to additional CPU
cycles even if CPU cycles are available to use. Even if the host has plenty of CPU
processing power available to use, a VM with a CPU limit will not be permitted to use
more CPU cycles than specified in the limit. Depending on the guest OS and the
applications, this might or might not have an adverse effect on performance.
Shares are used to determine CPU clock cycle allocation when the ESXi host is
experiencing CPU contention. CPU shares grant CPU access on a percentage basis
calculated on the number of shares granted out of the total number of shares assigned.
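The proportional math can be sketched as follows (the share values and the 6000 MHz capacity are illustrative; the commonly cited per-vCPU defaults are Low=500, Normal=1000, High=2000):

```python
def cpu_entitlement(shares: dict, capacity_mhz: float) -> dict:
    """Divide a fully contended host's CPU capacity among VMs in
    proportion to their shares. Shares only matter during contention."""
    total = sum(shares.values())
    return {vm: capacity_mhz * s / total for vm, s in shares.items()}

# A "Normal" (1000) and a "High" (2000) shares VM splitting a
# fully contended 6000 MHz host:
print(cpu_entitlement({"vm1": 1000, "vm2": 2000}, 6000))
# -> {'vm1': 2000.0, 'vm2': 4000.0}
```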
What is a Resource Pool? Why is it required?
A resource pool is a special type of container object, much like a folder, mainly
used to group VMs with similar resource allocation needs to avoid administrative
overhead. A resource pool uses reservations, limits, and shares to control and modify
resource allocation behavior, but only for memory and CPU.
What is Expandable Reservation in a resource pool?
A resource pool provides resources to its child objects. A child object can be either a
virtual machine or another resource pool. This is what is called the parent-child
relationship.
But what happens if the child objects in the resource pool are configured with
reservations that exceed the reservation set on the parent resource pool? In that case,
the parent resource pool needs to request protected resources from its own parent
resource pool. This can only be done if Expandable Reservation is enabled.
Please note that the resource pool requests protected (reserved) resources from its
parent resource pool; it will not accept resources that are not protected by a
reservation.
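The parent-child borrowing described above can be modeled with a small Python sketch (the field names and sizes are invented for illustration, not a VMware API):

```python
def can_reserve(pool: dict, amount_mb: int) -> bool:
    """Admission check for a new child reservation in a resource pool.
    Any shortfall is borrowed from the parent's reserved capacity,
    but only if Expandable Reservation is enabled on this pool."""
    free = pool["reservation"] - pool["reserved"]
    if amount_mb <= free:
        return True
    if pool["expandable"] and pool["parent"] is not None:
        # Ask the parent to cover the shortfall from its own reservation.
        return can_reserve(pool["parent"], amount_mb - free)
    return False
```

With the pool's own reservation exhausted, the same request succeeds or fails purely on whether the expandable flag lets it reach the parent.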
You want to understand a resource pool's resource allocation. Where can
you see the allocation of resources to objects within the vCenter Server
hierarchy?
The cluster's "Resource Allocation" tab shows the allocation of resources to objects
within the vCenter Server hierarchy.
Remember:- Shares apply only during actual resource contention.
Remember that share allocations come into play only when VMs are fighting one
another for a resource, in other words, when an ESXi host is actually unable to
satisfy all the requests for a particular resource. If an ESXi host is running only eight
VMs on top of two quad-core processors, there won't be contention to manage
(assuming these VMs each have only a single vCPU), and Shares values won't apply.
What is a processor core? A thread? What is Hyper-Threading? What are
logical CPUs and virtual CPUs?
Core: one processing unit inside your physical processor.
Hyper-Threading:- Normally a processor core can handle one thread, or one operation, at
a time (time meaning a processor time slot). But with Hyper-Threading enabled, a
processor core can handle two threads at the same time.
Logical processor: The total number of threads that the processor cores in a machine
can handle is the number of logical processors. So if you want to know how many logical
processors you have, just count the total number of threads the processors can handle.
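As a quick arithmetic check, the logical processor count is simply sockets x cores per socket x threads per core:

```python
def logical_processors(sockets: int, cores_per_socket: int,
                       threads_per_core: int) -> int:
    """Logical CPUs = sockets x cores x threads per core.
    With Hyper-Threading enabled, threads_per_core is 2."""
    return sockets * cores_per_socket * threads_per_core

# A dual-socket, quad-core host with Hyper-Threading enabled:
print(logical_processors(2, 4, 2))  # -> 16
```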
Virtual processor: When you create a virtual machine, you assign a processor to it.
Like vRAM, a virtual hard disk, and a virtual network interface, we can assign a virtual
processor (vCPU) to a virtual machine. A virtual processor is really nothing but a slice
of the physical processor's time that is given to the virtual machine.
What is SMP?
Symmetric Multiprocessing: SMP is processing of a program by multiple processors that
share a common operating system and memory.
Second, you must create and configure custom network resource pools as necessary.
What are three basic settings each network resource pool consists of?
->Physical Adapter Shares
->Host Limit
->QoS Priority Tag
Physical Adapter Shares:- priority for access to the physical network adapters when
there is network contention.
Host Limit:- the upper limit on the amount of network traffic the network resource pool
is allowed to consume (in Mbps).
QoS Priority Tag:- The QoS (Quality of Service) priority tag is an 802.1p tag that is
applied to all outgoing packets. Upstream network switches configured to honor the tag
can further enforce the QoS beyond the ESXi host.
What are the prerequisites of Storage I/O Control (SIOC)?
SIOC has a few requirements you must meet:
->All SIOC-enabled datastores must be managed by a single vCenter Server
->No RDM support, no NFS support
->Datastores with multiple extents are not supported
How does SIOC control the use of storage I/O by VMs?
SIOC provides two mechanisms for controlling the use of storage I/O by VMs: shares
and limits. The Shares value establishes a relative priority as a ratio of the total
number of shares assigned, while the Limit value defines the upper ceiling on the
number of I/O operations per second (IOPS) that a given VM may generate.
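A sketch of how shares and limits might combine on a congested datastore (the numbers and names are illustrative, not VMware's actual scheduler):

```python
def iops_allocation(vms: dict, datastore_iops: float) -> dict:
    """Split a congested datastore's IOPS by shares, then cap each VM
    at its configured limit (None means unlimited).
    vms maps name -> (shares, iops_limit)."""
    total = sum(shares for shares, _ in vms.values())
    alloc = {}
    for vm, (shares, limit) in vms.items():
        fair = datastore_iops * shares / total
        alloc[vm] = fair if limit is None else min(fair, limit)
    return alloc
```

Here the shares decide the fair split and the limit trims any VM whose fair share would exceed its cap.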
One way would be to configure the VMs with 8 GB of RAM and specify a reservation
of only 2 GB. VMware ESXi will guarantee that every VM will get 2 GB of RAM,
including preventing additional VMs from being powered on if there isn’t enough RAM
to guarantee 2 GB of RAM to that new VM. However, the RAM greater than 2 GB is
not guaranteed and, if it is not being used, will be reclaimed by the host for use
elsewhere. If plenty of memory is available to the host, the ESXi host will grant what
is requested; otherwise, it will arbitrate the allocation of that memory according to the
shares values of the VMs.
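The power-on admission behavior described here can be sketched as a simple check (sizes in MB, numbers illustrative):

```python
def can_power_on(host_capacity_mb: int, powered_on_reservations: list,
                 new_reservation_mb: int) -> bool:
    """ESXi-style admission control: a VM powers on only if the host
    still has enough unreserved memory to honor its reservation."""
    unreserved = host_capacity_mb - sum(powered_on_reservations)
    return new_reservation_mb <= unreserved

# A 32 GB host running fifteen VMs that each reserve 2 GB: one more
# 2 GB reservation fits exactly, a 4 GB one is refused.
running = [2048] * 15
print(can_power_on(32768, running, 2048))  # True
print(can_power_on(32768, running, 4096))  # False
```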
A fellow VMware administrator is a bit concerned about the use of
CPU reservations. She is worried that using CPU reservations will
“strand” CPU resources, preventing those reserved but unused
resources from being used by other VMs. Are this administrator’s
concerns well founded?
For CPU reservations: No. While it is true that VMware must have enough unreserved
CPU capacity to satisfy a CPU reservation when a VM is powered on, reserved CPU
capacity is not "locked" to a VM the way a memory reservation is. If a VM has reserved
but unused CPU capacity, that capacity can and will be used by other VMs on the same
host. The administrator's concerns could be valid, however, for memory reservations.
Your company runs both test/development workloads and production
workloads on the same hardware. How can you help ensure that
test/development workloads do not consume too many resources and
impact the performance of production workloads?
Create a resource pool and place all the test/development VMs in that resource pool.
Configure the resource pool with a CPU limit and a lower CPU shares value. This
ensures that the test/development workloads will never consume more CPU time than
specified in the limit and that, in times of CPU contention, the test/development
environment will have a lower priority on the CPU than the production workloads.
Name some limitations of Network I/O Control (NIOC)?
->NIOC works only with vSphere Distributed Switches
->It can control only outbound network traffic
->It requires vCenter Server in order to operate
->System network resource pools cannot be assigned to user-created port groups
What are the requirements for using Storage I/O Control?
All datastores and ESXi hosts that will participate in Storage I/O Control must be
managed by the same vCenter Server instance.
Raw Device Mappings (RDMs) and NFS datastores are not supported for SIOC.
Datastores must have only a single extent; datastores with multiple extents are not
supported
What is vMotion?
vMotion is a feature that allows running VMs to be migrated from one physical ESXi
host to another physical ESXi host with no downtime to end users.
Storage vMotion:- Storage vMotion is the storage equivalent of vMotion, and it is used
to manually balance storage utilization between two or more datastores.
What are the requirements for vMotion?
Networking:- A VMkernel port enabled for vMotion on each ESXi host.
CPU:- Processor compatibility:
# Same CPU vendor:- (Intel or AMD).
# Same CPU family:- (Xeon 55xx, Xeon 56xx, or Opteron).
# Same CPU features:- the presence of SSE2, SSE3, and SSE4, and NX or XD.
# Virtualization enabled:- For 64-bit VMs, CPUs must have virtualization
technology enabled (Intel VT or AMD-V).
Host and VM:
No device physically available to only one host:- The VM must not be connected to any
device physically available to only one ESXi host. This includes disk storage, CD/DVD
drives, floppy drives, serial ports, or parallel ports.
No internal-only vSwitch:- The VM must not be connected to an internal-only vSwitch
(one with no physical uplinks).
No CPU affinity rule:- The VM must not have CPU affinity configured.
Shared storage for hosts:- The VM's files must reside on shared storage accessible to
both the source and destination ESXi hosts.
Does vMotion address unplanned downtime?
No. vMotion is a solution to address planned downtime of ESXi hosts by manually
moving live VMs to other hosts.
What is EVC?
EVC:- vMotion requires compatible CPU families on both source and the destination
ESXi hosts. For that vSphere offers “Enhanced vMotion Compatibility (EVC)”. This
can mask differences between CPU families in order to maintain successful vMotion.
Can you change the EVC level for a cluster while there are VMs
running on hosts in the cluster?
No, you cannot change the EVC level when VMs are running on the host. New EVC level
means new CPU masks must be calculated and applied. CPU masks can be applied only
when VMs are powered off.
How does DRS automate vMotion?
DRS makes automatic vMotion decisions based on the selected automation level (the
slider bar).
There are five positions for the slider bar on the Fully Automated setting of the DRS
cluster. The values of the slider bar range from Conservative to Aggressive and
correspondingly automate the five priority levels of recommendations.
Maintenance mode is a setting on an ESXi host that prevents the host from
performing any VM-related functions. VMs currently running on an ESXi host being put
into maintenance mode must be shut down or moved to another host before the ESXi
host will actually enter maintenance mode. This means that an ESXi host in a DRS-
enabled cluster will automatically generate priority 1 recommendations to migrate all
VMs to other hosts within the cluster.
What is Distributed Resource Scheduler (DRS) Rules or affinity rules?
vSphere DRS supports three types of DRS rules:-
[1] VM affinity rules- “Keep Virtual Machines Together” -Keeps VMs together on the
same ESXi host
[2] VM anti-affinity rules- “Separate Virtual Machines” - Keeps VMs separate on
different ESXi hosts.
[3] Host affinity rules- “Virtual Machines To Hosts” - Keeps VM DRS group and Host
DRS group together or separate.
The host affinity rule brings together a VM DRS group and a host DRS group according
to preferred rule behavior. There are four host affinity rule behaviors:
1 Must Run On Hosts In Group
2 Should Run On Hosts In Group
3 Must Not Run On Hosts In Group
4 Should Not Run On Hosts In Group
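A toy Python model of these four behaviors, treating the "Must" rules as hard filters and the "Should" rules as preferences (host names are made up):

```python
def placement_candidates(rule: str, hosts: list, group: list) -> list:
    """Filter or order candidate hosts for a VM under one host-affinity
    rule behavior. 'Must' rules are hard constraints; 'Should' rules
    only move preferred hosts to the front of the list."""
    members = set(group)
    if rule == "Must Run On Hosts In Group":
        return [h for h in hosts if h in members]
    if rule == "Must Not Run On Hosts In Group":
        return [h for h in hosts if h not in members]
    if rule == "Should Run On Hosts In Group":
        return sorted(hosts, key=lambda h: h not in members)  # group first
    if rule == "Should Not Run On Hosts In Group":
        return sorted(hosts, key=lambda h: h in members)  # group last
    raise ValueError(f"unknown rule: {rule}")
```

Note how a "Should" rule still leaves every host available, which is why DRS (and HA) can override it in an emergency, while a "Must" rule removes hosts outright.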
What are per-VM Distributed Resource Scheduler settings?
The administrator can selectively choose VMs that are not going to be acted on
by DRS in the same way as the rest of the VMs in the cluster. However, those VMs
should remain in the cluster to take advantage of the high-availability features provided
by vSphere HA.
The per-VM automation levels available include the following:
Manual (Manual intelligent placement and vMotion)
Partially Automated (automatic intelligent placement, manual vMotion)
Fully Automated (automatic intelligent placement and vMotion)
Default (inherited from the cluster setting)
Disabled (vMotion disabled)
Storage vMotion
1. Nonvolatile files copy:- First, vSphere copies over the nonvolatile files that make up
a VM, e.g. the configuration file (.VMX), the VMkernel swap file (.VSWP), log files, and
snapshots.
2. Ghost or shadow VM created on destination datastore:- Next, vSphere starts a ghost
or shadow VM on the destination datastore using the nonvolatile files copied. Because
this ghost VM does not yet have a virtual disk (that hasn’t been copied over yet), it
sits idle waiting for its virtual disk.
3. Destination disk and mirror driver created:- Storage vMotion first creates the
destination disk. Then a mirror device — a new driver that mirrors I/Os between the
source and destination disk — is inserted into the data path between the VM and the
underlying storage.
4. Single-pass copy of the virtual disk(s):- With the I/O mirroring driver in place,
vSphere makes a single-pass copy of the virtual disk(s) from the source to the
destination. As changes are made to the source, the I/O mirror driver ensures those
changes are also reflected at the destination.
5. vSphere quickly suspends and resumes in order to transfer control over to the ghost
VM:- When the virtual disk copy is complete, vSphere quickly suspends and resumes the
VM in order to transfer control over to the ghost VM created on the destination
datastore earlier. As with vMotion, this generally happens so quickly that there is no
disruption of service.
6. Source datastore files are deleted:- The files on the source datastore are deleted. It’s
important to note that the original files aren’t deleted until it’s confirmed that the
migration was successful; this allows vSphere to simply fall back to its original location if
an error occurs. This helps prevent data loss situations or VM outages because of an
error during the Storage vMotion process.
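The workflow above is normally triggered from the vSphere Client, but it can also be driven from the vSphere CLI (vCLI) with the `svmotion` command. A minimal sketch, assuming a vCenter named vcenter01, a datacenter DC1, and a VM web01 currently on datastore1 (all names hypothetical; exact connection options vary by vCLI version):

```shell
# Non-interactive Storage vMotion via the vSphere CLI.
# The --vm argument is "<current .vmx datastore path>:<destination datastore>".
svmotion --server=vcenter01 --username=administrator \
  --datacenter=DC1 \
  --vm="[datastore1] web01/web01.vmx:datastore2"
```

This requires a live vCenter Server; run from a machine with the vCLI installed.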
What should we remember when using Storage vMotion with Raw
Device Mappings (RDMs)?
There are two types of Raw Device Mappings (RDMs): physical mode RDMs and virtual
mode RDMs. A virtual mode RDM uses a VMDK mapping file to give raw LUN access. Be
careful when using Storage vMotion with virtual mode RDMs.
If you want to migrate only the VMDK mapping file, be sure to select “Same Format
As Source” for the virtual disk format. If you select a different format, virtual mode
RDMs will be converted into VMDKs as part of the Storage vMotion operation (physical
mode RDMs are not affected). Once an RDM has been converted into a VMDK, it
cannot be converted back into an RDM again.
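The two RDM types are created with different `vmkfstools` flags from the ESXi shell; a sketch, with the device name and datastore paths hypothetical:

```shell
# Virtual compatibility mode RDM (-r): I/O passes through the VMFS layer, so a
# Storage vMotion with a changed disk format converts it to a plain VMDK.
vmkfstools -r /vmfs/devices/disks/naa.60060160a1b11f00 \
  /vmfs/volumes/datastore1/web01/web01_rdm.vmdk

# Physical compatibility mode RDM (-z): all I/O passes straight to the LUN;
# not affected by Storage vMotion format conversion.
vmkfstools -z /vmfs/devices/disks/naa.60060160a1b11f00 \
  /vmfs/volumes/datastore1/web01/web01_prdm.vmdk
```

These commands require a live ESXi host with the referenced device present.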
Storage DRS :
Just as vSphere DRS uses clusters as a collection of hosts on which to act, SDRS
uses datastore clusters as collections of datastores on which it acts.
Just as vSphere DRS can perform both initial placement and manual and automatic
balancing, SDRS also performs initial placement of VMDKs and manual or automatic
balancing of VMDKs.
Just as vSphere DRS offers affinity and anti-affinity rules to influence
recommendations, SDRS offers VMDK affinity and anti-affinity functionality.
No NFS and VMFS combination:- Datastores of different sizes and I/O capacities can
be combined in a datastore cluster. Additionally, datastores from different arrays and
vendors can be combined into a datastore cluster. However, you cannot combine NFS
and VMFS datastores in a datastore cluster.
No replicated and nonreplicated datastore combination:-
No ESX/ESXi 4.x and earlier host connection with ESXi 5 datastore:- All hosts
attached to a datastore in a datastore cluster must be running ESXi 5 or later.
ESX/ESXi 4.x and earlier cannot be connected to a datastore that you want to add to
a datastore cluster.
No Datastores shared across multiple datacenters:-
What is the relationship between Storage I/O Control and Storage DRS
latency thresholds?
You’ll note that the default I/O latency threshold for SDRS (15 ms) is well below the
default for SIOC (30 ms). The idea behind these default settings is that SDRS can
make a migration to balance the load (if fully automated) before throttling becomes
necessary.
Just as vSphere DRS has affinity and anti-affinity rules, SDRS offers vSphere
administrators the ability to create VMDK anti-affinity and VM anti-affinity rules.
These rules modify the behavior of SDRS to ensure that specific VMDKs are always
kept separate (VMDK anti-affinity rule) or that all the virtual disks from certain VMs
are kept separate (VM anti-affinity rule).
To configure Storage DRS to keep all disks for a VM together, check the boxes in the
Keep VMDKs Together column.
Name the two ways in which an administrator is notified that a
Storage DRS recommendation has been generated.
First, an alarm is generated to note that an SDRS recommendation is present. You can
view this alarm on the "Alarms" tab of the datastore cluster in "Datastores And
Datastore Clusters" inventory view.
Second, the "Storage DRS" tab of the datastore cluster (visible in
"Datastores And Datastore Clusters" inventory view) will list the current SDRS
recommendations and give you the option to apply them.
You can also use drag and drop to add a datastore to an existing datastore cluster.
Please note that drag and drop won't warn you when you add a datastore that
doesn't have connections to all the hosts currently connected to the datastore
cluster, so some hosts may then find a particular datastore unreachable. To avoid
this situation you should always use the "Add Storage" dialog box.
[When using drag and drop to add a datastore to a datastore cluster, the user is not
notified if the datastore isn’t accessible to all the hosts that are currently connected
to the datastore cluster. This introduces the possibility that one or more ESXi hosts
could be “stranded” from a VM’s virtual disks if Storage DRS migrates them onto a
datastore that is not accessible from that host.]
VMWare Storage:
What is VMFS?
vSphere Virtual Machine File System (VMFS) is similar to NTFS for Windows Server
and EXT3 for Linux. Like these file systems, it is native; it's included with vSphere
and operates on top of block storage objects. If you're leveraging any form of block
storage in vSphere, then you're using VMFS. VMFS creates a shared storage pool that
is used for one or more VMs.
3.In a single VMFS-3 datastore, 32 extents are supported for a maximum size of up
to 64 TB-
4.Removing the LUN supporting a VMFS-3 extent will not make the spanned VMFS
datastore unavailable, except for the first extent, where the VMFS metadata resides-
5.Removing an extent affects only the portion of the datastore supported by that
extent-
7.VMFS-3 allocates the initial blocks for a new file (VM) randomly in the file system
(or extents), and subsequent allocations for that file are sequential-
8.A VMFS datastore that spans multiple extents across multiple LUNs reduces
pressure on any single LUN queue, since each LUN contributes its own queue-
1.Maximum 64 TB in size using only a single extent- multi-extent datastores are still
limited to 64 TB-
2.A single block size of 1 MB, which supports creating files up to 2 TB-
3.More efficient sub-block allocation: only 8 KB compared to VMFS-3's 64 KB-
4.Not limited to 2 TB for the creation of physical-mode RDMs-
5.Non-disruptive, in-place, online upgrade to VMFS-5 from VMFS-3 datastores-
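The online upgrade in item 5 can be performed from the ESXi 5.x shell; a sketch, with the datastore label hypothetical:

```shell
# Check the current VMFS version of each mounted datastore.
esxcli storage filesystem list

# Upgrade a VMFS-3 datastore in place to VMFS-5 (non-disruptive; VMs can stay running).
esxcli storage vmfs upgrade -l datastore1
```

These commands require a live ESXi 5.x host.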
Multipathing:
Multipathing is the term used to describe how a host, such as an ESXi host, manages
storage devices that have multiple ways (or paths) to access them. Multipathing is
extremely common in Fibre Channel and FCoE environments and is also found in iSCSI
environments. Multipathing for NFS is also available but is handled much differently
than for block storage.
Pluggable Storage Architecture (PSA):
The elements of the vSphere storage stack that deal with multipathing are collectively
called the Pluggable Storage Architecture (PSA).
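The PSA components (the NMP and its SATP and PSP plugins) can be inspected from the ESXi 5.x shell:

```shell
# Devices, their paths, and the plugins claiming each device.
esxcli storage nmp device list

# Available Storage Array Type Plugins and Path Selection Plugins.
esxcli storage nmp satp list
esxcli storage nmp psp list
```

These commands require a live ESXi 5.x host.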
What are LUN queues?
Queues exist on the server (in this case the ESXi host), generally at both the HBA and
LUN levels. Queues also exist on the storage array. Block-centric storage arrays
generally have these queues at the target ports, array-wide, and array LUN levels, and
finally at the spindles themselves.
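On an ESXi host, the host-side queues described above are most easily observed with `esxtop`:

```shell
# Launch esxtop, then press 'u' for the disk-device view: DQLEN is the device
# (LUN) queue depth, ACTV the commands in flight, and QUED the commands waiting.
esxtop
```

This is an interactive tool and requires a live ESXi host.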
Keep in mind that the bulk of the power of profile-driven storage comes from the
interaction with VASA (vSphere APIs for Storage Awareness), which automatically
gathers storage capabilities from the underlying array. However, you might find it
necessary or useful to define one or more additional storage capabilities that you can
use in building your VM storage profiles.
[Please remember:- Keep in mind that you can assign only one user-defined storage
capability per datastore. The VASA provider can likewise assign only a single system-
provided storage capability to each datastore. This means that a datastore may have
up to two capabilities assigned: one system-provided capability and one user-defined
capability.]
Upgrade from VMFS-3 to VMFS-5:-
Yes, it is possible to non-disruptively upgrade from VMFS-3 to VMFS-5.
Note1: Upgraded VMFS-5 partitions retain the partition characteristics of the
original VMFS-3 datastore, including file block size, sub-block size of 64 KB, etc.
Note2: Increasing the size of an upgraded VMFS datastore beyond 2 TB changes the
partition type from MBR to GPT. However, all other features and characteristics
remain the same.
Disk signature:-
Every VMFS datastore has a Universally Unique Identifier (UUID) embedded in the
filesystem. This UUID differentiates one datastore from the others. When you clone or
replicate a VMFS datastore, the copy will be a byte-for-byte copy of the datastore,
right down to the UUID. If you attempt to mount the LUN that holds the copy of the
VMFS datastore, vSphere will see it as a duplicate and will require that you do one of
two things:
1 Unmount the original and mount the copy with the same UUID.
2 Keep the original mounted and write a new disk signature to the copy.
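Those two choices map directly onto the `esxcli` snapshot-volume commands on ESXi 5.x; a sketch, with the datastore label hypothetical:

```shell
# List detected VMFS snapshot (duplicate-UUID) volumes.
esxcli storage vmfs snapshot list

# Option 1: mount the copy with its existing UUID (the original must be offline).
esxcli storage vmfs snapshot mount -l datastore1_copy

# Option 2: write a new signature to the copy so both can be mounted at once.
esxcli storage vmfs snapshot resignature -l datastore1_copy
```

These commands require a live ESXi 5.x host with the copied LUN presented to it.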
Raw Device Mapping (RDM):-
Normally VMs use shared storage pool mechanisms like VMFS or NFS datastores. But
there are certain use cases where a storage device must be presented directly to the
guest operating system inside a VM.
vSphere provides this functionality via a "Raw Device Mapping". RDMs are presented to
your ESXi hosts and then via vCenter Server directly to a VM. Subsequent data I/O
bypasses the VMFS and volume manager completely. I/O management is handled via a
mapping file that is stored on a VMFS volume.
You can configure RDMs in two different modes: pRDM and vRDM.
Physical Compatibility Mode (pRDM):- In this mode, all I/O passes directly through to
the underlying LUN device, and the mapping file is used solely for locking and vSphere
management tasks. You might also see this referred to as a pass-through disk.
Virtual Mode (vRDM):- In this mode, all I/O travels through the VMFS layer.
Advantages and disadvantages of pRDM and vRDM:-
1) pRDMs do not support vSphere snapshots:
2) pRDMs are not converted to virtual disks by Storage vMotion:
3) pRDMs can easily be moved to a physical host:
1. BusLogic Parallel- Well supported by older guest OSes.... doesn't perform well.....
various Linux flavors use it.
2. LSI Logic Parallel- Well supported by newer guest OSes..... default for Windows
Server 2003.
3. LSI Logic SAS- Guest OSes are phasing out support for parallel SCSI in favor of
SAS....... default SCSI adapter suggested for VMs running Windows Server 2008 and
2008 R2.
4. VMware Paravirtual- Paravirtualized devices (and their corresponding drivers) are
specifically optimized to communicate more directly with the underlying VM Monitor
(VMM)..... They deliver higher throughput and lower latency, and they usually impose
a lower CPU cost for I/O operations, but at the cost of guest OS compatibility.
VM storage profile:-
VM storage profiles are a key component of profile-driven storage. By leveraging system-
provided storage capabilities supplied by a VASA provider (which is provided by the
storage vendor), as well as user-defined storage capabilities, you can build VM storage
profiles that help shape and control how VMs are allocated to storage.
Keep in mind that a datastore may have, at most, two capabilities assigned: one
system-provided capability and one user-defined capability.
Other than Raw Device Mapping, what is the way to present storage
devices directly to a VM?
Other than RDMs, you also have the option of using an in-guest iSCSI initiator to
bypass the hypervisor and access storage directly. Keep in mind the following tips
about the in-guest iSCSI initiator:-
1 A separate storage target is needed, outside of your VMFS or NFS datastores:-
2 More physical NICs are needed in your server.... for defining redundant connections
and multipathing separately... as you are bypassing the hypervisor level:-
3 No Storage vMotion:-
4 No vSphere snapshots:-
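On a Linux guest, the in-guest initiator is typically open-iscsi; a sketch, with the portal IP and target IQN hypothetical:

```shell
# Discover targets on the array's portal, then log in to one of them.
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260
iscsiadm -m node -T iqn.2001-05.com.example:target0 \
  -p 192.168.10.50:3260 --login
```

These commands require a reachable iSCSI target and the open-iscsi package in the guest.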
iSCSI:-
Ideal for customers who are just getting started with vSphere and have no existing
Fibre Channel SAN infrastructure.
Fibre Channel:-
Ideal for customers who have VMs with high-bandwidth (200 MBps+)
requirements (not in aggregate but individually).
NFS:-
Ideal for customers with many VMs that have a low-bandwidth requirement
individually (and in aggregate)... and who have less than a single link's worth of
bandwidth.
vSphere storage models:-
vSphere has three fundamental storage presentation models:
(1) VMFS on block storage
(2) RDM and in-guest iSCSI initiator
(3) NFS.
The most flexible configurations use all three, predominantly via a shared-container
model and selective use of RDMs.
STORAGE BASICS:
What is SAN?
A storage area network (SAN) is a dedicated network that provides access to
consolidated, block level data storage. SAN refers to a network topology, not a
connection Protocol.
Fibre Channel is three times as fast as Small Computer System Interface (SCSI) as the
transmission interface between servers and clustered storage devices.
Fibre Channel is more flexible; devices can be as far as ten kilometers (about six miles)
apart if optical fiber is used as the physical medium.
Optical fiber is not required for shorter distances; Fibre Channel also works over
coaxial cable and ordinary telephone twisted pair at shorter distances.
REMEMBER CAREFULLY:-
(1) ESXi boot from SAN and (2) Raw device mapping (RDM) are not supported in
NFS.
The Fibre Channel protocol can operate in three modes: point-to-point (FC-P2P),
arbitrated loop (FC-AL), and switched (FC-SW). Point-to-point and arbitrated loop are
rarely used today for host connectivity, and they generally predate the existence of
Fibre Channel switches.
50:00:00:25:b5:01:00:00 20:00:00:25:b5:01:00:0f
Like Ethernet MAC addresses, WWNs have a structure. The most significant two bytes
are used by the vendor (the four hexadecimal characters starting on the left) and are
unique to the vendor, so there is a pattern for QLogic or Emulex HBAs or array
vendors. In the previous example, these are Cisco CNAs connected to an EMC
Symmetrix VMAX storage array.
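The vendor-specific part of a WWN can be picked out with standard shell tools; a small sketch using the first example WWN above:

```shell
# Split a WWN on ':' and keep the two most significant bytes,
# which carry the vendor pattern described above.
wwn="50:00:00:25:b5:01:00:00"
vendor_bytes=$(echo "$wwn" | cut -d: -f1-2)
echo "$vendor_bytes"
```

Here `vendor_bytes` comes out as `50:00`, the leading two bytes of the WWN.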
What is VSAN?
Like VLANs, VSANs provide isolation between multiple logical SANs that exist on a
common physical platform. This enables SAN administrators greater flexibility and
another layer of separation in addition to zoning.
2. To create fault and error domains on the SAN fabric, where noise,
chatter, and errors are not transmitted to all the initiators/targets
attached to the switch. Again, it’s somewhat analogous to one of the
uses of VLANs to partition very dense Ethernet switches into broadcast
domains.
How do you configure zoning in FC? What are the types of zoning
you can configure in FC?
Zoning is configured on the Fibre Channel switches via simple GUIs or CLI tools and can
be configured by (I) Port or by (II) WWN:
1. Using port-based zoning:- With port-based zoning, you zone by configuring your
Fibre Channel switch to, for example, "put port 5 and port 10 into a zone that we'll
call zone_5_10." Any device (and therefore any WWN) you physically plug into port 5
can communicate only with a device (or WWN) physically plugged into port 10.
2. Using WWN-based zoning:- With WWN-based zoning, you zone by configuring your
Fibre Channel switch to "put the WWN from this HBA and these array ports into a
zone we'll call ESXi_4_host1_CX_SPA_0." In this case, if you moved the cables, the
zones would move to the ports with the matching WWNs.
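On a Brocade-style FC switch, WWN-based zoning looks roughly like the following sketch (zone, configuration names, and WWNs are all hypothetical, and the exact syntax varies by switch vendor and firmware):

```shell
# Create a zone containing the host HBA WWN and the array port WWN.
zonecreate "ESXi_host1_CX_SPA_0", "50:00:00:25:b5:01:00:00;20:00:00:25:b5:01:00:0f"
# Add the zone to a configuration, save it, and activate it on the fabric.
cfgadd "prod_cfg", "ESXi_host1_CX_SPA_0"
cfgsave
cfgenable "prod_cfg"
```

These commands are run on the switch's CLI, not on the ESXi host.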
What is FCoE?
FCoE was designed to be interoperable and compatible with Fibre Channel. In fact, the
FCoE standard is maintained by the same T11 body as Fibre Channel. At the upper
layers of the protocol stacks, Fibre Channel and FCoE look identical. It's at the lower
levels of the stack that the protocols diverge.
In FCoE, Fibre Channel frames are encapsulated into Ethernet frames and transmitted
in a lossless manner.
What is iSCSI?
iSCSI brings the idea of a block storage SAN to customers with no Fibre Channel
infrastructure. iSCSI is an IETF standard for encapsulating SCSI control and data in
TCP/IP packets, which in turn are encapsulated in Ethernet frames. TCP retransmission
is used to handle dropped Ethernet frames or significant transmission errors. Storage
traffic can be intense relative to most LAN traffic. This makes it important that you
minimize retransmits, minimize dropped frames, and ensure that you have a
"bet-the-business" Ethernet infrastructure when using iSCSI.
What is NFS?
NFS stands for Network File System. The NFS protocol is a standard originally
developed by Sun Microsystems to enable remote systems to access a file system on
another host as if it were locally attached. vSphere implements a client compliant with
NFSv3 using TCP.
When NFS datastores are used by vSphere, no local file system (such as VMFS) is used.
The file system will be on the remote NFS server. This means that NFS datastores
need to handle the same access control and file-locking requirements that vSphere
delivers on block storage using the vSphere Virtual Machine File System, or VMFS. NFS
servers accomplish this through traditional NFS file locks.
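Mounting an NFS export as a datastore from the ESXi 5.x shell looks like the following sketch (server name, export path, and datastore name hypothetical):

```shell
# Mount an NFS export as a datastore named NFS01.
esxcli storage nfs add --host=nfs01.example.com --share=/export/vmds --volume-name=NFS01

# Verify the mount.
esxcli storage nfs list
```

These commands require a live ESXi 5.x host and a reachable NFS server.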
Network Load Balancing (NLB) clustering:- The Network Load Balancing configuration
involves an aggregation of servers that balances the requests for applications or services.
In a typical NLB cluster, all nodes are active participants in the cluster and are
consistently responding to requests for services. If one of the nodes in the NLB cluster
goes down, client connections are simply redirected to another available node in the NLB
cluster. NLB clusters are most commonly deployed as a means of providing enhanced
performance and availability.
Example- NLB on IIS server, ISA server, VPN server etc.
Windows Failover Clustering (WFC):- It is used solely for the sake of availability. Server
clusters or WFC do not provide performance enhancements outside of high availability. In
a typical server cluster, multiple nodes are configured to be able to own a service or
application resource, but only one node owns the resource at a given time. Each node
requires at least two network connections: one for the production network and one for
the cluster service heartbeat network between nodes. A common datastore is also
needed that houses the information accessible by the online active node and all the
other passive nodes. When the current active resource owner experiences a failure,
causing a loss in the heartbeat between the cluster nodes, another passive node becomes
active and assumes ownership of the resource to allow continued access with minimal
data loss.
Raw device mapping (RDM):- An RDM is a combination of direct access to a LUN, and
a normal virtual hard disk file.
An RDM can be configured in either Physical Compatibility mode or Virtual Compatibility
mode. The Physical Compatibility mode option allows the VM to have direct raw LUN
access. The Virtual Compatibility mode, however, is the hybrid configuration that allows
raw LUN access but only through a VMDK file acting as a proxy.
So, why choose one over the other? Because the RDM in Virtual Compatibility mode
uses a VMDK proxy file, it offers the advantage of allowing snapshots to be taken. By
using the Virtual Compatibility mode, you will gain the ability to use snapshots on top
of the raw LUN access in addition to any SAN-level snapshot or mirroring software.
AAM- Earlier versions of vSphere used Automated Availability Manager (AAM), which
had a number of notable limitations, like a strong dependence on name resolution and
scalability limits. In vSphere 5, AAM was replaced by the Fault Domain Manager (FDM)
agent, which can be restarted on a host with:
# /etc/init.d/vmware-fdm restart
network partition:-
"Network partition" is the term used to describe the situation in which one or more
slave hosts cannot communicate with the master even though they still have network
connectivity between themselves. In this case, vSphere HA is able to use the heartbeat
datastores to detect whether the partitioned hosts are still live and whether action
needs to be taken to protect VMs on those hosts.
network isolation:-
Network isolation is the situation in which one or more slave hosts have lost all
management network connectivity. Isolated hosts can neither communicate with the
vSphere HA master nor communicate with other ESXi hosts.
datastore heartbeating:-
In this case, the slave host uses heartbeat datastores to notify the master that it is
isolated. The slave host uses a special binary file, the host-X-poweron file, to notify
the master. The vSphere HA master can then take the appropriate action to ensure
that the VMs are protected.
In an HA cluster, ESXi hosts use heartbeats to communicate with the other hosts in
the cluster. By default, a heartbeat is sent every second.
If the master ESXi host in an HA-enabled cluster does not receive heartbeats from a
slave host in the cluster, the master assumes that the slave host may be isolated. It
then checks whether the slave host can ping its configured isolation address (the
default gateway, by default). If the ping fails, VMware HA executes the host
isolation response.
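The isolation address checked in this step can be overridden with the cluster's HA advanced options; a sketch, with the address value hypothetical:

```
das.isolationaddress0 = 192.168.1.1     (additional address to ping before declaring isolation)
das.usedefaultisolationaddress = false  (skip the default gateway check)
```

These are set under the cluster's vSphere HA advanced options in the vSphere Client.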
In VMware vSphere 5.x, a master host declares itself isolated after 5 seconds; a slave
host declares isolation after 30 seconds.
In vSphere 5.x, the master host also uses another technique, called datastore
heartbeating, to check the liveness of slave hosts in the cluster before declaring them
failed.
Datastore heartbeating is used to determine whether the slave host has failed, is in a
network partition, or is network isolated. If the slave host has stopped datastore
heartbeating, it is considered to have failed and its virtual machines are restarted
elsewhere.
vSphere HA requirements:-
Same shared storage for all hosts:-
Identical virtual networking configuration:-
Does HA use vMotion to transfer live VMs to other hosts when the source host fails?
No, because HA restarts VMs on other hosts when the source host fails. It is not a
live migration, and it involves a few minutes of downtime.
Should the cluster prevent more VMs from being powered on than it can actually
protect? That is the basis for the admission control settings and, by extension, the
admission control policy settings.
High Availability VM Monitoring:- vSphere HA has the ability to look for guest OS and
application failures. When a failure is detected, vSphere HA can restart the VM or the
specific application. The foundation for this functionality is built into the VMware Tools
which provide a series of heartbeats from the guest OS up to the ESXi host on which
that VM is running. By monitoring these heartbeats in conjunction with disk and
network I/O activity, vSphere HA can attempt to determine if the guest OS has failed.
vSphere Fault Tolerance:- vSphere Fault Tolerance (FT) is the evolution of “continuous
availability” that works by utilizing VMware vLockstep technology to keep a primary
machine and a secondary machine in a virtual lockstep. This virtual lockstep is based on
the record/playback technology. vSphere FT will stream data that will be recorded, and
then replayed. By doing it this way, VMware has created a process that matches
instruction for instruction and memory for memory to get identical results on the
secondary VM. So, the record process will take the data stream from primary VM, and
the playback will perform all the keyboard actions and mouse clicks on the secondary
VM.
Cluster level
Same FT version or build number on at least 2 hosts:-
HA must be enabled:-
VMware EVC must be enabled:-
ESXi host level
vSphere FT compatible CPUs:-
Hosts must be licensed for vSphere FT.
Hardware Virtualization (HV) must be enabled:-
Access to the same datastores:-
vSphere FT logging network with at least Gigabit Ethernet connectivity:-
VM level
VMs with a single vCPU:-
Supported guest OS's:-
VM files on shared storage:-
Thick-provisioned (eagerzeroedthick) disks or virtual mode RDMs:-
No VM snapshots:-
No NIC passthrough or the older vlance NIC driver:-
No Paravirtualized kernel:-
No USB devices, sound devices, serial ports, or parallel ports:-
No mapped CD-ROM or floppy devices:-
No N_Port ID Virtualization:-
No Nested page tables/extended page tables (NPT/EPT):-
Not a linked clone VM:-
Operational changes or recommendations for FT:-
Power management must be turned off in the host BIOS:-
No sVmotion or sDRS for vSphere FT:-
No Hot-plugging devices:-
No Hardware Changes:- No Hardware Changes Includes No Network Changes.
What are the basic troubleshooting steps when the HA agent install fails on hosts in an
HA cluster?
Ex- 8182 (TCP/UDP, inbound/outbound): traffic between hosts for vSphere High
Availability (vSphere HA)
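Whether that port's ruleset is open can be checked from the ESXi 5.x shell (the ruleset covering HA agent traffic is named fdm on 5.x):

```shell
# Show the firewall ruleset for the vSphere HA (FDM) agent.
esxcli network firewall ruleset list | grep -i fdm
```

This command requires a live ESXi 5.x host.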
4. Troubleshoot FDM:-
A.> Verify that all the configuration files of the FDM agent were pushed successfully
from the vCenter Server to your ESXi host:
Location: /etc/opt/vmware/fdm
File Names:
clusterconfig (cluster configuration),
compatlist (host compatibility list for virtual machines),
hostlist (host membership list), and
fdm.cfg.
5. Check that network settings (port groups, switch configuration, etc.) are properly
configured and named exactly as on the other hosts in the cluster.
6. First, try to stop/start or restart the VMware HA agent on the affected host using
the commands below. In addition, you can also try to restart vpxa and the
management agents on the host.
# /etc/init.d/vmware-fdm stop
# /etc/init.d/vmware-fdm start
# /etc/init.d/vmware-fdm restart
7. Right-click the affected host and click "Reconfigure for VMware HA" to reinstall
the HA agent on that particular host.
8. Remove the affected host from the cluster. Removing an ESXi host from the cluster
will not be allowed until that host is put into maintenance mode.
An alternative for step 3 is: go to the cluster settings and uncheck vSphere HA to
turn off HA for that cluster, then re-enable vSphere HA to get the agent reinstalled.