
What is VMkernel?

VMkernel is the core operating system of ESXi. It provides the means for running different processes on the system, including management applications and agents as well as virtual machines. It also controls all the hardware devices on the server and manages resources for the applications.

What are the User World processes?

The term “user world” refers to the set of processes running on top of the VMkernel operating system.

Examples of these processes are hostd, vpxa, the VMware HA agent, the syslog daemon, NTP and SNMP.

HOSTD: The process that authenticates users and keeps track of which users and groups have which privileges. It also allows you to create and manage local users. The hostd process provides a programmatic interface to the VMkernel and is used by direct VI Client connections as well as the VI API.

VPXA: The vpxa process is the agent used to connect to the VirtualCenter (vCenter) server. It runs as a special system user called vpxuser and acts as the intermediary between the hostd agent and the vCenter server.

What is the difference between HOSTD, VPXA and VPXD?

Hostd is a daemon (service) that runs on every ESXi host and performs major tasks such as VM power-on, local user management, etc. When an ESXi host joins a vCenter server, the vpxa agent is activated and talks to the vpxd service, which runs on the vCenter server. In short, vpxd talks to vpxa and vpxa talks to hostd; this is how the vpxd (vCenter daemon) communicates with the hostd daemon via the vpxa agent.
What initial tasks can be done by DCUI?
->Set administrative password
->Configure networking
->Perform simple network tests
->View logs
->Restart agents
->Restore defaults

Describe VMware System Image Design?


The ESXi system has two independent banks of memory, each of which
stores a full system image, as a fail-safe for applying updates. When you
upgrade the system, the new version is loaded into the inactive bank of
memory, and the system is set to use the updated bank when it reboots.
If any problem is detected during the boot process, the system
automatically boots from the previously used bank of memory. You can
also intervene manually at boot time to choose which image to use for
that boot, so you can back out of an update if necessary.
At any given time, there are typically two versions of VI Client and two
versions of VMware Tools in the store partition.
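On a running host you can see the two image banks as directories; a quick check (paths as on a standard ESXi 5.x install) is:

ls /bootbank /altbootbank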

What will happen if vCenter server fails?


1:-DRS will not work.
2:-HA works, but we can’t make HA-related changes. Newly powered-on VMs will not be protected.
3:-FT works for one failure, but a new secondary VM will not be created.
4:-dvSwitch works, but we can’t make changes.
5:-vMotion and Storage vMotion will not work.
6:-vSwitch continues to work; you can even make changes to a vSwitch.
7:-You will not be able to deploy VMs from templates.
8:-You cannot clone VMs.
9:-You can take a snapshot of a VM by connecting to the host using the vSphere client.
10:-Running VMs continue to work.

Where to find different VMWare ESXi Host log files?


/var/log/hostd.log: Host management service including virtual machine, host
task and events, communication with the vSphere client and vCenter
server Vpxa agent and SDK connections.
/var/log/hostd-probe.log: Host management service responsive checker.
/var/log/vmkernel.log: core VMkernel logs, including device discovery,
storage and networking device, driver events and virtual machine startup.
/var/log/vmkwarning.log: A summary of warnings and alert log message
excerpted from the VMkernel logs.
/var/log/vmksummary.log: A summary of Esxi host startup, shutdown and
an hourly heartbeat with uptime, number of virtual machines and service
resource consumption.
When an ESXi 5.1 / 5.5 host is managed by vCenter Server 5.1 or 5.5, two components are installed, each with its own logs:
/var/log/vpxa.log: vCenter Server vpxa agent logs, including communication with vCenter
Server and the Host Management hostd agent.
/var/log/fdm.log: vSphere High Availability logs, produced by the fdm service.
/root/vmkernel-log.<date>: virtual machine kernel core file.
/var/log/shell.log: ESXi Shell usage logs, including enable/disable and every command
entered.
Check the VM log:
/vmfs/volumes/<datastore>/<vm name>/vmware.log
/var/log/sysboot.log: Early VMkernel startup and module loading.
/var/log/boot.gz: A compressed file that contains boot log information; it can be read using zcat /var/log/boot.gz | more.

/var/log/syslog.log: Management service initialization, watchdogs, scheduled tasks and DCUI use.
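These logs can be inspected directly over SSH or the ESXi Shell with the usual tools; for example:

tail -f /var/log/vmkernel.log        # follow VMkernel events live
grep -i scsi /var/log/vmkernel.log   # search for storage-related entries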
How do you troubleshoot the following ESXi host problems?

->An ESXi/ESX host shows as Not Responding in VirtualCenter or vCenter Server

->An ESXi/ESX host shows as Disconnected in vCenter Server

->Cannot connect an ESXi/ESX host to vCenter Server

->Virtual machines on an ESXi/ESX host show as grayed out in vCenter Server

->When attempting to add an ESXi/ESX host to vCenter Server, you see an error
similar to:

Unable to access the specified host, either it doesn't exist, the server software is not
responding, or there is a network problem

ESXi

1. Verify that the ESXi host is in a powered ON state.


2. Verify that the ESXi host can be re-connected, or if reconnecting the ESXi host
resolves the issue.
3. Verify that the ESXi host is able to respond back to vCenter Server at the correct IP
address. If vCenter Server does not receive heartbeats from the ESXi host, it goes into
a not responding state.
4. Verify that network connectivity exists from vCenter Server to the ESXi host with the
IP and FQDN.
5. Verify that you can connect from vCenter Server to the ESXi host on TCP/UDP port
902
6. Verify if restarting the ESXi Management Agents resolves the issue.

7. Verify if the hostd process has stopped responding on the affected ESXi host.
8. Verify if the vpxa agent has stopped responding on the affected ESXi host.
9. Verify if the ESXi host has experienced a Purple Diagnostic Screen.
10. ESXi hosts can also disconnect from vCenter Server due to underlying storage issues.
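A few shell commands are useful for steps 3 to 8 above; this assumes SSH access to the affected host (replace <vcenter-ip> with your vCenter Server address):

vmkping <vcenter-ip>          # test VMkernel network connectivity to vCenter
/etc/init.d/hostd status      # check whether the hostd process is running
/etc/init.d/vpxa status       # check whether the vpxa agent is running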

Remember this while troubleshooting: -

From the Direct Console User Interface (DCUI):


1. Connect to the console of your ESXi host.
2. Press F2 to customize the system.
3. Log in as root.
4. Use the Up/Down arrows to navigate to Restart Management Agents.

Note: In ESXi 4.1 and ESXi 5.0, 5.1, 5.5 and 6.0 this option is available under
Troubleshooting Options.

5. Press Enter.
6. Press F11 to restart the services.
7. When the service restarts, press Enter.
8. Press Esc to log out of the system.
From the Local Console or SSH:
1. Log in to SSH or the local console as root.
2. Run these commands:

/etc/init.d/hostd restart
/etc/init.d/vpxa restart
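Alternatively, all management agents can be restarted at once:

/sbin/services.sh restart

Note that this restarts every management service on the host, so expect a brief loss of manageability; running VMs are not affected.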

How to collect core dump file from Esxi 5.x environment?


During startup of an ESXi host, the startup script /usr/lib/vmware/vmksummary/log-bootstop.sh checks the defined dump partition for new contents. If new content is found, an entry is written to the /var/log/vmksummary.log file citing bootstop: core dump found. You can collect logs from an ESXi host either by running vm-support at the command line or by exporting diagnostic data from the vSphere client. Both methods invoke the vm-support script, which checks the defined dump partition for new contents. If new content is found, it is temporarily placed in a vmkernel-zdump file in /var/core before being compressed into the vm-support output.
We can determine the device identifiers for the core partitions by running the command:
esxcfg-dumppart -t
->Change to a directory with sufficient space to store the core dump file.
Ex: cd /vmfs/volumes/<Datastore Name>/
Note: A core dump file will normally be between 100 and 300 MB in size. Be sure to select a location with sufficient free space.
->Dump the partition contents to a file by running the following command:
esxcfg-dumppart --copy --devname "/vmfs/devices/disks/identifier" --zdumpname /vmfs/volumes/datastore/filename
Ex: esxcfg-dumppart --copy --devname "/vmfs/devices/disks/mpx.vmhba2:C0:T0:L0:2" --zdumpname /vmfs/volumes/datastore1/filename.1
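On ESXi 5.x the configured dump partition can also be checked with esxcli; for example:

esxcli system coredump partition get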
VMware P2V conversion checklist using VMware converter?
1. Disable UAC (User Account Control) for Windows.
2.To eliminate DNS problems, use IP addresses instead of host names.
3. Do not choose vendor-specific diagnostic partitions during conversion.
4.Convert directly to an ESXi host instead of vCenter Server.
5.Make sure there is at least 500MB of free space on the machine being converted.
6.Shut down any unnecessary services, such as SQL, antivirus programs, and firewalls.
7.Run a check disk on the volumes before running a conversion.
8. Ensure that these services are enabled:-
->Workstation Service.
->Server Service.
->TCP/IP NetBIOS Helper Service.
->Volume Shadow Copy Service.
9.Check that the appropriate firewall ports are opened
10.Ensure that you are not using GPT on the disk partition
11.In Windows XP, disable Windows Simple File Sharing.
12.Unplug any USB, serial/parallel port devices from the source system.
13. Check NICs that are statically configured to a specific speed or duplex.
14. If the source server contains a hard drive or partition larger than 256GB, ensure
that the destination datastores block size is 2MB, 4MB, or 8MB, and not the default
1MB size. The 1MB default block size cannot accommodate a file larger than 256GB.
15.Clear any third-party software from the physical machine that could be using the
Volume Shadow Copy Service (VSS).
16.Disable mirrored or striped volumes.

How vSphere 5.x Differs from vSphere 4.x?


vSphere 5.x is a major upgrade from vSphere 4.x.

The following changes from vSphere 4.x affect vSphere installation and setup.

->Service Console is removed.
->ESXi does not have a graphical installer.
->vSphere Auto Deploy and vSphere ESXi Image Builder CLI.
->Installer caching.
->Changes to partitioning of ESXi host disks.
->VMware vCenter Server Appliance.
->vSphere Web Client.
->vCenter Single Sign-On (SSO).

Introducing VMware ESXi 5.x:

vRAM: The virtual RAM assigned to a VM, backed by a partition of the host's physical memory.
vRAM entitlement: The amount of licensed or permitted vRAM that can be assigned to your VMs.
vRAM entitlement pooling: vRAM entitlements that aren't being used by one ESXi host can be used on another host, as long as the total vRAM across the entire pool falls below the licensed limit.

Which products are licensed features within the VMware vSphere suite?
Licensed features in the VMware vSphere suite are Virtual SMP, vMotion, Storage vMotion, vSphere DRS, Storage DRS, vSphere HA and vSphere FT.
Which two features of VMware ESXi and VMware vCenter Server
together aim to reduce or eliminate downtime due to unplanned
hardware failures?
vSphere HA and vSphere FT are designed to reduce and eliminate downtime resulting
from unplanned hardware failures.
Name three features that are supported only when using vCenter
Server along with ESXi?
All of the following features are available only with vCenter Server: vSphere vMotion, Storage vMotion, vSphere DRS, Storage DRS, vSphere HA, vSphere FT, SIOC (Storage I/O Control), NetIOC (Network I/O Control), and vDS (vSphere Distributed Switch).
Name two features that are supported without vCenter Server but
with a licensed installation of ESXi?
Features that are supported by VMware ESXi without vCenter Server include core
virtualization features like virtualized networking, virtualized storage, vSphere vSMP,
and resource allocation controls.
How vSphere differs from other virtualization products?
VMware vSphere’s hypervisor, ESXi, uses a type 1 bare-metal hypervisor that handles
I/O directly within the hypervisor itself. This means that a host operating system, like
Windows or Linux, is not required in order for ESXi to function. Although other
virtualization solutions are listed as “type 1 bare-metal hypervisors,” most other type 1
hypervisors on the market today require the presence of a “parent partition” through
which all VM I/O must travel.
What are the editions available for VMware vSphere?
VMware offers three editions of vSphere:
->vSphere Standard Edition
->vSphere Enterprise Edition
->vSphere Enterprise Plus Edition
What are the editions available for VMware vCenter server?
->vCenter Foundation:-It supports the management of up to three (3) vSphere hosts.
-> vCenter Essentials kits :- Bundled with the vSphere Essentials kits
->vCenter Standard:-It includes all functionality and does not have a preset limit on the number of vSphere hosts it can manage, although normal sizing limits do apply. vCenter Orchestrator is only included in the Standard edition of vCenter Server.

VMware Important Ports:

TCP/UDP 902 - Default port used by the vCenter Server system to send data to managed hosts. Managed hosts also send a regular heartbeat over UDP port 902 to the vCenter Server system.
TCP 8000 - Requests from vMotion.
TCP 8100, 8200 - Traffic between ESXi hosts for vSphere Fault Tolerance (FT).
TCP 8182 - Traffic between ESXi hosts for vSphere High Availability (HA).
TCP 7005, 7009 - vCenter Single Sign-On.
TCP 9443 - vSphere Web Client HTTPS.
TCP 9090 - vSphere Web Client HTTP.
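On the host itself you can check which ports the ESXi firewall allows by listing its rulesets from the shell:

esxcli network firewall ruleset list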

What is a Virtual Machine?

A virtual machine is a software computer that, like a physical computer, runs an operating system and applications. It consists of a set of specification and configuration files and is backed by the physical resources of a host.
What are the virtual machine files?
.VMX - Configuration file
.VMXF - Additional configuration file
.VMTX - Template file
.NVRAM - BIOS/EFI configuration
.VSWP - Swap file
.LOG - Current log file
##.LOG - Old log files
.VMDK - Virtual disk descriptor file
-FLAT.VMDK - Data disk
-RDM.VMDK - Raw device mapping file
-DELTA.VMDK - Snapshot disk
.VMSD - Snapshot description file
.VMSN - Snapshot state file
.VMSS - Suspend file
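You can see most of these files by listing the VM's directory from the ESXi Shell; for example (datastore and VM names are placeholders):

ls -lh /vmfs/volumes/datastore1/myvm/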

What are the virtual NIC types in VMware vSphere 5?


E1000 - Emulated Intel Gigabit Ethernet adapter
Vlance / Flexible - Emulated AMD PCnet32 10 Mbps NIC, for 32-bit guest OSes
VMXNET2 - Jumbo frames, hardware offloads
VMXNET3 - Latest paravirtualized driver; multiqueue support, IPv6 offloads, and MSI/MSI-X interrupt delivery
What are the Virtual Machine SCSI Controllers supported in windows?
->BusLogic Parallel - Windows 2000
->LSI Logic Parallel - Windows Server 2003 and Windows Server 2003 R2
->LSI Logic SAS - Windows Server 2008/R2 and Windows Server 2012/R2
What is the disk provisioning option available when creating VM?
->Thick Provision Lazy Zeroed

->Thick Provision Eager Zeroed

-> Thin Provision


Thick Provision Lazy Zeroed:- All space allocated at creation but not pre-zeroed. A 40
GB virtual disk means a 40 GB VMDK flat file.

Thick Provision Eager Zeroed:- All space allocated at creation and pre-zeroed. Required
in Fault Tolerance (FT). A 40 GB virtual disk means a 40 GB VMDK flat file.
Thin Provision:- Space allocated on demand. The VMDK flat file will grow depending on
the amount of data actually stored in it, up to the maximum size specified for the
virtual hard disk.
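From the ESXi Shell, vmkfstools can create a virtual disk in each of these formats; a quick sketch (path and size are placeholders):

vmkfstools -c 40G -d zeroedthick /vmfs/volumes/datastore1/myvm/lazy.vmdk
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/eager.vmdk
vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/myvm/thin.vmdk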
What are the ways a VM can handle optical media file?
->Client Devices attached CD/DVD
->Host Devices attached CD/DVD
->Datastore ISO File
What is the maximum number of vCPUs available in VM hardware versions 7, 8, 9 and 10?
VM hardware 10 - ESXi 5.5.x - 64
VM hardware 9 - ESXi 5.1.x - 64
VM hardware 8 - ESXi 5.0.x - 32
VM hardware 7 - ESXi 4.x - 8
What are the advantages of installing VMware tools?
-> Optimized SCSI driver
-> Enhanced video and mouse drivers
-> VM heartbeat (HA)
-> VM quiescing for snapshots and backups
-> Enhanced memory management (Memory Ballooning etc.)
-> VM console focus: you can move into and out of VM consoles easily without pressing Ctrl+Alt.
Where are VMware tools ‘ISO’ found?
On an ESXi host, check the /vmimages/tools-isoimages directory using the ESXi shell.
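For example, over SSH:

ls /vmimages/tools-isoimages/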
What is a Virtual Machine Snapshot?
A snapshot is the ability to create a point-in-time checkpoint of a VM. Snapshots preserve the captured state of a VM at a specific point in time so that we can revert back to it.
Why is the snapshot VMDK file known as a delta disk?
Every time you take a snapshot a new .vmdk file is created, and when you add data to the VM it is this new .vmdk file, not the original one, that grows. It is called a 'delta disk' or 'differencing disk' because it does not store the entire hard disk of the VM; it stores only the changes made after the snapshot was created.
What are common issues that cause snapshots to fail?
If you configure mapped LUNs in physical mode, you need to remove them to take a snapshot; physical mode RDMs are not supported with snapshots.
Snapshots of powered-on or suspended virtual machines with independent disks are also not supported.
What are the settings that are taken into consideration when we initiate a snapshot?
VM configuration (what hardware is attached to it)
State of the VM's hard disk files (to revert back if needed)
State of the VM's memory (if it is powered on)
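Snapshots can also be created from the ESXi Shell with vim-cmd; a sketch (the VM ID and snapshot name are placeholders, and the final two arguments select include-memory and quiesce):

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.create <vmid> "before-patching" "taken prior to OS patching" 1 0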

vCenter Server:
What is vCenter server?
vCenter Server is a Windows- or Linux-based application that serves as a centralized management tool for ESXi hosts and their respective VMs in a VMware vSphere infrastructure.
Key features such as vMotion, Storage vMotion, vSphere DRS, vSphere HA, and
vSphere FT are all enabled and made possible by vCenter Server.
vCenter Server also provides scalable authentication service and role-based administration
based on integration with Active Directory.
vCenter server core services:
vCenter Server offers core services in the following areas:
->VM deployment
->VM management
->ESXi host management
->Resource management for ESXi hosts and VMs
->Template management
->Scheduled tasks
->Statistics and logging
->Alarms and event management
vCenter Server Heartbeat: A product available from VMware. Using vCenter
Server Heartbeat will automate both the process of keeping the active and passive
vCenter Server instances synchronized and the process of failing over from one to
another (and back again).
Can a local user defined on an ESXi host connect to vCenter Server using the vSphere Client?
Although the vSphere Client supports authentication of both vCenter Server and ESXi
hosts, organizations should use a consistent method for provisioning user accounts to
manage their vSphere infrastructure because local user accounts created on an ESXi host
are not reconciled or synchronized with the Windows or Active Directory accounts that
vCenter Server uses.
Which version of vCenter Server will you use? What are the advantages and disadvantages of each vCenter Server edition?
In vSphere 5, vCenter Server comes not only as a Windows-based application but also as a SuSE Linux-based virtual appliance. There are advantages and disadvantages to each version in the following areas:-

1.Preloaded additional services like Auto Deploy, DHCP, TFTP, Syslog


2. Administrator's platform familiarity
3.Using Microsoft SQL Server for backend database
4.Using vCenter server in Linked Mode
5.IPv6 Support
6.Running vCenter Server on a physical system
7.Using vCenter Heartbeat
1.Preloaded additional services like Auto Deploy, DHCP, TFTP, Syslog:-
The Linux-based virtual appliance comes preloaded with additional services like Auto
Deploy ,Dynamic Host Configuration Protocol (DHCP), Trivial File Transfer Protocol
(TFTP), and Syslog. If you need these services on your network, you can provide these
services with a single deployment of the vCenter virtual appliance.
With the Windows Server–based version, these services are separate installations or
possibly even require separate VMs (or, worse yet, separate physical servers!).
2. Administrator's platform familiarity:-
If your experience is primarily with Windows Server, the Linux underpinnings of the
vCenter virtual appliance are something with which you may not be familiar. This
introduces a learning curve that you should consider.
Conversely, if your experience is primarily with Linux, then deploying a Windows Server–
based application will require some learning and acclimation for you and/or your staff.
3.Using Microsoft SQL Server for backend database:-
If you need support for Microsoft SQL Server, the Linux-based vCenter virtual
appliance won’t work; you’ll need to deploy the Windows Server–based version of
vCenter Server. However, if you are using Oracle or DB2, or if you are a small
installation without a separate database server, the vCenter Server virtual appliance will
work just fine (it has its own embedded database if you don’t have or don’t need a
separate database server).
4.Using vCenter server in Linked Mode:-
If you need to use linked mode, you must deploy the Windows Server–based version of
vCenter Server. The vCenter Server virtual appliance does not support linked mode.
5.IPv6 Support:-
If you need support for IPv6, the vCenter Server virtual appliance does not provide
that support; you must deploy the Windows Server–based version.
6.Running vCenter Server on a physical system:-
Because the vCenter Server virtual appliance naturally runs only as a VM, you are
constrained to that particular design decision. If you want or need to run vCenter
Server on a physical system, you cannot use the vCenter Server virtual appliance.
7.Using vCenter Heartbeat:-
If you want to use vCenter Heartbeat to protect vCenter Server from downtime, you’ll
need to use the Windows Server–based version of vCenter Server.
What are the databases supported by vCenter server?

Although vCenter Server is the application that performs the management of your ESXi
hosts and VMs, vCenter Server uses a database for storing all of its configuration,
permissions, statistics, and other data.
vCenter server supports following databases:-
->IBM DB2 9.5, 9.7
->Oracle 10g R2, 11g R1, 11g R2
->Microsoft SQL Server 2008 R2 Express (bundled with vCenter Server)
->Microsoft SQL Server 2005, 2008
->Microsoft SQL Server 2008 R2
vCenter server Linked Mode Group?
Multiple instances of vCenter Server that share information among them.
In what situation do you need a separate database server for vCenter?
The bundled SQL Server Express database is suitable only for small deployments [a single (1) vCenter Server with fewer than five (5) ESXi hosts or fewer than 50 VMs]; larger environments need a separate database server.
What are the services installed to facilitate the operation of vCenter
Server?
1. vCenter Inventory Service.
2. VMware vCenter Orchestrator Configuration (supports the Orchestrator workflow engine).
3. VMware VirtualCenter Management Web services.
4. VMware VirtualCenter Server is the core of vCenter Server and provides centralized
management of ESX/ESXi hosts and VMs.
5. VMware vSphere Profile-Driven Storage Service.
6. VMwareVCMSDS is the Microsoft ADAM instance that supports multiple vCenter
Server instances in a linked mode group and is used for storing roles and permissions.
Note that ADAM is used for storing roles and permissions both in stand-alone
installations as well as installations with a linked mode group.
What are the limitations of Using SQL Server 2008 Express Edition?

SQL Server 2008 Express Edition is the minimum database available as a backend to
the Windows Server–based version of vCenter Server.

Microsoft SQL Server 2008 Express Edition has physical limitations that include the
following:

-> One CPU maximum


-> 1 GB maximum of addressable RAM
-> 4 GB database maximum

How do you protect vCenter server and make it highly available?


First> vCenter Server Heartbeat:-
Second> Standby physical vCenter server:-
Third> keep the standby vCenter Server system as a VM:-
First> vCenter Server Heartbeat:-
A product available from VMware. Using vCenter Server Heartbeat will automate both
the process of keeping the active and passive vCenter Server instances synchronized and
the process of failing over from one to another (and back again).
Second> Standby physical vCenter server:-
If the vCenter Server computer is a physical server, one way to provide availability is to
create a standby vCenter Server system that you can turn on in the event of a failure
of the online vCenter Server computer. After failure, you bring the standby server
online and attach it to the existing SQL Server database, and then the hosts can be
added to the new vCenter Server computer. In this approach, you’ll need to find
mechanisms to keep the primary and secondary/standby vCenter Server systems
synchronized.
Third> keep the standby vCenter Server system as a VM:-
A variation on that approach is to keep the standby vCenter Server system as a VM.
You can use physical-to-virtual (P2V) conversion tools to regularly “back up” the
physical vCenter Server instance to a standby VM. This method reduces the amount of
physical hardware required and leverages the P2V process as a way of keeping the two
vCenter Servers synchronized. Obviously, this sort of approach is viable for a Windows
Server–based installation on a physical system but not applicable to the virtual appliance
version of vCenter Server.
How to protect “vCenter” backend database server?
1.Database Cluster:-
2.SQL log shipping to create a database replica on separate system:-
3.Daily backup strategy which includes full, differential and transaction log backup:-

Protecting Backend database server-


1. Database Cluster:- The heart of the vCenter Server content is stored in a backend
database. Any good disaster-recovery or business-continuity plan must also include
instructions on how to handle data loss or corruption in the backend database, and the
separate database server (if running on a separate physical computer or in a separate
VM) should be designed and deployed in a resilient and highly available fashion. This is
especially true in larger environments. You can configure the backend database on a
cluster.
2.SQL log shipping to create a database replica:-
Other options might include using SQL log shipping to create a database replica on a
separate system.
3.Daily backup strategy which includes full, differential and transaction log backup:-
You should strengthen your database backup strategy to support easy recovery in the
event of data loss or corruption. Using the native SQL Server tools, you can create a
backup strategy that combines full, differential, and transaction log backups. This
strategy allows you to restore data up to the minute when the loss or corruption
occurred.
What is "Simple Recovery" model and what is "Full Recovery" model?
Simple recovery-delete transaction logs.
Full recovery-keeps transaction logs for full database recovery
If your SQL Server database is configured for the Full recovery model, the installer
suggests reconfiguring the vCenter Server database into the Simple recovery model.
What the warning does not tell you is that doing this means that you will lose the
ability to back up transaction logs for the vCenter Server database. If you leave the
database set to Full recovery, be sure to work with the database administrator to
routinely back up and truncate the transaction logs. By having transaction log backups
from a database in Full recovery, you have the option to restore to an exact point in time if any type of data corruption occurs. If you alter the recovery model as
suggested, be sure you are making consistent full backups of the database, but
understand that you will be able to recover only to the point of the last full backup
because transaction logs will be unavailable.
Do we need IIS on vCenter server?

Despite the fact that vCenter Server is accessible via a web browser, it is not necessary
to install Internet Information Services on the vCenter Server computer. vCenter Server
is accessed via a browser that relies on the Apache Tomcat web service and that is
installed as part of the vCenter Server installation. IIS should be uninstalled because it
can cause conflicts with Apache Tomcat.
What are the memory requirements of vCenter Server?

Host profile:- Host profile is a collection of all the various configuration settings available
for an ESXi host. By attaching a host profile to an ESXi host, you can (i) compare the
compliance or non-compliance of that host with the settings outlined in the host
profile. It provides administrators with a way to not only to verify consistent settings
across all the ESXi hosts but also to (ii) quickly and easily apply settings to new ESXi
hosts.
What is SSO:- Single Sign On is an authentication and identity management service. It
allows administrators and the various vSphere software components to communicate
with each other through a secure token exchange mechanism, instead of requiring each
component to authenticate a user separately with a directory service like Active
Directory.

VMware Lookup Service:- The vCenter Sign-On installer also deploys the "VMware Lookup
Service" on the same address and port. This Lookup Service enables different
components of vSphere to find one another in a secure way.

In Details:-

What is vCenter server Linked Mode Group?

Multiple instances of vCenter Server that share information among them are referred to
as a "linked mode group".
If you need more ESXi hosts or more VMs than a single vCenter Server
instance can handle, or if for whatever other reason you need more than one instance
of vCenter Server, you can install multiple instances of vCenter Server and have those
instances share inventory and configuration information for a centralized view of all the
virtualized resources across the enterprise.
In a linked mode environment, there are multiple vCenter Server instances, and each of
the instances has its own set of hosts, clusters, and VMs. However, when a user logs
into a vCenter Server instance using the vSphere Client, that user sees all the vCenter
Server instances where he or she has permissions assigned. This allows a user to perform
actions on any ESXi host managed by any vCenter Server within the linked mode group.
vCenter Server linked mode uses Microsoft ADAM to replicate information between the
instances. The replicated information includes the following:
1 Connection information (IP addresses and ports)
2 Certificates and thumbprints
3 Licensing information
4 User roles and permissions
In a linked mode environment, the vSphere Client shows all the vCenter Server instances
for which a user has permission

What are the prerequisites of installing vCenter Server in a linked mode group?

Before you install additional vCenter Server instances, you must verify the following prerequisites:-
1. Linked mode servers must be members of the same domain or a trusted domain
2. The DNS name must match the vCenter server's server name
3. Linked mode servers cannot be domain controllers or terminal servers
4. Cannot be combined with earlier versions of vCenter Server
5. Each must have its own backend database
1. Member of same domain or a trusted domain:-All computers that will run vCenter
Server in a linked mode group must be members of a domain. The servers can exist in
different domains only if a two-way trust relationship exists between the domains.
2. DNS name must match with the vCenter server name:- DNS must be operational.
Also, the DNS name of the servers must match the server name.
3 Cannot be DC or terminal server:-The servers that will run vCenter Server cannot be
domain controllers or terminal servers.
4 Cannot combine with earlier versions:- You cannot combine vCenter Server 5 instances
in a linked mode group with earlier versions of vCenter Server.
5 Must have its own backend database:- Each vCenter Server instance must have its
own backend database, and each database must be configured as outlined earlier with
the correct permissions. The databases can all reside on the same database server, or
each database can reside on its own database server.

How do you modify vCenter server linked mode configuration?


1. Log into the vCenter Server computer as an administrative user, and run vCenter
Server Linked Mode Configuration from the Start=>All Programs=>VMware Menu.
2. Click Next at the Welcome To The Installation wizard For VMware vCenter Server
screen.

3. Select Modify Linked Mode Configuration, and click Next.


What is host profile?
A host profile is essentially a collection of all the various configuration settings for an
ESXi host. This includes settings such as NIC assignments, virtual switches, storage
configuration, date and time, and more. By attaching a host profile to an ESXi host,
you can then compare the compliance of that host with the settings outlined in the
host profile. If the host is compliant, then you know its settings are the same as the
settings in the host profile. If the host is not compliant, then you can enforce the
settings in the host profile to make it compliant. This provides administrators with a
way not only to verify consistent settings across ESXi hosts but also to quickly and
easily apply settings to new ESXi hosts.
To create a new profile, you must either create one from an existing host or import a
profile that was already created somewhere else. Creating a new profile from an existing
host requires only that you select the reference host for the new profile. vCenter
Server will then compile the host profile based on that host’s configuration.
Host profiles don’t do anything until they are attached to ESXi hosts. So attach the
host profile to the new ESXi host. Then Check Compliance with the host. If an ESXi
host is found noncompliant with the settings in a host profile, you can then place the
host in maintenance mode and apply the host profile. When you apply the host profile,
the settings found in the host profile are enforced on that ESXi host to bring it into
compliance.
What are the configuration requirements of using SQL server as a
backend database of vCenter server?

Connecting vCenter Server to a Microsoft SQL Server database, like the Oracle
implementation, requires a few specific configuration tasks, as follows
1 Both Windows and mixed mode authentication are supported
2 A new database for each vCenter Server
3 SQL login that has full access to the database
4 Appropriate permissions by mapping the SQL login to the dbo user
5 SQL login must also be set as the owner of the database
6 Must also have dbo (db_owner) privileges on the MSDB database when installing

Your manager has asked you to prepare an overview of the virtualized


environment. What tools in vCenter Server will help you in this task?

vCenter Server can export topology maps in a variety of graphics formats. The topology
maps, coupled with the data found on the Storage Views, Hardware Status, and
Summary tabs should provide enough information for your manager

What is SSO? what are its role in vCenter server?

The vCenter Single Sign On is an authentication and identity management service which
makes the VMware cloud infrastructure platform more secure. It allows administrators
and the various vSphere software components to communicate with each other through
a secure token exchange mechanism, instead of requiring each component to authenticate
a user separately with a directory service like Active Directory.
Roles:-
For the first installation of vCenter Server with vCenter Single Sign-On, you must
install all three components, Single Sign-On Server, Inventory Service, and vCenter
Server, in the vSphere environment. In subsequent installations of vCenter Server in
your environment, you do not need to install Single Sign-On. One Single Sign-On server
can serve your entire vSphere environment. After you install vCenter Single Sign-On
once, you can connect all new vCenter Server instances to the same authentication
server. However, you must install an Inventory Service instance for each vCenter Server
instance.
The vCenter Sign-On installer also deploys the VMware Lookup Service on the same
address and port. The Lookup Service enables different components of vSphere to find
one another in a secure way. When you install vCenter Server components after vCenter
Single Sign-On, you must provide the Lookup Service URL. The Inventory Service and
the vCenter Server installers ask for the Lookup Service URL and then contact the
Lookup Service to find vCenter Single Sign-On. After installation, the Inventory Service
and vCenter Server are registered in Lookup Service so other vSphere components, like
the vSphere Web Client, can find them.
What is cloning?
A clone is an exact copy of an existing virtual machine.

What is a Snapshot?
A snapshot is a point-in-time checkpoint of a VM, with all its configuration settings, disk states and memory contents. We can return to that point-in-time checkpoint (image state) at any time.
Is cloning a running VM possible? Yes.
Linked Clone: A linked clone is a copy of a virtual machine that shares virtual disks with the parent virtual machine in an ongoing manner. This conserves disk space and allows multiple virtual machines to use the same software installation. It is made from a snapshot of the parent VM. It needs continuous access to its parent VM and is disabled if access to the parent fails. Linked clones also have lower performance than a full clone.

What is a customization specification? While cloning a VM you can also customize it, ensuring that each VM is unique. This VM customization information can be saved and reused over and over again.

What is Sysprep? This is a native Windows tool that can be used with VMware as well. The purpose of the tool is to allow a single Windows installation to be cloned many times over, each time with a unique identity (new computer name, new IP address, and new Security Identifier (SID)). From Windows Server 2008 onward, Sysprep is built into Windows, so VMware does not need the tool to be staged separately.

How to customize a Windows VM clone? Copy the Sysprep tool to a specific directory on the vCenter server so that the customization wizard can take advantage of it. From Windows Vista onward this tool is built into Windows.

Ex: C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep

How to customize Unix or Linux VMs? vCenter has built-in customization tools for these guests because the software is open source.
What is a Template? By creating templates, you ensure that your VM’s master image
doesn’t get accidentally changed or modified. It is because you can’t start the template
as a VM and can’t make any changes to it.

What is “Clone to Template”? Copies the source or initial VM to a template format, leaving the original VM intact.

What is “Convert to Template”? This process takes the source or initial VM and changes it to a template format, removing the ability to power on the VM without converting it back to a VM format.

Why is the “Convert To Template” command grayed out? Because the VM is currently powered on.

Is customizing a VM template possible? No; templates can’t be powered on, and guest OS customization requires that the VM be powered on.

Does cloning from a template work between two datacenters? Yes.

What is an OVF Template? Open Virtualization Format (OVF) is a standard format for describing the configuration of a VM along with the data in the VM’s virtual hard disk.

What is the “Transient” IP allocation policy while deploying OVF? With the Transient policy, IP addresses are allocated automatically from a configured IP pool when the VM is powered on and released when it is powered off.

What are the OVF formats available?

->Folder of files (OVF)
->Single file (OVA)

The Folder of Files format: This format keeps the integrity-checking manifest (.MF) file, the structural definition (.OVF) file, and the virtual hard disk (.VMDK) file as separate files in a folder.
Single File (OVA) format: This format combines those separate components (.MF, .OVF, .VMDK) into a single file.

What are the different files that make up an OVF template?

.mf or manifest file: contains SHA-1 digests of the other files, used for the OVF’s integrity checking.
.ovf or OVF descriptor: an XML document that contains information about the OVF template, such as product details, virtual hardware, requirements, licensing, and a full list of file references.
.vmdk or virtual hard disk file: an OVF template may contain multiple VMDK files, all of which must be referenced in the OVF descriptor file.

What is OVA? OVF templates can also be distributed as a single file for easy distribution. This single file ends in .ova and is in zipped TAR format. OVA stands for Open Virtualization Appliance.

What is VAPP- vSphere vApps leverage OVF as a way to combine multiple VMs into
a single administrative unit. When the vApp is powered on, all VMs in it are powered
on, in a sequence specified by the administrator. The same goes for shutting down a
vApp. vApps also act a bit like resource pools for the VMs contained within them.
Remember:-You can create a vApp inside other vApps, but you can’t create a vApp on
a cluster that does not have vSphere DRS enabled.

What is P2V and V2V migration- VMware offer a stand-alone product called
VMware Converter. VMware Converter provides both P2V functionality as well as
virtual-to-virtual (V2V) functionality.

What are the common issues in snapshots regarding RDMs or mapped LUNs?
If you configure the VM with mapped LUNs in physical mode (physical mode RDM), the snapshot fails. If the LUN is mapped in virtual mode (virtual mode RDM, which uses a VMDK mapping file to access the raw LUN), a snapshot can be taken. So to take a snapshot of a VM that uses physical mode mapped LUNs, you must first remove the LUN.

Differences between a clone and a template?

No. | Clones | Templates

1 | A clone creates an exact copy of a running virtual machine at the time of the cloning process. | A template acts as a baseline image with a predefined configuration as per organization standards.

2 | VM clones are best suited to test and development environments, where you create and work with exact copies of a production server without disturbing the production server. | Templates are best suited for production environments where you want mass distribution of a baseline virtual machine along with the installed OS, standard security policy and basic software. Once the VMs are deployed, you can then perform other specific software installation and configuration tasks.

3 | A cloned virtual machine can be powered on. | Templates can't be powered on.

4 | A cloned virtual machine can be edited. | A template can't be edited.

5 | VM clones are not suited for mass deployment of virtual machines. | Templates are best suited for mass deployment of virtual machines.
VMware vSphere Resource Allocation
Describe Reservations, Limits and Shares?
Reservations:- Reservations serve to act as guarantees of a particular resource.
Limits:- Limits are, quite simply, a way to restrict the amount of a given resource
that a VM can use. Limits enforce an upper ceiling on the usage of resource.
Shares:- Shares serve to establish priority. Shares apply only during periods of host
resource contention and serve to establish prioritized access to host resource.

Reservation: guaranteed resource.
Limit: upper limit on a given resource.
Share: prioritized resource access in times of contention.

What are the advanced memory management techniques used by vSphere?
1. Transparent page sharing (TPS) 2. Ballooning 3. Swapping 4. Memory compression

What is Transparent Page Sharing (TPS)?

Identical memory pages are shared among VMs to reduce the total number of memory pages needed. The hypervisor computes hashes of the contents of memory pages to identify pages that contain identical memory.

What is memory ballooning?

'Ballooning' involves the use of a driver, called the balloon driver, installed into the guest OS.
->This driver is part of the VMware Tools installation.
->The balloon driver can respond to commands from the hypervisor to reclaim memory from that particular guest OS.
->The balloon driver requests memory from the guest OS (a process called 'inflating') and then passes that memory back to the hypervisor for use by other VMs.
->If the guest OS can give up pages it is no longer using, there may not be any performance degradation for the applications running inside that guest OS.
->If the guest OS is already under memory pressure, then 'inflating' the balloon driver will invoke guest OS paging (guest swapping), which will degrade performance.
Describe briefly how the balloon driver works?
The balloon driver is part of VMware Tools.
>As such, it is a guest OS– specific driver, meaning that Linux VMs would have a
Linux-based balloon driver, Windows VMs would have a Windows-based balloon driver, and
so forth.
Regardless of the guest OS, the balloon driver works in the same fashion. When the
ESXi host is running low on physical memory, the hypervisor will signal the balloon
driver to grow. To do this, the balloon driver will request memory from the guest OS.
This causes the balloon driver’s memory footprint to grow, or to inflate. The memory
that is granted to the balloon driver is then passed back to the hypervisor. The
hypervisor can use these memory pages to supply memory for other VMs, reducing the
need to swap and minimizing the performance impact of the memory constraints.
When the memory pressure on the ESXi host passes, the balloon driver will deflate, or return memory to the guest OS.
The key advantage that ESXi gains from using a guest-OS-specific balloon driver in this
fashion is that it allows the guest OS to make the decision about which pages can be
given to the balloon driver process (and thus released to the hypervisor). In some cases,
the inflation of the balloon driver can release memory back to the hypervisor without
any degradation of VM performance because the guest OS is able to give the balloon
driver unused or idle pages.
What is hypervisor swapping?
Guest OS swapping falls strictly under the control of the guest OS and is not
controlled by the hypervisor.
If none of the previously described technologies trim guest OS memory usage enough,
the ESXi host will swap memory pages out to disk in order to reclaim memory that is
needed elsewhere.
->ESXi’s swapping takes place without any regard to the guest OS. As a result, guest OS performance is severely impacted if hypervisor swapping is invoked.
What is memory compression?
When an ESXi host gets to the point that hypervisor swapping is necessary, then
VMkernel will attempt to compress memory pages and keep them in RAM in
a compressed memory cache. Pages that can be successfully compressed by at least 50
percent are put into the compressed memory cache instead of being written to disk and
can then be recovered much more quickly if the guest OS needs that memory page.
Memory compression is invoked only when the ESXi host reaches the point where hypervisor swapping would otherwise be needed.
Where does a VM without a reservation get its memory from? What is VMkernel swap?
When an ESXi host doesn’t have enough RAM available to satisfy the memory needs of
the VMs it is hosting and when other technologies such as transparent page sharing, the
balloon driver, and memory compression aren’t enough, then VMkernel is forced to page
some of each VM’s memory out to the individual VM’s VMkernel swap file.
VMkernel swap:-
VMkernel swap is actually the hypervisor swapping mechanism. VMkernel swap is
implemented as a file with a .vswp extension that is created when a VM is powered on.
These per-VM swap files created by the VMkernel reside, by default, in the same
datastore location as the VM’s configuration file (.VMX) and virtual disk files (.VMDK)
(although you do have the option of relocating the VMkernel swap).
In the absence of a memory reservation — the default configuration is — this file will
be equal in size to the amount of RAM configured for the VM. Thus, a VM configured
for 4 GB of RAM will have a VMkernel swap file that is also 4 GB in size and stored,
by default, in the same location as the VM’s configuration and virtual disk files.
In theory, this means a VM could get its memory allocation entirely from the hypervisor's physical memory or entirely from VMkernel swap, i.e. from disk. If VMkernel swap memory is used, some performance degradation for the VM is inevitable, because disk access time is several orders of magnitude slower than RAM access time.
Configured memory - memory reservation = expected size of swap file (.vswp)
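For example, a VM configured with 4 GB of RAM and a 1 GB memory reservation gets a 4 GB - 1 GB = 3 GB .vswp file; with no reservation (the default), the .vswp file is the full 4 GB.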

Do "Transparent Page Sharing (TPS)" works for ‘Reserved Memory’-


Yes
What about "Memory Ballooning"-No
Reserved memory can be shared via transparent page sharing (TPS). Transparent page
sharing does not affect the availability of reserved memory because the shared page is
still accessible to the VM.
What is LIMIT? What are its impact on guest OS?
It sets the actual limit or upper ceiling on how much allocated physical resource may be
utilized by that VM.
The key problem with the use of memory limits is that they are enforced by vmkernel
without any guest OS or VM awareness. The guest OS will continue to behave as if it
can use allocated maximum RAM, completely unaware of the limit that has been placed
on it by the hypervisor. Setting a memory limit will have a significant impact on the
performance of the VM because- as a result the guest OS will constantly be forced
to swap pages to disk (guest OS swapping, not hypervisor swapping).
Why use memory limit?
Use memory limits as a temporary measure to reduce physical memory usage in your
ESXi hosts.
Perhaps you need to perform troubleshooting on an ESXi host and You want to
temporarily push down memory usage on less-important VMs so that you don’t
overcommit memory too heavily and negatively impact lots of VMs. Limits would help in
this situation.
In general, then, you should consider memory limits as a temporary stop-gap
measure when you need to reduce physical memory usage on an ESXi host and a
negative impact to performance is acceptable.

CPU Utilization

Like shares, reservations, and limits, what is the fourth option available for managing CPU utilization?

CPU affinity. CPU affinity allows an administrator to statically associate a VM with a specific physical CPU core. CPU affinity is generally not recommended; it has a list of rather significant drawbacks:
->CPU affinity breaks vMotion.
->Because vMotion is broken, you cannot use CPU affinities in a cluster where vSphere DRS isn't set to Manual operation.
->The hypervisor is unable to load-balance the VM across all the processing cores in the server. This prevents the hypervisor's scheduling engine from making the most efficient use of the host's resources.
Remember:- We use CPU Reservation, Limit and Share to control CPU clock cycle
allocation (Core speed).
What is the difference between Memory Reservation and CPU
Reservation?
A CPU Reservation is very different than a Memory Reservation when it comes to
“sharing” reserved CPU cycles. Reserved Memory, once allocated to the VM, is never
reclaimed, paged out to disk, or shared in any way. The same is not true of CPU
Reservations.
The ESXi host has two idle VMs running. The shares are set at the defaults for the running VMs. Will the Shares values have any effect in this scenario?

No. There’s no competition between VMs for CPU time because both are idle. Shares come into play only in times of resource contention.
The ESX host with dual, single-core, 3 GHz CPUs has two equally
busy VMs running (both requesting maximum CPU capacity). The
shares are set at the defaults for the running VMs. Will the Shares
values have any effect in this scenario?

No. Again, there’s no competition between VMs for CPU time, this time because each
VM is serviced by a different core in the host.

Remember:-CPU Affinity Not Available with Fully Automatic DRS enabled Clusters.

If you are using a vSphere Distributed Resource Scheduler-enabled cluster configured in fully automated mode, CPU affinity cannot be set for VMs in that cluster. You must configure the cluster for manual or partially automated mode in order to use CPU affinity.

Describe CPU Reservation, Limit and Share?

->Reservations set on CPU cycles provide guaranteed processing power for VMs. Unlike
memory, reserved CPU cycles can and will be used by ESXi to service other requests
when needed. As with memory, the ESXi host must have enough real, physical CPU
capacity to satisfy a reservation in order to power on a VM.
->Limits on CPU usage simply prevent a VM from gaining access to additional CPU
cycles even if CPU cycles are available to use. Even if the host has plenty of CPU
processing power available to use, a VM with a CPU limit will not be permitted to use
more CPU cycles than specified in the limit. Depending on the guest OS and the
applications, this might or might not have an adverse effect on performance.

->Shares are used to determine CPU clock cycle allocation when the ESXi host is
experiencing CPU contention. CPU shares grant CPU access on a percentage basis
calculated on the number of shares granted out of the total number of shares assigned.
What is Resource Pool? Why it is required?
A resource pool is basically a special type of container object, much like a folder, mainly used to group VMs with similar resource allocation needs and so avoid administrative overhead. A resource pool uses reservations, limits, and shares to control and modify resource allocation behavior, but only for memory and CPU.
What is Expandable Reservation in resource Pool?
A resource pool provides resources to its child objects. A child object can be either a virtual machine or another resource pool. This is what is called the parent-child relationship.
But what happens if the child objects in the resource pool are configured with reservations that exceed the reservation set on the parent resource pool? In that case the resource pool needs to request protected resources from its own parent resource pool. This can only be done if expandable reservation is enabled.
Please note that the resource pool requests protected (reserved) resources from its parent resource pool; it will not accept resources that are not protected by a reservation.
You want to understand a resource pool's resource allocation. Where can you see the allocation of resources to objects within the vCenter Server hierarchy?
The cluster's "Resource Allocation" tab shows the allocation of resources to objects within the vCenter Server hierarchy.
Remember:-Shares Apply Only During Actual Resource Contention
Remember that share allocations come into play only when VMs are fighting one
another for a resource — in other words, when an ESXi host is actually unable to
satisfy all the requests for a particular resource. If an ESXi host is running only eight VMs on top of two quad-core processors, there won’t be contention to manage (assuming these VMs each have only a single vCPU), and Shares values won’t apply.
What is a processor core? A thread? What is Hyper-Threading? What are logical CPUs and virtual CPUs?

Processor: responsible for all processing operations. Multiple physical processors in a server are counted by socket.

Core: one operations unit, or processing unit, inside your physical processor.
Hyper-Threading: Normally a processor core can handle one thread, or one operation, at a time (time meaning a processor time slot). With Hyper-Threading enabled, a processor core can handle two threads at the same time.
Logical processor: The number of threads that the processor cores in a machine can handle is the number of logical processors. So if you want to know how many logical processors you have, just count the total number of threads the processors can handle.

How many logical processors do you have:-

Core count = processor (socket) count x cores per processor
Logical processor count = core count x threads per core
so
processor (socket) count x cores per processor x threads per core = logical processor count
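For example, a host with 2 sockets, 8 cores per socket, and Hyper-Threading enabled (2 threads per core) has 2 x 8 = 16 cores and 16 x 2 = 32 logical processors.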

Virtual processor: when you create a virtual machine, you assign it a processor. Like vRAM, a virtual hard disk, or a virtual network interface, a virtual processor (VP) can be assigned to a virtual machine. A virtual processor is really nothing but a slice of the physical processor's time slots given to the virtual machine.

What is SMP?
Symmetric Multiprocessing: SMP is processing of a program by multiple processors that
share a common operating system and memory.

What are the maximum Virtual CPU Limitations of VMware?


The maximum number of virtual CPUs in a VM depends on the following:-
(a) the number of logical CPUs on the host,
(b) the ESXi host license,
(c) the type of guest operating system installed in the VM.
What is Network Resource Pool?
A 'network resource pool' is a resource pool that controls network utilization. A network resource pool can control outgoing network traffic or outgoing network bandwidth with the help of shares and limits. This feature is referred to as vSphere Network I/O Control (NetIOC).

Network resource pool or NetIOC disadvantages:

(1) Controls outgoing traffic only
(2) Works only on a distributed switch

What is a system network resource pool? What is a custom network resource pool?
When you enable vSphere NetIOC, vSphere activates six predefined system network
resource pools:
1 Fault Tolerance (FT) Traffic
2 Virtual Machine Traffic
3 vMotion Traffic
4 Management Traffic
5 iSCSI Traffic
6 NFS Traffic
A custom network resource pool is a user-defined pool used to control resource utilization for traffic types you define.
Remember:- You Can’t Map Port Groups to System defined resource Pools
Port groups can only be mapped to user-defined network resource pools, not system
network resource pools.

How do you enable NetIOC?


First, you must enable NetIOC on that particular vDS.

Second, you must create and configure custom network resource pools as necessary.

What are the three basic settings each network resource pool consists of?
'Physical Adapter Shares'
'Host Limit'
'QoS Priority Tag'

->Physical Adapter Shares: priority for access to the physical network adapters when there is network contention.
->Host Limit: an upper limit on the amount of network traffic a network resource pool is allowed to consume (in Mbps).
->QoS Priority Tag: The QoS (Quality of Service) priority tag is an 802.1p tag that is applied to all outgoing packets. Appropriately configured upstream network switches can then enforce the QoS beyond the ESXi host.
What are the prerequisites of Storage I/O Control (SIOC)?
SIOC has a few requirements you must meet:
(i) All SIOC-enabled datastores must be managed by a single vCenter Server.
(ii) No RDM support, no NFS support.
(iii) Datastores with multiple extents are not supported.

Remember:- check the array's auto-tiering compatibility with SIOC.

Auto-tiering is the ability of the array to seamlessly and transparently
migrate data between different storage tiers (SSD, FC, SAS, SATA).

How do you enable Storage I/O Control?


First, enable SIOC on one or more datastores.
Second, assign shares or limits to storage I/O resources on individual VMs.

Remember these points about SIOC:


SIOC is enabled on a per-datastore basis.
By default, SIOC is disabled for a datastore, meaning that you have to explicitly enable
SIOC if you want to take advantage of its functionality.
While SIOC is disabled by default for individual datastores, it is enabled by default for
Storage DRS–enabled datastore clusters that have I/O metrics enabled for Storage DRS.
How does Storage I/O Control (SIOC) work?
SIOC uses disk latency as the threshold to enforce Shares values-
SIOC uses disk latency as the threshold to determine when it should activate and
enforce Shares values for access to storage I/O resources. Specifically, when vSphere
detects latency is in excess of a specific threshold value (measured in milliseconds), SIOC
is activated.

For controlling the use of storage I/O by VMs SIOC uses shares and limits-
SIOC provides two mechanisms for controlling the use of storage I/O by VMs: shares
and limits. The Shares value establishes a relative priority as a ratio of the total
number of shares assigned, while the Limit value defines the upper ceiling on the
number of I/O operations per second (IOPS) that a given VM may generate.

To guarantee certain levels of performance, your IT director believes that all VMs must
be configured with at least 8 GB of RAM. However, you know that many of your
applications rarely use this much memory. What might be an acceptable compromise to
help ensure performance?

One way would be to configure the VMs with 8 GB of RAM and specify a reservation
of only 2 GB. VMware ESXi will guarantee that every VM will get 2 GB of RAM,
including preventing additional VMs from being powered on if there isn’t enough RAM
to guarantee 2 GB of RAM to that new VM. However, the RAM greater than 2 GB is
not guaranteed and, if it is not being used, will be reclaimed by the host for use
elsewhere. If plenty of memory is available to the host, the ESXi host will grant what
is requested; otherwise, it will arbitrate the allocation of that memory according to the
shares values of the VMs.
A fellow VMware administrator is a bit concerned about the use of
CPU reservations. She is worried that using CPU reservations will
“strand” CPU resources, preventing those reserved but unused
resources from being used by other VMs. Are this administrator’s
concerns well founded?

For CPU reservations- No. While it is true that VMware must have enough unreserved
CPU capacity to satisfy a CPU reservation when a VM is powered on, reserved CPU
capacity is not "locked" to a VM the way reserved memory is. If a VM has reserved but
unused CPU capacity, that capacity can and will be used by other VMs on the same
host. The other administrator's concerns could be valid, however, for memory
reservations.
Your company runs both test/development workloads and production
workloads on the same hardware. How can you help ensure that
test/development workloads do not consume too many resources and
impact the performance of production workloads?
Create a resource pool and place all the test/development VMs in that resource pool.
Configure the resource pool to have a CPU limit and a lower CPU shares value. This
ensures that the test/development workloads will never consume more CPU time than
specified in the limit and that, in times of CPU contention, the test/development
environment will have a lower priority on the CPU than production workloads.
Name some limitations of Network I/O Control?
NIOC works only with vSphere Distributed Switches.
It can control only outbound network traffic.
It requires vCenter Server in order to operate.
"System network resource pools" cannot be assigned to user-created port groups.
What are the requirements for using Storage I/O Control?
All datastores and ESXi hosts that will participate in Storage I/O Control must be
managed by the same vCenter Server instance.
Raw Device Mappings (RDMs) and NFS datastores are not supported for SIOC.
Datastores must have only a single extent; datastores with multiple extents are not
supported
What is vMotion?

vMotion is a feature that allows running VMs to be migrated from one physical ESXi
host to another physical ESXi host with no downtime to end users.

How does VMware vSphere help balance the utilization of resources?


vMotion:- vMotion, which is generically known as live migration, is used to manually
balance resource utilization between two or more ESXi hosts.

Storage vMotion:- Storage vMotion is the storage equivalent of vMotion, and it is used
to manually balance storage utilization between two or more datastores.

vSphere Distributed Resource Scheduler (DRS):- vSphere Distributed Resource Scheduler


(DRS) is used to automatically balance resource utilization among two or more ESXi
hosts.
Storage DRS:- Storage DRS is the storage equivalent of DRS, and it is used
to automatically balance storage utilization among two or more datastores.
What are the configuration requirements of a successful vMotion?

ESXi hosts requirements for vMotion:


Shared storage:-Shared storage for the VM files (a VMFS or NFS datastore).
Dedicated VMkernel port for vMotion:-A Gigabit Ethernet or faster network interface
card (NIC) with a VMkernel port defined and enabled for vMotion on each ESXi host.
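As a quick sanity check of the vMotion network, you can ping the destination host's vMotion VMkernel IP from the source host's shell (a sketch; the vmk name and IP are illustrative, and the -I option requires a reasonably recent ESXi build):

# esxcli network ip interface list
(confirm which VMkernel port is enabled for vMotion)
# vmkping -I vmk1 192.168.10.22
(ping the destination host's vMotion VMkernel address through that port)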

Describe briefly how vMotion works?

1. Migration initiated
2. Active memory pages of source VM precopied
3. Source ESXi host is 'quiesced' for memory bitmap copy
4. Memory bitmap contents copied
5. VM starts on target host
6. RARP message is sent
7. Source host memory is deleted
1. Migration initiated:- An administrator initiates a migration of a running VM (VM1)
from one ESXi host (HOST-1) to another ESXi host (HOST-2).
2. Active memory pages of source VM precopied:- As the active memory pages are
copied from the source host to the target host, pages in memory can change, because
the VM is still servicing requests. ESXi handles this by keeping a log of changes that
occur in the memory of the VM on the source host. This log is called a memory bitmap.
3. Source ESXi host is 'quiesced' for memory bitmap copy:- VM1 on the source ESXi
host (HOST-1) is quiesced. This means that it is still in memory but is no longer
servicing client requests for data. The memory bitmap file is then transferred to the
target (HOST-2). As source VM is quiesced memory does not change.

Remember - The Memory Bitmap


[The memory bitmap does not include the contents of the memory addresses that have
changed; it simply includes the addresses of the memory that has changed, often
referred to as the dirty memory.]
4. Memory bitmap contents copied:- The target host (HOST-2) reads the
addresses in the memory bitmap file and requests the contents of those addresses from
the source (HOST-1).
5. VM starts on target host:- After the copy completes, the VM starts on the target
host. Note that this is not a reboot; the VM's state is in RAM, so the host simply enables it.
6. RARP message is sent:- A Reverse Address Resolution Protocol (RARP) message is
sent by the target host to register the VM's MAC address against the physical switch port
to which the target host is connected. The physical switch then sends network packets to
the target ESXi host instead of the source.
7. Source host memory is deleted
What are the points that you should keep in mind for a successful
vMotion?

Networking

Identical VMkernel ports, virtual switches, or same Distributed Switch :-


Identical port group and same subnet or vlan:-

CPU

Processors compatibility:-
# Same CPU vendor :- (Intel or AMD).
# Same CPU family:- (Xeon 55xx, Xeon 56xx, or Opteron).
# Same CPU features:- the presence of SSE2, SSE3, and SSE4, and NX or XD.
# Virtualization enabled:- For 64-bit VMs, CPUs must have virtualization
technology enabled (Intel VT or AMD-V).

Host and VM

No Device physically available to only one host:- The VM must not be connected to any
device physically available to only one ESXi host. This includes disk storage, CD/DVD
drives, floppy drives, serial ports, or parallel ports.
No internal-only vSwitch:-
No CPU affinity Rule:-
Shared Storage for hosts:-

Does vMotion provide high availability features?


vMotion can prevent planned downtime through live migration of VMs, but it is not a
high-availability feature, so it cannot protect against an unplanned host failure.
What is virtual machine CPU masking?
The ability to create custom CPU masks for CPU features on a per-VM basis. It is also
important to note that, with one exception, using custom CPU masks for easy vMotion
compatibility is completely unsupported by VMware.
What is the one exception?
On a per-VM basis, show or mask the No Execute / Execute Disable (NX/XD) bit in
the host CPU. This specific instance of CPU masking for easy vMotion compatibility is
fully supported by VMware.

What is NX/XD bit?


Intel's Execute Disable (XD) and AMD's No-Execute (NX) are features of processors
that mark memory pages as data only, which prevents a virus from running executable
code at that memory address. Windows Server 2003 SP1 and Windows XP SP2 support
this CPU feature.
A certain vendor has just released a series of patches for some of the
guest OS’s in your virtualized infrastructure. You request an outage
window from your supervisor, but your supervisor says “just use
vMotion to prevent downtime”. Is your supervisor correct? Why or
why not?
Your supervisor is incorrect. vMotion can be used to move running VMs from one
physical host to another, but it does not address outages within a guest OS because of
reboots or other malfunctions.
Is vMotion a solution to prevent unplanned downtime?

No. vMotion is a solution to address planned downtime of the ESXi hosts by manually
moving live VMs to other host.

What is EVC?
EVC:- vMotion requires compatible CPU families on both source and the destination
ESXi hosts. For that vSphere offers “Enhanced vMotion Compatibility (EVC)”. This
can mask differences between CPU families in order to maintain successful vMotion.
Can you change the EVC level for a cluster while there are VMs
running on hosts in the cluster?
Not for running VMs, in the strict sense. A new EVC level means new CPU masks must be
calculated and applied, and CPU masks can be applied to a VM only when it is powered off.
You can raise the EVC level (expose more CPU features) while VMs are running, but those
VMs will not see the new baseline until they are power-cycled; lowering the EVC level
requires the affected VMs to be powered off first.
Describe in details what is VMware Enhanced vMotion Compatibility
(EVC)?

Recognizing that potential processor compatibility issues with vMotion could be a


significant problem, VMware worked closely with both Intel and AMD to craft
functionality that would address this issue. On the hardware side, Intel and AMD put
functions in their CPUs that would allow them to modify the CPU ID value returned by
the CPUs. Intel calls this functionality FlexMigration; AMD simply embedded this
functionality into their existing AMD-V virtualization extensions. On the software side,
VMware created software features that would take advantage of this hardware
functionality to create a common CPU ID baseline for all the servers within a cluster.
This functionality, originally introduced in VMware ESX/ESXi 3.5 Update 2, is called
VMware Enhanced vMotion Compatibility.
vCenter Server performs some validation checks to ensure that the physical hardware
included in the cluster is capable of supporting the selected EVC mode and processor
baseline. If you select a setting that the hardware cannot support, the Change EVC
Mode dialog box will reflect the incompatibility.
When you enable EVC and set the processor baseline, vCenter Server then calculates the
correct CPU masks that are required and communicates that information to the ESXi
hosts. The ESXi hypervisor then works with the underlying Intel or AMD processors to
create the correct CPU ID values that would match the correct CPU mask. When
vCenter Server validates vMotion compatibility by checking CPU compatibility, the
underlying CPUs will return compatible CPU masks and CPU ID values. However, vCenter
Server and ESXi cannot set CPU masks for VMs that are currently powered on.
When setting the EVC mode for a cluster, keep in mind that some CPU-specific
features — such as newer multimedia extensions or encryption instructions, for example
— could be disabled when vCenter Server and ESXi disable them via EVC. VMs that rely
on these advanced extensions might be affected by EVC, so be sure that your workloads
won’t be adversely affected before setting the cluster’s EVC mode.
What is vSphere DRS?
Groups of ESXi hosts are called clusters. vSphere Distributed Resource Scheduler (DRS)
enables vCenter Server to automate the process of conducting vMotion migrations of
VMs to help balance the load across ESXi hosts within a cluster. DRS can be as
automated as desired, and vCenter Server has flexible controls about the behavior of
DRS as well as the behavior of specific VMs within a DRS-enabled cluster.
What are the function of DRS?

DRS has the following two main functions:


Intelligent placement:-
To decide which node of a cluster should run a VM when it’s powered on, a function
often referred to as intelligent placement.
Recommendation or Automation:-
To evaluate the load on the cluster over time and either make recommendations for
migrations or use vMotion to automatically move VMs to create a more balanced cluster
workload.
How DRS works?
@ vSphere DRS runs as a process within vCenter Server, which means that you must
have vCenter Server in order to use vSphere DRS.
@ By default, DRS checks every five minutes (or 300 seconds) to see if the cluster’s
workload is balanced.
@ DRS is also invoked by certain actions within the cluster, such as adding or removing
an ESXi host or changing the resource settings of a VM (such as reservations, shares,
and limits).
@ When DRS is invoked, it will calculate the imbalance of the cluster, apply any
resource controls (such as reservations, shares, and limits), and, if necessary, generate
recommendations for migrations of VMs within the cluster.
@ Depending on the configuration of vSphere DRS, these recommendations could be
applied automatically, meaning that VMs will automatically be migrated between hosts
by DRS in order to maintain cluster balance (or, put another way, to minimize cluster
imbalance).
@ Fortunately, you retain control over how aggressively DRS will automatically
move VMs around the cluster.

What are DRS automation level?


(i)Manual (ii) Partially Automated (iii) Fully Automated

Manual-Initial VM placement and VM migrations both are manual.

Partially Automated:-Initial VM placement is automated, but VM migrations are still


manual.

Fully Automated:- Initial VM placement and VM migrations both are automated.

DRS takes Automatic vMotion decisions based on the selected automation level (the
slider bar).
There are five positions for the slider bar on the Fully Automated setting of the DRS
cluster. The values of the slider bar range from Conservative (apply only the highest-priority
recommendations) to Aggressive (apply recommendations of all five priority levels).

You want to take advantage of vSphere DRS to provide some load balancing of virtual
workloads (VMs) within your environment. However, because of business constraints,
you have a few workloads that should not be automatically moved to other hosts using
vMotion. Can you use DRS? If so, how can you prevent these specific workloads from
being affected by DRS?
Yes, you can use DRS. Enable DRS on the cluster, and set the DRS automation level
appropriately. For those VMs that should not be automatically migrated by DRS,
configure the DRS automation level on a per-VM basis to Manual. This will allow DRS
to make recommendations on migrations for these specific workloads (VMs) but will not
actually perform the migrations.

What is maintenance mode?

Maintenance mode is a setting on an ESXi host that prevents the ESXi host from
performing any VM-related functions. VMs currently running on an ESXi host being put
into maintenance mode must be shut down or moved to another host before the ESXi
host will actually enter maintenance mode. This means that an ESXi host in a DRS-
enabled cluster will automatically generate priority 1 recommendations to migrate all
VMs to other hosts within the cluster.
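From the ESXi shell, maintenance mode can also be checked or toggled directly (a sketch; the DRS evacuation behavior described above applies only when the host is in a DRS-enabled cluster):

# esxcli system maintenanceMode get
(reports Enabled or Disabled)
# esxcli system maintenanceMode set --enable true
(requests maintenance mode; completes once running VMs have been evacuated or powered off)
# esxcli system maintenanceMode set --enable false
(exits maintenance mode)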
What is Distributed Resource Scheduler (DRS) Rules or affinity rules?
vSphere DRS supports three types of DRS rules:-
[1] VM affinity rules- "Keep Virtual Machines Together" - keeps VMs together on the
same ESXi host.
[2] VM anti-affinity rules- "Separate Virtual Machines" - keeps VMs separate on
different ESXi hosts.
[3] Host affinity rules- “Virtual Machines To Hosts” - Keeps VM DRS group and Host
DRS group together or separate.
The host affinity rule brings together a VM DRS group and a host DRS group according
to preferred rule behavior. There are four host affinity rule behaviors:
1 Must Run On Hosts In Group
2 Should Run On Hosts In Group
3 Must Not Run On Hosts In Group
4 Should Not Run On Hosts In Group
What is per-VM Distributed Resource Scheduler Settings?
The administrator can then selectively choose VMs that are not going to be acted on
by DRS in the same way as the rest of the VMs in the cluster. However, the VMs
should remain in the cluster to take advantage of high-availability features provided by
vSphere HA.
The per-VM automation levels available include the following:
 Manual (Manual intelligent placement and vMotion)
 Partially Automated (automatic intelligent placement, manual vMotion)
 Fully Automated (automatic intelligent placement and vMotion)
 Default (inherited from the cluster setting)
 Disabled (vMotion disabled)

Storage vMotion

What is Storage vMotion?


vMotion and Storage vMotion are like two sides of the same coin. Storage vMotion,
migrates a running VM’s virtual disks from one datastore to another datastore but
leaves the VM executing — and therefore using CPU and memory resources — on the
same ESXi host. This allows you to manually balance the “load” or storage utilization of
a datastore by shifting a VM’s storage from one datastore to another. Like vMotion,
Storage vMotion is a live migration.
How Storage vMotion works?
1. Nonvolatile files copy
2. Ghost or shadow VM created on destination datastore
3. Destination disk and mirror driver created
4. Single-pass copy of the virtual disk(s)
5. vSphere quickly suspends and resumes in order to transfer control over to the ghost VM
6. Source datastore files are deleted

1. Nonvolatile files copy:- First, vSphere copies over the nonvolatile files that make up
a VM: for example, the configuration file (.VMX), the VMkernel swap file (.VSWP),
log files, and snapshots.
2. Ghost or shadow VM created on destination datastore:- Next, vSphere starts a ghost
or shadow VM on the destination datastore using the nonvolatile files copied. Because
this ghost VM does not yet have a virtual disk (that hasn’t been copied over yet), it
sits idle waiting for its virtual disk.
3. Destination disk and mirror driver created:- Storage vMotion first creates the
destination disk. Then a mirror device — a new driver that mirrors I/Os between the
source and destination disk — is inserted into the data path between the VM and the
underlying storage.
4. Single-pass copy of the virtual disk(s):- With the I/O mirroring driver in place,
vSphere makes a single-pass copy of the virtual disk(s) from the source to the
destination. As changes are made to the source, the I/O mirror driver ensures those
changes are also reflected at the destination.
5. vSphere quickly suspends and resumes in order to transfer control over to the ghost
VM:- When the virtual disk copy is complete, vSphere quickly suspends and resumes in
order to transfer control over to the ghost VM created on the destination datastore
earlier. This generally happens so quickly that there is no disruption of service, like with
vMotion.
6. Source datastore files are deleted:- The files on the source datastore are deleted. It’s
important to note that the original files aren’t deleted until it’s confirmed that the
migration was successful; this allows vSphere to simply fall back to its original location if
an error occurs. This helps prevent data loss situations or VM outages because of an
error during the Storage vMotion process.
What we should remember when using Storage vMotion with Raw
Device Mappings (RDM)?

There are two types of Raw Device Mappings (RDMs): physical mode RDM and virtual
mode RDM. A virtual mode RDM uses a VMDK mapping file to give raw LUN access. Be
careful when using Storage vMotion with virtual mode RDMs.

If you want to migrate only the VMDK mapping file, be sure to select “Same Format
As Source” for the virtual disk format. If you select a different format, virtual mode
RDMs will be converted into VMDKs as part of the Storage vMotion operation (physical
mode RDMs are not affected). Once an RDM has been converted into a VMDK, it
cannot be converted back into an RDM again.

Storage DRS :

What is Storage DRS?

1. Storage DRS is a feature that is new to vSphere 5.


2. Storage DRS brings automation to the process of balancing storage capacity and I/O
utilization.
3.Storage DRS uses datastore clusters and can operate in manual or Fully Automated
mode.
4.Numerous customizations exist — such as custom schedules, VM and VMDK anti-
affinity rules, and threshold settings etc.
5.SDRS can perform this automated storage balancing not only on the basis of space
utilization but also on the basis of I/O load balancing.

Similarities between DRS and SDRS

 Just as vSphere DRS uses clusters as a collection of hosts on which to act, SDRS
uses datastore clusters as collections of datastores on which it acts.

 Just as vSphere DRS can perform both initial placement and manual and automatic
balancing, SDRS also performs initial placement of VMDKs and manual or automatic
balancing of VMDKs.
 Just as vSphere DRS offers affinity and anti-affinity rules to influence
recommendations, SDRS offers VMDK affinity and anti-affinity functionality.

Guidelines for datastores cluster?


VMware provides the following guidelines for datastores that are combined into
datastore clusters:
(1) No NFS and VMFS combination; (2) no replicated and nonreplicated datastore
combination; (3) no ESX/ESXi 4.x and earlier host connections with ESXi 5 datastores;
(4) no datastores shared across multiple datacenters; (5) datastores of different sizes,
I/O capacities, and vendors are supported.

 No NFS and VMFS combination:- Datastores of different sizes and I/O capacities can
be combined in a datastore cluster. Additionally, datastores from different arrays and
vendors can be combined into a datastore cluster. However, you cannot combine NFS
and VMFS datastores in a datastore cluster.
 No replicated and nonreplicated datastore combination:-

 No ESX/ESXi 4.x and earlier host connection with ESXi 5 datastore:- All hosts
attached to a datastore in a datastore cluster must be running ESXi 5 or later.
ESX/ESXi 4.x and earlier cannot be connected to a datastore that you want to add to
a datastore cluster.
 No Datastores shared across multiple datacenters:-
What are the relations between Storage I/O Control and Storage DRS
Latency Thresholds?
You’ll note that the default I/O latency threshold for SDRS (15 ms) is well below the
default for SIOC (30 ms). The idea behind these default settings is that SDRS can
make a migration to balance the load (if fully automated) before throttling becomes
necessary.

What will happen if you put an SDRS datastore into maintenance mode?

When you place an SDRS datastore into maintenance mode, migration recommendations
are generated for registered VMs. However, SDRS datastore maintenance mode will not
affect templates, unregistered VMs, or ISOs stored on that datastore.
What are Storage DRS Automation levels?
SDRS offers two predefined automation levels, No Automation (Manual Mode) and
Fully Automated.
No Automation (Manual Mode):- Recommendations for initial placement as well as
recommendations for storage migrations based on the configured space and I/O
thresholds.
Fully Automated Mode:- Automatic initial placement as well as storage migrations based
on the configured space and I/O thresholds.

What is Storage DRS Schedule?


Custom schedules enable vSphere administrators to specify times when the SDRS
behavior should be different. For example, in your office SDRS runs in automatic mode
but there are specific times when SDRS should be running in No Automation (Manual
Mode). You can set times (Like night) when the space utilization or I/O latency
thresholds should be different.
What is Storage DRS Rules?
(1)VMDK anti-affinity (2) VM anti-affinity rules (3) Keep VMDKs Together

Just as vSphere DRS has affinity and anti-affinity rules, SDRS offers vSphere
administrators the ability to create VMDK anti-affinity and VM anti-affinity rules.
These rules modify the behavior of SDRS to ensure that specific VMDKs are always
kept separate (VMDK anti-affinity rule) or that all the virtual disks from certain VMs
are kept separate (VM anti-affinity rule).

Administrators can use anti-affinity rules to keep VMs or VMDKs on separate


datastores, but as you’ve already seen, there is no way to create affinity rules. Instead
of requiring you to create affinity rules to keep the virtual disks for a VM together,
vSphere offers a simple check box in the Virtual Machine Settings area of the datastore
cluster properties.

To configure Storage DRS to keep all disks for a VM together, check the boxes in the
Keep VMDKs Together column.
Name the two ways in which an administrator is notified that a
Storage DRS recommendation has been generated?

First, an alarm is generated to note that an SDRS recommendation is present. You can
view this alarm on the "Alarms" tab of the datastore cluster in "Datastores And
Datastore Clusters" inventory view.
Second, the "Storage DRS" tab of the datastore cluster (visible in
"Datastores And Datastore Clusters" inventory view) lists the current SDRS
recommendations and gives you the option to apply those recommendations.

What is a potential disadvantage of using drag and drop to add a datastore to a
datastore cluster?

You can use drag and drop to add a datastore to an existing datastore cluster as well.
Please note, that drag and drop won’t warn you that you’re adding a datastore that
doesn’t have connections to all the hosts that are currently connected to the datastore
cluster. So when using SDRS some host may find that a particular datastore is
unreachable. To avoid this situation you should always use the "Add Storage" dialog box.

[When using drag and drop to add a datastore to a datastore cluster, the user is not
notified if the datastore isn’t accessible to all the hosts that are currently connected
to the datastore cluster. This introduces the possibility that one or more ESXi hosts
could be “stranded” from a VM’s virtual disks if Storage DRS migrates them onto a
datastore that is not accessible from that host.]

A fellow administrator is trying to migrate a VM to a different datastore and a
different host, but the option is disabled (grayed out). Why?
Storage vMotion, like vMotion, can operate while a VM is running. However, in order to
migrate a VM to both a new datastore and a new host, the VM must be powered off.
VMs that are powered on can only be migrated using Storage vMotion or vMotion, but
not both.
Name two features of Storage vMotion that would help
administrators cope with storage related changes in their vSphere
environment?
Old to new storage array:- Storage vMotion can be used to facilitate no-downtime
storage migrations from one type of storage array to a new or new type of storage
array.
Between different types of storage:- Storage vMotion can also migrate between
different types of storage (FC to NFS, iSCSI to FC or FCoE).
Between different VMDK formats:- Migration between different types of VMDK format
(thick, thin).

VMWare Storage:

What is VMFS?
vSphere Virtual Machine File System (VMFS) is similar to NTFS for Windows Server
and EXT3 for Linux. Like these file systems, it is native; it's included with vSphere
and operates on top of block storage objects. If you're leveraging any form of block
storage in vSphere, then you're using VMFS. VMFS creates a shared storage pool that
is used for one or more VMs.

Why is it different from NTFS and EXT3?


Clustered File system-
Simple and transparent distributed locking mechanism-
Steady-state I/O with High throughput at a low CPU overhead-
Locking is handled using metadata in a hidden section of the file system-
What is VMFS metadata?
The metadata portion of the file system contains critical information in the form of
on-disk lock structures (files), such as which ESXi host is the current owner of a given
VM in the clustered file system, ensuring that there is no contention or corruption of
the VM.
When do VMFS metadata updates occur?
1. The creation of a file in the VMFS datastore (powering on a VM, creating or deleting a
VM, or taking a snapshot, for example).
2. Changes to the structure of the VMFS file system itself (extending the file system or
adding a file system extent).
3. Actions that change which ESXi host owns a VM (vMotion and vSphere HA cause
VM file ownership changes).
VMFS extents:-
1. VMFS can reside on one or more partitions, called extents, in vSphere.

2. VMFS metadata is always stored in the first extent.

3. In a single VMFS-3 datastore, 32 extents are supported, for a maximum size of up
to 64 TB.
4. Except for the first extent, where the VMFS metadata resides, removing a LUN
supporting a VMFS-3 extent will not make the spanned VMFS datastore unavailable.

5. Removing an extent affects only the portion of the datastore supported by that
extent.

6. VMFS-3 and VMs are relatively resilient to a hard shutdown or crash.

7. VMFS-3 allocates the initial blocks for a new file (VM) randomly in the file system
(or extents), and subsequent allocations for that file are sequential.

8. A VMFS datastore that spans multiple extents across multiple LUNs spreads I/O
across multiple LUN queues, reducing the load on any single LUN queue.
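You can inspect a datastore's extents and VMFS properties from the ESXi shell (a sketch; the datastore name is illustrative):

# esxcli storage vmfs extent list
(lists each VMFS volume with the device name and partition number of every extent)
# vmkfstools -P -h /vmfs/volumes/datastore1
(shows the VMFS version, block size, capacity, and the extents backing the volume)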

VMFS block sizes?


1. The VMFS block size determines the maximum file size on the file system.
2. Once set, the VMFS-3 block size could not be changed.
3. Only hosts running ESXi 5.0 or later support VMFS-5.
Advantages of using VMFS-5 over VMFS-3:
VMFS-5 offers a number of advantages over VMFS-3:-

1. Maximum of 64 TB in size using only a single extent (multiple extents are still also
limited to 64 TB).
2. A single block size of 1 MB, which supports creating files of up to 2 TB.
3. More efficient sub-block allocation: only 8 KB, compared to VMFS-3's 64 KB.
4. Physical-mode RDMs are no longer limited to 2 TB.
5. Non-disruptive, in-place, online upgrade to VMFS-5 from VMFS-3 datastores.

multipathing:
Multipathing is the term used to describe how a host, such as an ESXi host, manages
storage devices that have multiple ways (or paths) to access them. Multipathing is
extremely common in Fiber Channel and FCoE environments and is also found in iSCSI
environments. Multipathing for NFS is also available but handled much differently than
for block storage.
Pluggable Storage Architecture (PSA):
The element of the vSphere storage stack that deals with multipathing is called the
Pluggable Storage Architecture (PSA).
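From the ESXi shell you can see the PSA's Native Multipathing Plug-in (NMP) at work (a sketch):

# esxcli storage nmp device list
(shows, per device, the Storage Array Type Plugin (SATP) and Path Selection Policy (PSP) in use)
# esxcli storage core path list
(lists every physical path to each storage device along with its state)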
What are LUN queues?
Queues exist on the server (in this case the ESXi host), generally at both the HBA and
LUN levels. Queues also exist on the storage array. Block-centric storage arrays
generally have these queues at the target ports, array-wide, and array LUN levels, and
finally at the spindles themselves.

How to view the LUN queue:-


To determine how many outstanding items are in the queue, use resxtop, press 'u' to
switch to the disk device screen, and check the QUED column.
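For example (a sketch; the hostname is illustrative):

# resxtop --server esxi01.example.com
(then press 'u' for the disk device screen and watch the QUED column for queued commands)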

vSphere Storage APIs:-


vSphere Storage APIs for Array Integration(VAAI)
vSphere Storage APIs for Storage Awareness (VASA)
vSphere Storage APIs for Site Recovery (VASR)
vSphere Storage APIs for Multipathing
vSphere Storage APIs for Data Protection(VADP)
VAAI:- A means of offloading storage-related operations from the ESXi hosts to the
storage array.
VASA:- enables more advanced out-of-band communication between storage arrays and
the virtualization layer.
vSphere Storage APIs for Site Recovery (VASR):-
SRM (Site Recovery Manager) is an extension to VMware vCenter Server that delivers
a disaster recovery solution. SRM can discover and manage replicated datastores, and
automate migration of inventory between vCenter Server instances. VMware vCenter
Site Recovery Manager (SRM) provides an API so third party software can control
protection groups and test or initiate recovery plans.
vSphere Storage APIs for Multipathing :- third-party storage vendor's multipath
software
vSphere Storage APIs for Data Protection (VADP):- Enables third party backup
software to perform centralized virtual machine backups without the disruption and
overhead of running backup tasks from inside each virtual machine.

Profile driven storage:-


Working in conjunction with VASA, which facilitates communication between storage
arrays and the virtualization layer, the principle behind profile-driven storage is simple:
allow vSphere administrators to build VM storage profiles that describe the specific
storage attributes that a VM requires. Then, based on that VM storage profile, allow
vSphere administrators to place VMs on datastores that are compliant with that
storage profile, thus ensuring that the needs of the VM are properly serviced by the
underlying storage.

Keep in mind that the bulk of the power of profile-driven storage comes from the
interaction with VASA (Virtualization layer to array communication) to automatically
gather storage capabilities from the underlying array. However, you might find it
necessary or useful to define one or more additional storage capabilities that you can
use in building your VM storage profiles.
[Please remember:- Keep in mind that you can assign only one user-defined storage
capability per datastore. The VASA provider can also only assign a single system-
provided storage capability to each datastore. This means that datastores may have up
to 2 capabilities assigned: one system-provided capability and one user-defined
capability].
Upgrade from VMFS-3 to VMFS-5:-
Yes, it is possible to non-disruptively upgrade from VMFS-3 to VMFS-5.

VMFS-5 to VMFS-3 downgrade:- No, it is not possible to downgrade from
VMFS-5 to VMFS-3.

Note 1: Upgraded VMFS-5 datastores retain the partition characteristics of the
original VMFS-3 datastore, including file block size, sub-block size of 64 KB, etc.
Note 2: Increasing the size of an upgraded VMFS datastore beyond 2 TB changes the
partition type from MBR to GPT. However, all other features and characteristics
remain the same.
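The online upgrade can also be run from the ESXi shell (a sketch; the datastore name is illustrative, and VMs on the datastore can keep running during the upgrade):

# vmkfstools -T /vmfs/volumes/datastore1
(upgrades the VMFS-3 volume in place to VMFS-5)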
Disk signature:-
Every VMFS datastore has a Universally Unique Identifier (UUID) embedded in the
filesystem. This unique UUID differentiates a datastore from others. When you clone or
replicate a VMFS datastore, the copy will be a byte-for-byte copy of the original,
right down to the UUID. If you attempt to mount a LUN that holds a copy of a VMFS
datastore, vSphere will see it as a duplicate copy and will require that you do one of
two things:
1 Unmount the original and mount the copy with the same UUID.
2 Keep the original mounted and write a new disk signature to the copy.
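Both options can be driven from the ESXi shell (a sketch; the volume label is illustrative):

# esxcli storage vmfs snapshot list
(lists detected, unresolved VMFS copies)
# esxcli storage vmfs snapshot mount -l datastore1
(mounts the copy with its existing signature; possible only if the original is not mounted)
# esxcli storage vmfs snapshot resignature -l datastore1
(writes a new signature to the copy so it can be mounted alongside the original)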
Raw Device Mapping ( RDM):-
Normally VMs use shared storage pool mechanisms like VMFS or NFS datastores. But
there are certain use cases where a storage device must be presented directly to the
guest operating system inside a VM.

vSphere provides this functionality via a "Raw Device Mapping". RDMs are presented to
your ESXi hosts and then via vCenter Server directly to a VM. Subsequent data I/O
bypasses the VMFS and volume manager completely. I/O management is handled via a
mapping file that is stored on a VMFS volume.
You can configure RDMs in two different modes: pRDM and vRDM.

Physical Compatibility Mode (pRDM):- In this mode, all I/O passes directly through to
the underlying LUN device, and the mapping file is used solely for locking and vSphere
management tasks. You might also see this referred to as a pass-through disk.

Virtual Mode (vRDM):- In this mode, all I/O travels through the VMFS layer.
Advantages and disadvantages of pRDM and vRDM:-
1) pRDMs do not support vSphere snapshots.
2) pRDMs are not converted to virtual disks through Storage vMotion.
3) pRDMs can easily be moved back to a physical host.

RDM use cases:-


1) Windows cluster services:
2) When using Storage array's application-integrated snapshot tools:
3) When virtual configurations are not supported for a software. Software such as
Oracle is an example:
Difference between a VMFS datastore and an NFS datastore:
1. The NFS file system is not managed or controlled by the ESXi host.
2. All file system issues (like HA and performance tuning) are part of the vSphere
networking stack, not the storage stack.
Remember: NFS datastores support only thin provisioning unless the NFS server
supports the 'VAAIv2 NAS extensions' and vSphere has been configured with the
vendor-supplied plug-in.
Virtual SCSI adapters available in VMware:-

1. BusLogic Parallel- well supported by older guest OSes, but doesn't perform well;
various Linux flavors use it.
2. LSI Logic Parallel- well supported by newer guest OSes; the default for Windows
Server 2003.
3. LSI Logic SAS- guest OSes are phasing out support for parallel SCSI in favor of
SAS; the default SCSI adapter suggested for VMs running Windows Server 2008 and
2008 R2.
4. VMware Paravirtual- paravirtualized devices (and their corresponding drivers) are
specifically optimized to communicate more directly with the underlying VM Monitor
(VMM); they deliver higher throughput and lower latency, and they usually have a
lower CPU impact for I/O operations, but at the cost of guest OS compatibility.
VM storage profile:-
VM storage profiles are a key component of profile-driven storage. By leveraging system-
provided storage capabilities supplied by a VASA provider (which is provided by the
storage vendor), as well as user-defined storage capabilities, you can build VM storage
profiles that help shape and control how VMs are allocated to storage.
Keep in mind that a datastore may have, at most, two capabilities assigned: one
system-provided capability and one user-defined capability.

Other than Raw Device Mappings, what is another way to present storage devices
directly to a VM:-
Other than RDMs, you also have the option of using an in-guest iSCSI initiator to bypass
the hypervisor and access storage directly. Keep in mind the following tips about in-guest
iSCSI initiators:
1. The storage presented this way sits outside your VMFS or NFS datastores.
2. More physical NICs are needed in your server, to define redundant connections and
multipathing separately, since you are bypassing the hypervisor level.
3. No Storage vMotion.
4. No vSphere snapshots.
iSCSI:-
Ideal for customers who are just getting started with vSphere and have no existing
Fibre Channel SAN infrastructure.
Fibre Channel:-
Ideal for customers who have VMs with high-bandwidth (200 MBps+)
requirements (not in aggregate but individually).
NFS:-
Ideal for customers with many VMs that have low bandwidth requirements
individually (and in aggregate), and who need less than a single link's worth of
bandwidth.
vSphere storage models:-
vSphere has three fundamental storage presentation models:
(1) VMFS on block storage
(2) RDM and in-guest iSCSI initiator
(3) NFS.
The most flexible configurations use all three, predominantly via a shared-container
model and selective use of RDMs.
Basic Storage Performance parameters:-

MBps:- Data transfer speed per second in megabytes.


IOps / throughput:- Maximum number of reads and writes (input and output operations)
per second.
Disk latency:- Time it takes for the selected sector to be positioned under the
read/write head.

How to quickly estimate a storage configuration:-

Bandwidth:- for NFS, it is the NIC link speed.
For FC, it is the HBA speed.
For iSCSI, it is the NIC speed.
IOps:- in all cases, the throughput (IOps) is primarily a function of the number of
spindles (HDDs) [assuming no cache benefit and no RAID loss].
A quick rule of thumb is that:-
Total IOps = IOps per spindle X the number of spindles.
(For example, 16 spindles at roughly 180 IOps each would give roughly 2,880 IOps;
the figures are illustrative only.)
Latency:- latency is generally measured in milliseconds, though it can reach tens of
milliseconds in situations where the storage array is overtaxed.
What are the ways of monitoring VMs and hosts?
vCenter Server provides several features for monitoring your VMs and hosts.
1 Alarms- for proactive monitoring
2 Performance graphs and charts-
3 Performance information gathering using command-line tools
4 Monitor CPU, memory, network, and disk usage by ESXi hosts and VMs
What are the components of setting an alarm?
Scope- which object the alarm applies to. Ex- vCenter Server, datacenter, ESXi host.
Monitored object- which object to monitor. Ex- virtual machine.
Monitor for (item)- the specific condition or state to monitor the object for.
Ex- CPU usage, power state, etc.
Trigger type- which component can trigger the alarm.
Ex- VM snapshot size.
Condition- above or below. Ex- Is Above.
Warning value- 500; the condition must last 5 minutes.
Alert value- 1000; the condition must last 5 minutes.
Reporting- Range = threshold + tolerance level;
Frequency = the period of time during which a triggered alarm is not
reported again.
Action- send email, run scripts, send an SNMP trap.
How to Monitor CPU, memory, network, and disk usage by ESXi hosts
and VMs?
In particular, customized performance graphs can expose the right information.
How to gather performance information using command-line tools?
For VMware ESXi hosts, resxtop provides real-time information about CPU, memory,
network, or disk utilization. You should run resxtop from the VMware vMA. Finally, the
vm-support tool can gather performance information that can be played back later using
resxtop.
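For example (a sketch; the hostname is illustrative, and the replay step assumes a performance snapshot previously collected with vm-support):

# resxtop --server esxi01.example.com
(interactive mode; press 'c' for CPU, 'm' for memory, 'n' for network, 'u' for disk devices)
# resxtop -R vm-support-snapshot-dir
(replays the performance data captured earlier with vm-support)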

STORAGE BASICS:

What are the storage option available for a ESXi host?


1 Local SAS/SATA/SCSI storage

2 Fibre Channel

3 Fibre Channel over Ethernet (FCoE)

4 iSCSI using software and hardware initiators

5 NAS (specifically, NFS)

6 InfiniBand

Other than local storage, how can we boot an ESXi host?


1.Booting from Fibre Channel/iSCSI SAN

2.Network-based boot methods like vSphere Auto Deploy

3.USB boot

What is SAN?
A storage area network (SAN) is a dedicated network that provides access to
consolidated, block level data storage. SAN refers to a network topology, not a
connection Protocol.

What is fiber channel or FC?

Fibre Channel, or FC, is a high-speed network technology primarily used to connect


computer and data storage devices and for interconnecting storage controllers and drives.

Fibre Channel is three times as fast as Small Computer System Interface (SCSI) as the
transmission interface between servers and clustered storage devices.

Fibre channel is more flexible; devices can be as far as ten kilometers (about six miles)
apart if optical fiber is used as the physical medium.

Optical fiber is not required for shorter distances; Fibre Channel also works over
coaxial cable and ordinary telephone twisted pair for shorter runs.

REMEMBER CAREFULLY:-
(1) ESXi boot from SAN and (2) Raw device mapping (RDM) are not supported in
NFS.
The Fibre Channel protocol can operate in three modes: point-to-point (FC-P2P),
arbitrated loop (FC-AL), and switched (FC-SW). Point-to-point and arbitrated loop are
rarely used today for host connectivity, and they generally predate the existence of
Fibre Channel switches.

What is SAN, NAS, DAS?


N A S- Network Attached Storage (File level storage) [ex-SMB, NFS]

D A S- Direct Attached Storage (Block level storage) [SATA, PATA]

S A N- Storage Area Network (Block level storage area network)[iSCSI,FC, FCOE]

What is a "World Wide Port Name" or "World Wide Node Name"?


All the objects (initiators, targets, and LUNs) on a Fibre Channel SAN are identified by
a unique 64-bit identifier called a worldwide name (WWN). WWNs can be worldwide
port names (a port on a switch) or node names (a port on an endpoint). For anyone
unfamiliar with Fibre Channel, this concept is simple. It’s the same technique as Media
Access Control (MAC) addresses on Ethernet.

50:00:00:25:b5:01:00:00 20:00:00:25:b5:01:00:0f

Like Ethernet MAC addresses, WWNs have a structure. The most significant two bytes
are used by the vendor (the four hexadecimal characters starting on the left) and are
unique to the vendor, so there is a pattern for QLogic or Emulex HBAs or array
vendors. In the previous example, these are Cisco CNAs connected to an EMC
Symmetrix VMAX storage array.

How different is FCoE from FC?

Aside from discussions of the physical media (Ethernet) and topologies,
the concepts for FCoE are almost identical to those of Fibre Channel. This is because
FCoE was designed to be seamlessly interoperable with existing Fibre Channel-based
SANs.

What is VSAN?

Like VLANs, VSANs provide isolation between multiple logical SANs that exist on a
common physical platform. This enables SAN administrators greater flexibility and
another layer of separation in addition to zoning.

What is Zoning? Why it is required?

1. It ensures that a LUN that must be visible to multiple hosts in a cluster
(hosts with common visibility needs) is visible to those hosts, while the
remaining hosts in the cluster that should not have visibility to that LUN
do not see it.

2. To create fault and error domains on the SAN fabric, where noise,
chatter, and errors are not transmitted to all the initiators/targets
attached to the switch. Again, it’s somewhat analogous to one of the
uses of VLANs to partition very dense Ethernet switches into broadcast
domains.

How do you configure zoning in FC? What are the types of zoning
you can configure in FC?

Zoning is configured on the Fibre Channel switches via simple GUIs or CLI tools and can
be configured by (I) Port or by (II) WWN:

1. Using port-based zoning:- Using port-based zoning, you would zone by configuring
your Fibre Channel switch for example “put port 5 and port 10 into a zone that we’ll
call zone_5_10.” Any device (and therefore any WWN) you physically plug into port 5
could communicate only to a device (or WWN) physically plugged into port 10.
2. Using WWN-based zoning:- Using WWN-based zoning, you would zone by configuring
your Fibre Channel switch to “put WWN from this HBA and these array ports into a
zone we’ll call ESXi_4_host1_CX_SPA_0.” In this case, if you moved the cables, the
zones would move to the ports with the matching WWNs.

Initiator No + FC Switch Port No + Network Address Authority Identifier = LUN No

What Is LUN Masking?


Zoning should not be confused with LUN masking. Masking is the ability of a host or an
array to intentionally ignore WWNs that it can actively see (in other words, that are
zoned to it).

Masking is used to further limit what LUNs are presented to a host

What is FCoE?
FCoE was designed to be interoperable and compatible with Fiber Channel. In fact, the
FCoE standard is maintained by the same T11 body as Fiber Channel. At the upper
layers of the protocol stacks, Fiber Channel and FCoE look identical. It’s at the lower
levels of the stack that the protocols diverge.

In FCoE Fiber Channel frames are encapsulated into Ethernet frames, and transmitted in
a lossless manner.

What is FCoE CNA’s?


In practice, the debate of iSCSI versus FCoE versus NFS on 10 Gb Ethernet
infrastructure is not material. All FCoE adapters are converged adapters, referred to as
converged network adapters (CNAs). They support native 10 GbE (and therefore also
NFS and iSCSI) as well as FCoE simultaneously, and they appear in the ESXi host as
multiple 10 GbE network adapters and multiple Fiber Channel adapters. If you have
FCoE support, in effect you have it all. All protocol options are yours.

What is iSCSI?
iSCSI brings the idea of a block storage SAN to customers with no Fiber Channel
infrastructure. iSCSI is an IETF standard for encapsulating SCSI control and data in
TCP/IP packets, which in turn are encapsulated in Ethernet frames. TCP retransmission
is used to handle dropped Ethernet frames or significant transmission errors. Storage
traffic can be intense relative to most LAN traffic. This makes it important that you
minimize retransmits, minimize dropped frames, and ensure that you have
"bet-the-business" Ethernet infrastructure when using iSCSI.

What is iSCSI Qualified Name?


An iSCSI qualified name (IQN) serves the purpose of the WWN in Fibre Channel SANs;
it is the unique identifier for an iSCSI initiator, target, or LUN. The format of the
IQN is based on the iSCSI IETF standard.

What is NFS?

Stands for Network File System. NFS protocol is a standard originally developed by Sun
Microsystems to enable remote systems to access a file system on another host as if it
were locally attached. vSphere implements a client compliant with NFSv3 using TCP.
When NFS datastores are used by vSphere, no local file system (such as VMFS) is used.
The file system will be on the remote NFS server. This means that NFS datastores
need to handle the same access control and file-locking requirements that vSphere
delivers on block storage using the vSphere Virtual Machine File System, or VMFS. NFS
servers accomplish this through traditional NFS file locks.

VMware vSphare High Availability (HA):-

Network Load Balancing (NLB) clustering:- The Network Load Balancing configuration
involves an aggregation of servers that balances the requests for applications or services.
In a typical NLB cluster, all nodes are active participants in the cluster and are
consistently responding to requests for services. If one of the nodes in the NLB cluster
goes down, client connections are simply redirected to another available node in the NLB
cluster. NLB clusters are most commonly deployed as a means of providing enhanced
performance and availability.
Example- NLB on IIS server, ISA server, VPN server etc.

Windows Failover Clustering (WFC):- it is used solely for the sake of availability. Server
clusters or WFC do not provide performance enhancements outside of high availability. In
a typical server cluster, multiple nodes are configured to be able to own a service or
application resource, but only one node owns the resource at a given time. Each node
requires at least two network connections: one for the production network and one for
the cluster service heartbeat network between nodes. A common datastore is also
needed that houses the information accessible by the online active node and all the
other passive nodes. When the current active resource owner experiences a failure,
causing a loss in the heartbeat between the cluster nodes, another passive node becomes
active and assumes ownership of the resource to allow continued access with minimal
data loss.

Raw device mapping (RDM):- An RDM is a combination of direct access to a LUN, and
a normal virtual hard disk file.
An RDM can be configured in either Physical Compatibility mode or Virtual Compatibility
mode. The Physical Compatibility mode option allows the VM to have direct raw LUN
access. The Virtual Compatibility mode, however, is the hybrid configuration that allows
raw LUN access but only through a VMDK file acting as a proxy.
So, why choose one over the other? Because the RDM in Virtual Compatibility mode
uses a VMDK proxy file, it offers the advantage of allowing snapshots to be taken. By
using the Virtual Compatibility mode, you will gain the ability to use snapshots on top
of the raw LUN access in addition to any SAN-level snapshot or mirroring software.

Cluster with Windows Server 2008 VMs:-


Cluster in a Box:- The clustering of two VMs on the same ESXi host
Cluster across Boxes- The clustering of two VMs that are running on different ESXi
hosts.
Physical to Virtual Clustering- The clustering of a physical server and a VM together.

What is VMware HA?


As per VMware Definition:-
VMware® High Availability (HA) provides easy to use, cost effective high availability for
applications running in virtual machines. In the event of server failure, affected virtual
machines are automatically restarted on other production servers with spare capacity.
What are the prerequisites for HA to work?

1.Shared storage for the VMs running in HA cluster


2.Essentials plus, standard, Advanced, Enterprise and Enterprise Plus Licensing
3.Create VMHA enabled Cluster
4.Management network redundancy to avoid frequent isolation response in case of
temporary network issues (preferred, not a requirement)
FDM- vSphere HA uses a new VMware-developed tool known as Fault Domain Manager
(FDM) for supporting HA.

AAM- Earlier versions of vSphere used Automated Availability Manager (AAM), which
had a number of notable limitations, like a strong dependence on name resolution and
scalability limits.

What is the command to restart/start/stop the HA agent on an ESXi
host?
# /etc/init.d/vmware-fdm stop

# /etc/init.d/vmware-fdm start

# /etc/init.d/vmware-fdm restart

Where to locate HA related logs in case of troubleshooting?


/var/log/fdm.log
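For example, to follow the agent's activity live while reproducing an issue (a sketch):

# tail -f /var/log/fdm.log
(streams new vSphere HA agent log entries as they are written)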

HA-MASTER- When vSphere HA is enabled, the vSphere HA agents participate in an


election to pick a vSphere HA master. The vSphere HA master is responsible for a
number of key tasks within a vSphere HA–enabled cluster. If the existing master fails, a
new vSphere HA master is automatically elected. The new master will then take over
the responsibilities listed here, including communication with vCenter Server.

HA-Slaves- Once an ESXi host in a vSphere HA–enabled cluster elects a vSphere HA


master, all other hosts become slaves connected to that master.
HA Master's responsibilities:-
monitors slave hosts:-
sends heartbeat messages to the slave hosts:-
manages addition and removal of Hosts:-
monitors the power state of VMs:-

reports state information to vCenter Server:-


keeps list of protected VMs:-
notifies cluster configuration change to slave hosts:-

HA Slave Host's responsibilities:


HA master's health check:-
implements some vSphere HA features, such as local VM health checks:-
watches local VMs' runtime states:-

network partition:-
"Network partition" is the term used to describe the situation in which one or more
slave hosts cannot communicate with the master even though they still have network
connectivity between themselves. In this case, vSphere HA is able to use the heartbeat
datastores to detect whether the partitioned hosts are still live and whether action
needs to be taken to protect VMs on those hosts.

network isolation:-
Network isolation is the situation in which one or more slave hosts have lost all
management network connectivity. Isolated hosts can neither communicate with the
vSphere HA master nor communicate with other ESXi hosts.

Datastore heartbeating:-
In this case, the slave host uses heartbeat datastores to notify the master that it is
isolated. The slave host uses a special binary file, the host-X-poweron file, to notify
the master. The vSphere HA master can then take the appropriate action to ensure
that the VMs are protected.
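
These heartbeat files are visible from the ESXi shell; the layout below reflects 5.x-era
behavior, so treat the exact names as indicative rather than guaranteed:

# ls /vmfs/volumes/<heartbeat-datastore>/.vSphere-HA/

The FDM-<cluster-UUID> directory inside it holds the host-X-poweron and host-X-hb files.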

What is the maximum number of hosts per HA cluster?

The maximum number of hosts in an HA cluster is 32 (as of vSphere 5.x).

How is host isolation detected?

In an HA cluster, ESXi hosts use heartbeats to communicate with the other hosts in the
cluster. By default, a heartbeat is sent every second.
If the master host in an HA-enabled cluster stops receiving heartbeats from a slave
host, it assumes that the slave host may be isolated. The host that has lost contact then
checks whether it can ping its configured isolation address (the default gateway, by
default). If the ping fails, VMware HA executes the configured host isolation response.
In VMware vSphere 5.x, if the agent that loses contact is on a master host, isolation
is declared in 5 seconds; if it is on a slave, isolation is declared in 30 seconds.
In vSphere 5.x the master host also uses another technique, datastore heartbeating, to
check the liveness of slave hosts before declaring them failed. Datastore heartbeating
is used to determine whether a silent slave host has failed, is in a network partition,
or is network isolated. If the slave host has also stopped datastore heartbeating, it is
considered to have failed, and its virtual machines are restarted elsewhere.
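
You can run the same isolation-address test manually from the ESXi shell with vmkping
(the gateway address here is a placeholder):

# vmkping 192.168.1.254

If the default gateway is not a suitable target, the cluster advanced option
das.isolationaddress can point HA at a different IP.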

vSphere HA requirements:-
• Same shared storage for all hosts
• Identical virtual networking configuration on all hosts

Does HA use vMotion to transfer live VMs to other hosts when the source host fails?
No. HA restarts VMs on other hosts when the source host fails. It is not a live
migration and involves a few minutes of downtime.

vSphere High Availability admission control:-

It controls the behavior of the vSphere HA–enabled cluster with regard to cluster
capacity, or cluster tolerance. Specifically, should vSphere HA allow the user to power on
more VMs than it has the capacity to support in the event of a failure?

Or should the cluster prevent more VMs from being powered on than it can actually
protect? That is the basis for the admission control settings and, by extension, the
admission control policy.

Admission Control has two settings:-
• Enable: Disallow VM power-on operations that violate availability constraints.
• Disable: Allow VM power-on operations that violate availability constraints.

Admission control policy:-

When Admission Control is enabled, the Admission Control Policy settings control its
behavior by determining how much capacity must be reserved, and thus the limit the
cluster can reach while still being able to tolerate a failure.
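
A hypothetical illustration with the "Percentage of cluster resources reserved" policy:
in a 4-host cluster totaling 100 GHz of CPU and 400 GB of memory, a 25% setting keeps
25 GHz and 100 GB in reserve for failover. Admission control then refuses to power on
any VM whose reservation would push the total reserved capacity beyond the remaining
75 GHz or 300 GB.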

VMware HA isolation response:- When an ESXi host in a vSphere HA–enabled cluster is
isolated (that is, it can communicate neither with the master host nor with any other
ESXi hosts or network devices), the ESXi host triggers the isolation response configured
for it. The default isolation response is "Leave Powered On"; it can also be set to
"Shut Down" or "Power Off".

High Availability VM Monitoring:- vSphere HA has the ability to look for guest OS and
application failures. When a failure is detected, vSphere HA can restart the VM or react
to the failure of a specific application. The foundation for this functionality is built
into VMware Tools, which provides a series of heartbeats from the guest OS up to the
ESXi host on which that VM is running. By monitoring these heartbeats in conjunction
with disk and network I/O activity, vSphere HA can attempt to determine whether the
guest OS has failed.
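
From the ESXi shell you can query the guest heartbeat status that this mechanism relies
on; vim-cmd is present on ESXi, though its subcommand set varies by build, so treat this
as a sketch:

# vim-cmd vmsvc/getallvms                        (note the VM's vmid)
# vim-cmd vmsvc/get.guestheartbeatStatus <vmid>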

vSphere Fault Tolerance:- vSphere Fault Tolerance (FT) is the evolution of "continuous
availability" that works by utilizing VMware vLockstep technology to keep a primary
machine and a secondary machine in virtual lockstep. This virtual lockstep is based on
record/playback technology: vSphere FT streams data that is recorded on the primary and
then replayed on the secondary. By doing it this way, VMware has created a process that
matches instruction for instruction and memory for memory to produce identical results
on the secondary VM. The record process takes the data stream from the primary VM, and
the playback replays all inputs, such as keyboard actions and mouse clicks, on the
secondary VM.

Prerequisites or requirements of vSphere Fault Tolerance:-

Cluster level
• Same FT version or build number on at least 2 hosts
• HA must be enabled
• VMware EVC must be enabled
ESXi host level
• vSphere FT-compatible CPUs
• Hosts must be licensed for vSphere FT
• Hardware Virtualization (HV) must be enabled
• Access to the same datastores
• vSphere FT logging network with at least Gigabit Ethernet connectivity
VM level
• VMs with a single vCPU
• Supported guest OSs
• VM files on shared storage
• Thick provisioned (eagerzeroedthick) disks or a virtual mode RDM
• No VM snapshots
• No NIC passthrough or the older vlance NIC driver
• No paravirtualized kernel
• No USB devices, sound devices, serial ports, or parallel ports
• No mapped CD-ROM or floppy devices
• No N_Port ID Virtualization (NPIV)
• No nested page tables/extended page tables (NPT/EPT)
• Not a linked-clone VM
Operational changes or recommendations for FT:-
• Power management must be turned off in the host BIOS
• No Storage vMotion or Storage DRS for vSphere FT VMs
• No hot-plugging of devices
• No hardware changes (this includes no network changes)
• No snapshot-based backup solutions
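
To verify the Hardware Virtualization (HV) requirement listed above from the ESXi
shell, a commonly used check is:

# esxcfg-info | grep "HV Support"

A value of 3 indicates that HV is supported and enabled; lower values mean it is
unsupported or disabled in the BIOS.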

What are the basic troubleshooting steps when the HA agent install fails on hosts in an
HA cluster?

1. Check for network connectivity issues.

2. Check that DNS is configured properly.

3. Check that the HA-related ports are open in the firewall to allow the communication.

Ex: 8182 (TCP/UDP, inbound/outbound), traffic between hosts for vSphere High
Availability (vSphere HA).
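
On ESXi 5.x you can confirm the HA firewall ruleset is enabled with esxcli (the ruleset
name fdm is what 5.x builds use):

# esxcli network firewall ruleset list | grep -i fdm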

4. Troubleshoot FDM:-
A.> Verify that all the configuration files of the FDM agent were pushed successfully
from the vCenter Server to your ESXi host:
Location: /etc/opt/vmware/fdm
File names:
clusterconfig (cluster configuration),
compatlist (host compatibility list for virtual machines),
hostlist (host membership list), and
fdm.cfg.
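
A quick way to confirm these files landed on the host:

# ls -l /etc/opt/vmware/fdm/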

B.> Search the log files for any error messages:
/var/log/fdm.log or /var/run/log/fdm* (one log file for FDM operations)
/var/log/fdm-installer.log (FDM agent installation log)

5. Check that the network settings (port groups, switch configuration, and so on) are
properly configured and named exactly as on the other hosts in the cluster.
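
The vSwitch and port group names can be compared across hosts from the shell, for
example:

# esxcli network vswitch standard list
# esxcli network vswitch standard portgroup list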

6. Try to restart/stop/start the VMware HA agent on the affected host using the
commands below. In addition, you can also restart vpxa and the management agents on
the host (see the example after these commands).

# /etc/init.d/vmware-fdm stop

# /etc/init.d/vmware-fdm start

# /etc/init.d/vmware-fdm restart
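
To restart vpxa and the other management agents (note that services.sh briefly
disconnects the host from vCenter):

# /etc/init.d/hostd restart
# /etc/init.d/vpxa restart
# services.sh restart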

7. Right-click the affected host and click "Reconfigure for VMware HA" to reinstall
the HA agent on that particular host.
8. Remove the affected host from the cluster. Removing an ESXi host from the cluster
will not be allowed until that host is put into maintenance mode.

An alternative way to reinstall the agent is to go to the cluster settings, uncheck
VMware HA to turn off HA for the whole cluster, and then re-enable VMware HA to get
the agent reinstalled.
