Page 1 of 35

vSphere 5.0, as a cloud operating system, virtualizes the entire IT infrastructure, including servers, storage, and networks. It groups these heterogeneous resources and transforms a rigid, inflexible infrastructure into a simple, unified, and manageable set of elements in the virtualized environment. vSphere 5.0 logically comprises three layers: virtualization, management, and interface.
- The Virtualization (or Infrastructure) layer of vSphere 5.0 includes two sets of services: Infrastructure Services and Application Services.
- Infrastructure Services, such as compute, storage, and network services, abstract, aggregate, and allocate hardware or infrastructure resources. Examples include, but are not limited to, VMFS and the Distributed Switch.

- Application Services are the set of services provided to ensure availability, security, and scalability for applications. Examples include, but are not limited to, VMware vSphere High Availability (HA) and VMware Fault Tolerance (FT).

- The Management layer of vSphere 5.0 comprises vCenter Server, which acts as a central point for configuring, provisioning, and managing virtualized IT environments.

- The Interface layer of vSphere 5.0 comprises clients that allow a user to access the vSphere datacenter, for example, the vSphere Client and the vSphere Web Client.



A typical vSphere 5.0 datacenter consists of basic physical building blocks such as x86 computing
servers, storage networks and arrays, IP networks, a management server, and desktop clients. It
includes the following components:

Compute Servers: The computing servers are industry standard x86 servers that run ESXi 5.0 on the
bare metal. The ESXi 5.0 software provides resources for and runs the virtual machines.

Storage Networks and Arrays: Fibre Channel Storage Area Network (FC SAN) arrays, iSCSI (Internet Small Computer System Interface) SAN arrays, and Network Attached Storage (NAS) arrays are widely used storage technologies supported by vSphere 5.0 to meet different datacenter storage needs.

IP Networks: Each compute server can have multiple physical network adapters to provide high bandwidth and reliable networking to the entire vSphere datacenter.

vCenter Server: vCenter Server provides a single point of control to the datacenter. It provides
essential datacenter services, such as access control, performance monitoring, and configuration. It
unifies the resources from the individual computing servers to be shared among virtual machines in
the entire datacenter.

Management Clients: vSphere 5.0 provides several interfaces such as vSphere Client and vSphere
Web Client for datacenter management and virtual machine access.
Now, let us discuss the vSphere 5.0 Virtualization Layer.

vSphere 5.0 virtualizes and aggregates resources including servers, storage, and networks and
presents a uniform set of elements in the virtual environment. With vSphere 5.0, you can manage IT
resources like a shared utility and dynamically provision resources to different business units and
projects.






The vSphere 5.0 Virtual Datacenter consists of:

- Computing and memory resources called hosts, clusters, and resource pools.
- vSphere Distributed Services such as vSphere vMotion, vSphere Storage vMotion, vSphere DRS, vSphere Storage DRS, Storage I/O Control, VMware HA, and FT, which enable efficient and automated resource management and high availability for virtual machines.
- Storage resources called datastores and datastore clusters.
- Networking resources called standard virtual switches and distributed virtual switches.
- Virtual machines.



Hosts, clusters, and resource pools provide flexible and dynamic ways to organize the aggregated
computing and memory resources in the physical environment and provide those resources to virtual
machines.

A host represents the aggregate computing and memory resources of a physical x86 server running ESXi 5.0.

A cluster acts and can be managed as a single entity. It represents the aggregate computing and
memory resources of a group of physical x86 servers sharing the same network and storage arrays.

Resource pools are partitions of computing and memory resources from a single host or a cluster.
Resource pools can be hierarchical and nested. You can partition any resource pool into smaller
resource pools to divide and assign resources to different groups or for different purposes.
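This partitioning can be sketched in a few lines of Python; a minimal model only, with pool names and capacity figures that are purely illustrative:

```python
class ResourcePool:
    """Hierarchical resource pool (toy model)."""
    def __init__(self, name, cpu_mhz, mem_mb):
        self.name = name
        self.cpu_mhz = cpu_mhz          # unreserved CPU capacity in this pool
        self.mem_mb = mem_mb            # unreserved memory capacity in this pool
        self.children = []

    def partition(self, name, cpu_mhz, mem_mb):
        # Carve a child pool out of this pool's remaining capacity.
        if cpu_mhz > self.cpu_mhz or mem_mb > self.mem_mb:
            raise ValueError("insufficient capacity in parent pool")
        child = ResourcePool(name, cpu_mhz, mem_mb)
        self.cpu_mhz -= cpu_mhz
        self.mem_mb -= mem_mb
        self.children.append(child)
        return child

# A cluster's aggregate capacity, partitioned for different groups;
# pools can be nested to any depth.
cluster = ResourcePool("Cluster", cpu_mhz=20000, mem_mb=65536)
prod = cluster.partition("Production", cpu_mhz=12000, mem_mb=49152)
devtest = prod.partition("Dev-Test", cpu_mhz=4000, mem_mb=16384)
```

Partitioning a pool reduces the parent's unreserved capacity, which is why nested pools can never promise more resources than their ancestors hold.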




Let us move on to discuss Distributed Services. vMotion, Storage vMotion, DRS, Storage DRS, Storage
I/O Control, VMware HA, and FT are distributed services that enable efficient and automated
resource management and high availability for virtual machines.

Live migration using vMotion is the first step toward an automated and much more flexible IT environment because it truly frees the OS and application workloads from the underlying physical hardware. Virtual machines run on and consume resources from the ESXi host. vMotion enables the migration of live virtual machines from one physical server to another without service interruption; simply put, planned downtime for server maintenance largely goes away. This live migration capability allows virtual machines to move from a heavily loaded server to a lightly loaded one. The result is a more efficient assignment of resources. With vMotion, resources can be dynamically reallocated across physical hosts.





Storage vMotion enables live migration of a virtual machine's storage to a new datastore with no downtime.

Migrating single virtual machines and their disks from one datastore to another is possible because a
virtual machine is composed of a set of files. Even the virtual machine's disks are encapsulated in
files. Migrating a virtual machine's disks is accomplished by moving all the files associated with the
virtual machine from one datastore to another. Extending the vMotion technology to storage helps
the vSphere administrator to leverage storage tiering, perform tuning and balancing, and control
capacity with no application downtime.

vSphere 5.0 uses a mirrored-mode approach for Storage vMotion. In this new architecture, Storage vMotion copies disk blocks between the source and destination in a single pass, replacing the iterative pre-copy phase that the Changed Block Tracking (CBT) method used in earlier versions of vSphere. With I/O mirroring, any blocks that change on the source after they have been copied are mirrored to the destination. A block-level bitmap tracks the hot and cold blocks of the disk, that is, whether the data in a given block has already been mirrored to the destination disk.
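The single-pass copy with I/O mirroring can be modeled as follows; a simplified Python sketch in which the block layout and write timing are contrived for illustration:

```python
def storage_vmotion(source, dest, writes_during_copy):
    """One-pass block copy with I/O mirroring (toy model).

    writes_during_copy maps a copy-pass position to the guest writes
    (block, data) that arrive while that position is being copied.
    """
    mirrored = [False] * len(source)
    for i in range(len(source)):
        dest[i] = source[i]             # the single sequential copy pass
        mirrored[i] = True
        for block, data in writes_during_copy.get(i, []):
            source[block] = data        # a guest write always hits the source
            if mirrored[block]:
                dest[block] = data      # already-copied blocks are mirrored

src = ["a", "b", "c", "d"]
dst = [None] * 4
# A write to an already-copied block (0) is mirrored immediately; a write
# to a not-yet-copied block (3) is simply picked up when the pass reaches it.
storage_vmotion(src, dst, {1: [(0, "A")], 2: [(3, "D")]})
```

Because in-flight writes are either mirrored or still ahead of the copy cursor, a second iterative pass over changed blocks is unnecessary.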



vSphere HA provides high availability for virtual machines and the applications running within them by pooling the ESXi hosts they reside on into a cluster. Hosts in the cluster are continuously monitored. In the event of a failure, vSphere HA attempts to restart the virtual machines from the failed host on alternate hosts.

When a host is added to a vSphere HA cluster, an agent known as the Fault Domain Manager (FDM) starts on it. These agents communicate among themselves and exchange state and status information. An agent can perform the role of a master or a slave; the role is determined through an election algorithm that selects the agent that will act as the master, while all other hosts act as slaves. As the master, the agent acts as the interface to vCenter Server, monitors the slave hosts and any virtual machines running on them, and ensures that information is distributed among the other nodes in the cluster as needed. Slave hosts send heartbeats to the master host so that it can track their health and availability; they also monitor the virtual machines running on them and update the master about their status. If the master host becomes unavailable, the slave hosts participate in another election to select a new master.
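The election can be sketched as follows. The real FDM algorithm favors the host with access to the greatest number of datastores, breaking ties on host identifier; this Python model captures only that basic idea, and the host IDs and datastore counts below are invented:

```python
def elect_master(agents):
    """Pick the master FDM agent: the host that can access the most
    datastores wins, ties broken by the highest host identifier
    (a simplification of the real election)."""
    return max(agents, key=lambda a: (a["datastores"], a["host_id"]))

agents = [
    {"host_id": "host-10", "datastores": 4},
    {"host_id": "host-22", "datastores": 6},
    {"host_id": "host-31", "datastores": 6},   # wins the tie on host ID
]
master = elect_master(agents)
slaves = [a for a in agents if a is not master]
```

Preferring the host that sees the most datastores makes the master more likely to retain heartbeat-datastore connectivity if the management network fails.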

If a host fails, the virtual machines hosted on it are restarted on other hosts. vSphere HA also detects other issues, such as an isolated host or a network partition, and takes action if required. While dealing with failures, vSphere HA can now take advantage of heartbeat datastores, a new feature that allows communication between the nodes in the cluster in the event of a failure of the management network.



Using VMware's vLockstep technology, FT on the ESXi host platform provides continuous availability by protecting a virtual machine (the Primary virtual machine) with a shadow copy (the Secondary virtual machine). The Secondary virtual machine runs in virtual lockstep on a separate host. Inputs and events performed on the Primary virtual machine are recorded and replayed on the Secondary virtual machine, ensuring that the two remain in an identical state. For example, mouse clicks and keystrokes are recorded on the Primary virtual machine and replayed on the Secondary virtual machine.

The Secondary virtual machine can take over execution at any point without service interruption or
loss of data because it is in virtual lockstep with the primary virtual machine.
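The record-and-replay idea behind vLockstep can be illustrated with a toy deterministic event log; the event types and state fields here are invented purely for the example:

```python
def apply_event(state, event):
    """Apply one deterministic input event to a (toy) VM state."""
    kind, payload = event
    if kind == "keystroke":
        state["buffer"] += payload
    elif kind == "click":
        state["clicks"] += 1

# Events recorded on the Primary are replayed in the same order on the
# Secondary, so both copies end in an identical state.
event_log = [("keystroke", "l"), ("keystroke", "s"), ("click", None)]
primary = {"buffer": "", "clicks": 0}
secondary = {"buffer": "", "clicks": 0}
for e in event_log:
    apply_event(primary, e)
for e in event_log:
    apply_event(secondary, e)
```

Determinism is the key property: given identical starting state and the same ordered event log, both virtual machines compute identical results, so the Secondary can take over at any point.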







DRS helps you manage a cluster of physical hosts as a single compute resource by balancing CPU and memory workload across the physical hosts. You can assign a virtual machine to a cluster, and DRS finds an appropriate host on which to run the virtual machine. DRS places virtual machines so that load across the cluster is balanced and cluster-wide resource allocation policies (for example, reservations, priorities, and limits) are enforced. When a virtual machine is powered on, DRS performs an initial placement of the virtual machine on a host. As cluster conditions change (for example, load and available resources), DRS uses vMotion to migrate virtual machines to other hosts as necessary.
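A greatly simplified model of initial placement might look like this; the host names and capacity figures are illustrative, and real DRS weighs many more factors, such as reservations, memory, and migration cost:

```python
def place_vm(hosts, demand_mhz):
    """Initial placement sketch: pick the host with the most spare CPU
    that can satisfy the new VM's demand."""
    candidates = [h for h in hosts if h["capacity"] - h["load"] >= demand_mhz]
    if not candidates:
        return None                     # no host can satisfy the demand
    best = max(candidates, key=lambda h: h["capacity"] - h["load"])
    best["load"] += demand_mhz          # account for the placed VM
    return best["name"]

hosts = [
    {"name": "esxi-01", "capacity": 10000, "load": 8000},
    {"name": "esxi-02", "capacity": 10000, "load": 3000},
]
chosen = place_vm(hosts, demand_mhz=2000)
```

The same spare-capacity comparison, run continuously against live load figures, is what drives the ongoing vMotion-based rebalancing described above.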

When you add a new physical server to a cluster, DRS enables virtual machines to immediately take
advantage of the new resources because it distributes the running virtual machines.






Storage DRS provides the same benefits in storage as available in DRS, such as resource aggregation,
automated load balancing, and bottleneck avoidance. You can group and manage a cluster of similar
datastores as a single load-balanced storage resource called a datastore cluster. Storage DRS collects
the resource usage information for this datastore cluster and makes recommendations to you about
the initial VMDK file placement and migration to avoid I/O and space utilization bottlenecks on the
datastores in the cluster.

Storage DRS also includes affinity and anti-affinity rules for virtual machines and VMDK files. VMDK affinity rules keep a virtual machine's VMDK files together on the same LUN; this is the default rule. VMDK anti-affinity rules keep a virtual machine's VMDK files on separate LUNs. Virtual machine anti-affinity rules keep the disks of different virtual machines on separate LUNs. Affinity rules are hard rules and cannot be violated.
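A rule check over a candidate placement can be sketched as follows; the VMDK and datastore names are hypothetical:

```python
def violations(placement, rules):
    """Check a VMDK-to-datastore placement against affinity rules.

    placement: {vmdk_name: datastore_name}
    rules: list of (kind, [vmdk names]) where kind is
           "affinity" (keep together) or "anti-affinity" (keep apart).
    """
    broken = []
    for kind, disks in rules:
        datastores = {placement[d] for d in disks}
        if kind == "affinity" and len(datastores) > 1:
            broken.append((kind, disks))        # disks were split up
        if kind == "anti-affinity" and len(datastores) < len(disks):
            broken.append((kind, disks))        # disks share a datastore
    return broken

placement = {"vm1.vmdk": "ds1", "vm1_1.vmdk": "ds1", "vm2.vmdk": "ds1"}
rules = [("affinity", ["vm1.vmdk", "vm1_1.vmdk"]),      # satisfied
         ("anti-affinity", ["vm1.vmdk", "vm2.vmdk"])]   # violated: same datastore
```

Because affinity rules are hard constraints, a placement that produces any violation would be rejected rather than merely penalized.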

When you apply Storage DRS recommendations, vCenter Server uses Storage vMotion to migrate
virtual machine disks to other datastores in the datastore cluster to balance the resources.



When DPM is enabled, the system compares cluster-level and host-level capacity to the demands of the virtual machines running in the cluster. If the resource demands of the running virtual machines can be met by a subset of hosts in the cluster, DPM migrates the virtual machines to this subset and powers down the hosts that are not needed. When resource demands increase, DPM powers these hosts back on and migrates virtual machines to them. This dynamic cluster right-sizing reduces the power consumption of the cluster, and therefore its operating expenses, without sacrificing virtual machine performance or availability.
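The right-sizing decision can be modeled roughly as follows; a toy Python sketch in which host names and capacities are invented, and which ignores factors real DPM weighs, such as vMotion cost and power-on latency:

```python
def dpm_plan(hosts, demand_mhz):
    """Keep just enough hosts powered on to cover current demand;
    the remainder become power-down candidates."""
    keep, powered_capacity = [], 0
    for h in sorted(hosts, key=lambda h: h["capacity"], reverse=True):
        if powered_capacity >= demand_mhz:
            break                       # demand already covered
        keep.append(h["name"])
        powered_capacity += h["capacity"]
    power_off = [h["name"] for h in hosts if h["name"] not in keep]
    return keep, power_off

hosts = [{"name": "esxi-01", "capacity": 10000},
         {"name": "esxi-02", "capacity": 10000},
         {"name": "esxi-03", "capacity": 10000}]
keep, power_off = dpm_plan(hosts, demand_mhz=15000)
```

When demand later rises above the powered-on capacity, the inverse step runs: hosts from the power-off list are brought back and virtual machines are migrated onto them.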










vSphere 5.0 provides a rich set of virtual networking elements that makes networking the virtual machines in the datacenter as easy and simple as in the physical environment. Furthermore, it enables a new set of capabilities not possible in the physical environment, because many of the limitations of the physical world do not apply.

The virtual environment provides similar networking elements as the physical world, such as virtual
network interface cards, vSphere Distributed Switches (VDS), distributed port groups, vSphere
Standard Switches (VSS), and port groups.

Like a physical machine, each virtual machine has its own virtual NIC called a vNIC. The operating
system and applications talk to the vNIC through a standard device driver or a VMware optimized
device driver just as though the vNIC is a physical NIC. To the outside world, the vNIC has its own
MAC address and one or more IP addresses and responds to the standard Ethernet protocol exactly
as a physical NIC would. In fact, an outside agent can determine that it is communicating with a
virtual machine only if it checks the 6-byte vendor identifier in the MAC address. A virtual switch, or
vSwitch, works like a layer-2 physical switch. With VSS, each host maintains its own virtual switch
configuration while in a VDS, a single virtual switch configuration spans many hosts.



A VDS functions as a single switch across all associated hosts. This enables you to set network
configurations that span across all member hosts, and allows virtual machines to maintain consistent
network configuration as they migrate across multiple hosts.

A distributed switch can forward traffic internally between virtual machines or link to an external
network by connecting to physical Ethernet adapters, also known as uplink adapters. Each
distributed switch can also have one or more distributed port groups assigned to it. The distributed
port groups combine multiple ports under a common configuration and provide a stable anchor point
for virtual machines connecting to labeled networks.

If Network I/O Control is enabled, VDS traffic is divided into network resource pools such as FT traffic, iSCSI traffic, vMotion traffic, management traffic, NFS traffic, and virtual machine traffic. These network resource pools determine the priority that different network traffic types are given on a VDS, by setting the physical adapter shares and host limits for each network resource pool.
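Share-based allocation of this kind can be sketched as follows; the pool names, share values, and limit figure are illustrative, and real NetIOC allocates bandwidth per physical uplink among only the pools actively sending traffic:

```python
def share_bandwidth(link_mbps, pools):
    """Divide an uplink's bandwidth among contending resource pools in
    proportion to their shares, capped by per-pool host limits (sketch)."""
    total_shares = sum(p["shares"] for p in pools)
    return {p["name"]: min(link_mbps * p["shares"] / total_shares,
                           p.get("limit_mbps", float("inf")))
            for p in pools}

pools = [{"name": "vm", "shares": 100},
         {"name": "vmotion", "shares": 50, "limit_mbps": 2000},
         {"name": "iscsi", "shares": 50}]
alloc = share_bandwidth(10000, pools)   # a 10 GbE uplink
```

Shares matter only under contention: a pool with few shares can still use the whole link when the others are idle, while a limit caps a pool even when bandwidth is free.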

vSphere 5.0 incorporates new enhancements in VDS:

- Enhanced Network I/O Control:
Network I/O Control (NetIOC) is a traffic-management feature of the VDS. NetIOC implements a software scheduler within the VDS to isolate and prioritize specific traffic types contending for bandwidth on the uplinks that connect ESXi hosts to the physical network. vSphere 5.0 builds on this feature by allowing user-defined network resource pools, enabling multi-tenancy deployments, and by bridging virtual and physical infrastructure QoS with per-resource-pool 802.1p tagging.

- NetFlow: The distributed switch now provides improved visibility into virtual machine traffic through NetFlow.

- Enhanced Monitoring: The VDS provides enhanced monitoring and troubleshooting capabilities through Ethernet Switched Port Analyzer (SPAN) and the Link Layer Discovery Protocol (LLDP).




With VSS, each host maintains its own virtual switch configuration; these switches handle network traffic at the host level in a vSphere environment. A VSS can route traffic internally between virtual machines and link to external networks.





VMware vShield is a suite of security virtual appliances and APIs that are built to work with vSphere,
protecting virtualized datacenters from attacks and misuse.

The vShield suite includes vShield Zones, vShield Edge, vShield App, and vShield Endpoint.

- vShield Zones provides firewall protection for traffic between virtual machines. For each Zones firewall rule, you can specify the source IP, destination IP, source port, destination port, and service.
- vShield Edge provides network edge security and gateway services to isolate the virtual machines in
a port group, distributed port group, or Cisco Nexus 1000V.

vShield Edge connects isolated, stub networks to shared, uplink networks by providing common
gateway services such as DHCP, VPN, NAT, and load balancing. Common deployments of vShield
Edge include the DMZ, VPN extranets, and multitenant cloud environments where vShield Edge
provides perimeter security for virtual datacenters (VDCs).

- vShield App is an interior, virtual-NIC-level firewall that allows you to create access control policies
regardless of network topology. vShield App monitors all traffic in and out of an ESXi host, including
between virtual machines in the same port group. vShield App includes traffic analysis and container-
based policy creation.

- vShield Endpoint delivers an introspection-based antivirus solution. vShield Endpoint uses the
hypervisor to scan guest virtual machines from the outside without an agent. vShield
Endpoint avoids resource bottlenecks while optimizing memory use.




The vSphere 5.0 storage architecture consists of layers of abstraction that hide and manage the complexity and differences among physical storage subsystems. The VMkernel contains drivers for the physical storage and works with the Virtual Machine Monitor (VMM) to provide virtual disks to the virtual machines. The VMM presents virtual SCSI disks to the applications and guest operating systems inside each virtual machine. A guest operating system sees only a virtual disk, presented through a virtual BusLogic or LSI Logic disk controller, and is not exposed to the FC SAN, iSCSI SAN, direct-attached storage, or NAS behind it.

From outside the virtual machine, virtual disks are simply large files identified by the extension .vmdk. They can be copied, archived, and backed up as easily as any other files. Virtual disk files are also hardware independent: for example, you can move one from a locally attached SCSI disk used by one virtual machine to a SAN, where the exact same virtual disk can be used by a second virtual machine.



The Virtual Machine File System (VMFS) is a clustered file system that leverages shared storage to
allow multiple physical hosts to read and write to the same storage simultaneously. VMFS provides
on-disk locking to ensure that the same virtual machine is not powered on by multiple servers at the
same time. If a physical host fails, the on-disk lock for each virtual machine is released so that virtual
machines can be restarted on other physical hosts.
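The locking behavior can be modeled in a few lines; a toy sketch, since VMFS actually locks the individual files that make up a virtual machine rather than a single per-VM entry:

```python
class VMFSLocks:
    """Toy model of VMFS on-disk locking for virtual machines."""
    def __init__(self):
        self.owner = {}                 # vm name -> host holding the lock

    def power_on(self, vm, host):
        if vm in self.owner:
            return False                # another host already runs this VM
        self.owner[vm] = host
        return True

    def host_failed(self, host):
        # Release the failed host's locks so its VMs can restart elsewhere.
        for vm in [v for v, h in self.owner.items() if h == host]:
            del self.owner[vm]

locks = VMFSLocks()
locks.power_on("web01", "esxi-01")
second_try = locks.power_on("web01", "esxi-02")   # refused: lock is held
locks.host_failed("esxi-01")
restart = locks.power_on("web01", "esxi-02")      # succeeds after release
```

The lock is what makes concurrent shared-storage access safe: many hosts can read and write the same VMFS volume, but only one at a time can run a given virtual machine.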

VMFS also features failure consistency and recovery mechanisms, such as distributed journaling, a failure-consistent virtual machine I/O path, and virtual machine state snapshots. These mechanisms aid in quickly identifying the cause of, and recovering from, virtual machine, physical host, and storage subsystem failures. VMFS also supports Raw Device Mapping (RDM), which provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem (FC or iSCSI only).

vSphere 5.0 introduces a new version of VMFS, VMFS5, which improves on its predecessors, VMFS3 and VMFS2, with support for LUNs larger than 2 TB, increased resource limits, greater scalability, and online in-place upgrades.


Virtual disk Thin Provisioning (TP) makes it possible to start with thin virtual machine disks: VMFS does not reserve disk space until it is needed. This is distinct from array-based thin volumes.

This technology gives the administrator more flexibility in disk space provisioning, including:

- Improved disk utilization
- Improved disk-related operations, such as backup
- Tools for monitoring actual and promised disk usage

The implementation of alarms and alerts in vCenter Server, together with the VMFS volume grow feature, enables dynamic expansion of shared storage pools, cutting down allocated but unused space within the datastore.
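The difference from thick provisioning can be illustrated with a toy model in which datastore blocks are allocated only on first write; the 1 MB block granularity and disk sizes below are illustrative:

```python
class ThinDisk:
    """Thin-provisioned virtual disk (toy model): datastore space is
    consumed only when the guest first writes a block."""
    def __init__(self, provisioned_mb):
        self.provisioned_mb = provisioned_mb    # size the guest sees
        self.allocated = set()                  # blocks actually backed on disk

    def write(self, block):
        if block >= self.provisioned_mb:
            raise ValueError("write beyond provisioned size")
        self.allocated.add(block)               # space is consumed only now

    def used_mb(self):
        return len(self.allocated)

disk = ThinDisk(provisioned_mb=40960)   # guest sees a 40 GB disk
for b in range(1024):                   # but has written only 1 GB so far
    disk.write(b)
```

The gap between provisioned and used space is exactly what the monitoring tools and vCenter Server alarms mentioned above are meant to watch, since the datastore can be oversubscribed.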






vCenter Server 5.0 forms the management layer of vSphere 5.0. vCenter Server provides a convenient single point of control for the datacenter. It runs on top of a 64-bit Windows operating system to provide many essential datacenter services, such as access control, performance monitoring, and configuration. It unifies the resources from the individual computing servers so that they can be shared among virtual machines in the entire datacenter. It accomplishes this by managing the assignment of virtual machines to the computing servers and the assignment of resources to the virtual machines within a given computing server, based on the policies set by the system administrator.

A low-cost alternative to the Windows implementation of vCenter Server is available in the form of the vCenter Server Appliance, a vCenter Server implementation running on a pre-configured Linux-based appliance.

Computing servers will continue to function even in the unlikely event that vCenter Server becomes
unreachable, for example, the network is severed. The computing servers can be managed
separately and will continue to run the virtual machines assigned to them based on the resource
assignment that was last set. After the vCenter Server becomes reachable, it can manage the
datacenter as a whole again.

vCenter Server 5.0 provides a selection of interfaces for datacenter management and virtual machine
access. Users can choose the interface that best meets their needs, such as vSphere Client 5.0,
vSphere Web Client, or terminal services such as Windows Terminal Services or Xterm.

While vCenter Server can run without additional components, such as the Extensions and Add-Ons,
most datacenters include them as they simplify management of a virtualized IT environment.



The vCenter Server components include User Access Control, Core Services, Distributed Services,
vCenter Server Plugins, and vCenter Server Interfaces.

- The User Access Control component allows the system administrator to create and manage
different levels of access to vCenter Server for different classes of users.
- Core Services are basic management services for a virtual datacenter.
- Distributed Services are solutions that extend vSphere capabilities beyond a single physical server.
vMotion, Storage vMotion, DRS, VMware HA, and FT are distributed services that enable efficient
and automated resource management and high availability for virtual machines.
- vCenter Server Plugins extend the capabilities of vCenter Server by providing more features and functions.
- vCenter Server Interfaces integrate vCenter Server with third-party products and applications.

vCenter Server's core services include several basic management services for a virtual datacenter:

- Resources and VM Inventory Management Service organizes ESXi hosts, virtual machines, and
resources in the virtual environment and facilitates their management.
- Task Scheduler Service schedules actions such as vMotion to happen at a given time.
- Statistics and Logging Service logs and reports on the performance and resource utilization statistics
of datacenter elements, such as virtual machines and hosts.
- Alarms and Event Management Service tracks and warns users about potential resource over-utilization or event conditions.
- Virtual Machine Provisioning Service guides and automates the provisioning of ESXi hosts and
virtual machines.
- Host and Virtual Machine Configuration Service allows the configuration of hosts and virtual
machines.

The database interface connects to Oracle, Microsoft SQL Server, or IBM DB2 to store information,
such as host configurations, resources and inventory for ESXi hosts and virtual machines,
performance statistics, events, alarms, user permissions, and roles.


The database versions, along with patch requirements, supported by vCenter Server are:

- IBM DB2 9.5: Fix Pack 5 is required; Fix Pack 7 is recommended.
- IBM DB2 9.7: Fix Pack 2 is required; Fix Pack 3a is recommended.
- Microsoft SQL Server 2008 R2 Express: bundled with vCenter Server and used for small deployments containing up to 5 hosts and 50 virtual machines.
- Microsoft SQL Server 2005: 32-bit and 64-bit versions are supported; SP3 is required, but SP4 is recommended.
- Microsoft SQL Server 2008: 32-bit and 64-bit versions are supported; SP1 is required, but SP2 is recommended.
- Microsoft SQL Server 2008 R2

The vSphere API connects with third-party solutions and VMware management clients. The vSphere
5.0 API is public and therefore available for custom application development. Using one API for both
third-party applications and ESXi host communication reduces the need to maintain two APIs and
assures that the API upon which custom applications are based will always be up-to-date.

The Active Directory interface connects to Active Directory to obtain user access control information.
User Access Control allows the system administrator to create and manage different levels of access
to the vCenter Server for different users. For example, there might be a user class that manages the
configuration of the physical servers in the datacenter and there might be a different user class that
manages only virtual resources within a particular resource pool.
vMotion, Storage vMotion, VMware HA, FT, DRS, Storage DRS, and DPM are distributed services that extend vSphere 5.0's capabilities to the next level. They enable fine-grained, policy-driven resource allocation and high availability across the entire virtual datacenter. Distributed Services allow these solutions to be configured and managed centrally from vCenter Server.

These Distributed Services enable an IT organization to establish and meet their production Service
Level Agreements with their customers in a cost effective manner.
Plugins are applications that you can install on top of vCenter Server 5.0 to add features and functionality. This functionality spans infrastructure management, development and test environment management, service delivery management, application and performance management, and disaster recovery management.

Plugins such as vCenter Storage Monitoring, vCenter Hardware Status, and vCenter Service Status are
installed as part of the base vCenter Server product. Other plugins such as vSphere Update Manager,
vShield Zones, and Data Recovery are packaged separately and need to be installed separately.




Plugins introduced in vCenter Server 5.0 are:

- Auto Deploy simplifies the task of managing ESXi installation and upgrade for hundreds of
machines by combining the features of host profiles, image builder, and PXE. New hosts are
automatically provisioned based on rules defined by the user.

- Authentication Proxy eliminates the need to store Active Directory credentials on an ESXi host. You
can join a host to a domain by supplying the domain name of the Active Directory server and the IP
address of the authentication proxy server.

- Network Core Dump allows you to configure an ESXi host to dump the VMkernel memory to a network server when the system encounters a critical failure.

- vSphere Syslog Collector allows ESXi system logs to be directed to a server on the network, rather than to a local disk.




The VMware vSphere Update Manager enables centralized management for vSphere 5.0 and offers
support for VMware ESX/ESXi hosts, virtual machines, and virtual appliances. With Update Manager,
you can perform tasks such as:

- Upgrade and patch ESX/ESXi hosts.
- Install and update third-party software on hosts.
- Upgrade virtual machine hardware, VMware Tools, and virtual appliances.











When you first add a host to vCenter Server, the vCenter Server sends a vCenter Server agent to run
on the host.

The vCenter Server agent acts as a small vCenter Server to perform several functions, including:
- Relay and enforce resource allocation decisions made in vCenter Server, including those that the
DRS engine sends.

- Pass virtual machine provisioning and configuration change commands to the host agent.
- Pass host configuration change commands to the host agent.
- Collect performance statistics, alarms, and error conditions from the host agent and send
them to the vCenter Server.
- Allow management of ESXi hosts at different release versions.






You can access a vSphere datacenter through the vSphere Client, through a Web browser with
vSphere Web Client, through a command-line interface, or through terminal services such as
Windows Terminal Services.

The vSphere Client is a required component and the primary interface for creating, managing, and monitoring ESXi hosts, virtual machines, and their resources. It also provides console access to virtual machines. The vSphere Client is installed on a Windows machine with network access to the ESXi host or the vCenter Server system. Depending on whether the vSphere Client connects to an ESXi host or to a vCenter Server system, the interface displays slightly different options. Although all management activities are performed by vCenter Server, you must use the vSphere Client to monitor, manage, and control the server. An administrator can log in to a single vSphere Client session to manage multiple vCenter Servers or ESXi hosts. This is made possible through a technique called vCenter Linked Mode.

The vSphere Web Client is a browser-based interface for configuring and administering virtual
machines. It enables you to connect to a vCenter Server system to manage an ESXi host through a
browser.

vSphere includes command-line tools such as vSphere PowerCLI and the vSphere CLI (vCLI) for provisioning, managing, and monitoring hosts and virtual machines. vSphere SDKs provide standard interfaces for VMware and third-party solutions to access vSphere.

Terminal services such as Windows Terminal Services or Xterm provide direct access to virtual
machine consoles. Accessing ESXi hosts directly should be done only by physical host administrators
in special circumstances.



The vSphere Client is an application installed locally on a Windows OS and is used to perform the majority of administrative tasks within the vSphere datacenter. It uses the vSphere API to access vCenter Server. After the user is authenticated, a session starts in vCenter Server, and the user sees only the resources and virtual machines assigned to that user.

The vSphere Web Client is a Web application, which can connect to the vCenter Server only and
includes a subset of the functionality available in vSphere Client. It can perform virtual machine
deployment and some monitoring functions only. You cannot use this client to configure hosts,
clusters, networks, datastores, or datastore clusters.








The management of multiple vCenter Server instances is possible through the vSphere Client using
vCenter Linked Mode. The reasons why you may have multiple vCenter Server instances include:

- Remote offices
- Multiple datacenters
- Disaster recovery purposes

When you log in through one vSphere Client, it automatically logs in to all vCenter Server instances for which you have valid credentials and provides visibility into the entire inventory. You can then easily navigate across multiple datacenters and take action from the same console.

You cannot migrate hosts or virtual machines between the vCenter Server systems connected in
Linked Mode.


vSphere 5.0 offers two versions of ESXi:

- ESXi Installable Edition can be installed in a number of ways such as Interactive, Script,
and Auto Deploy.

- ESXi Embedded version comes pre-installed as firmware embedded on hardware that
you purchase from a vendor.

Note that vSphere 5.0 includes only the ESXi architecture.


ESXi is the next-generation hypervisor, providing a new foundation for virtual infrastructure. This
innovative architecture operates independently from any general-purpose OS, offering improved
security, increased reliability, and simplified management. The compact architecture is designed for
integration directly into virtualization-optimized server hardware, enabling rapid installation,
configuration, and deployment.

The ESXi architecture comprises the underlying OS, called the VMkernel, and the processes that run
on top of it. The VMkernel provides the means for running all processes on the system, including
management applications and agents as well as virtual machines. It controls all hardware devices on
the server and manages resources for the applications.



The VMkernel is a POSIX-like OS developed by VMware that provides certain functionalities similar
to those found in other OSs, such as process creation and control, signals, a file system, and process
threads. It is designed specifically to support running multiple virtual machines and provides core
functionalities such as:
- Resource scheduling
- I/O stacks
- Device drivers
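Resource scheduling in the VMkernel is, at its core, proportional-share allocation. The sketch below is a minimal Python illustration of that general idea only; the function name, share values, and capacity figure are made up for the example and are not VMkernel internals.

```python
# Proportional-share allocation: each VM receives capacity in
# proportion to its share count (illustrative model, not VMkernel code).
def allocate(capacity_mhz, shares):
    """Split CPU capacity among VMs in proportion to their shares."""
    total = sum(shares.values())
    return {vm: capacity_mhz * s / total for vm, s in shares.items()}

# A hypothetical 10 GHz host with one high-share VM and two normal VMs.
grant = allocate(10000, {"vm-a": 2000, "vm-b": 1000, "vm-c": 1000})
# vm-a receives twice the CPU time of vm-b or vm-c
```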


The key component of each ESXi host is a process called the Virtual Machine Monitor (VMM). One
VMM runs in the VMkernel for each powered-on virtual machine. When a virtual machine starts
running, control transfers to the VMM, which in turn begins executing instructions from the virtual
machine. The VMkernel sets the system state so that the VMM runs directly on the hardware.
However, the OS in the virtual machine has no knowledge of this transfer and believes it is running
directly on the hardware.


The VMM allows the virtual machine to perform as if it were a physical machine, yet it remains
isolated from the host and from other virtual machines. So, if a virtual machine crashes, it does not
affect the host machine or any other virtual machine on that host.

The ESXi host provides a base x86 platform, and you choose the devices to install on that platform.
The base virtual machine provides everything needed for compliance with x86 standards, from the
motherboard up. The ESXi host lets you choose from a collection of virtual hardware that includes
the devices shown here.

You also select the amount of RAM for the virtual machine to use. The processors that the virtual
machine sees are the same as those on the physical host. All the configuration details of a virtual
machine are recorded in a small configuration file stored on the ESXi host.

The VMkernel maps the virtual machine's devices to the physical devices on the host machine. For
example, a virtual SCSI disk drive can be mapped to a virtual disk file on a SAN LUN attached to the
ESXi host, or a virtual Ethernet NIC can be connected to a specific host NIC through a virtual switch
port.

All the files that make up a virtual machine are typically stored in a single directory on either an NFS
or VMFS file system. These include:

- A non-volatile RAM file that stores the state of the virtual machine's BIOS.
- Log files, one or more virtual disk files, a configuration file, and a swap file for memory allocation.
- Optionally, snapshot files, which preserve the machine at a specific point in time.

The key files to move to a new datastore are the vmx configuration file and the vmdk disk files.
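The file types listed above can be summarized in a small lookup table. This is an illustrative sketch only: the file name used in the example is invented, and the role descriptions simply restate the text.

```python
import pathlib

# Map common VM file extensions to their roles, as described in the text.
VM_FILE_ROLES = {
    ".vmx":   "configuration file (key file to move with the VM)",
    ".vmdk":  "virtual disk file (key file to move with the VM)",
    ".nvram": "non-volatile RAM file holding the BIOS state",
    ".log":   "log file",
    ".vswp":  "swap file for memory allocation",
    ".vmsn":  "snapshot file preserving the VM at a point in time",
}

def describe(filename):
    """Return the role of a VM file based on its extension."""
    suffix = pathlib.Path(filename).suffix
    return VM_FILE_ROLES.get(suffix, "unknown file type")

# Example: describe("web01.vmdk") identifies a virtual disk file.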


An important thing to understand is that the virtual machine platform provided by the ESXi host is
independent of its physical hardware. Every VMware platform product provides the same set of
virtualized hardware regardless of the system on which it is running. That means you can move a
virtual machine from VMware Server installed on an HP server with an AMD processor and local SCSI
disks to an IBM server with an Intel processor and an FC SAN, and the virtual machine can power on
and run unaffected. That uniform layer of virtual hardware is the key to VMware virtual machine
portability and ease of provisioning.

ESXi 5.0 introduces the new virtual machine hardware version 8, which:

- Supports virtual machines with up to 32 virtual CPUs
- Assigns up to 1 TB of RAM to ESXi 5.0 virtual machines
- Supports 3D graphics to run Windows Aero and basic 3D applications in virtual machines
- Supports USB 3.0 devices
- Allows virtual machines running on ESXi 5.0 to boot from and use the Unified Extensible
Firmware Interface (UEFI)

Virtual machines with hardware versions earlier than version 8 can run on ESXi 5.0 hosts, but do not
have all the capabilities available in hardware version 8.

These are the standard VMware device drivers as seen in any Windows virtual machine. Standard
virtual device drivers allow portability without having to reconfigure the OS of each virtual machine.
If you copy a virtual machine's files to any other ESXi host, it will run without the need to
reconfigure the hardware, even if the underlying hardware is totally different.


VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest OS
and improves management of the virtual machine. Installing VMware Tools in the guest OS is vital.
Although the guest OS can run without VMware Tools, you lose important functionality and
convenience. The VMware Tools service performs various duties within the guest OS and starts
automatically when the guest OS boots.

The functions that the service performs include:

- Passing messages from the ESXi host to the guest OS.
- Sending a heartbeat to the ESXi host so that it knows the guest OS is running.
- Synchronizing the time in the guest OS with the time in the host OS.
- Running scripts and executing commands in a virtual machine.
- Providing support for guest OS-bound calls created with the VMware VIX API, except in
Mac OS X guest OSs.
- Allowing the pointer to move freely between the guest and the workstation in Windows
guest OSs.
- Helping create snapshots used by certain backup applications in Windows guest OSs.
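The heartbeat function above follows a familiar liveness pattern: the host considers the guest OS running only while heartbeats keep arriving. The toy model below is an assumption-laden sketch of that pattern; the class name, method names, and timeout value are invented for illustration and are not VMware Tools internals.

```python
import time

class HeartbeatMonitor:
    """Toy model: the host side of a Tools-style heartbeat check."""

    def __init__(self, timeout_s=6.0):
        self.timeout_s = timeout_s          # illustrative timeout, not VMware's
        self.last_beat = time.monotonic()

    def beat(self):
        # Called periodically by the in-guest Tools service.
        self.last_beat = time.monotonic()

    def guest_alive(self):
        # Checked by the host: has a heartbeat arrived recently?
        return time.monotonic() - self.last_beat < self.timeout_s

mon = HeartbeatMonitor()
mon.beat()
# Immediately after a heartbeat, the guest is considered running.
```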

When installed in the guest OS, VMware Tools also provides the VMware device drivers, including an
SVGA display driver, the vmxnet networking driver for some guest OSs, the BusLogic SCSI or LSI Logic
driver for some guest OSs, the memory control driver for efficient memory allocation between virtual
machines, the sync driver to quiesce I/O for backup using either VMware Data Recovery or the
VMware vStorage API for Data Protection, a kernel module for folder sharing, and the VMware mouse
driver.

The power of ESXi's bare-metal architecture is apparent in its memory optimization features that
increase memory use efficiency. Memory management on the ESXi host makes it possible to safely
overcommit memory. The sum of the memory allocated to each virtual machine can exceed the total
physical memory installed on the host. The ESXi host uses some clever methods to permit safe
memory overcommitment. For example, a 2:1 overcommitment ratio often has only a minimal
impact on performance.
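The 2:1 ratio mentioned above is just arithmetic over configured sizes. The sketch below works one hypothetical example; the host and VM sizes are invented for illustration.

```python
# Back-of-the-envelope overcommitment check (hypothetical sizes).
host_physical_gb = 64
vm_allocations_gb = [16, 16, 16, 16, 16, 16, 16, 16]  # eight 16 GB VMs

configured = sum(vm_allocations_gb)      # memory promised to the guests
ratio = configured / host_physical_gb    # 128 / 64 -> a 2:1 overcommit
# Page sharing, ballooning, compression, and swap are what let the host
# honor 128 GB of promises with only 64 GB of physical memory.
```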

Transparent Page Sharing (TPS) is a memory optimization method unique to VMware. The VMkernel
examines the memory pages stored by the virtual machines to identify identical pages and stores
only one copy. For example, if several virtual machines are running Windows Server 2003, they will
have many identical memory pages. TPS consolidates those identical pages into a single memory
location. If a page changes after consolidation, the VMkernel uses a copy-on-write mechanism to
create a new page and no longer shares it with the previously identical pages.
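The deduplication step of TPS can be sketched in a few lines. This is a conceptual model only, assuming a content-hash lookup; the function name and data structures are illustrative, not VMkernel internals.

```python
import hashlib

PAGE_SIZE = 4096  # x86 small page size in bytes

def share_pages(pages):
    """Keep one physical copy per unique page content (TPS-style)."""
    store = {}   # content hash -> the single shared copy
    refs = []    # per-page reference into the shared store
    for data in pages:
        key = hashlib.sha256(data).hexdigest()
        # Real TPS follows a hash match with a full byte-by-byte compare
        # before sharing, to rule out hash collisions.
        store.setdefault(key, data)
        refs.append(key)
    return store, refs

# Three VMs each hold an identical zero-filled page:
store, refs = share_pages([bytes(PAGE_SIZE)] * 3)
saved = (len(refs) - len(store)) * PAGE_SIZE  # two pages reclaimed
```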

Whether idle or busy, virtual machines would normally hold their full allocation of memory. To make
better use of those underused resources, the ESXi host uses a memory balloon driver, included in
VMware Tools and installed in each virtual machine. When there is ample memory, the balloon
remains uninflated. But when memory becomes scarce, the VMkernel chooses a virtual machine and
inflates its balloon; that is, it tells the balloon driver in that virtual machine to demand memory from
the guest OS. The guest OS complies by yielding memory, and the VMkernel assigns the relinquished
pages to other virtual machines. These relinquished pages are often the most lightly used pages.

When the balloon deflates, the balloon driver relinquishes the memory it was holding, and the
VMkernel grants that memory back to the guest OS.
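The inflate-and-deflate cycle can be modeled as a small state machine. The class below is an illustrative sketch of the behavior described in the text; the class name, method names, and sizes are invented for the example.

```python
class BalloonDriver:
    """Toy model of a balloon driver's inflate/deflate cycle."""

    def __init__(self, guest_memory_mb):
        self.guest_memory_mb = guest_memory_mb
        self.balloon_mb = 0  # memory currently claimed from the guest OS

    def inflate(self, mb):
        """VMkernel asks the driver to demand pages from the guest OS."""
        granted = min(mb, self.guest_memory_mb - self.balloon_mb)
        self.balloon_mb += granted
        return granted  # pages the VMkernel can hand to other VMs

    def deflate(self, mb):
        """Relinquish pages back to the guest OS when pressure eases."""
        released = min(mb, self.balloon_mb)
        self.balloon_mb -= released
        return released

vm = BalloonDriver(guest_memory_mb=4096)
reclaimed = vm.inflate(1024)   # host is short on memory
vm.deflate(1024)               # pressure eases; the balloon empties
```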


Memory compression helps improve the performance of a virtual machine when memory
overcommitment is used. It is enabled by default, so when a host's memory becomes overcommitted,
ESXi compresses virtual pages and stores them in memory before it attempts to swap the pages
to disk.

Accessing compressed memory is faster than accessing memory swapped to disk. When a virtual
page needs to be swapped, ESXi first attempts to compress the page. Pages that can be compressed
to 2 KB or smaller are stored in the virtual machine's compression cache, increasing the memory
capacity of the host.
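The compress-then-swap decision can be illustrated in a few lines of Python using a general-purpose compressor. This is a conceptual sketch only: ESXi does not use zlib, and the function name and threshold handling here are illustrative; only the "2 KB or smaller stays in the cache" rule comes from the text.

```python
import os
import zlib

PAGE_SIZE = 4096
COMPRESS_LIMIT = 2048  # a page must shrink to 2 KB or less to be cached

def place_page(page, cache, swap):
    """Try compressing a page; fall back to disk swap if it won't shrink."""
    packed = zlib.compress(page)
    if len(packed) <= COMPRESS_LIMIT:
        cache.append(packed)   # kept in the in-memory compression cache
    else:
        swap.append(page)      # last resort: the page goes to disk swap

cache, swap = [], []
place_page(bytes(PAGE_SIZE), cache, swap)       # zero page: compresses well
place_page(os.urandom(PAGE_SIZE), cache, swap)  # random page: does not
```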

Each virtual machine includes a VMkernel swap file. If multiple virtual machines need their full
allocation of memory, then the ESXi host will swap their memory regions to disk on a fair-share basis
governed by the memory resource settings you have assigned to each virtual machine. The VMkernel
uses this feature only as a last resort because it causes performance to be noticeably slower.

In vSphere 5.0, the VMkernel allows the ESXi swap to extend to local or network Solid State Drive
(SSD) devices, which enables memory overcommitment while minimizing the performance impact.
Datastores created on SSD can be used to allocate space for the host cache. The host reserves a
certain amount of this space for swapping to SSD.




This concludes the VMware vSphere 5.0 Architecture Essentials course. Let's review some of the key
areas covered:

- A typical vSphere 5.0 datacenter consists of basic physical building blocks such as x86
computing servers, storage networks and arrays, IP networks, a management server, and
desktop clients.
- The central point for configuring, provisioning, and managing virtualized IT environments is
vCenter Server 5.0. Several other components can be installed on the same vCenter Server
machine, or on separate systems, to manage patches and to convert and import physical
machines.
- The ESXi 5.0 host is the platform on which virtual machines run. The different components
of an ESXi host system work together to run virtual machines and give them access to
resources.
- The main architectural difference between ESXi 5.0 and ESX 4.1 is that with ESXi 5.0 the
hypervisor is integrated directly into the server system and there is no service console.
- Finally, vSphere 5.0 revolutionizes scalability at the datacenter level, making resources
available round the clock to workloads and managing hundreds of servers and thousands of
virtual machines.
