VMware vSphere 6.0 Knowledge Transfer Kit
Architecture Overview
VMware ESXi
Content Library
Virtual Machines
VMware Certificate Authority (CA)
VMware vCenter Server
Platform Services Controller
Storage
iSCSI Storage Architecture
Architecture Overview
High-Level VMware vSphere Architectural Overview
[Diagram: high-level vSphere architecture. ESXi hosts virtualize the physical resources and are managed through VMware vCenter Server. Availability and scalability services: vSphere vMotion, DRS and DPM, vSphere High Availability, vSphere FT, Data Recovery, Content Library. Storage services: vSphere VMFS, Virtual SAN, Virtual Volumes, Thin Provisioning, Storage I/O Control. Network services: Standard vSwitch, Distributed vSwitch, NSX, Network I/O Control.]
How Does This Fit With the Software-Defined Data Center (SDDC)?
[Diagram: the SDDC stack. Application services: self-service app deployment, blueprinting, standardization, and cloud publishing with VMware vRealize Application Services. Infrastructure services and the SDDC foundation: core virtualization with vSphere; monitoring with vRealize Operations Manager, Infrastructure Navigator, and Hyperic; orchestration with the vRealize Orchestrator workflow library; performance and capacity management; log analytics with vRealize Log Insight; software-defined networking with NSX (virtualization of physical assets); software-defined storage with Virtual SAN; BCDR with SRM, VR, and vDPA; hybrid cloud with vCloud Connector; financial and compliance management with vRealize Business and Configuration Manager.]
VMware ESXi
ESXi 6.0
ESXi is the bare-metal VMware vSphere hypervisor
ESXi installs directly onto the physical server, enabling direct access to all server resources
ESXi controls all CPU, memory, network, and storage resources
Allows virtual machines to run at near-native performance, unlike hosted hypervisors
ESXi 6.0 allows
Utilization of up to 480 physical CPUs per host
Utilization of up to 12 TB of RAM per host
Deployment of up to 2,048 virtual machines per host
ESXi Architecture
[Diagram: the ESXi host is built on the VMkernel, with CLI commands for configuration and support, agentless systems management, and agentless hardware monitoring.]
Virtual Machine Architecture
Virtual machines consist of files stored on a vSphere VMFS or NFS datastore
Configuration file (.vmx)
Swap files (.vswp)
BIOS files (.nvram)
Log files (.log)
Template file (.vmtx)
Raw device map file (<VM_name>-rdm.vmdk)
Disk descriptor file (.vmdk)
Disk data file (<VM_name>-flat.vmdk)
Suspend state file (.vmss)
Snapshot data file (.vmsd)
Snapshot state file (.vmsn)
Snapshot disk file (<VM_name>-delta.vmdk)
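The file-suffix conventions above can be sketched as a small classifier. This is an illustrative helper, not a VMware API; the function name and the role strings are our own:

```python
# Conceptual sketch (not a VMware API): classify the files that make up a
# vSphere virtual machine by their suffix, following the list above.
VM_FILE_ROLES = {
    ".vmx": "configuration file",
    ".vswp": "swap file",
    ".nvram": "BIOS file",
    ".log": "log file",
    ".vmtx": "template file",
    ".vmss": "suspend state file",
    ".vmsd": "snapshot data file",
    ".vmsn": "snapshot state file",
}

def classify_vm_file(name: str) -> str:
    """Return the role of a VM file based on its name."""
    # .vmdk variants are distinguished by an infix, not just the extension.
    if name.endswith("-rdm.vmdk"):
        return "raw device map file"
    if name.endswith("-flat.vmdk"):
        return "disk data file"
    if name.endswith("-delta.vmdk"):
        return "snapshot disk file"
    if name.endswith(".vmdk"):
        return "disk descriptor file"
    for suffix, role in VM_FILE_ROLES.items():
        if name.endswith(suffix):
            return role
    return "unknown"
```

Note that the plain `.vmdk` descriptor must be checked after the `-flat`, `-delta`, and `-rdm` variants, since all four share the same extension.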
VMware vCenter Server
VMware vCenter 6.0
vCenter Server is the management platform for vSphere environments
Provides much of the feature set that comes with vSphere, such as vSphere High Availability
Also provides SDK access into the environment for solutions such as VMware vRealize Automation
vCenter Server is available in two forms
vCenter Server for Windows
vCenter Server Appliance
In vSphere 6.0, both versions offer feature parity
A single vCenter Server 6.0 can manage
1,000 hosts
10,000 virtual machines
vCenter 6.0 Architecture
In vCenter 6.0, the architecture has changed dramatically
All services are provided from either a Platform Services Controller or a vCenter Server instance
Provided by the Platform Services Controller
VMware vCenter Single Sign-On
License service
Lookup service
VMware Directory Services
VMware Certificate Authority
Provided by the vCenter Server service
vCenter Server
VMware vSphere Web Client
Inventory Service
VMware vSphere Auto Deploy
VMware vSphere ESXi Dump Collector
vSphere Syslog Collector on Windows, and vSphere Syslog Service for the VMware vCenter Server Appliance
vCenter 6.0 Architecture (cont.)
Two basic architectures are supported as a result of this change
The Platform Services Controller is either embedded in or external to vCenter Server
Choosing a mode depends on the size and feature requirements of the environment
[Diagram: an External deployment with a standalone Platform Services Controller, next to an Embedded deployment with the Platform Services Controller inside vCenter Server.]
vCenter 6.0 Architecture (cont.)
Enhanced Linked Mode is a major feature that impacts the architecture
When using Enhanced Linked Mode, an external Platform Services Controller is recommended
For details about the architectures that VMware recommends and the implications of using them, see VMware KB article 2108548, "List of recommended topologies for vSphere 6.0" (http://kb.vmware.com/kb/2108548)
Scalability
Maximum number of Platform Services Controllers per domain: 8
Maximum Platform Services Controllers per vSphere site (behind a single load balancer): 4
vCenter Architecture: vCenter Server Components
[Diagram: vCenter Server with its core and distributed services and database server, backed by the vCenter Server database. User access is through the vSphere Web Client and vSphere Client. The Platform Services Controller (including vCenter Single Sign-On) integrates with Microsoft Active Directory Domain Services. Additional services include VMware vSphere Update Manager and vRealize Orchestrator. The VMware vSphere API serves third-party applications and plug-ins, and vCenter manages the ESXi hosts.]
vCenter Architecture: ESXi and vCenter Server Communication
How vCenter Server components and ESXi hosts communicate
[Diagram: clients reach vCenter Server and the Platform Services Controller over TCP 443 (and 9443 for the vSphere Web Client). The vpxd service on vCenter Server communicates with the vpxa agent on each ESXi host over TCP/UDP 902, and vpxa communicates with the local hostd service.]
VMware vSphere vMotion
vSphere vMotion
vSphere vMotion allows for live migration of virtual machines between compatible ESXi hosts
Compatibility is determined by CPU, network, and storage access
With vSphere 6.0, migrations can occur
Between clusters
Between datastores
Between networks (NEW)
Between vCenter Servers (NEW)
Over long distances, as long as RTT is <100 ms (NEW)
vSphere vMotion Architecture
vSphere vMotion transfers the entire execution state of the virtual machine from the source host to the destination
This primarily happens over a high-speed network
The execution state primarily consists of the following components
The virtual device state, including the state of the CPU, network and disk adapters, SVGA, and so on
External connections with devices, including networking and SCSI devices
The virtual machine's physical memory
vSphere vMotion Architecture: Pre-Copy
When a vSphere vMotion is initiated, a second VM container is started on the destination and a pre-copy of the memory begins
A memory bitmap tracks pages dirtied by the running VM during each pass, so they can be re-copied
[Diagram: VM A's memory is pre-copied from the source to the destination host over the vMotion network while the VM keeps serving the end user over the production network.]
vSphere vMotion Architecture: Memory Checkpoint
When enough data has been copied, the VM is quiesced
Checkpoint data is sent with the final changes
An ARP is sent, and the VM becomes active on the destination host
The source VM is stopped
[Diagram: ESXi Host 1 sends the final checkpoint data for VM A to ESXi Host 2 over the vMotion network; the end user continues to reach the VM over the production network.]
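The iterative pre-copy and final checkpoint described on the last two slides can be modeled with a toy calculation. The function and its parameters are illustrative assumptions, not VMware's implementation:

```python
# Toy model (not VMware code) of vMotion's iterative pre-copy: the first
# pass copies all memory, and each later pass re-copies only the pages the
# running VM dirtied while the previous pass was in flight. When few enough
# pages remain, the VM is quiesced and they travel with the checkpoint.
def precopy_passes(num_pages: int, dirty_fraction: float, threshold: int):
    """Return (passes, pages_left_for_checkpoint).

    dirty_fraction: fraction of the pages copied in one pass that the VM
    dirties again during that pass. Convergence requires dirty_fraction < 1,
    i.e. the VM must dirty memory more slowly than the network can copy it.
    """
    to_copy = num_pages      # first pass copies every page
    passes = 0
    while to_copy > threshold:
        passes += 1
        # Pages dirtied while this pass ran must be copied again next pass.
        to_copy = int(to_copy * dirty_fraction)
    return passes, to_copy
```

With a million pages, a 10% dirty rate, and a 100-page checkpoint threshold, the copy converges geometrically in four passes; this is why a high-speed vMotion network matters, since a slow link raises the effective dirty fraction.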
VMware vSphere Storage vMotion Architecture
vSphere Storage vMotion works in much the same way as vSphere vMotion, only the disks are migrated instead of memory
[Diagram: the VM's virtual disk is copied between datastores while read/write I/O to the virtual disk continues and the VM keeps running.]
vSphere Storage vMotion Architecture: Simultaneous Change
vSphere vMotion also allows both storage and host to be changed at the same time
New in vSphere 6.0, the VM can also be migrated between networks and between vCenter Servers
[Diagram: a single migration changing ESXi host, datastore, network, and vCenter Server at once.]
Availability
VMware vSphere High Availability
VMware vSphere Fault Tolerance
VMware vSphere Distributed Resource Scheduler
vSphere vMotion Architecture
Long-Distance vSphere vMotion
Cross-Continental Performance
Availability
VMware vSphere High Availability
vSphere High Availability
vSphere High Availability is an availability solution that monitors hosts and restarts virtual machines in the case of a host failure
[Diagram: availability protection at the infrastructure, connectivity, and application layers.]
vSphere High Availability Architecture: Overview
A cluster of up to 64 ESXi hosts is created
One of the hosts is elected as master when HA is enabled
The master monitors the other hosts through network heartbeats, with storage heartbeats as a secondary channel
[Diagram: the master host exchanging network and storage heartbeats with the slave hosts.]
vSphere High Availability Architecture: Host Failures
When a slave host stops responding to both network and storage heartbeats, the master declares the slave host dead and its virtual machines are restarted on the surviving hosts
[Diagram sequence: a slave host fails, the master declares it dead, and its VMs are restarted elsewhere in the cluster.]
vSphere High Availability Architecture: Network Partition
[Diagram: the cluster is split into partitions A and B; each partition elects its own master.]
vSphere High Availability Architecture: Host Isolation
[Diagram: a host that loses management-network connectivity but still updates its storage heartbeats is treated as isolated rather than failed.]
vSphere High Availability Architecture: VM Monitoring
[Diagram: the master restarts an individual virtual machine whose in-guest heartbeats have stopped.]
vSphere High Availability Architecture: VM Component Protection
[Diagram: VM Component Protection reacts to storage connectivity loss on a host by restarting the affected VMs on healthy hosts.]
Availability
VMware vSphere Fault Tolerance
vSphere FT
vSphere FT is an availability solution that provides continuous availability for virtual machines
Zero downtime
Zero data loss
[Diagram: a primary VM on one host continuously mirrored by a secondary VM on another host.]
vSphere FT Architecture: Memory Checkpoint
vSphere FT in vSphere 6.0 uses fast checkpoint technology
This is similar to how vSphere vMotion works, but it runs continuously rather than once
The fast checkpoint is a snapshot of all data, not just memory (memory, disks, devices, and so on)
The vSphere FT logging network has a minimum requirement of a 10 Gbps NIC
[Diagram: checkpoint data for VM A streams from ESXi Host 1 to ESXi Host 2 while the end user reaches the VM over the production network.]
Availability
VMware vSphere Distributed Resource Scheduler
DRS
DRS is a technology that monitors load and resource usage and uses vSphere vMotion to balance virtual machines across hosts in a cluster
DRS also includes VMware Distributed Power Management (DPM), which allows hosts to be evacuated and powered off during periods of low utilization
DRS can be used in three ways
Fully automated, where DRS acts on recommendations automatically
Partially automated, where DRS only acts for initial VM power-on placement and an administrator has to approve migration recommendations
Manual, where administrator approval is required for both placement and migrations
DRS Architecture
DRS generates migration recommendations based on how aggressively it has been configured
For example
The three hosts on the left side of the figure are unbalanced
Host 1 has six virtual machines; its resources might be overused while ample resources are available on Host 2 and Host 3
DRS migrates (or recommends the migration of) virtual machines from Host 1 to Host 2 and Host 3
The right side of the figure shows the properly load-balanced configuration that results
[Diagram: before-and-after view of ESXi Hosts 1-3.]
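As a rough sketch of the balancing idea, a greedy rebalancer might look like the following. This is not the actual DRS algorithm, which weighs resource entitlements and migration cost rather than raw load; the function and threshold parameter are illustrative:

```python
# Illustrative greedy rebalancer (not real DRS): while the load spread
# between the busiest and idlest hosts exceeds a configurable threshold
# (the "aggressiveness"), move the smallest VM from hot to cold.
def rebalance(hosts: dict, threshold: float):
    """hosts maps host name -> list of per-VM loads; mutates hosts and
    returns the list of (vm_load, source, destination) migrations."""
    moves = []
    while True:
        load = {h: sum(vms) for h, vms in hosts.items()}
        hot = max(load, key=load.get)
        cold = min(load, key=load.get)
        spread = load[hot] - load[cold]
        if spread <= threshold or not hosts[hot]:
            return moves                   # balanced enough: stop
        vm = min(hosts[hot])               # smallest VM is cheapest to move
        if vm >= spread:
            return moves                   # moving it would overshoot
        hosts[hot].remove(vm)
        hosts[cold].append(vm)
        moves.append((vm, hot, cold))
```

Run against the slide's example (six VMs on Host 1, one each on Hosts 2 and 3), the sketch drains Host 1 until the three hosts are within the threshold of each other.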
Distributed Power Management Architecture
DPM generates migration recommendations similar to DRS, but in terms of achieving power savings
It can be configured for how aggressively you want to save power
For example
The three hosts on the left side of the figure have virtual machines running, but they are mostly idle
DPM determines that, given the load of the environment, shutting down Host 3 will not impact the level of performance for the VMs
DPM migrates (or recommends the migration of) virtual machines from Host 3 to Host 2 and Host 1, and puts Host 3 into standby mode
The right side of the figure shows the resulting power-managed configuration
[Diagram: before-and-after view of ESXi Hosts 1-3, with Host 3 in standby.]
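DPM's consolidation decision can be approximated with a simple capacity check. The function and its headroom parameter are our own illustrative knobs, not DPM's actual cost-benefit model:

```python
import math

# Illustrative DPM-style sizing check: how many hosts does the cluster's
# aggregate load actually need, keeping some headroom so consolidation
# does not degrade performance? Hosts beyond that count are candidates
# for evacuation and standby.
def hosts_needed(total_load: float, host_capacity: float,
                 headroom: float = 0.2) -> int:
    """Return the minimum host count, reserving `headroom` (0..1) per host."""
    usable = host_capacity * (1 - headroom)
    return math.ceil(total_load / usable)
```

For the slide's scenario, three mostly idle hosts whose combined load fits on two hosts with headroom to spare yield a recommendation to stand by the third.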
Content Library
The Content Library is new to vSphere 6.0 and is a distributed template, media, and script library for vCenter Server
Similar to the VMware vCloud 5.5 Content Catalog and VMware vCloud Connector Content Sync
Tracks versions for generational content, but cannot be used to revert to older versions
[Diagram: a publisher Content Library on one vCenter syncs numbered items to a subscriber Content Library on another vCenter.]
Content Library Architecture: Publication and Subscription
Publication and subscription allow libraries to be shared between vCenter Servers
Provides a single source for content that can be configured to download and sync according to schedules or timeframes
[Diagram: a published library on one vCenter, optionally password-protected, with other vCenter Servers subscribing to it.]
Content Library Architecture: Content Synchronization
Content synchronization occurs when content changes
Simple versioning is used to denote the modification, and the changed item is transferred
[Diagram: the subscriber pulls updated items from the publisher with HTTP GET requests.]
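The version-based synchronization can be sketched as a diff of item versions. This is illustrative; in the real Content Library the subscriber compares published metadata before the HTTP GET transfers each changed item:

```python
# Illustrative sketch of version-based sync: given the publisher's and
# subscriber's item-version indexes, return the items the subscriber
# must fetch (new items, or items whose version has advanced).
def items_to_sync(publisher: dict, subscriber: dict) -> list:
    """Both dicts map item name -> integer version number."""
    return sorted(
        name for name, version in publisher.items()
        if subscriber.get(name, 0) < version   # missing or out of date
    )
```

Because only versions move forward, an item whose local version already matches the publisher's is skipped, which is also why the library cannot be used to roll back to older versions.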
VMware Certificate Authority
Certificates in vSphere 6.0
vCenter 5.x solutions had their TCP/IP connections secured with SSL
This required a unique certificate for each solution
In vSphere 6.0, the various listening ports have been replaced with a single endpoint: a reverse web proxy on port 443
The reverse HTTP proxy routes traffic to the appropriate service based on the type of request
This means only one endpoint certificate is needed
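The single-endpoint idea can be illustrated with a toy dispatch table. The paths and service names below are examples for illustration only, not a complete or authoritative list of vCenter's routes:

```python
# Toy route table (illustrative, not vCenter's actual configuration)
# showing one endpoint on port 443 dispatching by request path, which is
# why a single endpoint certificate suffices.
ROUTES = {
    "/websso": "Single Sign-On service",
    "/vsphere-client": "vSphere Web Client",
    "/sdk": "vCenter Server (vpxd)",
}

def route(path: str) -> str:
    """Return the backend service for a request path."""
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    return "default handler"
```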
VMware Certificate Authority
In vSphere 6.0, vCenter ships with an internal certificate authority (CA), called the VMware Certificate Authority
An instance of the VMware CA is included with each Platform Services Controller node
Issues certificates for VMware components under its own authority in the vSphere ecosystem
Runs as part of the Infrastructure Identity Core Service Group
Directory service
Certificate service
Authentication framework
The VMware CA issues certificates only to clients that present credentials from VMDirectory in its own identity domain
It also posts its root certificate to its own server node in VMware Directory Services
How is the VMware Certificate Authority Used?
Machine SSL certificate
Used by the reverse proxy on every vSphere node
Used by the VMware Directory Service on Platform Services Controller and Embedded nodes
Used by VPXD on Management and Embedded nodes
VMware Endpoint Certificate Store
Certificate storage and trust are now handled by the VMware Endpoint Certificate Store
Serves as a local wallet for certificates, private keys, and secret keys, which can be stored in key stores
Runs as part of the Authentication Framework Service
Runs on every Embedded, Platform Services Controller, and Management node
Some key stores are special
Trusted certificates key store
Machine SSL certificate key store
How is the VMware Endpoint Certificate Store Used?
Machine SSL store
Holds the machine SSL certificate
Solution key stores
The following key stores hold private keys and solution user certificates
Machine Account Key Store (Platform Services Controller, Management, Embedded nodes)
VPXD Key Store (Management, Embedded nodes)
VPXD Extension Key Store (Management, Embedded nodes)
VMware vSphere Client Key Store (Management, Embedded nodes)
Storage
iSCSI Storage Architecture
NFS Storage Architecture
Fibre Channel Architecture
Other Storage Architectural Concepts
Storage
Local and/or shared storage is a core requirement for full utilization of ESXi features
Many kinds of storage can be used with vSphere
Local disks (direct-attached storage)
Fibre Channel (FC) SANs
FCoE
iSCSI SANs
NAS (NFS)
Virtual SAN
VMware Virtual Volumes
Datastore types: VMware vSphere VMFS and NFS
[Diagram: ESXi hosts mounting VMFS and NFS datastores over the transports listed above.]
Storage
iSCSI Storage Architecture
Storage Architecture: iSCSI
iSCSI storage uses regular IP traffic over a standard network to transport iSCSI commands
The ESXi host connects through one of several types of iSCSI initiator
Storage Architecture: iSCSI Components
All iSCSI systems share a common set of components that provide storage access: initiators, targets, LUNs, and the IP network that connects them
Storage Architecture: iSCSI Addressing
In addition to standard IP addresses, iSCSI targets are also identified by names, such as iSCSI Qualified Names (IQNs)
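An IQN decomposes into a fixed `iqn` prefix, a year-month date, a reversed naming-authority domain, and an optional unique name after a colon. A minimal parser, for illustration (the example IQN in the test is made up):

```python
# Illustrative parser for the iSCSI Qualified Name format
# iqn.<yyyy-mm>.<reversed-domain>[:<unique-name>].
def parse_iqn(iqn: str) -> dict:
    prefix, _, rest = iqn.partition(".")
    if prefix != "iqn" or not rest:
        raise ValueError("not an iqn-format name")
    date, _, rest = rest.partition(".")          # e.g. "1998-01"
    authority, _, unique = rest.partition(":")   # e.g. "com.vmware"
    return {
        "date": date,
        "naming_authority": authority,
        "unique_name": unique,
    }
```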
Storage
NFS Storage Architecture
Storage Architecture: NFS Components
Much like iSCSI, NFS accesses storage over the network
A NAS device, or a server with storage, exports a directory over the network to share with the ESXi host
Storage Architecture: Addressing and Access Control with NFS
ESXi accesses the NFS server by address or name through a VMkernel port configured with an IP address
NFS version 4.1 and NFS version 3 are available with vSphere 6.0
Different features are supported with different versions of the protocol
NFS 4.1 supports multipathing, unlike NFS 3
NFS 3 supports all features; NFS 4.1 does not support Storage DRS, VMware vSphere Storage I/O Control, VMware vCenter Site Recovery Manager, or Virtual Volumes
Dedicated switches are not required for NFS configurations
[Diagram: an ESXi VMkernel port configured with an IP address mounts an export from an NFS server; the example uses addresses 192.168.81.33 and 192.168.81.72.]
Storage
Fibre Channel Architecture
Storage Architecture: Fibre Channel
Unlike network storage such as NFS or iSCSI, Fibre Channel does not generally use an IP network for storage access
The exception is Fibre Channel over Ethernet (FCoE)
Storage Architecture: Fibre Channel Addressing and Access Control
Zoning and LUN masking are used for access control to storage LUNs
Storage Architecture: FCoE Adapters
FCoE adapters allow access to Fibre Channel storage over Ethernet connections
Enables expansion to Fibre Channel SANs in many cases where no Fibre Channel infrastructure exists
Both hardware and software adapters are supported
Hardware adapters are often called converged network adapters (CNAs)
Often both a NIC and an HBA are presented to the host from the single card
[Diagram: hardware FCoE uses a converged network adapter in the ESXi host, while software FCoE (ESXi 5.x and later) combines a network driver and a software FC driver over a 10 Gigabit Ethernet NIC with FCoE support. An FCoE switch forwards Ethernet IP frames to LAN devices and FC frames to FC storage arrays.]
Storage
Other Storage Architectural Concepts
Multipathing
Multipathing enables continued access to SAN LUNs if hardware fails
It can also provide load balancing, based on the path policy selected
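One selectable path policy is round-robin, which rotates I/Os across all live paths. A sketch of the idea, for illustration only (the path names follow the familiar vmhbaN:C:T:L convention but are made up):

```python
import itertools

# Illustrative round-robin path selection: I/Os rotate across every path,
# and a failed path is skipped until it recovers. Not VMware's PSP code.
class RoundRobinPaths:
    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()                    # paths currently down
        self._cycle = itertools.cycle(self.paths)

    def next_path(self):
        """Return the next healthy path, or raise if all paths are down."""
        for _ in range(len(self.paths)):       # at most one full rotation
            path = next(self._cycle)
            if path not in self.failed:
                return path
        raise RuntimeError("all paths down")
```

Marking a path failed gives the continued-access behavior described above; clearing it from `failed` puts it back into the rotation for load balancing.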
vSphere Storage I/O Control
vSphere Storage I/O Control allows traffic to be prioritized during periods of contention
Brings the compute-style shares and limits model to storage infrastructure
Monitors device latency and acts when it exceeds a threshold
Allows important virtual machines to have priority access to resources
[Diagram: without Storage I/O Control, workloads such as data mining contend equally with a print server, an online store, and Microsoft Exchange; with Storage I/O Control, shares determine each workload's slice of the datastore's I/O.]
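The proportional-shares model is simple arithmetic. The VM names and share values below mirror the example workloads in the diagram and are illustrative, not defaults:

```python
# Illustrative shares math (the same model vSphere uses for compute
# resources): under contention, each VM's slice of the datastore's
# throughput is proportional to its shares.
def allocate_iops(total_iops: int, shares: dict) -> dict:
    """shares maps VM name -> share count; returns VM -> allocated IOPS."""
    pool = sum(shares.values())
    return {vm: total_iops * s // pool for vm, s in shares.items()}
```

With 10,000 IOPS available, a VM holding half of the cluster's shares gets half of the throughput, which is how the important workloads in the diagram keep priority access while the datastore is saturated.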
Datastore Clusters
A collection of datastores with shared resources, similar to ESXi host clusters
Allows the datastores to be managed through a single shared management interface
Storage DRS can be used to manage the resources and ensure they are balanced
Balancing can be driven by the following constructs
Space utilization
I/O latency load balancing
Affinity rules for virtual disks
Software-Defined Storage
Software-defined storage is a software construct that is used by
Virtual Volumes
Virtual SAN
Network
Management network and VMkernel ports
Standard vSwitch
Distributed vSwitch
Network Architecture: NIC Teaming and Load Balancing
NIC teaming enables multiple NICs to be connected to a single virtual switch for continued access to networks if hardware fails
This can also enable load balancing, where appropriate
VMware vSphere Network I/O Control
vSphere Network I/O Control allows traffic to be prioritized during periods of contention
Brings the compute-style shares and limits model to network infrastructure
Monitors contention and acts when a threshold is exceeded
Allows important virtual machines or services to have priority access to resources
[Diagram: a virtual switch sharing a 10 GigE uplink among competing traffic types.]
Software-Defined Networking
Software-defined networking is a software construct that allows your physical network to be treated as a pool of transport capacity, with network and security services attached to VMs through a policy-driven approach
Decouples the network configuration from the physical infrastructure
Allows for security and micro-segmentation of traffic
A key tenet of the software-defined data center (SDDC)
Questions
VMware vSphere 6.0
Knowledge Transfer Kit
VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304