
VMware vSphere 6.0
Knowledge Transfer Kit
Architecture Overview

© 2015 VMware, Inc. All rights reserved.


Agenda
- Architecture Overview
- VMware ESXi
- Virtual machines
- VMware vCenter Server
  - New Platform Services Controller
  - Recommendations
- VMware vSphere vMotion
- Availability
  - VMware vSphere High Availability
  - VMware vSphere Fault Tolerance
  - VMware vSphere Distributed Resource Scheduler
- Content Library
- VMware Certificate Authority (CA)
- Storage
  - iSCSI Storage Architecture
  - NFS Storage Architecture
  - Fibre Channel Architecture
  - Other Storage Architectural Concepts
- Networking
Architecture Overview
High-Level VMware vSphere Architectural Overview

[Diagram: the VMware vSphere stack running on physical resources and managed by VMware vCenter Server. Application services cover availability (VMware vSphere vMotion, vSphere Storage vMotion, vSphere High Availability, vSphere FT, VMware Data Recovery) and scalability (DRS and DPM, Hot Add, overcommitment, Content Library). Infrastructure services cover compute (a cluster of ESXi hosts), storage (vSphere VMFS, Virtual Volumes, VMware Virtual SAN, thin provisioning, VMware vSphere Storage I/O Control), and network (standard vSwitch, distributed vSwitch, VMware NSX, VMware vSphere Network I/O Control).]
How Does This Fit With the Software-Defined Data Center (SDDC)?

[Diagram: the SDDC layers and the VMware products that deliver them.
- Application Service: self-service app deployment, application blueprinting and standardization, development, and cloud app publishing with VMware vRealize Application Services.
- Infrastructure Service: self-service catalogs and user portal with VMware vRealize Automation; standard templates, low admin overhead, cloud ready.
- SDDC Foundation: core virtualization with vSphere; monitoring (performance and capacity) with vRealize Operations Manager, Infrastructure Navigator, and Hyperic; orchestration with the vRealize Orchestrator workflow library; software-defined networking (virtualization of physical assets) with VMware NSX; log analysis with vRealize Log Insight; software-defined storage with VMware Virtual SAN; compliance with vRealize Configuration Manager; BCDR with SRM, VR, and vDPA; hybrid cloud with vCloud Connector; financial management with vRealize Business.]
VMware ESXi
ESXi 6.0
- ESXi is the bare-metal VMware vSphere hypervisor.
- ESXi installs directly onto the physical server, enabling direct access to all server resources.
- ESXi is in control of all CPU, memory, network, and storage resources.
- Virtual machines run at near-native performance, unlike on hosted hypervisors.
- ESXi 6.0 allows:
  - Up to 480 physical CPUs per host
  - Up to 12 TB of RAM per host
  - Up to 2048 virtual machines per host
ESXi Architecture

[Diagram: the ESXi host architecture. The VMkernel sits at the base and provides the network and storage stacks. On top of it run the Local Support Console (ESXi Shell) with CLI commands for configuration and support, agentless systems management through the VMware Management Framework, and agentless hardware monitoring through the Common Information Model (CIM).]


Virtual Machines
Virtual Machines
- A virtual machine is the software computer and consumer of the resources that ESXi is in charge of.
- VMs are containers that can run almost any operating system and application.
- A VM is a segregated environment that does not cross boundaries unless permitted via the network or through SDK access.
- Each VM has access to its own virtual hardware: CPU, RAM, disk, network and video cards, keyboard, mouse, SCSI controller, and CD/DVD.
- VMs generally do not realize that they are virtualized.
Virtual Machine Architecture
Virtual machines consist of files stored on a vSphere VMFS or NFS datastore (a small sketch that inventories these files follows):
- Configuration file (.vmx)
- Swap files (.vswp)
- BIOS file (.nvram)
- Log files (.log)
- Template file (.vmtx)
- Raw device map file (<VM_name>-rdm.vmdk)
- Disk descriptor file (.vmdk)
- Disk data file (<VM_name>-flat.vmdk)
- Suspend state file (.vmss)
- Snapshot data file (.vmsd)
- Snapshot state file (.vmsn)
- Snapshot disk file (<VM_name>-delta.vmdk)
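As a rough, unofficial illustration of this layout, the following Python sketch groups the files in a VM's directory by the extensions listed above; the datastore path in the example is hypothetical.

    # Minimal sketch: group the files in a VM directory by the vSphere file
    # types listed above. Not a VMware tool; just directory inspection.
    from collections import defaultdict
    from pathlib import Path

    VM_FILE_TYPES = {
        ".vmx": "configuration", ".vswp": "swap", ".nvram": "BIOS",
        ".log": "log", ".vmtx": "template", ".vmdk": "disk descriptor/data",
        ".vmss": "suspend state", ".vmsd": "snapshot data",
        ".vmsn": "snapshot state",
    }

    def inventory(vm_dir: str) -> dict:
        files = defaultdict(list)
        for f in Path(vm_dir).iterdir():
            files[VM_FILE_TYPES.get(f.suffix, "other")].append(f.name)
        return dict(files)

    # Example (hypothetical datastore path):
    # print(inventory("/vmfs/volumes/datastore1/myvm"))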
VMware vCenter Server
VMware vCenter 6.0
- vCenter Server is the management platform for vSphere environments.
- Provides much of the feature set that comes with vSphere, such as vSphere High Availability.
- Also provides SDK access into the environment for solutions such as VMware vRealize Automation.
- vCenter Server is available in two flavors:
  - vCenter Server for Windows
  - vCenter Server Appliance
- In vSphere 6.0, both versions offer feature parity.
- A single vCenter Server 6.0 instance can manage up to 1,000 hosts and 10,000 virtual machines.
vCenter 6.0 Architecture
In vCenter 6.0, the architecture has changed dramatically. All services are provided from either a Platform Services Controller or a vCenter Server instance.
Provided by the Platform Services Controller:
- VMware vCenter Single Sign-On
- License service
- Lookup service
- VMware Directory Services
- VMware Certificate Authority
Provided by the vCenter Server service:
- vCenter Server
- VMware vSphere Web Client
- Inventory Service
- VMware vSphere Auto Deploy
- VMware vSphere ESXi Dump Collector
- vSphere Syslog Collector on Windows and vSphere Syslog Service for VMware vCenter Server Appliance
vCenter 6.0 Architecture (cont.)
- Two basic architectures are supported as a result of this change: the Platform Services Controller is either embedded in, or external to, vCenter Server.
- Choosing a mode depends on the size and feature requirements of the environment.

[Diagram: an external Platform Services Controller serving a separate vCenter Server, compared with an embedded Platform Services Controller running inside vCenter Server.]
vCenter 6.0 Architecture (cont.)
These architectures are recommended:
- Enhanced Linked Mode is a major feature that impacts the architecture.
- When using Enhanced Linked Mode, using an external Platform Services Controller is recommended.
- For details about the architectures that VMware recommends and the implications of using them, see VMware KB article 2108548, "List of recommended topologies for vSphere 6.0" (http://kb.vmware.com/kb/2108548).

[Diagram: Enhanced Linked Mode with no high availability, and Enhanced Linked Mode with high availability.]
vCenter 6.0 Architectures (cont.)
These architectures are not recommended:

[Diagram: Enhanced Linked Mode with embedded PSCs; Enhanced Linked Mode with an embedded PSC and an external vCenter Server; Enhanced Linked Mode with an embedded PSC linked to an external PSC.]
vCenter 6.0 Architecture (cont.)
Enhanced Linked Mode has the following maximums. The architecture must also adhere to these maximums to be supported.

Description                                                                     | Maximum
Platform Services Controllers per domain                                        | 8
Platform Services Controllers per vSphere site (behind a single load balancer)  | 4
Objects in a vSphere domain (users, groups, solution users)                     | 1,000,000
VMware solutions connected to a single Platform Services Controller             | 4
VMware products/solutions per vSphere domain                                    | 10
vCenter Architecture: vCenter Server Components

[Diagram: the vCenter Server components. vCenter Server's core and distributed services connect to the Platform Services Controller (including vCenter Single Sign-On), which integrates with Microsoft Active Directory domain services. User access comes through the vSphere Web Client and vSphere Client, with access control enforced by Single Sign-On. The vSphere API serves third-party applications and plug-ins. vCenter Server manages the ESXi hosts and stores its state in the vCenter Server database on a database server. Additional services include VMware vSphere Update Manager and vRealize Orchestrator.]
vCenter Architecture: ESXi and vCenter Server Communication
How vCenter Server components and ESXi hosts communicate (a quick reachability sketch follows):

[Diagram: clients reach vCenter Server and the Platform Services Controller over TCP 443 (and TCP 9443 for the vSphere Web Client). The vpxd service on vCenter Server and the vpxa agent on each ESXi host communicate in both directions over TCP/UDP 902, and vpxa relays requests to the local hostd service.]
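A minimal, unofficial sketch (host names are hypothetical) that checks TCP reachability of the ports in this diagram:

    # Minimal sketch: verify TCP reachability of the ports used between
    # clients, vCenter Server, and ESXi hosts (443, 9443, 902).
    # UDP 902 heartbeats are not covered by this check.
    import socket

    def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:          # refused, timed out, or unresolved
            return False

    for host, port in [("vcenter.example.com", 443),    # hypothetical names
                       ("vcenter.example.com", 9443),
                       ("esxi01.example.com", 902)]:
        state = "open" if tcp_open(host, port) else "closed"
        print(f"{host}:{port} -> {state}")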
VMware vSphere vMotion
vSphere vMotion
- vSphere vMotion allows for live migration of virtual machines between compatible ESXi hosts.
- Compatibility is determined by CPU, network, and storage access.
- With vSphere 6.0, migrations can occur:
  - Between clusters
  - Between datastores
  - Between networks (NEW)
  - Between vCenter Servers (NEW)
  - Over long distances, as long as the RTT is under 100 ms (NEW)
vSphere vMotion Architecture
- vSphere vMotion involves transferring the entire execution state of the virtual machine from the source host to the destination, primarily over a high-speed network.
- The execution state primarily consists of the following components:
  - The virtual device state, including the state of the CPU, network and disk adapters, SVGA, and so on
  - External connections with devices, including networking and SCSI devices
  - The virtual machine's physical memory
- Generally a single ping is lost, and users do not even know that the VM has changed hosts.
vSphere vMotion Architecture: Pre-Copy
When a vSphere vMotion migration is initiated, a second VM container is started on the destination host and a pre-copy of the memory begins.

[Diagram: VM A runs on ESXi Host 1 while a shadow VM A is instantiated on ESXi Host 2. A memory bitmap tracks pages dirtied during the transfer as memory is pre-copied over the vMotion network; end users continue to reach the VM over the production network.]
vSphere vMotion Architecture: Memory Checkpoint
- When enough data has been copied, the VM is quiesced.
- Checkpoint data is sent with the final changes.
- An ARP is sent and the VM becomes active on the destination host.
- The source VM is stopped.

[Diagram: the checkpoint data and final memory bitmap changes flow over the vMotion network from ESXi Host 1 to ESXi Host 2; end users now reach VM A on Host 2 over the production network.]

A toy simulation of the pre-copy loop follows.
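To make the two preceding slides concrete, here is a self-contained toy simulation of the iterative pre-copy idea. It is my own sketch under simplifying assumptions (a fixed re-dirty rate), not VMware's implementation.

    # Toy simulation: copy all memory pages, then keep re-copying pages
    # dirtied during the previous pass until few enough remain to quiesce
    # the VM and send the final changes with the checkpoint data.
    import random

    class ToyVM:
        """Stand-in for a running VM whose guest keeps dirtying memory."""
        def __init__(self, num_pages: int):
            self.dirty = set(range(num_pages))   # first pass copies everything
        def run_during_copy(self):
            # Model convergence: the guest re-dirties about half of the pages
            # that were just copied (real rates depend on the workload).
            self.dirty = set(random.sample(sorted(self.dirty), len(self.dirty) // 2))

    def precopy_migrate(vm: ToyVM, threshold: int = 16):
        pass_num = 0
        while len(vm.dirty) > threshold:
            copying = len(vm.dirty)              # send these over the vMotion network
            vm.run_during_copy()                 # guest dirties pages mid-copy
            print(f"pass {pass_num}: copied {copying} pages")
            pass_num += 1
        # Quiesce the source VM, send the last dirty pages plus the device
        # checkpoint, then resume on the destination and stop the source.
        print(f"quiesce: {len(vm.dirty)} final pages + checkpoint data")

    precopy_migrate(ToyVM(4096))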
VMware vSphere Storage vMotion Architecture
vSphere Storage vMotion works in very much the same way as vSphere vMotion, only the disks are migrated instead of memory. It works as follows:
1. Initiate the storage migration.
2. Use the VMkernel data mover or VMware vSphere Storage APIs - Array Integration (VAAI) to copy the data.
3. Start a new virtual machine process.
4. Use the mirror driver to mirror I/O calls to file blocks that have already been copied to the virtual disk on the destination datastore (see the sketch after this list).
5. Cut over to the destination VM process to begin accessing the virtual disk copy.

[Diagram: read/write I/O to the virtual disk passes through the mirror driver in the VMkernel; the data mover (or a VAAI offload in the storage array) copies blocks from the source datastore to the destination datastore.]
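A toy, self-contained sketch of the mirror-driver idea in step 4 (my own simplification, not VMware code): guest writes to blocks that have already been copied must land on both disks so the copy stays consistent.

    # Toy sketch: copy blocks one at a time while mirroring guest writes
    # that land behind the copy cursor to both source and destination.
    source = {i: f"block{i}" for i in range(8)}    # source virtual disk
    destination = {}
    copied = set()

    def copy_next_block():
        for i in sorted(source):
            if i not in copied:
                destination[i] = source[i]
                copied.add(i)
                return

    def guest_write(block: int, data: str):
        source[block] = data
        if block in copied:                        # already copied: mirror it
            destination[block] = data

    for _ in range(8):
        copy_next_block()
        guest_write(0, "new-data")                 # block 0 is mirrored
    assert destination == source                   # disks converge; safe cutover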
vSphere Storage vMotion Architecture: Simultaneous Changes
- vSphere vMotion also allows both storage and host to be changed at the same time.
- New in vSphere 6.0, the VM can also be migrated between networks and between vCenter Servers.

[Diagram: a VM moves over the vSphere vMotion network from one vCenter Server, ESXi host, datastore, and network (Network A) to another (Network B), changing host, datastore, network, and vCenter Server in one operation.]
Availability
VMware vSphere High Availability
VMware vSphere Fault Tolerance
VMware vSphere Distributed Resource Scheduler
vSphere vMotion Architecture: Long-Distance vSphere vMotion
- Cross-continental: targeting migrations across the continental USA.
- Supports up to 100 ms RTT.
- Performance: maintains the standard vSphere vMotion guarantees.
Availability
VMware vSphere High Availability
vSphere High Availability
- vSphere High Availability is an availability solution that monitors hosts and restarts virtual machines in the case of a host failure.
- VM Component Protection: monitoring for APD and PDL events (NEW).
- Agents on the ESXi hosts monitor for the following types of failures:
  - Infrastructure: host failures, VM crashes
  - Connectivity: host network isolated, datastore incurs a PDL or APD event
  - Application: guest OS hangs/crashes, application hangs/crashes
- OS- and application-independent, requiring no complex configuration changes.
vSphere High Availability Architecture: Overview
- A cluster of up to 64 ESXi hosts is created.
- One of the hosts is elected as master when HA is enabled (a toy sketch of the election rule follows).
- Availability heartbeats occur through the network and through storage.
- The HA agent communicates on the following networks by default:
  - Management network, or
  - VMware Virtual SAN network (if Virtual SAN is enabled)

[Diagram: the elected master exchanges network heartbeats with the other hosts and storage heartbeats through the shared datastores.]
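A toy sketch of the commonly described election rule, under the assumption (my own simplification, not an official algorithm) that the host seeing the most datastores wins, with ties broken by the lexically highest host ID:

    # Toy sketch of HA master election: prefer the host with access to the
    # most datastores; break ties by the lexically highest host ID.
    hosts = {
        "host-22": {"ds1", "ds2", "ds3"},
        "host-9":  {"ds1", "ds2", "ds3"},
        "host-14": {"ds1", "ds2"},
    }

    def elect_master(candidates: dict) -> str:
        return max(candidates, key=lambda h: (len(candidates[h]), h))

    print(elect_master(hosts))  # "host-9": ties on datastores, wins lexically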
vSphere High Availability Architecture: Host Failures

[Diagram: a slave host in the cluster fails and stops responding to the master's heartbeats.]
vSphere High Availability Architecture: Host Failures (cont.)

[Diagram: the master declares the slave host dead and restarts its virtual machines on the remaining hosts.]
vSphere High Availability Architecture: Host Failures (cont.)

[Diagram: when the master host itself fails, a new master is elected and resumes master duties.]
vSphere High Availability Architecture: Network Partition

[Diagram: the cluster network splits into partitions A and B; hosts in the partition without the master elect a new master for that partition.]
vSphere High Availability Architecture: Host Isolation

[Diagram: a host loses network connectivity but still updates its storage heartbeats, so the master treats it as isolated rather than dead.]
vSphere High Availability Architecture: VM Monitoring

[Diagram: the HA agent monitors individual virtual machines and restarts a VM whose guest OS stops responding.]
vSphere High Availability Architecture: VM Component Protection

[Diagram: a datastore incurs a PDL or APD event; the affected virtual machines are restarted on a host that still has access to the storage.]
Availability
VMware vSphere Fault Tolerance
vSphere FT
- vSphere FT is an availability solution that provides continuous availability for virtual machines:
  - Zero downtime
  - Zero data loss
  - No loss of TCP connections
- Completely transparent to guest software:
  - No dependency on the guest OS or applications
  - No application-specific management and learning
- Supports up to 4 vCPUs per VM with vSphere 6.0 (NEW).
- Uses fast checkpointing rather than record/replay functionality.
vSphere FT Architecture
- When enabled with vSphere 6.0, vSphere FT creates two complete virtual machines.
- This includes a complete copy of:
  - The VMX configuration files
  - The VMDK files, including the ability to use separate datastores

[Diagram: the primary VM (.vmx file and VMDKs on Datastore 1) and the secondary VM (.vmx file and VMDKs on Datastore 2), both connected to the VM network.]
vSphere FT Architecture: Memory Checkpoint
- vSphere FT in vSphere 6.0 uses fast checkpoint technology.
- This is similar to how vSphere vMotion works, but it runs continuously rather than once.
- A fast checkpoint is a snapshot of all data, not just memory (memory, disks, devices, and so on).
- The vSphere FT logging network has a minimum requirement of a 10 Gbps NIC.

[Diagram: fast checkpoint data flows continuously from the primary VM A on ESXi Host 1 to the secondary VM A on ESXi Host 2 over the logging network, while end users reach the VM over the production network.]
Availability
VMware vSphere Distributed Resource Scheduler
DRS
- DRS is a technology that monitors load and resource usage and uses vSphere vMotion to balance virtual machines across hosts in a cluster.
- DRS also includes VMware Distributed Power Management (DPM), which allows hosts to be evacuated and powered off during periods of low utilization.
- DRS uses vSphere vMotion functionality to migrate VMs.
- DRS can be used in three ways:
  - Fully automated: DRS acts on recommendations automatically.
  - Partially automated: DRS acts only for initial VM power-on placement; an administrator has to approve migration recommendations.
  - Manual: administrator approval is required for all recommendations.
DRS Architecture
- DRS generates migration recommendations based on how aggressively it has been configured (a toy balancing sketch follows).
- For example:
  - The three hosts on the left side of the figure are unbalanced. Host 1 has six virtual machines, so its resources might be overused while ample resources are available on Host 2 and Host 3.
  - DRS migrates (or recommends the migration of) virtual machines from Host 1 to Host 2 and Host 3.
  - The right side of the figure shows the properly load-balanced configuration that results.

[Diagram: before and after views of a three-host cluster; Host 1's excess VMs are redistributed to Host 2 and Host 3.]
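A self-contained toy sketch of the balancing idea (my own drastic simplification of DRS, where "aggressiveness" becomes a single imbalance threshold):

    # Toy sketch: repeatedly move a VM from the most-loaded host to the
    # least-loaded host until the load spread falls under the threshold.
    vm_load = {"vm1": 30, "vm2": 25, "vm3": 20, "vm4": 15, "vm5": 10, "vm6": 5}
    hosts = {"host1": {"vm1", "vm2", "vm3", "vm4", "vm5", "vm6"},
             "host2": set(), "host3": set()}

    def host_load(h):
        return sum(vm_load[v] for v in hosts[h])

    def balance(threshold: int = 15):
        recommendations = []
        while True:
            hi = max(hosts, key=host_load)
            lo = min(hosts, key=host_load)
            if host_load(hi) - host_load(lo) <= threshold:
                return recommendations
            vm = min(hosts[hi], key=lambda v: vm_load[v])  # smallest helpful move
            hosts[hi].remove(vm)
            hosts[lo].add(vm)
            recommendations.append((vm, hi, lo))

    for vm, src, dst in balance():
        print(f"migrate {vm}: {src} -> {dst}")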
Distributed Power Management Architecture
- DPM generates migration recommendations similarly to DRS, but with the goal of achieving power savings.
- It can be configured for how aggressively you want to save power.
- For example:
  - The three hosts on the left side of the figure have virtual machines running, but they are mostly idle.
  - DPM determines that, given the load of the environment, shutting down Host 3 will not impact the level of performance for the VMs.
  - DPM migrates (or recommends the migration of) virtual machines from Host 3 to Host 2 and Host 1 and puts Host 3 into standby mode.
  - The right side of the figure shows the resulting power-managed configuration of the hosts.

[Diagram: before and after views; Host 3's VMs are consolidated onto Hosts 1 and 2 and Host 3 enters standby.]
Content Library
Content Library
- The Content Library is new to vSphere 6.0 and is a distributed template, media, and script library for vCenter Server.
- Similar to the VMware vCloud 5.5 Content Catalog and VMware vCloud Connector Content Sync.
- Tracks versions for generational content, but cannot be used to revert to older versions.

[Diagram: a publisher Content Library on one vCenter Server syncs its items to a subscriber Content Library on another vCenter Server through subscribe and sync operations.]
Content Library Architecture: Publication and Subscription
- Publication and subscription allow libraries to be shared between vCenter Servers.
- Provides a single source for content that can be configured to download and sync according to schedules or timeframes.
- A subscriber subscribes using a URL:
  - Subscription URL (to lib.json)
  - Password (optional)

[Diagram: the subscriber's Transfer Service pulls templates and other items from the publisher over HTTP GET; the Content Library Services on both sides coordinate through the subscription URL.]
Content Library Architecture: Content Synchronization
- Content synchronization occurs when content changes.
- Simple versioning is used to denote the modification, and the changed item is transferred (a sketch of fetching the manifest follows).

[Diagram: the subscriber's Content Library Service polls the publisher over the VMware Content Subscription Protocol (vCSP), reading lib.json, items.json, and item.json; the Transfer Services move changed items over HTTP GET; library metadata is stored in each vCenter database (VCDB).]
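As an unofficial illustration of the pull model, a minimal Python sketch that fetches a published library's lib.json manifest over HTTP GET. The URL shape and the manifest field names here are assumptions for illustration, not documented vCSP schema.

    # Minimal sketch: fetch a subscribed library's lib.json manifest.
    # URL pattern and field names are assumptions, not official schema.
    import json
    import ssl
    import urllib.request

    def fetch_manifest(url: str) -> dict:
        ctx = ssl._create_unverified_context()   # lab-only: self/VMCA-signed certs
        with urllib.request.urlopen(url, timeout=10, context=ctx) as resp:
            return json.load(resp)

    # Example (hypothetical URL; substitute your library's subscription URL):
    # manifest = fetch_manifest(
    #     "https://vcenter.example.com/cls/vcsp/lib/<lib-id>/lib.json")
    # print(manifest.get("name"), manifest.get("version"))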
VMware Certificate Authority
Certificates in vSphere 6.0
- vCenter 5.x solutions had their TCP/IP connections secured with SSL, which required a unique certificate for each solution.
- In vSphere 6.0, the various listening ports have been replaced with a single endpoint: a reverse web proxy on port 443.

[Diagram: the reverse web proxy on port 443 fronting the vCenter Server, Inventory Service, vCenter Single Sign-On, vSphere Update Manager, Storage Policy Service, and vSphere Web Client services.]

- The reverse HTTP proxy routes traffic to the appropriate service based on the type of request.
- This means only one endpoint certificate is needed.
VMware Certificate Authority
- In vSphere 6.0, vCenter ships with an internal certificate authority (CA), called the VMware Certificate Authority.
- An instance of the VMware CA is included with each Platform Services Controller node.
- Issues certificates for VMware components under its own authority in the vSphere ecosystem.
- Runs as part of the Infrastructure Identity Core Service Group:
  - Directory service
  - Certificate service
  - Authentication framework
- The VMware CA issues certificates only to clients that present credentials from the VMware Directory Service (VMDir) in its own identity domain.
- It also posts its root certificate to its own server node in VMware Directory Services.
How is the VMware Certificate Authority Used?
- Machine SSL certificates (a sketch that retrieves one follows):
  - Used by the reverse proxy on every vSphere node
  - Used by the VMware Directory Service on Platform Services Controller and embedded nodes
  - Used by VPXD on management and embedded nodes
- Solution user certificates
- Single Sign-On signing certificates
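A minimal, unofficial sketch (hypothetical host name) that retrieves the machine SSL certificate presented by the reverse proxy on port 443, so you can inspect which CA, for example the VMware CA, issued it:

    # Minimal sketch: pull the PEM of the certificate served on port 443.
    import socket
    import ssl

    def fetch_machine_cert(host: str, port: int = 443) -> str:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE          # inspect even untrusted certs
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return ssl.DER_cert_to_PEM_cert(der)

    # Example (hypothetical host):
    # print(fetch_machine_cert("vcenter.example.com")[:120])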
VMware Endpoint Certificate Store
- Certificate storage and trust are now handled by the VMware Endpoint Certificate Store.
- Serves as a local wallet for certificates, private keys, and secret keys, which can be stored in key stores.
- Runs as part of the Authentication Framework Service.
- Runs on every embedded, Platform Services Controller, and management node.
- Some key stores are special:
  - Trusted certificates key store
  - Machine SSL certificate key store
How is the VMware Endpoint Certificate Store Used?
- Machine SSL store: holds the machine SSL certificate.
- Trusted roots store:
  - Holds trusted root certificates from all VMware CA instances running on every infrastructure controller in the SSO identity domain.
  - Holds third-party trusted root certificates that were uploaded to VMDir and downloaded to every VMware Endpoint Certificate Store instance.
  - Solutions use the contents of this key store to verify certificates.
- Solution key stores: the following key stores hold private keys and solution user certificates:
  - Machine Account Key Store (Platform Services Controller, management, and embedded nodes)
  - VPXD Key Store (management and embedded nodes)
  - VPXD Extension Key Store (management and embedded nodes)
  - VMware vSphere Client Key Store (management and embedded nodes)
Storage
iSCSI Storage Architecture
NFS Storage Architecture
Fibre Channel Architecture
Other Storage Architectural Concepts
Storage
- Local and/or shared storage is a core requirement for full utilization of ESXi features.
- Many kinds of storage can be used with vSphere:
  - Local disks
  - Fibre Channel (FC) SANs
  - FCoE SANs
  - iSCSI SANs
  - NAS
  - Virtual SAN
  - Virtual Volumes (VVOLs)
- Datastores are generally formatted with either:
  - The VMware vSphere VMFS file system
  - The file system of the NFS server

[Diagram: ESXi hosts reaching the datastore types (vSphere VMFS, NFS, VSAN, VVOL) across storage technologies: local disks, FC, FCoE, iSCSI, and NAS.]
Storage Protocol Features
- Each protocol has its own set of supported features.
- All major features are supported by all protocols.

[Table: for each storage protocol (Fibre Channel, FCoE, iSCSI, NFS, direct attached storage, Virtual SAN, VMware Virtual Volumes), whether it supports boot from SAN, VMware vSphere vMotion, vSphere High Availability, DRS, and raw device mapping.]
Storage
iSCSI Storage Architecture
Storage Architecture: iSCSI
- iSCSI storage uses regular IP traffic over a standard network to transport iSCSI commands.
- The ESXi host connects through one of several types of iSCSI initiator.

Storage Architecture: iSCSI Components
- All iSCSI systems share a common set of components that are used to provide the storage access.
Storage Architecture: iSCSI Addressing
In addition to standard IP addresses, iSCSI targets and initiators are identified by names (a small classifier sketch follows):
- iSCSI target name: iqn.1992-08.com.mycompany:stor1-47cf3c25 or eui.fedcba9876543210
  - iSCSI alias: stor1
  - IP address: 192.168.36.101
- iSCSI initiator name: iqn.1998-01.com.vmware:train1-64ad4c29 or eui.1234567890abcdef
  - iSCSI alias: train1
  - IP address: 192.168.36.88
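A small sketch (my own simplified regexes, looser than the full IQN/EUI grammar) that classifies the naming formats shown above:

    # Minimal sketch: classify iSCSI node names as iqn, eui, or invalid.
    import re

    IQN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")  # iqn.YYYY-MM.reversed.domain:tag
    EUI = re.compile(r"^eui\.[0-9A-Fa-f]{16}$")                 # eui. + 16 hex digits

    for name in ["iqn.1998-01.com.vmware:train1-64ad4c29",
                 "eui.fedcba9876543210",
                 "not-a-valid-name"]:
        kind = "iqn" if IQN.match(name) else "eui" if EUI.match(name) else "invalid"
        print(f"{name}: {kind}")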
Storage
NFS Storage Architecture
Storage Architecture: NFS Components
Much like iSCSI, NFS accesses storage over the network. The components are:
- A NAS device, or a server with storage shared over the network
- A directory to share with the ESXi host
- An ESXi host with a NIC mapped to a virtual switch
- A VMkernel port defined on that virtual switch
Storage Architecture: Addressing and Access Control with NFS
- ESXi accesses the NFS server by address or name through a VMkernel port.
- NFS version 4.1 and NFS version 3 are both available with vSphere 6.0.
- Different features are supported with different versions of the protocol:
  - NFS 4.1 supports multipathing, unlike NFS 3.
  - NFS 3 supports all features; NFS 4.1 does not support Storage DRS, VMware vSphere Storage I/O Control, VMware vCenter Site Recovery Manager, or Virtual Volumes.
- Dedicated switches are not required for NFS configurations.

[Diagram: a VMkernel port configured with IP address 192.168.81.72 reaches the NFS server at 192.168.81.33.]
Storage
Fibre Channel Architecture
Storage Architecture: Fibre Channel
- Unlike network storage such as NFS or iSCSI, Fibre Channel does not generally use an IP network for storage access.
- The exception is Fibre Channel over Ethernet (FCoE).

Storage Architecture: Fibre Channel Addressing and Access Control
- Zoning and LUN masking are used to control access to storage LUNs.
Storage Architecture: FCoE Adapters
- FCoE adapters allow access to Fibre Channel storage over Ethernet connections.
- In many cases, this enables expansion to Fibre Channel SANs where no Fibre Channel infrastructure exists.
- Both hardware and software adapters are allowed:
  - Hardware adapters are often called converged network adapters (CNAs); a single card often presents both a NIC and an HBA to the host.
  - Software FCoE uses a 10 Gigabit Ethernet NIC with FCoE support together with the ESXi software FC driver.

[Diagram: hardware FCoE (converged network adapter with network and FC drivers) and software FCoE (NIC with FCoE support plus a software FC driver) both connect through an FCoE switch, which forwards Ethernet IP frames to LAN devices and FC frames to FC storage arrays.]
Storage
Other Storage Architectural Concepts
Multipathing
- Multipathing enables continued access to SAN LUNs if hardware fails.
- It can also provide load balancing, based on the selected path policy.
vSphere Storage I/O Control
- vSphere Storage I/O Control allows traffic to be prioritized during periods of contention (a toy sketch of the shares model follows).
- Brings the compute-style shares/limits model to the storage infrastructure.
- Monitors device latency and acts when it exceeds a threshold.
- Allows important virtual machines to have priority access to resources during high I/O from non-critical applications.

[Diagram: without vSphere Storage I/O Control, a data-mining workload can crowd out a print server, an online store, and Microsoft Exchange; with it, shares keep the critical workloads prioritized.]
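A toy sketch of the shares model (my own simplification, not the actual algorithm): when measured latency exceeds the threshold, each VM's device queue depth is set in proportion to its shares.

    # Toy sketch: proportional-share throttling under latency contention.
    shares = {"exchange": 2000, "online-store": 1000, "data-mining": 500}

    def queue_slots(total_slots: int, latency_ms: float, threshold_ms: float = 30.0):
        if latency_ms <= threshold_ms:
            return {vm: total_slots for vm in shares}      # no contention
        total = sum(shares.values())
        return {vm: max(1, total_slots * s // total) for vm, s in shares.items()}

    print(queue_slots(total_slots=64, latency_ms=45.0))
    # {'exchange': 36, 'online-store': 18, 'data-mining': 9}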
Datastore Clusters
- A datastore cluster is a collection of datastores with shared resources, similar to an ESXi host cluster.
- Allows the datastores to be managed through a single shared management interface.
- Storage DRS can be used to manage the resources and ensure they stay balanced.
- Can be managed by using the following constructs:
  - Space utilization
  - I/O latency load balancing
  - Affinity rules for virtual disks
Software-Defined Storage
- Software-defined storage is a software construct that is used by:
  - Virtual Volumes
  - Virtual SAN
- Uses storage policy-based management to assign policies to virtual machines for storage access.
- Policies are assigned on a per-disk basis, rather than a per-datastore basis.
- A key tenet of the software-defined data center.
- Both Virtual Volumes and Virtual SAN are discussed in much greater detail in the Software-Defined Storage Knowledge Transfer Kit.
Networking
Networking
- Networking is also a core resource for vSphere.
- Two core types of switches are provided:
  - Standard virtual switches: virtual switch configuration for a single host.
  - Distributed virtual switches: data center-level virtual switches that provide a consistent network configuration for virtual machines as they migrate across multiple hosts.
- Third-party switches, such as the Cisco Nexus 1000V, are also allowed.
- There are two basic types of connectivity as well:
  - Virtual machine port groups
  - VMkernel port groups, used for IP storage, vSphere vMotion migration, vSphere FT, Virtual SAN, provisioning, and so on, and for the ESXi management network
Networking Architecture

[Diagram: VM1, VM2, and VM3 connect to virtual machine port groups on the Test (VLAN 101) and Production (VLAN 102) networks, while VMkernel ports carry IP storage (VLAN 103) and the management network (VLAN 104).]
Network Architecture: Standard Compared to Distributed

[Diagram: a standard vSwitch configured per host, compared with a distributed vSwitch spanning multiple hosts at the data center level.]
Network Architecture: NIC Teaming and Load Balancing
- NIC teaming enables multiple NICs to be connected to a single virtual switch for continued access to networks if hardware fails.
- It can also enable load balancing (where appropriate).
- Load balancing policies (see the sketch after this list):
  - Route based on originating virtual port
  - Route based on source MAC hash
  - Route based on IP hash
  - Route based on physical NIC load
  - Use explicit failover order
- Most of the available policies can be configured on either type of switch; Route based on physical NIC load is only available on the VMware vSphere Distributed Switch.
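A toy sketch (my own illustration, not ESXi code) of the two hashing policies named above: pick an uplink by the VM's virtual port ID, or by a hash of the source/destination IP pair.

    # Toy sketch: uplink selection for two of the load balancing policies.
    UPLINKS = ["vmnic0", "vmnic1", "vmnic2"]

    def by_virtual_port(port_id: int) -> str:
        # Route based on originating virtual port: stable per vNIC.
        return UPLINKS[port_id % len(UPLINKS)]

    def by_ip_hash(src_ip: str, dst_ip: str) -> str:
        # Route based on IP hash: one vNIC can spread across uplinks,
        # one per IP conversation (needs EtherChannel upstream).
        return UPLINKS[hash((src_ip, dst_ip)) % len(UPLINKS)]

    print(by_virtual_port(7))                         # 7 % 3 = 1 -> vmnic1
    print(by_ip_hash("10.0.0.5", "192.168.1.20"))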
VMware vSphere Network I/O Control
- vSphere Network I/O Control allows traffic to be prioritized during periods of contention.
- Brings the compute-style shares/limits model to the network infrastructure.
- Monitors utilization and acts when it exceeds a threshold.
- Allows important virtual machines or services to have priority access to resources.

[Diagram: multiple traffic types sharing a 10 GigE uplink through a virtual switch.]
Software-Defined Networking
- Software-defined networking is a software construct that allows your physical network to be treated as a pool of transport capacity, with network and security services attached to VMs through a policy-driven approach.
- Decouples the network configuration from the physical infrastructure.
- Allows for security and micro-segmentation of traffic.
- A key tenet of the software-defined data center (SDDC).
Questions

VMware vSphere 6.0
Knowledge Transfer Kit

VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304

Tel: 1-877-486-9273 or 650-427-5000


Fax: 650-427-5001
