
VMware vSphere Distributed Switch Best Practices
TECHNICAL WHITE PAPER
Table of Contents
Introduction
Design Considerations
    Infrastructure Design Goals
    Infrastructure Component Configurations
    Virtual Infrastructure Traffic
Example Deployment Components
    Hosts
    Clusters
    VMware vCenter Server
    Network Infrastructure
    Virtual Infrastructure Traffic Types
Important Virtual and Physical Switch Parameters
    VDS Parameters
        Host Uplink Connections (vmnics) and dvuplink Parameters
        Traffic Types and dvportgroup Parameters
        dvportgroup Specific Configuration
        NIOC
        Bidirectional Traffic Shaping
    Physical Network Switch Parameters
        VLAN
        Spanning Tree Protocol
        Link Aggregation Setup
        Link-State Tracking
        Maximum Transmission Unit
Rack Server in Example Deployment
    Rack Server with Eight 1GbE Network Adaptors
        Design Option 1: Static Configuration
            dvuplink Configuration
            dvportgroup Configuration
            Physical Switch Configuration
        Design Option 2: Dynamic Configuration with NIOC and LBT
            dvportgroup Configuration
    Rack Server with Two 10GbE Network Adaptors
        Design Option 1: Static Configuration
            dvuplink Configuration
            dvportgroup Configuration
            Physical Switch Configuration
        Design Option 2: Dynamic Configuration with NIOC and LBT
            dvportgroup Configuration
Blade Server in Example Deployment
    Blade Server with Two 10GbE Network Adaptors
        Design Option 1: Static Configuration
        Design Option 2: Dynamic Configuration with NIOC and LBT
    Blade Server with Hardware-Assisted Logical Network Adaptors (HP Flex-10 or Cisco UCS-like Deployment)
Operational Best Practices
    VMware vSphere Command-Line Interface
    VMware vSphere API
    Virtual Network Monitoring and Troubleshooting
    vCenter Server on a Virtual Machine
Conclusion
Introduction
This paper provides best practice guidelines for deploying the VMware vSphere distributed switch (VDS) in a vSphere environment. The advanced capabilities of VDS provide network administrators with more control of and visibility into their virtual network infrastructure. This document covers the different considerations that vSphere and network administrators must take into account when designing the network with VDS. It also discusses some standard best practices for configuring VDS features.

The paper describes two example deployments, one using rack servers and the other using blade servers. For each of these deployments, different VDS design approaches are explained. The deployments and design approaches described in this document are meant to provide guidance as to what physical and virtual switch parameters, options and features should be considered during the design of a virtual network infrastructure. It is important to note that customers are not limited to the design options described in this paper. The flexibility of the vSphere platform allows for multiple variations in the design options that can fulfill an individual customer's unique network infrastructure needs.

This document is intended for vSphere and network administrators interested in understanding and deploying VDS in a virtual datacenter environment. With the release of vSphere 5, there are new features as well as enhancements to the existing features in VDS. To learn more about these new features and enhancements, refer to the What's New in Networking paper: http://www.vmware.com/resources/techresources/10194.

Readers are also encouraged to refer to basic virtual and physical networking concepts before reading through this document. The following link provides technical resources for virtual networking concepts:
http://www.vmware.com/technical-resources/virtual-networking/resources.html

For physical networking concepts, readers should refer to any physical network switch vendor's documentation.
Design Considerations
The following three main aspects influence the design of a virtual network infrastructure:
1) Customer's infrastructure design goals
2) Customer's infrastructure component configurations
3) Virtual infrastructure traffic requirements
Let's take a look at each of these aspects in a little more detail.

Infrastructure Design Goals
Customers want their network infrastructure to be available 24/7, to be secure from any attacks, to perform efficiently throughout day-to-day operations, and to be easy to maintain. In the case of a virtualized environment, these requirements become increasingly demanding as growing numbers of business-critical applications run in a consolidated setting. These requirements on the infrastructure translate into design decisions that should incorporate the following best practices for a virtual network infrastructure:
• Avoid any single point of failure in the network.
• Isolate each traffic type for increased resiliency and security.
• Make use of traffic management and optimization capabilities.
Infrastructure Component Configurations
In every customer environment, the utilized compute and network infrastructures differ in terms of configuration, capacity and feature capabilities. These different infrastructure component configurations influence the virtual network infrastructure design decisions. The following are some of the configurations and features that administrators must look out for:
• Server configuration: rack or blade servers
• Network adaptor configuration: 1GbE or 10GbE network adaptors; number of available adaptors; offload function on these adaptors, if any
• Physical network switch infrastructure capabilities: switch clustering
It is impossible to cover all the different virtual network infrastructure design deployments based on the various combinations of type of servers, network adaptors and network switch capability parameters. In this paper, the following four commonly used deployments that are based on standard rack server and blade server configurations are described:
• Rack server with eight 1GbE network adaptors
• Rack server with two 10GbE network adaptors
• Blade server with two 10GbE network adaptors
• Blade server with hardware-assisted multiple logical Ethernet network adaptors
It is assumed that the network switch infrastructure has standard layer 2 switch features (high availability, redundant paths, fast convergence, port security) available to provide reliable, secure and scalable connectivity to the server infrastructure.
Virtual Infrastructure Traffic
vSphere virtual network infrastructure carries different traffic types. To manage the virtual infrastructure traffic effectively, vSphere and network administrators must understand the different traffic types and their characteristics. The following are the key traffic types that flow in the vSphere infrastructure, along with their traffic characteristics:
• Management traffic: This traffic flows through a vmknic and carries VMware ESXi host to VMware vCenter configuration and management communication, as well as ESXi host to ESXi host high availability (HA) related communication. This traffic has low network utilization but has very high availability and security requirements.
• VMware vSphere vMotion traffic: With advancement in vMotion technology, a single vMotion instance can consume almost a full 10Gb bandwidth. A maximum of eight simultaneous vMotion instances can be performed on a 10Gb uplink; four simultaneous vMotion instances are allowed on a 1Gb uplink. vMotion traffic has very high network utilization and can be bursty at times. Customers must make sure that vMotion traffic doesn't impact other traffic types, because it might consume all available I/O resources. Another property of vMotion traffic is that it is not sensitive to throttling and makes a very good candidate on which to perform traffic management.
• Fault-tolerant traffic: When VMware Fault Tolerance (FT) logging is enabled for a virtual machine, all the logging traffic is sent to the secondary fault-tolerant virtual machine over a designated vmknic port. This process can require a considerable amount of bandwidth at low latency because it replicates the I/O traffic and memory-state information to the secondary virtual machine.
• iSCSI/NFS traffic: IP storage traffic is carried over vmknic ports. This traffic varies according to disk I/O requests. With end-to-end jumbo frame configuration, more data is transferred with each Ethernet frame, decreasing the number of frames on the network. This larger frame reduces the overhead on servers/targets and improves the IP storage performance. On the other hand, congested and lower-speed networks can cause latency issues that disrupt access to IP storage. It is recommended that users provide a high-speed path for IP storage and avoid any congestion in the network infrastructure.
• Virtual machine traffic: Depending on the workloads that are running on the guest virtual machine, the traffic patterns will vary from low to high network utilization. Some of the applications running in virtual machines might be latency sensitive, as is the case with VOIP workloads.
Table 1 summarizes the characteristics of each traffic type.

TRAFFIC TYPE    | BANDWIDTH USAGE         | OTHER TRAFFIC REQUIREMENTS
MANAGEMENT      | Low                     | Highly reliable and secure channel
vMOTION         | High                    | Isolated channel
FT              | Medium to high          | Highly reliable, low-latency channel
iSCSI           | High                    | Reliable, high-speed channel
VIRTUAL MACHINE | Depends on application  | Depends on application

Table 1. Traffic Types and Characteristics
To understand the different traffic flows in the physical network infrastructure, network administrators use network traffic management tools. These tools help monitor the physical infrastructure traffic but do not provide visibility into virtual infrastructure traffic. With the release of vSphere 5, VDS now supports the NetFlow feature, which enables exporting the internal (virtual machine to virtual machine) virtual infrastructure flow information to standard network management tools. Administrators now have the required visibility into virtual infrastructure traffic. This helps administrators monitor the virtual network infrastructure traffic through a familiar set of network management tools. Customers should make use of the network data collected from these tools during the capacity planning or network design exercises.
Example Deployment Components
After looking at the different design considerations, this section provides a list of components that are used in an example deployment. This example deployment helps illustrate some standard VDS design approaches. The following are some common components in the virtual infrastructure. The list doesn't include storage components that are required to build the virtual infrastructure. It is assumed that customers will deploy IP storage in this example deployment.

Hosts
Four ESXi hosts provide compute, memory and network resources according to the configuration of the hardware. Customers can have different numbers of hosts in their environment, based on their needs. One VDS can span across 350 hosts. This capability to support large numbers of hosts provides the required scalability to build a private or public cloud environment using VDS.

Clusters
A cluster is a collection of ESXi hosts and associated virtual machines with shared resources. Customers can have as many clusters in their deployment as are required. With one VDS spanning across 350 hosts, customers have the flexibility of deploying multiple clusters with a different number of hosts in each cluster. For simple illustration purposes, two clusters with two hosts each are considered in this example deployment. One cluster can have a maximum of 32 hosts.
VMware vCenter Server
VMware vCenter Server centrally manages a vSphere environment. Customers can manage VDS through this centralized management tool, which can be deployed on a virtual machine or a physical host. The vCenter Server system is not shown in the diagrams, but customers should assume that it is present in this example deployment. It is used only to provision and manage VDS configuration. When provisioned, hosts and virtual machine networks operate independently of vCenter Server. All components required for network switching reside on ESXi hosts. Even if the vCenter Server system fails, the hosts and virtual machines will still be able to communicate.

Network Infrastructure
Physical network switches in the access and aggregation layer provide connectivity between ESXi hosts and to the external world. These network infrastructure components support standard layer 2 protocols providing secure and reliable connectivity.

Along with the preceding four components of the physical infrastructure in this example deployment, some of the virtual infrastructure traffic types are also considered during the design. The following section describes the different traffic types in the example deployment.

Virtual Infrastructure Traffic Types
In this example deployment, there are standard infrastructure traffic types, including iSCSI, vMotion, FT, management and virtual machine. Customers might have other traffic types in their environment, based on their choice of storage infrastructure (FC, NFS, FCoE). Figure 1 shows the different traffic types along with associated port groups on an ESXi host. It also shows the mapping of the network adaptors to the different port groups.
[Figure: an ESXi host connected to a VDS with five port groups (PG-A through PG-E) carrying iSCSI traffic (vmk1), FT traffic (vmk2), management traffic (vmk3), vMotion traffic (vmk4) and virtual machine traffic.]
Figure 1. Different Traffic Types Running on a Host
Important Virtual and Physical Switch Parameters
Before going into the different design options in the example deployment, let's take a look at the virtual and physical network switch parameters that should be considered in all of the design options. These are some key parameters that vSphere and network administrators must take into account when designing VMware virtual networking. Because the configuration of virtual networking goes hand in hand with physical network configuration, this section will cover both the virtual and physical switch parameters.

VDS Parameters
VDS simplifies the challenges of the configuration process by providing one single pane of glass to perform virtual network management tasks. As opposed to configuring a vSphere standard switch (VSS) on each individual host, administrators can configure and manage one single VDS. All centrally configured network policies on VDS get pushed down to the host automatically when the host is added to the distributed switch. In this section, an overview of key VDS parameters is provided.
Host Uplink Connections (vmnics) and dvuplink Parameters
VDS has a new abstraction, called dvuplink, for the physical Ethernet network adaptors (vmnics) on each host. It is defined during the creation of the VDS and can be considered as a template for individual vmnics on each host. All the properties, including network adaptor teaming, load balancing and failover policies on VDS and dvportgroups, are configured on dvuplinks. These dvuplink properties are automatically applied to vmnics on individual hosts when a host is added to the VDS and when each vmnic on the host is mapped to a dvuplink. This dvuplink abstraction therefore provides the advantage of consistently applying teaming and failover configurations to all the hosts' physical Ethernet network adaptors (vmnics).

Figure 2 shows two ESXi hosts with four Ethernet network adaptors each. When these hosts are added to the VDS, with four dvuplinks configured on a dvuplink port group, administrators must assign the network adaptors (vmnics) of the hosts to dvuplinks. To illustrate the mapping of the dvuplinks to vmnics, Figure 2 shows one type of mapping where ESXi host vmnic0 is mapped to dvuplink1, vmnic1 to dvuplink2, and so on. Customers can choose a different mapping, if required, where vmnic0 can be mapped to a different dvuplink instead of dvuplink1. VMware recommends having consistent mapping across different hosts because it reduces complexity in the environment.
[Figure: two ESXi hosts, each with four vmnics (vmnic0 through vmnic3) mapped to dvuplink1 through dvuplink4 on a vSphere Distributed Switch that carries port groups PG-A and PG-B.]
Figure 2. dvuplink-to-vmnic Mapping
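To make the dvuplink abstraction concrete, the following is a minimal pyVmomi (vSphere Python SDK) sketch of defining four named dvuplinks on an existing VDS, mirroring the four-vmnic hosts in Figure 2. The vCenter address, credentials and switch name are illustrative assumptions, not defaults; certificate handling and error checking are omitted.

```python
# Sketch: define four named dvuplinks on an existing VDS (names are examples).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vc.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()

# Locate the VDS by walking the inventory; "dvSwitch" is an assumed name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "dvSwitch")

# Reconfigure the switch with an explicit list of dvuplink names.
spec = vim.DistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["dvuplink1", "dvuplink2", "dvuplink3", "dvuplink4"])
dvs.ReconfigureDvs_Task(spec)
Disconnect(si)
```

The same ConfigSpec pattern is reused by the later sketches in this paper, which assume the "dvs" object located here.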
As a best practice, customers should also try to deploy hosts with the same number of physical Ethernet network adaptors and with similar port speeds. Also, because the number of dvuplinks on VDS depends on the maximum number of physical Ethernet network adaptors on a host, administrators should take that into account during dvuplink port group configuration. Customers always have an option to modify this dvuplink configuration based on the new hardware capabilities.
Traffic Types and dvportgroup Parameters
Similar to port groups on standard switches, dvportgroups define how the connection is made through the VDS to the network. The VLAN ID, traffic shaping, port security, teaming and load balancing parameters are configured on these dvportgroups. The virtual ports (dvports) connected to a dvportgroup share the same properties configured on a dvportgroup. When customers want a group of virtual machines to share the security and teaming policies, they must make sure that the virtual machines are part of one dvportgroup. Customers can choose to define different dvportgroups based on the different traffic types they have in their environment or based on the different tenants or applications they support in the environment. If desired, multiple dvportgroups can share the same VLAN ID.

In this example deployment, the dvportgroup classification is based on the traffic types running in the virtual infrastructure. After administrators understand the different traffic types in the virtual infrastructure and identify specific security, reliability and performance requirements for individual traffic types, the next step is to create unique dvportgroups associated with each traffic type. As was previously mentioned, the dvportgroup configuration defined at VDS level is automatically pushed down to every host that is added to the VDS. For example, in Figure 2, the two dvportgroups, PG-A (yellow) and PG-B (green), defined at the distributed switch level are available on each of the ESXi hosts that are part of that VDS.
dvportgroup Specific Configuration
After customers decide on the number of unique dvportgroups they want to create in their environment, they can start configuring them. The configuration options/parameters are similar to those available with port groups on vSphere standard switches. There are some additional options available on VDS dvportgroups that are related to teaming setup and are not available on vSphere standard switches. Customers can configure the following key parameters for each dvportgroup:
• Number of virtual ports (dvports)
• Port binding (static, dynamic, ephemeral)
• VLAN trunking/private VLANs
• Teaming and load balancing along with active and standby links
• Bidirectional traffic-shaping parameters
• Port security

As part of the teaming algorithm support, VDS provides a unique approach to load balancing traffic across the teamed network adaptors. This approach is called load-based teaming (LBT), which distributes the traffic across the network adaptors based on the percentage utilization of traffic on those adaptors. The LBT algorithm works on both ingress and egress direction of the network adaptor traffic, as opposed to the hashing algorithms that work only in egress direction (traffic flowing out of the network adaptor). Also, LBT prevents the worst-case scenario that might happen with hashing algorithms, where all traffic hashes to one network adaptor of the team while other network adaptors are not used to carry any traffic. To improve the utilization of all the links/network adaptors, VMware recommends the use of this advanced feature, LBT, of VDS. The LBT approach is recommended over the EtherChannel on physical switches and route-based IP hash configuration on the virtual switch.
Port security policies at a port group level enable customer protection from certain activity that might compromise security. For example, a hacker might impersonate a virtual machine and gain unauthorized access by spoofing the virtual machine's MAC address. VMware recommends setting the "MAC Address Changes" and "Forged Transmits" to "Reject" to help protect against attacks launched by a rogue guest operating system. Customers should set the "Promiscuous Mode" to "Reject" unless they want to monitor the traffic for network troubleshooting or intrusion detection purposes.
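The following is a minimal pyVmomi sketch of these dvportgroup settings: a port group with an isolated VLAN and the recommended security policy (MAC Address Changes, Forged Transmits and Promiscuous Mode all set to Reject). It reuses the "dvs" object from the earlier sketch; the port group name and VLAN ID are illustrative.

```python
# Sketch: create a dvportgroup with a dedicated VLAN and hardened security.
from pyVmomi import vim

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "PG-Mgmt"               # illustrative name
pg_spec.type = "earlyBinding"          # static port binding
pg_spec.numPorts = 16

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
    vlanId=100, inherited=False)       # separate VLAN per traffic type
port_cfg.securityPolicy = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(
    allowPromiscuous=vim.BoolPolicy(value=False),   # Promiscuous Mode: Reject
    macChanges=vim.BoolPolicy(value=False),         # MAC Address Changes: Reject
    forgedTransmits=vim.BoolPolicy(value=False),    # Forged Transmits: Reject
    inherited=False)
pg_spec.defaultPortConfig = port_cfg

dvs.AddDVPortgroup_Task([pg_spec])
```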
NIOC
Network I/O Control (NIOC) is the traffic management capability available on VDS. The NIOC concept revolves around resource pools that are similar in many ways to the ones existing for CPU and memory. vSphere and network administrators now can allocate I/O shares to different traffic types, similarly to allocating CPU and memory resources to a virtual machine. The share parameter specifies the relative importance of a traffic type over other traffic and provides a guaranteed minimum when the other traffic competes for a particular network adaptor. The shares are specified in abstract units numbered 1 to 100. Customers can provision shares to different traffic types based on the amount of resources each traffic type requires.

This capability of provisioning I/O resources is very useful in situations where there are multiple traffic types competing for resources. For example, in a deployment where vMotion and virtual machine traffic types are flowing through one network adaptor, it is possible that vMotion activity might impact the virtual machine traffic performance. In this situation, shares configured in NIOC provide the required isolation to the vMotion and virtual machine traffic types and prevent one flow (traffic type) from dominating the other flow. NIOC configuration provides one more parameter that customers can utilize if they want to put any limits on a particular traffic type. This parameter is called the "limit." The limit configuration specifies the absolute maximum bandwidth for a traffic type on a host. The configuration of the limit parameter is specified in Mbps. NIOC limits and shares parameters work only on the outbound traffic, i.e., traffic that is flowing out of the ESXi host.

VMware recommends that customers utilize this traffic management feature whenever they have multiple traffic types flowing through one network adaptor, a situation that is more prominent with 10 Gigabit Ethernet (GbE) network deployments but can happen in 1GbE network deployments as well. The common use case for using NIOC in 1GbE network adaptor deployments is when the traffic from different workloads or different customer virtual machines is carried over the same network adaptor. As multiple-workload traffic flows through a network adaptor, it becomes important to provide I/O resources based on the needs of the workload. With the release of vSphere 5, customers now can make use of the new user-defined network resource pools capability and can allocate I/O resources to the different workloads or different customer virtual machines, depending on their needs. This user-defined network resource pools feature provides the granular control in allocating I/O resources and meeting the service-level agreement (SLA) requirements for the virtualized tier 1 workloads.
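As a rough sketch of how this looks through the vSphere API, the following pyVmomi fragment enables NIOC on the VDS located earlier and lists each network resource pool's share and limit allocation. It assumes a vSphere 5-era SDK; it is an illustration, not a definitive implementation.

```python
# Sketch: enable NIOC on the VDS and inspect resource pool allocations.
dvs.EnableNetworkResourceManagement(enable=True)   # turn NIOC on for this VDS

for pool in dvs.networkResourcePool:
    alloc = pool.allocationInfo
    print(f"{pool.key}: shares={alloc.shares.shares} "
          f"({alloc.shares.level}), limit={alloc.limit}Mbps")

# Changing an allocation is done by building a DVSNetworkResourcePoolConfigSpec
# (key, configVersion, and a new allocationInfo carrying vim.SharesInfo and a
# limit) and passing it to dvs.UpdateNetworkResourcePool([spec]).
```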
Bidirectional Traffic Shaping
Besides NIOC, there is another traffic-shaping feature that is available in the vSphere platform. It can be configured at a dvportgroup or dvport level. Customers can shape both inbound and outbound traffic using three parameters: average bandwidth, peak bandwidth and burst size. Customers who want more granular traffic-shaping controls to manage their traffic types can take advantage of this capability of VDS along with the NIOC feature. It is recommended that network administrators in your organization be involved while configuring these granular traffic parameters. These controls make sense only in oversubscription scenarios, where an oversubscribed physical switch infrastructure or virtual infrastructure is causing network performance issues. So it is very important to understand the physical and virtual network environment before making any bidirectional traffic-shaping configurations.
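The following is a minimal pyVmomi sketch of the three shaping parameters applied in both directions on a dvportgroup. The exact pyVmomi type path for the shaping policy may vary by SDK version, so treat this as an assumption to verify; all bandwidth values (bits per second) and the burst size (bytes) are illustrative, not recommendations.

```python
# Sketch: bidirectional shaping policy for a dvportgroup's default port config.
from pyVmomi import vim

def shaping(avg_bps, peak_bps, burst_bytes):
    # Build one direction's shaping policy; called once for in and once for out.
    return vim.dvs.DistributedVirtualPort.TrafficShapingPolicy(
        enabled=vim.BoolPolicy(value=True),
        averageBandwidth=vim.LongPolicy(value=avg_bps),
        peakBandwidth=vim.LongPolicy(value=peak_bps),
        burstSize=vim.LongPolicy(value=burst_bytes),
        inherited=False)

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    inShapingPolicy=shaping(500_000_000, 1_000_000_000, 100 * 1024 * 1024),
    outShapingPolicy=shaping(500_000_000, 1_000_000_000, 100 * 1024 * 1024))
# port_cfg is then assigned to a dvportgroup ConfigSpec's defaultPortConfig,
# as shown in the earlier port group sketch.
```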
Physical Network Switch Parameters
The configurations of the VDS and the physical network switch should go hand in hand to provide resilient, secure and scalable connectivity to the virtual infrastructure. The following are some key switch configuration parameters the customer should pay attention to.

VLAN
VLANs are used to provide logical isolation between different traffic types, so it is important to make sure that those VLANs are carried over to the physical switch infrastructure. To do so, enable virtual switch tagging (VST) on the virtual switch, and trunk all VLANs to the physical switch ports. For security reasons, it is recommended that customers not use the VLAN ID 1 (default) for any VMware infrastructure traffic.
Spanning Tree Protocol
Spanning Tree Protocol (STP) is not supported on virtual switches, so no configuration is required on VDS. But it is important to enable this protocol on the physical switches. STP makes sure that there are no loops in the network. As a best practice, customers should configure the following:
• Use PortFast on ESXi host-facing physical switch ports. With this setting, network convergence on these switch ports will take place quickly after a failure, because the port will enter the STP forwarding state immediately, bypassing the listening and learning states.
• Use the PortFast Bridge Protocol Data Unit (BPDU) guard feature to enforce the STP boundary. This configuration protects against any invalid device connection on the ESXi host-facing access switch ports. As was previously mentioned, VDS doesn't support STP, so it doesn't send any BPDU frames to the switch port. However, if any BPDU is seen on these ESXi host-facing access switch ports, the BPDU guard feature puts that particular switch port in error-disabled state. The switch port is completely shut down, and this prevents it from affecting the Spanning Tree Topology.

The recommendation of enabling PortFast and the BPDU guard feature on the switch ports is valid only when customers connect nonswitching/bridging devices to these ports. The switching/bridging devices can be hardware-based physical boxes or servers running a software-based switching/bridging function. Customers should make sure that there is no switching/bridging function enabled on the ESXi hosts that are connected to the physical switch ports.

However, in the scenario where the ESXi host has a guest virtual machine that is configured to perform a bridging function, the virtual machine will generate BPDU frames and send them out to the VDS, which then forwards the BPDU frames through the network adaptor to the physical switch port. When the switch port configured with BPDU guard receives the BPDU frame, the switch will disable the port and the virtual machine will lose connectivity. To avoid this network failure scenario when running the software bridging function on an ESXi host, customers should disable the PortFast and BPDU guard configuration on the physical switch port and run STP.

If customers are concerned about attacks that can generate BPDU frames, they should make use of VMware vShield App, which can block the frames and protect the virtual infrastructure from such layer 2 attacks. Refer to VMware vShield product documentation for more details on how to secure your vSphere virtual infrastructure: http://www.vmware.com/products/vshield/overview.html.
Link Aggregation Setup
Link aggregation is used to increase throughput and improve resiliency by combining multiple network connections. There are various proprietary solutions on the market along with the vendor-independent IEEE 802.3ad (LACP) standard-based implementation. All solutions establish a logical channel between the two endpoints, using multiple physical links. In the vSphere virtual infrastructure, the two ends of the logical channel are the VDS and physical switch. These two switches must be configured with link aggregation parameters before the logical channel is established. Currently, VDS supports static link aggregation configuration and does not provide support for dynamic LACP. When customers want to enable link aggregation on a physical switch, they should configure static link aggregation on the physical switch and select IP hash as network adaptor teaming on the VDS.
When establishing the logical channel with multiple physical links, customers should make sure that the Ethernet network adaptor connections from the host are terminated on a single physical switch. However, if customers have deployed clustered physical switch technology, the Ethernet network adaptor connections can be terminated on two different physical switches. The clustered physical switch technology is referred to by different names by networking vendors. For example, Cisco calls their switch clustering solution Virtual Switching System; Brocade calls theirs Virtual Cluster Switching. Refer to the networking vendor guidelines and configuration details when deploying switch clustering technology.
Link-State Tracking
Link-state tracking is a feature available on Cisco switches to manage the link state of downstream ports, ports connected to servers, based on the status of upstream ports, ports connected to aggregation/core switches. When there is any failure on the upstream links connected to aggregation or core switches, the associated downstream link status goes down. The server connected on the downstream link is then able to detect the failure and reroute the traffic on other working links. This feature therefore provides protection from network failures due to failed upstream ports in nonmesh topologies. Unfortunately, this feature is not available on all vendors' switches, and even if it is available, it might not be referred to as link-state tracking. Customers should talk to the switch vendors to find out whether a similar feature is supported on their switches.

Figure 3 shows the resilient mesh topology on the left and a simple loop-free topology on the right. VMware highly recommends deploying the mesh topology shown on the left, which provides a highly reliable redundant design and doesn't need a link-state tracking feature. Customers who don't have high-end networking expertise and are also limited in number of switch ports might prefer the deployment shown on the right. In this deployment, customers don't have to run STP because there are no loops in the network design. The downside of this simple design is seen when there is a failure in the link between the access and aggregation switches. In that failure scenario, the server will continue to send traffic on the same network adaptor even when the access layer switch is dropping the traffic at the upstream interface. To avoid this blackholing of server traffic, customers can enable link-state tracking on the virtual and physical switches and indicate any failure between access and aggregation switch layers to the server through link-state information.
[Figure: two topologies connecting ESXi hosts running a VDS to access and aggregation layer switches. Left: a resilient mesh topology with loops, which requires STP. Right: a resilient topology with no loops, which requires no STP but needs link-state tracking.]
Figure 3. Resilient Loop and No-Loop Topologies
VDS has a default network failover detection configuration set as "link status only." Customers should keep this configuration if they are enabling the link-state tracking feature on physical switches. If link-state tracking capability is not available on physical switches, and there are no redundant paths available in the design, customers can make use of the beacon probing feature available on VDS. The beacon probing function is a software solution available on virtual switches for detecting link failures upstream from the access layer physical switch to the aggregation/core switches. Beacon probing is most useful with three or more uplinks in a team.
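As a sketch of where this setting lives in the API, the following pyVmomi fragment switches a dvportgroup's failure detection from the default link-status check to beacon probing. The exact pyVmomi type path for the failure criteria object may vary by SDK version; verify it before use, and remember the three-or-more-uplinks guideline above.

```python
# Sketch: enable beacon probing (checkBeacon) in a teaming policy.
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    failureCriteria=vim.dvs.VmwareDistributedVirtualSwitch.FailureCriteria(
        checkBeacon=vim.BoolPolicy(value=True),  # beacon probing instead of
        inherited=False),                        # "link status only"
    inherited=False)
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming)
# Apply via a dvportgroup ConfigSpec's defaultPortConfig, as shown earlier.
```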
Maximum Transmission Unit
Make sure that the maximum transmission unit (MTU) configuration matches across the virtual and physical network switch infrastructure.
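On the VDS side, the switch-wide MTU is one ConfigSpec property. The following is a minimal pyVmomi sketch (reusing the "dvs" object from the earlier example) of setting it, for instance for the jumbo frame configuration discussed in the iSCSI/NFS section; the physical switch ports must be configured to match.

```python
# Sketch: set the VDS-wide MTU (physical switch must match end to end).
from pyVmomi import vim

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.maxMtu = 9000   # jumbo frames; the common default is 1500
dvs.ReconfigureDvs_Task(spec)
```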
Rack Server in Example Deployment
After looking at the major components in the example deployment and key virtual and physical switch parameters, let's take a look at the different types of servers that customers can have in their environment. Customers can deploy an ESXi host on either a rack server or a blade server. This section discusses a deployment in which the ESXi host is running on a rack server. Two types of rack server configuration will be described in the following section:
• Rack server with eight 1GbE network adaptors
• Rack server with two 10GbE network adaptors
The various VDS design approaches will be discussed for each of the two configurations.

Rack Server with Eight 1GbE Network Adaptors
In a rack server deployment with eight 1GbE network adaptors per host, customers can either use the traditional static design approach of allocating network adaptors to each traffic type or make use of advanced features of VDS such as NIOC and LBT. The NIOC and LBT features help provide a dynamic design that efficiently utilizes I/O resources. In this section, both the traditional and new design approaches are described, along with their pros and cons.

Design Option 1: Static Configuration
This design option follows the traditional approach of statically allocating network resources to the different virtual infrastructure traffic types. As shown in Figure 4, each host has eight Ethernet network adaptors. Four are connected to the first access layer switch; the other four are connected to the second access layer switch, to avoid a single point of failure. Let's look in detail at how VDS parameters are configured.
[Figure: rack server deployment; two clusters of ESXi hosts on one vSphere Distributed Switch, with each host's eight 1GbE network adaptors split across two access layer switches that uplink to the aggregation layer.]
Figure 4. Rack Server with Eight 1GbE Network Adaptors
dvuplink Configuration
To support the maximum of eight 1GbE network adaptors per host, the dvuplink port group is configured with eight dvuplinks (dvuplink1 through dvuplink8). On the hosts, dvuplink1 is associated with vmnic0, dvuplink2 is associated with vmnic1, and so on. It is a recommended practice to change the names of the dvuplinks to something meaningful and easy to track. For example, dvuplink1, which gets associated with a vmnic on a motherboard, can be renamed as "LOM-uplink1"; dvuplink2, which gets associated with a vmnic on an expansion card, can be renamed as "Expansion-uplink1."

If the hosts have some Ethernet network adaptors as LAN on motherboard (LOM) and some on expansion cards, for a better resiliency story, VMware recommends selecting one network adaptor from LOM and one from an expansion card when configuring network adaptor teaming. To configure this teaming on a VDS, administrators must pay attention to the dvuplink and vmnic association along with the dvportgroup configuration where network adaptor teaming is enabled. In the network adaptor teaming configuration on a dvportgroup, administrators must choose the various dvuplinks that are part of a team. If the dvuplinks are named appropriately according to the host vmnic association, administrators can select "LOM-uplink1" and "Expansion-uplink1" when configuring the teaming option for a dvportgroup.
dvportgroup Configuration
As described in Table 2, there are five different port groups that are configured for the five different traffic types. Customers can create up to 5,000 unique port groups per VDS. In this example deployment, the decision on creating different port groups is based on the number of traffic types.

According to Table 2, dvportgroup PG-A is created for the management traffic type. There are other dvportgroups defined for the other traffic types. The following are the key configurations of dvportgroup PG-A:
• Teaming option: Explicit failover order provides a deterministic way of directing traffic to a particular uplink. By selecting dvuplink1 as an active uplink and dvuplink2 as a standby uplink, management traffic will be carried over dvuplink1 unless there is a failure on dvuplink1. All other dvuplinks are configured as unused. Configuring the failback option to "No" is also recommended, to avoid the flapping of traffic between two network adaptors.
The failback option determines how a physical adaptor is returned to active duty after recovering from a failure. If failback is set to "No," a failed adaptor is left inactive, even after recovery, until another currently active adaptor fails and requires a replacement.
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each dvportgroup.

There are several other parameters that are part of the dvportgroup configuration. Customers can choose to configure these parameters based on their environment needs. For example, customers can configure PVLAN to provide isolation when there are limited VLANs available in the environment.

As you follow the dvportgroups configuration in Table 2, you can see that each traffic type is carried over a specific dvuplink, with the exception of the virtual machine traffic type. The virtual machine traffic type uses two active links, dvuplink7 and dvuplink8, and these links are utilized through the LBT algorithm. As was previously mentioned, the LBT algorithm is much more efficient than the standard hashing algorithm in utilizing link bandwidth.
TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION    | ACTIVE UPLINK       | STANDBY UPLINK | UNUSED UPLINK
MANAGEMENT      | PG-A       | Explicit failover | dvuplink1           | dvuplink2      | 3, 4, 5, 6, 7, 8
vMOTION         | PG-B       | Explicit failover | dvuplink3           | dvuplink4      | 1, 2, 5, 6, 7, 8
FT              | PG-C       | Explicit failover | dvuplink4           | dvuplink3      | 1, 2, 5, 6, 7, 8
iSCSI           | PG-D       | Explicit failover | dvuplink5           | dvuplink6      | 1, 2, 3, 4, 7, 8
VIRTUAL MACHINE | PG-E       | LBT               | dvuplink7/dvuplink8 | None           | 1, 2, 3, 4, 5, 6

Table 2. Static Design Configuration
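The following is a minimal pyVmomi sketch of the Table 2 PG-A teaming policy: explicit failover order with dvuplink1 active, dvuplink2 standby, and failback set to "No." It assumes (as the vSphere Client behavior suggests) that failback "No" corresponds to rollingOrder=True in the API.

```python
# Sketch: explicit failover order for the PG-A dvportgroup of Table 2.
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value="failover_explicit"),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=["dvuplink1"],
        standbyUplinkPort=["dvuplink2"],   # remaining uplinks stay unused
        inherited=False),
    rollingOrder=vim.BoolPolicy(value=True),   # assumed mapping: failback "No"
    inherited=False)
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming)
# Assign port_cfg to the PG-A ConfigSpec's defaultPortConfig, as shown in the
# earlier port group sketch.
```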
Physical Switch Configuration
The external physical switch, where the rack servers' network adaptors are connected, is configured with a trunk configuration with all the appropriate VLANs enabled. As described in the "Physical Network Switch Parameters" section, the following switch configurations are performed based on the VDS setup described in Table 2:
• Enable STP on the trunk ports facing the ESXi hosts, along with the PortFast mode and BPDU guard feature.
• The teaming configuration on VDS is static, so no link aggregation is configured on the physical switches.
• Because of the mesh topology deployment, as shown in Figure 4, the link-state tracking feature is not required on the physical switches.

In this design approach, resiliency to the infrastructure traffic is achieved through active/standby uplinks, and security is accomplished by providing separate physical paths for the different traffic types. However, with this design, the I/O resources are underutilized, because the dvuplink2 and dvuplink6 standby links are not used to send or receive traffic. Also, there is no flexibility to allocate more bandwidth to a traffic type when it needs it.
There is another variation to the static design approach that addresses the need of some customers to provide higher bandwidth to the storage and vMotion traffic types. In the static design that was previously described, iSCSI and vMotion traffic is limited to 1GbE. If a customer wants to support higher bandwidth for iSCSI, they can make use of the iSCSI multipathing solution. Also, with the release of vSphere 5, vMotion traffic can be carried over multiple Ethernet network adaptors through the support of multi-network adaptor vMotion, thereby providing higher bandwidth to the vMotion process.

For more details on how to set up iSCSI multipathing, refer to the VMware vSphere Storage guide:
https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.

The configuration of multi-network adaptor vMotion is quite similar to the iSCSI multipath setup, where administrators must create two separate vmkernel interfaces and bind each one to a separate dvportgroup. This configuration with two separate dvportgroups provides the connectivity to two different Ethernet network adaptors or dvuplinks.
TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION    | ACTIVE UPLINK       | STANDBY UPLINK | UNUSED UPLINK
MANAGEMENT      | PG-A       | Explicit failover | dvuplink1           | dvuplink2      | 3, 4, 5, 6, 7, 8
vMOTION         | PG-B1      | None              | dvuplink3           | dvuplink4      | 1, 2, 5, 6, 7, 8
vMOTION         | PG-B2      | None              | dvuplink4           | dvuplink3      | 1, 2, 5, 6, 7, 8
FT              | PG-C       | Explicit failover | dvuplink2           | dvuplink1      | 3, 4, 5, 6, 7, 8
iSCSI           | PG-D1      | None              | dvuplink5           | None           | 1, 2, 3, 4, 6, 7, 8
iSCSI           | PG-D2      | None              | dvuplink6           | None           | 1, 2, 3, 4, 5, 7, 8
VIRTUAL MACHINE | PG-E       | LBT               | dvuplink7/dvuplink8 | None           | 1, 2, 3, 4, 5, 6

Table 3. Static Design Configuration with iSCSI Multipathing and Multi-Network Adaptor vMotion
As shown in Table 3, there are two entries each for the vMotion and iSCSI traffic types. Also shown is a list of the additional dvportgroup configurations required to support the multi-network adaptor vMotion and iSCSI multipathing processes. For multi-network adaptor vMotion, dvportgroups PG-B1 and PG-B2 are listed, configured with dvuplink3 and dvuplink4 respectively as active links. And for iSCSI multipathing, dvportgroups PG-D1 and PG-D2 are connected to dvuplink5 and dvuplink6 respectively as active links. Load balancing across the multiple dvuplinks is performed by the multipathing logic in the iSCSI process and by the ESXi platform in the vMotion process. Configuring the teaming policies for these dvportgroups is not required.
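The following is a minimal, host-side pyVmomi sketch of the binding idea just described: one vmkernel interface per dvportgroup, giving vMotion two independent uplink paths. The host object, DVS UUID, portgroup keys and IP addresses are illustrative assumptions; error handling is omitted.

```python
# Sketch: create a vmkernel NIC bound to a specific dvportgroup.
from pyVmomi import vim

def add_bound_vmknic(host, dvs_uuid, pg_key, ip, mask):
    """Create a vmkernel NIC on the given dvportgroup of host (vim.HostSystem)."""
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=mask),
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs_uuid, portgroupKey=pg_key))
    # Empty portgroup name, because the NIC attaches to a DVS port, not a VSS.
    return host.configManager.networkSystem.AddVirtualNic("", nic_spec)

# One vmknic per dvportgroup (keys are hypothetical):
# vmk = add_bound_vmknic(host, dvs.uuid, "pg-b1-key", "10.0.1.11", "255.255.255.0")
# Marking the new vmknic for vMotion is then done with:
# host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)
```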
FT, management and virtual machine traffic-type dvportgroup configuration and physical switch configuration for this design remain the same as those described in "Design Option 1" of the previous section.
This static design approach improves on the first design by using advanced capabilities such as iSCSI multipathing and multi-network adaptor vMotion. But at the same time, this option has the same challenges related to underutilized resources and inflexibility in allocating additional resources on the fly to different traffic types.
Design Option 2: Dynamic Configuration with NIOC and LBT
After looking at the traditional design approach with static uplink configurations, let's take a look at the VMware-recommended design option that takes advantage of the advanced VDS features such as NIOC and LBT.

In this design, the connectivity to the physical network infrastructure remains the same as that described in the static design option. However, instead of allocating specific dvuplinks to individual traffic types, the ESXi platform utilizes those dvuplinks dynamically. To illustrate this dynamic design, each virtual infrastructure traffic type's bandwidth utilization is estimated. In a real deployment, customers should first monitor the virtual infrastructure traffic over a period of time, to gauge the bandwidth utilization, and then come up with bandwidth numbers for each traffic type. The following are some bandwidth numbers estimated by traffic type:
• Management traffic (1Gb)
• vMotion (1Gb)
• FT (1Gb)
• iSCSI (1Gb)
• Virtual machine (2Gb)
Based on this bandwidth information, administrators can provision appropriate I/O resources to each traffic type by using the NIOC feature of VDS. Let's take a look at the VDS parameter configurations for this design, as well as the NIOC setup. The dvuplink port group configuration remains the same, with eight dvuplinks created for the eight 1GbE network adaptors. The dvportgroup configuration is described in the following section.
dvportgroup Configuration
In this design, all dvuplinks are active and there are no standby and unused uplinks, as shown in Table 4. All dvuplinks are therefore available for use by the teaming algorithm. The following are the key parameter configurations of dvportgroup PG-A:
• Teaming option: LBT is selected as the teaming algorithm. With LBT configuration, the management traffic initially will be scheduled based on the virtual port ID hash. Depending on the hash output, management traffic is sent out over one of the dvuplinks. Other traffic types in the virtual infrastructure can also be scheduled on the same dvuplink initially. However, when the utilization of the dvuplink goes beyond the 75 percent threshold, the LBT algorithm will be invoked and some of the traffic will be moved to other underutilized dvuplinks. It is possible that management traffic will be moved to other dvuplinks when such an LBT event occurs.
• The failback option means going from using a standby link to using an active uplink after the active uplink comes back into operation after a failure. This failback option works when there are active and standby dvuplink configurations. In this design, there are no standby dvuplinks. So when an active uplink fails, the traffic flowing on that dvuplink is moved to another working dvuplink. If the failed dvuplink comes back, the LBT algorithm will schedule new traffic on that dvuplink. This option is left at the default.
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each dvportgroup.

There are several other parameters that are part of the dvportgroup configuration. Customers can choose to configure these parameters based on their environment needs. For example, they can configure PVLAN to provide isolation when there are limited VLANs available in the environment.
As you follow the dvportgroups configuration in Table 4, you can see that each traffic type has all dvuplinks active and that these links are utilized through the LBT algorithm. Let's now look at the NIOC configuration described in the last two columns of Table 4.

The NIOC configuration in this design helps provide the appropriate I/O resources to the different traffic types. Based on the previously estimated bandwidth numbers per traffic type, the shares parameter is configured in the NIOC shares column in Table 4. The shares values specify the relative importance of specific traffic types, and NIOC ensures that during contention scenarios on the dvuplinks, each traffic type gets the allocated bandwidth. For example, a shares configuration of 10 for vMotion, iSCSI and FT allocates equal bandwidth to these traffic types. Virtual machines get the highest bandwidth with 20 shares, and management gets lower bandwidth with 5 shares.

To illustrate how share values translate to bandwidth numbers, let's take an example of a 1Gb capacity dvuplink carrying all five traffic types. This is a worst-case scenario where all traffic types are mapped to one dvuplink. This will never happen when customers enable the LBT feature, because LBT will balance the traffic based on the utilization of uplinks. This example shows how much bandwidth each traffic type will be allowed on one dvuplink during a contention or oversubscription scenario and when LBT is not enabled.
• Total shares: management (5) + vMotion (10) + FT (10) + iSCSI (10) + virtual machine (20) = 55
• Management: 5 shares; (5/55) * 1Gb = 90.91Mbps
• vMotion: 10 shares; (10/55) * 1Gb = 181.82Mbps
• FT: 10 shares; (10/55) * 1Gb = 181.82Mbps
• iSCSI: 10 shares; (10/55) * 1Gb = 181.82Mbps
• Virtual machine: 20 shares; (20/55) * 1Gb = 363.64Mbps

To calculate the bandwidth numbers during contention, you should first calculate the percentage of bandwidth for a traffic type by dividing its share value by the total available share number (55). In the second step, the total bandwidth of the dvuplink (1Gb) is multiplied with the percentage of bandwidth number calculated in the first step. For example, 5 shares allocated to management traffic translate to 90.91Mbps of bandwidth to the management process on a fully utilized 1Gb network adaptor. In this example, custom share configuration is discussed, but a customer can make use of predefined high (100), normal (50) and low (25) shares when assigning them to different traffic types.
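As a quick check of the arithmetic above, the following short Python snippet converts the share values to guaranteed minimum bandwidth on one fully contended 1Gb (1,000Mbps) uplink.

```python
# Verify the share-to-bandwidth math for a single contended 1Gb uplink.
shares = {"management": 5, "vMotion": 10, "FT": 10,
          "iSCSI": 10, "virtual machine": 20}
uplink_mbps = 1000
total = sum(shares.values())  # 55

for traffic, s in shares.items():
    print(f"{traffic}: {s}/{total} * 1Gb = {s / total * uplink_mbps:.2f}Mbps")
# management: 5/55 * 1Gb = 90.91Mbps ... virtual machine: 20/55 * 1Gb = 363.64Mbps
```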
The vSphere platform takes these configured share values and applies them per uplink. The schedulers running at each uplink are responsible for making sure that the bandwidth resources are allocated according to the shares. In the case of an eight 1GbE network adaptor deployment, there are eight schedulers running. Depending on the number of traffic types scheduled on a particular uplink, the scheduler will divide the bandwidth among the traffic types, based on the share numbers. For example, if only FT (10 shares) and management (5 shares) traffic are flowing through dvuplink5, FT traffic will get double the bandwidth of management traffic, based on the shares value. Also, when there is no management traffic flowing, all bandwidth can be utilized by the FT process. This flexibility in allocating I/O resources is the key benefit of the NIOC feature.

The NIOC limits parameter of Table 4 is not configured in this design. The limits value specifies an absolute maximum limit on egress traffic for a traffic type. Limits are specified in Mbps. This configuration provides a hard limit on any traffic, even if I/O resources are available to use. Using the limits configuration is not recommended unless you really want to control the traffic, even though additional resources are available.

There is no change in physical switch configuration in this design approach, even with the choice of the new LBT algorithm. The LBT teaming algorithm doesn't require any special configuration on physical switches. Refer to the physical switch settings described in "Design Option 1."
TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION | ACTIVE UPLINK          | STANDBY UPLINK | NIOC SHARES | NIOC LIMITS
MANAGEMENT      | PG-A       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 5           | -
vMOTION         | PG-B       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 10          | -
FT              | PG-C       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 10          | -
iSCSI           | PG-D       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 10          | -
VIRTUAL MACHINE | PG-E       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 20          | -

Table 4. Dynamic Design Configuration with NIOC and LBT
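The following is a minimal pyVmomi sketch of the Table 4 teaming pattern: LBT ("route based on physical NIC load") with all eight dvuplinks active and no standby uplinks. Uplink names follow the earlier dvuplink sketch.

```python
# Sketch: LBT teaming with all eight dvuplinks active (Table 4 pattern).
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value="loadbalance_loadbased"),   # LBT
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=[f"dvuplink{i}" for i in range(1, 9)],
        standbyUplinkPort=[],       # no standby uplinks in this design
        inherited=False),
    inherited=False)
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming)
# Apply to PG-A through PG-E via each dvportgroup's ConfigSpec; the NIOC shares
# from Table 4 are set per traffic type as shown in the NIOC section sketch.
```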
This design does not provide higher than 1Gb bandwidth to the vMotion and iSCSI traffic types, as is the case with the static design using multi-network adaptor vMotion and iSCSI multipathing. The LBT algorithm cannot split the infrastructure traffic across multiple dvuplink ports and utilize all the links. So even if vMotion dvportgroup PG-B has all eight 1GbE network adaptors as active uplinks, vMotion traffic will be carried over only one of the eight uplinks. The main advantage of this design is evident in the scenarios where the vMotion process is not using the uplink bandwidth, and other traffic types are in need of the additional resources. In these situations, NIOC makes sure that the unused bandwidth is allocated to the other traffic types that need it.

This dynamic design option is the recommended approach because it takes advantage of the advanced VDS features and utilizes I/O resources efficiently. This option also provides active-active resiliency where no uplinks are in standby mode. In this design approach, customers allow the vSphere platform to make the optimal decisions on scheduling traffic across multiple uplinks.

Some customers who have restrictions in the physical infrastructure in terms of bandwidth capacity across different paths and limited availability of the layer 2 domain might not be able to take advantage of this dynamic design option. When deploying this design option, it is important to consider all the different traffic paths that a traffic type can take and to make sure that the physical switch infrastructure can support the specific characteristics required for each traffic type. VMware recommends that vSphere and network administrators work together to understand the impact of the vSphere platform's traffic scheduling feature over the physical network infrastructure before deploying this design option.

Every customer environment is different, and the requirements for the traffic types are also different. Depending on the need of the environment, a customer can modify these design options to fit their specific requirements. For example, customers can choose to use a combination of static and dynamic design options when they need higher bandwidth for iSCSI and vMotion activities. In this hybrid design, four uplinks can be statically allocated to iSCSI and vMotion traffic types while the remaining four uplinks are used dynamically for the remaining traffic types. Table 5 shows the traffic types and associated port group configurations for the hybrid design. As shown in the table, management, FT and virtual machine traffic will be distributed on dvuplink1 to dvuplink4 through the vSphere platform's traffic scheduling features, LBT and NIOC. The remaining four dvuplinks are statically assigned to the vMotion and iSCSI traffic types.
TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION | ACTIVE UPLINK | STANDBY UPLINK | NIOC SHARES | NIOC LIMITS
MANAGEMENT      | PG-A       | LBT            | 1, 2, 3, 4    | None           | 5           | -
vMOTION         | PG-B1      | None           | 5             | 6              | -           | -
vMOTION         | PG-B2      | None           | 6             | 5              | -           | -
FT              | PG-C       | LBT            | 1, 2, 3, 4    | None           | 10          | -
iSCSI           | PG-D1      | None           | 7             | None           | -           | -
iSCSI           | PG-D2      | None           | 8             | None           | -           | -
VIRTUAL MACHINE | PG-E       | LBT            | 1, 2, 3, 4    | None           | 20          | -

Table 5. Hybrid Design Configuration
Rack Server with Two 10GbE Network Adaptors
The two 10GbE network adaptors deployment model is becoming very common because of the benefits it provides through I/O consolidation. The key benefits include better utilization of I/O resources, simplified management and reduced CAPEX and OPEX. Although this deployment provides these benefits, there are some challenges when it comes to the traffic management aspects. Especially in highly consolidated virtualized environments where more traffic types are carried over fewer 10GbE network adaptors, it becomes critical to prioritize traffic types that are important and provide the required SLA guarantees. The NIOC feature available on the VDS helps in this traffic management activity. In the following sections, you will see how to utilize this feature in the different designs.

As shown in Figure 5, rack servers with two 10GbE network adaptors are connected to the two access layer switches to avoid any single point of failure. Similar to the rack server with eight 1GbE network adaptors, the different VDS and physical switch parameter configurations are taken into account with this design. On the physical switch side, the new 10Gb switches might have support for FCoE that enables convergence of SAN and LAN traffic. This document covers only the standard 10Gb deployments that support IP storage traffic (iSCSI/NFS) and not FCoE.
In this section, two design options are described; one is a traditional approach and the other is a VMware-recommended approach.
Figure 5. Rack Server with Two 10GbE Network Adaptors
Design Option 1 – Static Configuration
The static configuration approach for rack server deployment with 10GbE network adaptors is similar to the one described in "Design Option 1" of the rack server deployment with eight 1GbE adaptors. There are a few differences in the configuration: the number of dvuplinks changes from eight to two, and the dvportgroup parameters are different. Let's take a look at the configuration details on the VDS front.
dvuplink Configuration
To support the maximum of two Ethernet network adaptors per host, the dvuplink port group is configured with two dvuplinks (dvuplink1, dvuplink2). On the hosts, dvuplink1 is associated with vmnic0 and dvuplink2 is associated with vmnic1.
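This dvuplink setup can also be scripted. The following is a minimal pyVmomi sketch, not a definitive procedure: it assumes pyVmomi is installed and that `datacenter` and `host` inventory objects have already been retrieved elsewhere; the switch name is a placeholder.

```python
# Minimal sketch, assuming existing "datacenter" and "host" inventory
# objects. Creates a VDS with two named dvuplinks and joins the host with
# vmnic0 and vmnic1 (pnics are mapped to dvuplink ports in listed order
# by default).
from pyVmomi import vim

spec = vim.DistributedVirtualSwitch.CreateSpec(
    configSpec=vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        name="dvSwitch-10GbE",  # placeholder name
        uplinkPortPolicy=vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
            uplinkPortName=["dvuplink1", "dvuplink2"]),
        host=[vim.dvs.HostMember.ConfigSpec(
            operation="add",
            host=host,
            backing=vim.dvs.HostMember.PnicBacking(pnicSpec=[
                vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic0"),
                vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic1"),
            ]),
        )],
    ),
)
task = datacenter.networkFolder.CreateDVS_Task(spec=spec)
```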
dvportgroup Configuration
As described in Table 6, there are five different dvportgroups that are configured for the five different traffic types. For example, dvportgroup PG-A is created for the management traffic type. The following are the other key configurations of dvportgroup PG-A:
• Teaming option: An explicit failover order provides a deterministic way of directing traffic to a particular uplink. By selecting dvuplink1 as an active uplink and dvuplink2 as a standby uplink, management traffic will be carried over dvuplink1 unless there is a failure with it. Configuring the failback option to "No" is also recommended, to avoid the flapping of traffic between two network adaptors. The failback option determines how a physical adaptor is returned to active duty after recovering from a failure. If failback is set to "No," a failed adaptor is left inactive, even after recovery, until another currently active adaptor fails, requiring its replacement.
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each dvportgroup.
There are various other parameters that are part of the dvportgroup configuration. Customers can choose to configure these parameters based on their environment needs.
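As a sketch of how the teaming settings above map to the vSphere API, the following pyVmomi fragment reconfigures a dvportgroup with dvuplink1 active, dvuplink2 standby, and failback disabled. It is illustrative only and assumes an existing DistributedVirtualPortgroup object `pg` for PG-A.

```python
# Hedged sketch: explicit failover order with Failback=No for PG-A.
# Assumes an existing DistributedVirtualPortgroup object "pg".
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value="failover_explicit"),
    # rollingOrder=True is the API-level equivalent of Failback=No.
    rollingOrder=vim.BoolPolicy(value=True),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=["dvuplink1"],
        standbyUplinkPort=["dvuplink2"],
    ),
)
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,  # required for reconfiguration
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming),
)
pg.ReconfigureDVPortgroup_Task(spec=spec)
```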
Table 6 provides the configuration details for all the dvportgroups. According to the configuration, dvuplink1 carries management, iSCSI and virtual machine traffic; dvuplink2 handles vMotion, FT and virtual machine traffic. As you can see, the virtual machine traffic type makes use of two uplinks, and these uplinks are utilized through the LBT algorithm.
With this deterministic teaming policy, customers can decide to map different traffic types to the available uplink ports, depending on environment needs. For example, if iSCSI traffic needs higher bandwidth and other traffic types have relatively low bandwidth requirements, customers can decide to keep only iSCSI traffic on dvuplink1 and move all other traffic to dvuplink2. When deciding on these traffic paths, customers should understand the physical network connectivity and the paths' bandwidth capacities.
Physical Switch Conguration
The external physical switch, which the rack servers network adaptors are connected to, has trunk conguration
v|| a|| |e aooroor|ae V|AN: enab|ed. A: de:cr|bed |n |e o|y:|ca| nevor| :v|c| oaraneer: :ec|on:, |e
following switch congurations are performed based on the VDS setup described in Table 6.
|nab|e ST| on |e rJn| oor: ac|nq |SX| |o::, a|onq v|| |e |or|a: node and |||U qJard eaJre.
T|e ean|nq conqJra|on on V|S |: :a|c and |ereore no ||n| aqqreqa|on |: conqJred on |e o|y:|ca|
switches.
|ecaJ:e o |e ne:| ooo|oqy deo|oynen :|ovn |n ||qJre S, |e ||n| :aerac||nq eaJre |: no reoJ|red on
the physical switches.
TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION    | ACTIVE UPLINK       | STANDBY UPLINK | UNUSED UPLINK
MANAGEMENT      | PG-A       | Explicit Failover | dvuplink1           | dvuplink2      | None
vMOTION         | PG-B       | Explicit Failover | dvuplink2           | dvuplink1      | None
FT              | PG-C       | Explicit Failover | dvuplink2           | dvuplink1      | None
ISCSI           | PG-D       | Explicit Failover | dvuplink1           | dvuplink2      | None
VIRTUAL MACHINE | PG-E       | LBT               | dvuplink1/dvuplink2 | None           | None

Table 6. Static Design Configuration
This static design option provides flexibility in the traffic path configuration, but it cannot protect against one traffic type's dominating others. For example, there is a possibility that a network-intensive vMotion process might take away most of the network bandwidth and impact virtual machine traffic. Bidirectional traffic-shaping parameters at the port group and port levels can provide some help in managing different traffic rates. However, using this approach for traffic management requires customers to limit the traffic on the respective dvportgroups. Limiting traffic to a certain level through this method puts a hard limit on the traffic types, even when the bandwidth is available to utilize. This underutilization of I/O resources because of hard limits is overcome through the NIOC feature, which provides flexible traffic management based on the shares parameters. "Design Option 2," described in the following section, is based on the NIOC feature.
Design Option 2 – Dynamic Configuration with NIOC and LBT
This dynamic design option is the VMware-recommended approach that takes advantage of the NIOC and LBT features of the VDS.
Connectivity to the physical network infrastructure remains the same as that described in "Design Option 1." However, instead of allocating specific dvuplinks to individual traffic types, the ESXi platform utilizes those dvuplinks dynamically. To illustrate this dynamic design, each virtual infrastructure traffic type's bandwidth utilization is estimated. In a real deployment, customers should first monitor the virtual infrastructure traffic over a period of time to gauge the bandwidth utilization, and then come up with bandwidth numbers.
The following are some bandwidth numbers estimated by traffic type:
• Management traffic (1Gb)
• vMotion (2Gb)
• FT (1Gb)
• iSCSI (2Gb)
• Virtual machine (2Gb)
These bandwidth estimates are different from the ones considered with the rack server deployment with eight 1GbE network adaptors. Let's take a look at the VDS parameter configurations for this design. The dvuplink port group configuration remains the same, with two dvuplinks created for the two 10GbE network adaptors. The dvportgroup configuration is as follows.
dvportgroup Configuration
In this design, all dvuplinks are active and there are no standby and unused uplinks, as shown in Table 7. All dvuplinks are therefore available for use by the teaming algorithm. The following are the key configurations of dvportgroup PG-A:
• Teaming option: LBT is selected as the teaming algorithm. With the LBT configuration, management traffic initially will be scheduled based on the virtual port ID hash. Based on the hash output, management traffic will be sent out over one of the dvuplinks. Other traffic types in the virtual infrastructure can also be scheduled on the same dvuplink with the LBT configuration. Subsequently, if the utilization of the uplink goes beyond the 75 percent threshold, the LBT algorithm will be invoked and some of the traffic will be moved to other underutilized dvuplinks; a conceptual sketch follows after this list. It is possible that management traffic will get moved to other dvuplinks when such an event occurs. There are no standby dvuplinks in this configuration, so the failback setting is not applicable for this design approach. The default setting for this failback option is "Yes."
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each dvportgroup.
There are several other parameters that are part of the dvportgroup configuration. Customers can choose to configure these parameters based on their environment needs.
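The rebalancing behavior described above can be pictured with a toy model. The sketch below is conceptual only, not the actual ESXi implementation: it simply moves one flow off any uplink whose utilization crosses the 75 percent threshold onto the least-loaded uplink.

```python
# Conceptual illustration of the LBT decision, not the ESXi algorithm.
THRESHOLD = 0.75  # the 75 percent utilization threshold described above

def rebalance(uplink_load, flow_map):
    """uplink_load: uplink name -> utilization (0.0-1.0);
    flow_map: flow id -> uplink the flow is currently pinned to."""
    coolest = min(uplink_load, key=uplink_load.get)
    for flow, uplink in flow_map.items():
        if uplink_load[uplink] > THRESHOLD and uplink_load[coolest] <= THRESHOLD:
            flow_map[flow] = coolest  # move one flow to the least-loaded uplink
            break
    return flow_map

# Example: dvuplink1 is saturated, so one flow migrates to dvuplink2.
print(rebalance({"dvuplink1": 0.90, "dvuplink2": 0.20},
                {"mgmt": "dvuplink1", "vm-a": "dvuplink1"}))
```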
As you follow the dvportgroups' configuration in Table 7, you can see that each traffic type has all the dvuplinks as active, and these uplinks are utilized through the LBT algorithm. Let's take a look at the NIOC configuration. The NIOC configuration in this design not only helps provide the appropriate I/O resources to the different traffic types but also provides SLA guarantees by preventing one traffic type from dominating others.
Based on the bandwidth assumptions made for the different traffic types, the shares parameters are configured in the NIOC shares column in Table 7. To illustrate how share values translate to bandwidth numbers in this deployment, let's take an example of a 10Gb capacity dvuplink carrying all five traffic types. This is a worst-case scenario in which all traffic types are mapped to one dvuplink. This will never happen when customers enable the LBT feature, because LBT will move the traffic types based on the uplink utilization.
The following example shows how much bandwidth each traffic type will be allowed on one dvuplink during a contention or oversubscription scenario and when LBT is not enabled:
• Total shares: management (5) + vMotion (20) + FT (10) + iSCSI (20) + virtual machine (20) = 75
• Management: 5 shares; (5/75) * 10Gb = 667Mbps
• vMotion: 20 shares; (20/75) * 10Gb = 2.6Gbps
• FT: 10 shares; (10/75) * 10Gb = 1.3Gbps
• iSCSI: 20 shares; (20/75) * 10Gb = 2.6Gbps
• Virtual machine: 20 shares; (20/75) * 10Gb = 2.6Gbps
For each traffic type, first the percentage of bandwidth is calculated by dividing the share value by the total available share number (75), and then the total bandwidth of the dvuplink (10Gb) is used to calculate the bandwidth share for the traffic type. For example, 20 shares allocated to vMotion traffic translate to 2.6Gbps of bandwidth for the vMotion process on a fully utilized 10GbE network adaptor.
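The arithmetic above generalizes to any share mix. A small Python check, using the Table 7 values:

```python
# Reproduces the worst-case share-to-bandwidth math for one 10Gb dvuplink.
shares = {"management": 5, "vmotion": 20, "ft": 10,
          "iscsi": 20, "virtual machine": 20}
link_gbps = 10
total = sum(shares.values())          # 75

for traffic, share in shares.items():
    gbps = share / total * link_gbps  # e.g. 20/75 * 10 = 2.67
    print(f"{traffic}: {share} shares -> {gbps:.2f}Gbps")
```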
In this 10GbE deployment, customers can provide bigger pipes to individual traffic types without the use of trunking or multipathing technologies. This was not the case with the eight-1GbE deployment.
There is no change in physical switch configuration in this design approach, so refer to the physical switch settings described in "Design Option 1" in the previous section.
TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION | ACTIVE UPLINK | STANDBY UPLINK | NIOC SHARES | NIOC LIMITS
MANAGEMENT      | PG-A       | LBT            | dvuplink1, 2  | None           | 5           | -
vMOTION         | PG-B       | LBT            | dvuplink1, 2  | None           | 20          | -
FT              | PG-C       | LBT            | dvuplink1, 2  | None           | 10          | -
ISCSI           | PG-D       | LBT            | dvuplink1, 2  | None           | 20          | -
VIRTUAL MACHINE | PG-E       | LBT            | dvuplink1, 2  | None           | 20          | -

Table 7. Dynamic Design Configuration
This design option utilizes the advanced VDS features and provides customers with a dynamic and flexible design approach. In this design, I/O resources are utilized effectively and SLAs are met based on the shares allocation.
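For completeness, here is a hedged pyVmomi sketch of the NIOC side of this design: enabling network resource management on the VDS and applying the Table 7 share values to the built-in resource pools. An existing `dvs` object is assumed, and the pool keys follow the standard vSphere 5 names.

```python
# Hedged sketch: enable NIOC and set the Table 7 shares. Assumes an
# existing VmwareDistributedVirtualSwitch object "dvs".
from pyVmomi import vim

dvs.EnableNetworkResourceManagement(enable=True)

table7_shares = {"management": 5, "vmotion": 20, "faultTolerance": 10,
                 "iSCSI": 20, "virtualMachine": 20}

specs = [
    vim.DVSNetworkResourcePoolConfigSpec(
        key=key,
        allocationInfo=vim.DVSNetworkResourcePoolAllocationInfo(
            shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom,
                                  shares=value),
            limit=-1,  # -1 = unlimited, matching the empty NIOC LIMITS column
        ),
    )
    for key, value in table7_shares.items()
]
dvs.UpdateNetworkResourcePool(configSpec=specs)
```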
Blade Server in Example Deployment
Blade servers are server platforms that provide higher server consolidation per rack unit as well as lower power and cooling costs. Blade chassis that host the blade servers have proprietary architectures, and each vendor has its own way of managing resources in the blade chassis. It is difficult in this document to look at all of the various blade chassis available on the market and to discuss their deployments. In this section, we will focus on some generic parameters that customers should consider when deploying VDS in a blade chassis environment.
From a networking point of view, all blade chassis provide the following two options:
• Integrated switches: With this option, the blade chassis provides built-in switches to control traffic flow between the blade servers within the chassis and the external network.
• Pass-through technology: This is an alternative method of network connectivity that enables the individual blade servers to communicate directly with the external network.
In this document, the integrated switch option is described as one where the blade chassis has a built-in Ethernet switch. This Ethernet switch acts as an access layer switch, as shown in Figure 6.
This section discusses a deployment in which the ESXi host is running on a blade server. The following two types of blade server configuration will be described in the next section:
• Blade server with two 10GbE network adaptors
• Blade server with hardware-assisted multiple logical network adaptors
For each of these two configurations, various VDS design approaches will be discussed.
Blade Server with Two 10GbE Network Adaptors
This deployment is quite similar to that of a rack server with two 10GbE network adaptors, in which each ESXi host is provided with two 10GbE network adaptors. As shown in Figure 6, an ESXi host running on a blade server in the blade chassis is also provided with two 10GbE network adaptors.
Figure 6. Blade Server with Two 10GbE Network Adaptors
In this section, two design options are described. One is a traditional static approach and the other is a VMware-recommended dynamic configuration with the NIOC and LBT features enabled. These two approaches are exactly the same as the deployment described in the "Rack Server with Two 10GbE Network Adaptors" section. Only blade chassis-specific design decisions will be discussed as part of this section. For all other VDS and switch-related configurations, refer to the "Rack Server with Two 10GbE Network Adaptors" section of this document.
Design Option 1 – Static Configuration
The configuration of this design approach is exactly the same as that described in the "Design Option 1" section under "Rack Server with Two 10GbE Network Adaptors." Refer to Table 6 for dvportgroup configuration details. Let's take a look at the blade server-specific parameters that require attention during the design.
Network and hardware reliability considerations should be incorporated during the blade server design as well. In these blade server designs, customers must focus on the following two areas:
• High availability of blade switches in the blade chassis
• Connectivity of blade server network adaptors to internal blade switches
High availability of blade switches can be achieved by having two Ethernet switching modules in the blade chassis. The connectivity of the two network adaptors on the blade server should be such that one network adaptor is connected to the first Ethernet switch module, and the other network adaptor is hooked to the second switch module in the blade chassis.
Another aspect that requires attention in the blade server deployment is the network bandwidth availability across the midplane of the blade chassis and between the blade switches and the aggregation layer. If there is an oversubscription scenario in the deployment, customers must consider utilizing the traffic shaping and prioritization (802.1p tagging) features available in the vSphere platform. The prioritization feature enables customers to tag the important traffic coming out of the vSphere platform. These high-priority-tagged packets are then treated according to priority by the external switch infrastructure. During congestion scenarios, the switch will drop lower-priority packets first and avoid dropping the important, high-priority packets.
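In the vSphere 5 API, the 802.1p tag is set per network resource pool. The following is a hedged sketch, reusing the assumed `dvs` object from the earlier examples; the pool key and tag value are illustrative, not recommendations.

```python
# Hedged sketch: tag virtual machine traffic with 802.1p priority 5 so the
# blade and external switches can honor it during congestion. Valid tag
# values are 0-7; 5 here is only an example.
from pyVmomi import vim

spec = vim.DVSNetworkResourcePoolConfigSpec(
    key="virtualMachine",
    allocationInfo=vim.DVSNetworkResourcePoolAllocationInfo(priorityTag=5),
)
dvs.UpdateNetworkResourcePool(configSpec=[spec])
```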
This static design option provides customers with the flexibility to choose different network adaptors for different traffic types. However, when allocating traffic across only two 10GbE network adaptors, administrators ultimately will schedule multiple traffic types on a single adaptor. As multiple traffic types flow through one adaptor, the chance of one traffic type's dominating others increases. To avoid the performance impact of "noisy neighbors" (dominating traffic types), customers must utilize the traffic management tools provided in the vSphere platform. One of these traffic management features is NIOC, and it is utilized in "Design Option 2," which is described in the following section.
Design Option 2 – Dynamic Configuration with NIOC and LBT
This dynamic configuration approach is exactly the same as that described in the "Design Option 2" section under "Rack Server with Two 10GbE Network Adaptors." Refer to Table 7 for the dvportgroup configuration details and NIOC settings. The physical switch-related configuration in the blade chassis deployment is the same as that described in the rack server deployment. For the blade center-specific recommendations on reliability and traffic management, refer to the previous section.
VMware recommends this design option, which utilizes the advanced VDS features and provides customers with a dynamic and flexible design approach. With this design, I/O resources are utilized effectively and SLAs are met based on the shares allocation.
Blade Server with Hardware-Assisted Logical Network Adaptors
(HP Flex-10 or Cisco UCS-like Deployment)
Some of the new blade chassis support traffic management capabilities that enable customers to carve up I/O resources. This is achieved by providing logical network adaptors to the ESXi hosts. Instead of two 10GbE network adaptors, the ESXi host now sees multiple physical network adaptors that operate at different configurable speeds. As shown in Figure 7, each ESXi host is provided with eight Ethernet network adaptors that are carved out of two 10GbE network adaptors.
Figure 7. Multiple Logical Network Adaptors
This deployment is quite similar to that of the rack server with eight 1GbE network adaptors. However, instead of 1GbE network adaptors, the capacity of each network adaptor is configured at the blade chassis level. In the blade chassis, customers can carve out different capacity network adaptors based on the need of each traffic type. For example, if iSCSI traffic needs 2.5Gb of bandwidth, a logical network adaptor with that amount of I/O resources can be created on the blade chassis and provided to the blade server.
As for the configuration of the VDS and blade chassis switch infrastructure, the configuration described in "Design Option 1" under "Rack Server with Eight 1GbE Network Adaptors" is more relevant for this deployment. The static configuration option described in that design can be applied as is in this blade server environment. Refer to Table 2 for the dvportgroup configuration details and to the switch configurations described in that section for physical switch configuration details.
The question now is whether the NIOC capability adds any value in this specific blade server deployment. NIOC is a traffic management feature that helps in scenarios where multiple traffic types flow through one uplink or network adaptor. If in this particular deployment only one traffic type is assigned to a specific Ethernet network adaptor, the NIOC feature will not add any value. However, if multiple traffic types are scheduled over one network adaptor, customers can make use of NIOC to assign appropriate shares to the different traffic types. This NIOC configuration will ensure that bandwidth resources are allocated to traffic types and that SLAs are met.
As an example, let's consider a scenario in which vMotion and iSCSI traffic is carried over one logical uplink. To protect the iSCSI traffic from network-intensive vMotion traffic, administrators can configure NIOC and allocate shares to each traffic type. If the two traffic types are equally important, administrators can configure shares with equal values (10 each). With this configuration, when there is a contention scenario, NIOC will make sure that the iSCSI process gets half of the uplink bandwidth and avoids any impact from the vMotion process.
VMware recommends that the network and server administrators work closely together when deploying the traffic management features of the VDS and blade chassis. To achieve the best end-to-end quality of service (QoS) result, a considerable amount of coordination is required during the configuration of the traffic management features.
Operational Best Practices
After a customer successfully designs the virtual network infrastructure, the next challenges are how to deploy the design and how to keep the network operational. VMware provides various tools, APIs, and procedures to help customers effectively deploy and manage their network infrastructure. The following are some key tools available in the vSphere platform:
• VMware vSphere Command-Line Interface (vSphere CLI)
• VMware vSphere API
• Virtual network monitoring and troubleshooting
  – NetFlow
  – Port mirroring
In the following section, we will briefly discuss how vSphere and network administrators can utilize these tools to manage their virtual network. Refer to the vSphere documentation for more details on the tools.
VMware vSphere Command-Line Interface
vSphere administrators have several ways to access vSphere components through vSphere interface options,
including VMware vSphere Client, vSphere Web Client, and vSphere Command-Line Interface. The vSphere CLI command set enables administrators to perform configuration tasks by using a vSphere CLI package installed on supported platforms or by using VMware vSphere Management Assistant (vMA). Refer to the Getting Started with vSphere CLI document for more details on the commands:
http://www.vmware.com/support/developer/vcli.
The entire networking configuration can be performed through vSphere CLI, helping administrators automate the deployment process.
VMware vSphere API
The networking setup in the virtualized datacenter involves conguration of virtual and physical switches.
VMware has provided APIs that enable network switch vendors to get information about the virtual infrastructure, which helps them to automate the configuration of the physical switches and the overall process. For example, vCenter can trigger an event after the vMotion process of a virtual machine is performed. After receiving this event trigger and related information, the network vendors can reconfigure the physical switch port policies such that when the virtual machine moves to another host, the VLAN/access control list (ACL) configurations are migrated along with the virtual machine. Multiple networking vendors have provided this automation between physical and virtual infrastructure configurations through integration with vSphere APIs.
Customers should check with their networking vendors to learn whether such an automation tool exists that will bridge the gap between physical and virtual networking and simplify the operational challenges.
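As a starting point for such integrations, the following pyVmomi sketch connects to vCenter and walks the VDS inventory. The host name and credentials are placeholders; a vendor tool would typically also subscribe to events (for example, VmMigratedEvent) from the same session.

```python
# Hedged sketch: connect to vCenter (placeholder credentials) and list
# each distributed switch with its uplink port groups.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        uplink_pgs = [pg.name for pg in dvs.config.uplinkPortgroup]
        print(dvs.name, uplink_pgs)
    view.Destroy()
finally:
    Disconnect(si)
```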
Virtual Network Monitoring and Troubleshooting
Monitoring and troubleshooting network traffic in a virtual environment require tools similar to those available in the physical switch environment. With the release of vSphere 5, VMware gives network administrators the ability to monitor and troubleshoot the virtual infrastructure through features such as NetFlow and port mirroring.
NetFlow capability on a distributed switch, along with a NetFlow collector tool, helps monitor application flows and measure flow performance over time. It also helps in capacity planning and in ensuring that I/O resources are utilized properly by different applications, based on their needs.
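Pointing the VDS NetFlow exporter at a collector can also be scripted. The following is a hedged sketch reusing the assumed `dvs` object; the collector address, port and timers are placeholders, and the available parameters vary by vSphere version.

```python
# Hedged sketch: configure the VDS IPFIX/NetFlow exporter. The collector
# address and port are placeholders for a real NetFlow collector tool.
from pyVmomi import vim

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    ipfixConfig=vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
        collectorIpAddress="192.0.2.10",
        collectorPort=2055,
        activeFlowTimeout=60,  # seconds
        idleFlowTimeout=15,    # seconds
        samplingRate=0,        # 0 = sample every packet
        internalFlowsOnly=False,
    ),
)
dvs.ReconfigureDvs_Task(spec=spec)
```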
Port mirroring capability on a distributed switch is a valuable tool that helps network administrators debug network issues in a virtual infrastructure. Granular control over mirroring ingress, egress or all traffic of a port helps administrators fine-tune what traffic is sent for analysis.
vCenter Server on a Virtual Machine
As mentioned earlier, vCenter Server is only used to provision and manage VDS configurations. Customers can choose to deploy it on a virtual machine or a physical host, depending on their management resource design requirements. In case of vCenter Server failure scenarios, the VDS will continue to provide network connectivity, but no VDS configuration changes can be performed.
By deploying vCenter Server on a virtual machine, customers can take advantage of vSphere platform features such as vSphere High Availability (HA) and VMware Fault Tolerance to provide higher resiliency to the management plane. In such deployments, customers must pay more attention to the network configurations. This is because if the networking for a virtual machine hosting vCenter Server is misconfigured, the network connectivity of vCenter Server is lost, and this misconfiguration must be fixed. However, customers need vCenter Server to fix the network configuration, because only vCenter Server can configure a VDS. As a workaround in this situation, customers must connect through vSphere Client directly to the host where the vCenter Server virtual machine is running. Then they must reconnect the virtual machine hosting vCenter Server to a VSS that is also connected to the management network of the hosts. After the virtual machine running vCenter Server is reconnected to the network, it can manage and configure the VDS.
Refer to the community article "Virtual Machine Hosting a vCenter Server Best Practices" for guidance regarding the deployment of vCenter on a virtual machine:
http://communities.vmware.com/servlet/JiveServlet/previewBody/14089-102-1-16292/VMHostVCBestPractices.html
Conclusion
A VMware vSphere distributed switch provides customers with the right measure of features, capabilities and operational simplicity for deploying a virtual network infrastructure. As customers move on to build private or public clouds, VDS provides the scalability numbers for such deployments. Advanced capabilities such as NIOC and LBT are key for achieving better utilization of I/O resources and for providing better SLAs for virtualized business-critical applications and multitenant deployments. Support for standard networking visibility and monitoring features such as port mirroring and NetFlow helps administrators manage and troubleshoot a virtual infrastructure through familiar tools. VDS also is an extensible platform that enables integration with other networking vendor products through open vSphere APIs.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2012 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed
at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be
trademarks of their respective companies. Item No: VMW-vSPHR-DIST-SWTCH-PRCTICES-USLET-101