Copyright 2013 VCE Company, LLC. All Rights Reserved.
VCE believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
Contents
Introduction
About this guide
Audience
Scope
Feedback
Trusted multi-tenancy foundational elements
Secure separation
Service assurance
Security and compliance
Availability and data protection
Tenant management and control
Service provider management and control
Technology overview
Management
Advanced Management Pod
EMC Ionix Unified Infrastructure Manager/Provisioning
Compute technologies
Cisco Unified Computing System
VMware vSphere
VMware vCenter Server
VMware vCloud Director
VMware vCenter Chargeback
VMware vShield
Storage technologies
EMC Fully Automated Storage Tiering
EMC FAST Cache
EMC PowerPath/VE
EMC Unified Storage
EMC Unisphere Management Suite
EMC Unisphere Quality of Service Manager
Network technologies
Cisco Nexus 1000V Series
Cisco Nexus 5000 Series
Cisco Nexus 7000 Series
Cisco MDS
Cisco Data Center Network Manager
Cisco ACE, Cisco ACE Web Application Firewall, Cisco IPS traffic flows
Access layer
Security recommendations
Threats mitigated
Vblock Systems security features
Design considerations for availability and data protection
Physical redundancy design consideration
Design considerations for service provider management and control
Design considerations for additional security technologies
Design considerations for secure separation
RSA Archer eGRC
RSA enVision
Design considerations for service assurance
RSA Archer eGRC
RSA enVision
Design considerations for security and compliance
RSA Archer eGRC
RSA enVision
Design considerations for availability and data protection
RSA Archer eGRC
RSA enVision
Design considerations for tenant management and control
RSA Archer eGRC
RSA enVision
Design considerations for service provider management and control
RSA Archer eGRC
RSA enVision
Conclusion
Next steps
Acronym glossary
Introduction
The Vblock Solution for Trusted Multi-Tenancy (TMT) Design Guide describes how Vblock
Systems allow enterprises and service providers to rapidly build virtualized data centers that support
the unique challenges of provisioning Infrastructure as a Service (IaaS) to multiple tenants.
The trusted multi-tenancy solution comprises six foundational elements that address the unique
requirements of the IaaS cloud service model:
Secure separation
Service assurance
Security and compliance
Availability and data protection
Tenant management and control
Service provider management and control
The trusted multi-tenancy solution deploys compute, storage, network, security, and management
Vblock System components that address each element while offering service providers and tenants
numerous benefits. The following table summarizes these benefits.
Provider benefits
Tenant benefits
Lower cost-to-serve
Designing and managing Vblock Systems to deliver infrastructure multi-tenancy and service
multi-tenancy
The specific goal of this guide is to describe the design of and rationale behind the solution. The guide
looks at each layer of the Vblock System and shows how to achieve trusted multi-tenancy at each
layer. The design addresses many issues that must be considered prior to deployment, as no two environments are alike.
Audience
The target audience for this guide is highly technical, including technical consultants, professional
services personnel, IT managers, infrastructure architects, partner engineers, sales engineers, and
service providers deploying a trusted multi-tenancy environment with leading technologies from VCE.
Scope
Trusted multi-tenancy can be used to offer dedicated IaaS (compute, storage, network, management,
and virtualization resources) or leverage single instances of services and applications for multiple
consumers. This guide only addresses design considerations for offering dedicated IaaS to multiple
tenants.
While this design guide describes how Vblock Systems can be designed, operated, and managed to
support trusted multi-tenancy, it does not provide specific configuration information, which must be
specifically considered for each unique deployment.
In this guide, the terms Tenant and Consumer refer to the consumers of the services provided by a
service provider.
Feedback
To suggest documentation changes and provide feedback on this paper, send email to
docfeedback@vce.com. Include the title of this paper, the name of the topic to which your comment
applies, and your feedback.
Trusted multi-tenancy foundational elements
The Vblock Solution for Trusted Multi-Tenancy is built on six foundational elements:
Secure separation
Service assurance
Security and compliance
Availability and data protection
Tenant management and control
Service provider management and control
Figure 1. Six elements of the Vblock Solution for Trusted Multi-Tenancy
Secure separation
Secure separation refers to the effective segmentation and isolation of tenants and their assets within
the multi-tenant environment. Adequate secure separation ensures that the resources of existing
tenants remain untouched and the integrity of the applications, workloads, and data remains
uncompromised when the service provider provisions new tenants. Each tenant might have access to
different amounts of network, compute, and storage resources in the converged stack. The tenant
sees only those resources allocated to it.
From the standpoint of the service provider, secure separation requires the systematic deployment of
various security control mechanisms throughout the infrastructure to ensure the confidentiality,
integrity, and availability of tenant data, services, and applications. The logical segmentation and
isolation of tenant assets and information is essential for providing confidentiality in a multi-tenant
environment. In fact, ensuring the privacy and security of each tenant becomes a key design
requirement in the decision to adopt cloud services.
Service assurance
Service assurance plays a vital role in providing tenants with consistent, enforceable, and reliable
service levels. Unlike physical resources, virtual resources are highly scalable and easy to allocate
and reallocate on demand. In a multi-tenant virtualized environment, the service provider prioritizes
virtual resources to accommodate the growth and changing business needs of tenants. Service level
agreements (SLAs) define the level of service agreed to by the tenant and service provider. The
service assurance element of trusted multi-tenancy provides technologies and methods to ensure that
tenants receive the agreed-upon level of service.
Various methods are available to deliver consistent SLAs across the network, compute, and storage
components of the Vblock System, including:
Quality of service in the Cisco Unified Computing System (UCS) and Cisco Nexus platforms
Without the correct mix of service assurance features and capabilities, it can be difficult to maintain
uptime, throughput, quality of service, and availability SLAs.
Tenant management and control
Tenants should have control over relevant portions of their service. Specifically, tenants should be able to:
In addition, tenants taking advantage of data protection or data backup services should be able to
manage this capability on their own, including setting schedules and backup types, initiating jobs, and
running reports.
This tenant-in-control model allows tenants to dynamically change the environment to suit their
workloads as resource requirements change.
Technology overview
The Vblock System from VCE is the world's most advanced converged infrastructure: one that
optimizes infrastructure, lowers costs, secures the environment, simplifies management, speeds
deployment, and promotes innovation. The Vblock System is designed as one architecture that spans
the entire portfolio, includes best-in-class components, offers a single point of contact from initiation
through support, and provides the industry's most robust range of configurations.
Vblock Systems provide production-ready (fully tested) virtualized infrastructure components, including industry-leading technologies from Cisco, EMC, and VMware. Vblock Systems are designed and built to satisfy a broad range of specific customer implementation requirements. To design for trusted multi-tenancy, you need to understand each layer (compute, network, and storage) of the Vblock System
architecture. Figure 2 provides an example of Vblock System architecture.
This section describes the technologies at each layer of the Vblock System addressed in this guide to
achieve trusted multi-tenancy.
Management
Management technologies include Advanced Management Pod (AMP) and EMC Ionix Unified
Infrastructure Manager/Provisioning (UIM/P) (optional).
Two versions of the AMP are available: a mini-AMP and a high-availability version (HA AMP). A high-availability AMP is recommended.
For more information on AMP, refer to the Vblock Systems Architecture Overview documentation
located at www.vce.com/vblock.
Easily define and create infrastructure service profiles to match business requirements
Respond to dynamic business needs with infrastructure service life cycle management
Integrate with VMware vCenter and VMware vCloud Director for extended management
capabilities
Compute technologies
Within the computing infrastructure of the Vblock System, multi-tenancy concerns at multiple levels
must be addressed, including the UCS server infrastructure and the VMware vSphere Hypervisor.
Storage technologies
The features of multi-tenancy offerings can be combined with standard security methods such as
storage area network (SAN) zoning and Ethernet virtual local area networks (VLAN) to segregate,
control, and manage storage resources among the infrastructure tenants.
EMC FAST Cache
EMC FAST Cache is an industry-leading feature supported by Vblock Systems. It extends the EMC VNX array's read/write cache and ensures that unpredictable I/O spikes are serviced at enterprise flash drive (EFD) speeds, which is of particular benefit in a VMware vCloud Director environment.
Multiple virtual machines on multiple virtual machine file system (VMFS) data stores spread across multiple hosts can generate a very random I/O pattern, placing stress on both the storage processors and the DRAM cache. FAST Cache, a standard feature on all Vblock Systems, mitigates the effects of this kind of I/O by extending the DRAM cache for reads and writes, increasing the overall cache performance of the array, improving I/O during usage spikes, and dramatically reducing the overall number of dirty pages and cache misses.
Because FAST Cache is aware of EFD disk tiers available in the array, FAST VP and FAST Cache
work together to improve array performance. Data that has been promoted to an EFD tier is never
cached inside FAST Cache, ensuring that both options are leveraged in the most efficient way.
Network technologies
Multi-tenancy concerns must be addressed at multiple levels within the network infrastructure of the
Vblock System. Various methods, including zoning and VLANs, can enforce network separation.
Internet Protocol Security (IPsec) also provides application-independent network encryption at the IP
layer for additional security.
Cisco MDS
The Cisco MDS 9000 Series helps build highly available, scalable storage networks with advanced security and unified management. The Cisco MDS 9000 family facilitates secure separation at the network layer with virtual storage area networks (VSANs) and zoning. VSANs help achieve higher security and greater stability in Fibre Channel (FC) fabrics by providing isolation among devices that are physically connected to the same fabric. The zoning service within a Fibre Channel fabric provides security between devices sharing the same fabric.
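To make the interaction of VSANs and zoning concrete, the following minimal Python sketch models the two checks an MDS fabric effectively applies before two devices can communicate. It is illustrative only; the device names, VSAN numbers, and zones are hypothetical, not Cisco configuration syntax.

```python
# Illustrative model: VSAN membership isolates fabrics; zoning then gates
# which devices inside a VSAN may talk to each other.

def can_communicate(dev_a, dev_b, vsan_of, zones):
    """Devices talk only if they share a VSAN and at least one zone."""
    if vsan_of[dev_a] != vsan_of[dev_b]:
        return False  # VSANs isolate traffic even on shared physical hardware
    return any(dev_a in z and dev_b in z for z in zones[vsan_of[dev_a]])

vsan_of = {"tenantA_host": 10, "tenantA_array": 10, "tenantB_host": 20}
zones = {10: [{"tenantA_host", "tenantA_array"}], 20: []}

assert can_communicate("tenantA_host", "tenantA_array", vsan_of, zones)
assert not can_communicate("tenantA_host", "tenantB_host", vsan_of, zones)
```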
Security technologies
RSA Archer eGRC and RSA enVision security technologies can be used to achieve security and
compliance.
Design framework
This section provides the following information:
End-to-end topology
Logical topology
End-to-end topology
Secure separation creates trusted zones that shield each tenant's applications, virtual machines, compute, network, and storage from compromise and resource effects caused by adjacent tenants and external threats. The solution framework presented in this guide considers additional technologies that together provide appropriate defense in depth. A combination of protective, detective,
and reactive controls and solid operational processes are required to deliver protection against
internal and external threats.
Key layers include:
Virtual machine and cloud resources (VMware vSphere and VMware vCloud Director)
Description
Each running an instance of the Nexus 1000V Virtual Ethernet Module (VEM)
Tenant
Multiple virtual machines, which have different applications such as Web server, database, and so forth, for each tenant
Compute layer
The following table provides an example of the components of a multi-tenant environment virtual
compute farm.
Note: A Vblock System may have more resources than what is described in the following table.
Component
Description
Four UCS B440 servers (four Intel Xeon 7500 series processors and 32 dual in-line memory module slots with 256 GB memory)
Each server has two CNAs and is dual-attached to the UCS 6100 fabric interconnects
Logical topology
Figure 4 shows the logical topology for the trusted multi-tenancy design framework.
The logical topology represents the virtual components and virtual connections that exist within the
physical topology. The following table describes the topology.
Component
Details
Nexus 7000
Nexus 5000
Provides a robust compute layer platform. Virtual port channel provides a topology with redundant chassis, cards, and links with Nexus 5000 and Nexus 7000.
Each connects to one MDS 9148 to form its own fabric.
Four 4 Gb/s FC links connect the UCS 6120 to the MDS 9148.
The MDS 9148 switches connect to the storage controllers. In this example, the storage array has two controllers. Each MDS 9148 has two connections to each FC storage controller. These dual connections provide redundancy, so that if an FC controller fails the MDS 9148 is not isolated.
Connect to the Nexus 5000 access switch through EtherChannel with dual 10 GbE links.
Each chassis is populated with blade servers and Fabric Extenders for redundancy or aggregation of bandwidth.
Connect to the SAN fabric through the Cisco UCS 6120XP fabric interconnect, which uses an 8-port 8-Gb/s Fibre Channel expansion module to access the SAN.
Connect to the LAN through the Cisco UCS 6120XP fabric interconnects. These ports require SFP+ adapters. The server ports of the fabric interconnects operate at 10 Gb/s, and the Fibre Channel ports of the fabric interconnects operate at 2/4/8 Gb/s.
EMC VNX storage
Traffic flow in the data center is classified into the following categories:
Note: Front-end traffic, also called client-to-server traffic, traverses the Nexus 7000 aggregation layer and a select number of network-based services.
At the application layer, each tenant may have multiple vApps with applications and have different
virtual machines for different workloads. The Cisco Nexus 1000V distributed virtual switch acts as the
virtual access layer for the virtual machines. Edge LAN policies, such as quality of service marking
and vNIC ACLs, can be implemented at the Nexus 1000V. Each ESXi server becomes a virtual
Ethernet blade of Nexus 1000V, called Virtual Ethernet Module (VEM). Each vNIC connects to Nexus
1000V through a port group; each port group specifies one or more VLANs used by a virtual machine
NIC. The port group can also specify other network attributes, such as rate limit and port security. The
VM uplink port profile forwards VLANs belonging to virtual machines. The system uplink port profile
forwards VLANs belonging to management traffic. The virtual machine traffic for different tenants
traverses the network through different uplink port profiles, where port security, rate limiting, and
quality of service apply to guarantee secure separation and assurance.
VMware vSphere virtual machine NICs are associated with the Cisco Nexus 1000V to be used as the uplinks. The network interface virtualization capabilities of the Cisco adapter enable the use of a VMware multi-NIC design on a server that has two 10 Gb physical interfaces with complete quality of
service, bandwidth sharing, and VLAN portability among the virtual adapters. vShield Edge controls all
network traffic to and from the virtual data center and helps provide an abstraction of the separation in
the cloud environment.
Virtual machine traffic goes through the UCS FEX (I/O module) to the fabric interconnect 6120.
If the traffic is aligned to use the storage resources and it is intended to use FC storage, it passes over
an FC port on the fabric interconnect and Cisco MDS, to the storage array, and through a storage
processor, to reach the specific storage pool or storage groups. For example, if a tenant is using a
dedicated storage resource with specific disks inside a storage array, traffic is routed to the assigned
LUN with a dedicated storage group, RAID group, and disks. If there is NFS traffic, it passes over a
network port on the fabric interconnect and Cisco Nexus 5000, through a virtual port channel to the
storage array, and over a data mover, to reach the NFS data store. The NFS export LUN is tagged
with a VLAN to ensure the security and isolation with a dedicated storage group, RAID group, and
disks. Figure 5 shows an example of a few dedicated tenant storage resources. However, if the
storage is designed for a shared traffic pool, traffic is routed to a specific storage pool to pull
resources.
ESXi hosts for different tenants pass the server-client and management traffic over a server port and
reach the access layer of the Nexus 5000 through virtual port channel.
Server blades on UCS chassis are allocated for the different tenants. The resource on UCS can be
dedicated or shared. For example, if using dedicated servers for each tenant, VLANs are assigned for
different tenants and are carried over the dot1Q trunk to the aggregation layer of the Nexus 7000,
where each tenant is mapped to the Virtual Routing and Forwarding (VRF). Traffic is routed to the
external network over the core.
The diagram shows blade server technology with three chassis initially dedicated to the VMware
vCloud environment. The physical design represents the networking and storage connectivity from the
blade chassis to the fabric and SAN, as well as the physical networking infrastructure. (Connectivity
between the blade servers and the chassis switching is different and is not shown here.) Two chassis
are initially populated with eight blades each for the cloud resource clusters, with an even distribution
between the two chassis of blades belonging to each resource cluster.
In this scenario, VMware vSphere resources are organized and separated into management and
resource clusters with three resource groups (Gold, Silver, and Bronze). Figure 7 illustrates the
management cluster and resource groups.
vCenter Server
vCenter Database
2 (for multi-cell)
Components
vShield Manager
Note: A vCloud Director cluster contains one or more vCloud Director servers; these servers are referred to as cells and form the basis of the VMware cloud. A cloud can be formed from multiple cells. The number of vCloud Director cells depends on the size of the vCloud environment and the level of redundancy.
Resources allocated for cloud use have little overhead reserved. For example, cloud resource groups would not host vCenter management virtual machines. Best practices encourage separating the cloud management cluster from the cloud resource group(s) in order to:
Facilitate quicker troubleshooting and problem resolution. Management components are strictly contained in a specified, manageable management cluster.
Keep cloud management components separate from the resources they are managing.
Provide an additional step for high availability and redundancy for the trusted multi-tenancy
infrastructure.
Resource groups
A resource group is a set of resources dedicated to user workloads and managed by VMware vCenter
Server. vCloud Director manages the resources of all attached resource groups within vCenter
Servers. All cloud-provisioning tasks are initiated through VMware vCloud Director and passed down
to the appropriate vCenter Server instance.
Figure 9 highlights cloud resource groups.
Provisioning resources in standardized groupings promotes a consistent approach for scaling vCloud
environments. For consistent workload experience, place each resource group on a separate
resource cluster.
The resource group design represents three VMware vSphere High Availability (HA) Distributed
Resource Scheduler (DRS) clusters and infrastructure used to run the vApps that are provisioned and
managed by VMware vCloud Director.
Logical design
This section provides information about the logical design, including:
Security
Specification
vSphere datacenter
Fully automated
Yes
67%
N/A
Yes
Medium
Note: In this section, the scope is limited to only the Vblock System supporting the management component workloads.
Specification
Processors
x86 compatible
Storage presented
Networking
Memory
Note: VMware vCloud Director deployment requires storage for several elements of the overall framework. The first is the storage needed to house the vCloud Director management cluster. This includes the repository for configuration information, organizations, and allocations that are stored in an Oracle database. The second is the vSphere storage objects presented to vCloud Director as data stores accessed by ESXi servers in the vCloud Director configuration. This storage is managed by the vSphere administrator and consumed by vCloud Director users depending on vCloud Director configuration. The third is the existence of a single NFS data store to serve as a staging area for vApps to be uploaded to a catalog.
vCenter Server
Yes
vCloud Director
Yes
Yes
vShield Manager
Yes
Specification
Processors
x86 compatible
Storage presented
Networking
Memory
VMware vSphere cluster host design specification for resource groups
All VMware vSphere resource clusters are configured similarly with the following specifications.
Attribute
Specification
Fully automated
3 stars
Yes
83%
N/A
Security
The RSA Archer eGRC Platform can be run on a single server, with the application and database
components running on the same server. This configuration is suitable for organizations:
For the trusted multi-tenancy framework, RSA enVision can be deployed as a virtual appliance in the
AMP. Each Vblock System component can be configured to utilize it as its centralized event manager
through its identified collection method. RSA enVision can then be integrated with RSA Archer eGRC
per the RSA Security Incident Management Solution configuration guidelines.
In this design guide (and associated configurations), three levels of services are provided in the cloud:
Bronze, Silver, and Gold. These tiers define service levels for compute, storage, and network
performance. The following table provides sample network and data differentiations by service tier.
Services | Bronze | Silver | Gold
Bandwidth | 20% | 30% | 40%
Segmentation | | |
Data Protection | None | None |
Configuration
While UIM/P automates the operational tasks involved in building services on Vblock Systems,
administrators need to perform initial task sets on each domain manager before beginning service
provisioning. This section describes both key initial tasks to perform on the individual domain
managers and operational tasks managed through UIM/P.
The following table shows what is configured as part of initial device configuration and what is
configured through UIM/P.
Device manager
Initial configuration
Operational configuration
completed with UIM/P
UCS Manager
LAN
MAC pool
SAN
Enable ports
KVM IP pool
Create VLANs
WWPN pool
Assign VLANs
VSANs
Boot policies
Service templates
Select pools
Server
UUID pool
Unisphere MDS/Nexus
vCenter
Zone
Create LUNs
Aliases
Zone sets
DRS policy
Distributed power
management (DPM) policy
Create networks
Enabling services
After completing the initial configurations, use the following high-level workflow to enable services.
Stage
Workflow action
Description
Turn on the system, start up Cisco UCS service profiles, activate network paths, and make resources available for use. The workflow separates provisioning and activation, to allow activation of the service as needed.
vCenter synchronization
vCloud synchronization
Figure 13 describes the provisioning, activation, and synchronization process, including key sub-steps
during the provisioning process.
Provisioning a service
To provision a service (a minimal sketch of this workflow follows the list):
1. Select the service offering.
2. Select Vblock System.
3. Select servers.
4. Configure IP and provide DNS hostname for operating system installation.
5. Select storage.
6. Select and configure network profile and vNICs.
7. Configure vCenter cluster settings.
8. Configure vCloud Director settings.
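The following self-contained Python sketch expresses these eight steps as a single workflow. It is a hypothetical illustration only; the function, field names, and values are examples, not the UIM/P API.

```python
# Hypothetical sketch of the service-provisioning workflow described above.
# Every name and value here is illustrative.

def provision_service():
    service = {}
    service["offering"] = "Gold IaaS"                       # 1. service offering
    service["vblock"] = "Vblock-01"                         # 2. Vblock System
    service["servers"] = ["blade-1", "blade-2"]             # 3. servers
    service["os_network"] = {"ip": "10.1.1.10",             # 4. IP and DNS hostname
                             "hostname": "esxi-01.example.local"}
    service["storage"] = {"boot_lun_gb": 20, "data_lun_gb": 2048}   # 5. storage
    service["vnics"] = [{"name": "vnic0", "vlans": [101, 102]}]     # 6. network profile
    service["vcenter_cluster"] = "TMT-Gold"                 # 7. vCenter cluster settings
    service["vcloud"] = {"provider_vdc": "Gold-PvDC"}       # 8. vCloud Director settings
    return service

print(provision_service())
```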
Version
Description
Cisco UCS
2.0
VMware vSphere
5.0
VMware vShield
VMware vCenter Chargeback
1.5
5.0
1.6.2
VMware ESXi hosts
Resource pools
VMware High Availability and Distributed Resource Scheduler
VMware vMotion
Provides network security services, including NAT and firewall.
Includes:
Cisco UCS
The UCS blade servers contain a pair of Cisco Virtual Interface Card (VIC) Ethernet uplinks. Cisco
VIC presents virtual interfaces (UCS vNIC) to the VMware ESXi host, which allow for further traffic
segmentation and categorization across all traffic types based on vNIC network policies.
Using port aggregation between the fabric interconnect vNIC pairs enhances the availability and capacity of each traffic category. All inbound traffic is stripped of its VLAN header and switched to the appropriate destination's virtual Ethernet interface. In addition, the Cisco VIC allows for the creation of multiple virtual host bus adapters (vHBA), permitting FC-enabled startup across the same physical infrastructure.
Each VMware virtual interface type, VMkernel, and individual virtual machine interface connects
directly to the Cisco Nexus 1000V software distributed virtual switch. At this layer, packets are tagged
with the appropriate VLAN header and all outbound traffic is aggregated to the two Cisco fabric
interconnects.
This section contains information about the high-level UCS features that help achieve secure
separation in the trusted multi-tenancy framework:
UCS organizations
VLAN considerations
VSAN considerations
MAC addresses
WWN values
UUID
BIOS
Firmware versions
In a multi-tenant environment, the service provider can define a service profile giving access to any
server in a predefined server resource with specific processor, memory, or other administrator-defined
characteristics. The service provider can then provision one or more servers through service profiles,
which can be used for an organization or a tenant. Service profiles are particularly useful when
deployed with UCS Role-Based Access Control (RBAC), which provides granular administrative
access control to UCS system resources based on administrative roles in a service provider
environment.
Servers instantiated by service profiles start up from a LUN that is tied to the specified WWPN,
allowing an installed operating system instance to be locked with the service profile. The
independence from server hardware allows installed systems to be re-deployed between blades.
Through the use of pools and templates, UCS hardware can be quickly deployed and scaled.
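The value of this model is that server identity lives in the service profile, not in the blade. The following minimal Python sketch illustrates the idea; it is hypothetical (not the UCS Manager API), with an assumed WWPN pool seed and example names.

```python
# Minimal sketch of why service profiles make servers stateless: identities
# such as the WWPN are drawn from pools and stay with the profile, so
# re-associating a profile to new hardware keeps the boot LUN binding intact.
import itertools

wwpn_pool = itertools.count(0x20000025B5000001)  # assumed pool seed

class ServiceProfile:
    def __init__(self, name):
        self.name = name
        self.wwpn = f"{next(wwpn_pool):016x}"  # identity from pool, not blade
        self.blade = None

    def associate(self, blade):
        self.blade = blade  # hardware changes; the WWPN does not

profile = ServiceProfile("tenant-orange-esx01")
profile.associate("chassis1/blade3")
wwpn_before = profile.wwpn
profile.associate("chassis2/blade5")   # blade failure or re-deployment
assert profile.wwpn == wwpn_before     # SAN boot LUN still matches
```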
The trusted multi-tenancy framework uses three distinct server roles to segregate and classify UCS
blade servers. This helps identify and associate specific service profiles depending on their purpose
and policy. The following table describes these roles.
Role
Description
Management
These servers can be associated with a service profile that is meant only for cloud management or any type of service provider infrastructure workload.
Dedicated
These servers can be associated with different service profiles, server pools, and roles with VLAN policy; for example, for a specific tenant VLAN allowed access to those servers that are meant only for specific tenants.
The trusted multi-tenancy framework considers a few tenants who strongly want to have a dedicated UCS cluster to further segregate workloads in the virtualization layer as needed. It also considers tenants who want dedicated workload throughput from the underlying compute infrastructure, which maps to the VMware Distributed Resource Scheduler cluster.
Mixed
These servers can be associated with a different service profile meant for shared resource clusters for the VMware Distributed Resource Scheduler cluster. Depending on tenant requirements, UCS can be designed to use a dedicated compute resource or a shared resource. The trusted multi-tenancy framework uses mixed servers for shared resource clusters as an example.
These servers can be spread across the UCS fabric to minimize the impact of a single point of failure
or a single chassis failure.
Figure 14 shows an example of how the three servers are designed in the trusted multi-tenancy
framework.
Figure 15 shows an example of three tenants (Orange, Vanilla, and Grape) using three service
profiles on three different physical blades to ensure secure separation at the blade level.
UCS organizations
The Cisco UCS organizations feature helps with multi-tenancy by logically segmenting physical system resources. Organizations are logically isolated in the UCS fabric. UCS hardware and policies can be assigned to different organizations so that the appropriate tenant or organizational unit can access the assigned compute resources. A rich set of policies in UCS can be applied per organization to ensure that the right sets of attributes and I/O policies are assigned to the correct organization.
Each organization can have its own pool of resources, including the following:
Policies
Service profiles
UCS organizations are hierarchical. Root is the top-level organization. System-wide policies and pools
in root are available to all organizations in the system. Any policies and pools created in other organizations are available only to organizations below them in the same hierarchy.
The functional isolation provided by UCS is helpful for a multi-tenant environment. Use the UCS
features of RBAC and locales (a UCS feature to isolate tenant compute resources) on top of
organizations to assign or restrict user privileges and roles by organization.
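This inheritance rule can be made concrete with a short Python sketch: a policy is visible to an organization if it is defined there or in any ancestor up to root. The organization tree and policy names below are hypothetical examples, not a complete UCS model.

```python
# Illustrative sketch of UCS organization hierarchy: policies in root are
# visible everywhere; policies in a sub-organization are visible only to
# that organization and its descendants.

org_parent = {"root": None, "Dedicated": "root", "Orange": "Dedicated",
              "Mixed": "root", "Vanilla": "Mixed"}
policies = {"root": {"bios-default"}, "Dedicated": {"orange-qos"}}

def visible_policies(org):
    found = set()
    while org is not None:          # walk up toward root, collecting policies
        found |= policies.get(org, set())
        org = org_parent[org]
    return found

assert "bios-default" in visible_policies("Orange")     # inherited from root
assert "orange-qos" in visible_policies("Orange")       # from parent org
assert "orange-qos" not in visible_policies("Vanilla")  # different branch
```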
Figure 16 shows the hierarchical organization of UCS clusters starting from Root. It shows three types
of cluster configurations (Management, Dedicated, and Mixed). Below that are the three tenants
(Orange, Vanilla, and Grape) with their service levels (Gold, Silver, and Bronze).
UCS allows the creation of resource pools to ensure secure separation between tenants. Use the
following:
LAN resources
IP pool
MAC pool
VLAN pool
Management resources
VLAN pool
SAN resources
VSANs
Identity resources
UUID pool
Compute resources
Server pools
Figure 17 illustrates how creating separate resource pools for the three tenants helps with secure
separation at the compute layer.
Figure 18 is an example of a UCS Service Profile workflow diagram for three tenants.
VLAN considerations
In Cisco UCS, a named VLAN creates a connection to a specific management LAN and tenant-specific VLANs. The VLAN isolates traffic, including broadcast traffic, to that external LAN. The name
assigned to a VLAN ID adds a layer of abstraction that you can use to globally update all servers
associated with service profiles using the named VLAN. You do not need to reconfigure servers
individually to maintain communication with the external LAN. For example, if a service provider
wanted to isolate a group of compute clusters for a specific tenant, the specific tenant VLAN needs to
be allowed in the service profile of that tenant. This provides another layer of abstraction in secure
separation.
To illustrate, if Tenant Orange has dedicated UCS blades, it is recommended to allow only Tenant
Orange-specific VLANs to ensure that only Tenant Orange has access to those blades. Figure 19
shows a dedicated service profile for Tenant Orange that uses a vNIC template as Orange. Tenant
Orange VLANs are allowed to use that specific vNIC template. However, a global vNIC template can
still be used for all blades, providing the ability to allow or disallow specific VLANs from updating
service profile templates.
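The named-VLAN indirection is easy to see in a few lines of Python. In this hypothetical sketch (names and IDs are examples), service profiles reference VLAN names rather than IDs, so changing the ID behind a name takes effect globally without per-server reconfiguration.

```python
# Minimal sketch of the named-VLAN abstraction described above.

named_vlans = {"orange-app": 101, "orange-db": 102, "vanilla-app": 201}
service_profiles = {"orange-esx01": ["orange-app", "orange-db"],
                    "vanilla-esx01": ["vanilla-app"]}

def effective_vlans(profile):
    # Profiles hold names; IDs are resolved at lookup time.
    return [named_vlans[name] for name in service_profiles[profile]]

named_vlans["orange-app"] = 111  # one global change, applied everywhere
assert effective_vlans("orange-esx01") == [111, 102]
```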
Figure 20 shows that VSAN 10 and VSAN 11 are configured in UCS SAN Cloud and uplinked to an
FC port.
Figure 21 shows how an FC port is assigned to a VSAN ID in UCS. In this case, uplink FC Port 1 is
assigned to VSAN10.
A service provider may want to view all the listed tenants or organizations in vCloud Director to easily manage them. Figure 23 shows the service provider's tenant view in VMware vCloud Director.
Organizations are the unit of multi-tenancy within vCloud Director. They represent a single logical security boundary. Each organization contains a collection of users, computing resources, catalogs, and vApp workloads. Organization users can be local users or imported from an LDAP server. LDAP integration can be specific to an organization, or it can leverage an organizational unit within the system LDAP configuration, as defined by the vCloud system administrator. The name of the organization, specified during creation time, maps to a unique URL that allows access to the GUI for that organization. For example, Figure 24 shows that Tenant Orange maps to a specific default organization URL. Each tenant accesses the resource using its own URL and authentication.
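As a hedged illustration of this per-organization URL pattern, the sketch below builds the login URL from the cell address and organization name. The exact path can vary by vCloud Director version, and the hostname and organization shown are examples only.

```python
# Example of the per-organization URL mapping described above.

def org_login_url(cell_address: str, org_name: str) -> str:
    # Assumed URL pattern; verify against the deployed vCloud Director version.
    return f"https://{cell_address}/cloud/org/{org_name}"

print(org_login_url("vcloud.provider.example", "Orange"))
# https://vcloud.provider.example/cloud/org/Orange
```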
The vCloud Director network provides an extra layer of separation. vCloud Director has three different
types of networking, each with a specific purpose:
External network
Organization network
vApp network
Connectivity
Direct connection
NAT/routed
Isolated
A directly connected external organization network places the vApp virtual machines in the port group
of the external network. IP address assignments for vApps follow the external network IP addressing.
Internal and routed external organization networks are instantiated through network pools by vCloud system administrators. Organization administrators do not have the ability to provision organization networks but can configure network services such as firewall, NAT, DHCP, VPN, and static routing.
Note: Organization network is meant only for the intra-organization network and is specific to an organization.
Service providers provision organization networks using network pools. Figure 26 shows the service provider's administrator view of the organization networks.
To deploy an organization or vApp network, you need a network pool in vCloud Director. Network
pools contain network definitions used to instantiate private/routed organization and vApp networks.
Networks created from network pools are isolated at Layer 2. You can create three types of network
pools in vCloud Director, as shown in the following table.
Network Pool Type
Description
Port group backed
Network pools are backed by pre-provisioned port groups in Cisco Nexus 1000V or a VMware distributed switch.
VLAN backed
A range of pre-provisioned VLAN IDs back network pools. This assumes all VLANs specified are trunked.
Figure 28 shows how network pool types are presented in VMware vCloud Director.
Each pool has specific requirements, limitations, and recommendations. The trusted multi-tenancy framework uses a port-group-backed network pool with a Cisco Nexus 1000V distributed switch. Each port group is isolated to its own VLAN ID. Each tenant (network, in this case) is associated with its own network pool, each backed by a set of port groups.
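The per-tenant pool model can be sketched in a few lines of Python. In this illustration (pool contents, port group names, and VLAN IDs are hypothetical), instantiating an organization or vApp network consumes one pre-provisioned, VLAN-isolated port group from that tenant's pool only.

```python
# Illustrative sketch: each tenant's network pool holds pre-provisioned
# (port group, VLAN) pairs; networks are carved out of that tenant's pool.

network_pools = {
    "Orange": [("pg-orange-1", 111), ("pg-orange-2", 112)],
    "Vanilla": [("pg-vanilla-1", 211)],
}

def create_org_network(tenant):
    if not network_pools[tenant]:
        raise RuntimeError(f"network pool for {tenant} is exhausted")
    port_group, vlan = network_pools[tenant].pop(0)
    return {"tenant": tenant, "port_group": port_group, "vlan": vlan}

net = create_org_network("Orange")
assert net["vlan"] == 111  # Layer 2 isolation comes from the pool's VLAN
```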
VMware vCloud Director automatically deploys vShield Edge devices to facilitate routed network
connections. vShield Edge uses MAC encapsulation for NAT routing, which helps prevent Layer 2
network information from being seen by other organizations in the environment. vShield Edge also
provides a firewall service that can be configured to block inbound traffic to virtual machines
connected to a public access organization network.
Cisco UCS
The following UCS features support service assurance:
Quality of service
Port channels
Server pools
Compute, storage, and network resources need to be categorized in order to provide a differential
service model for a multi-tenant environment. The following table shows an example of Gold, Silver,
and Bronze service levels for compute resources.
Level
Compute resource
Gold
Silver
Bronze
System classes in the UCS specify the bandwidth allocated for traffic types across the entire system. Each system class reserves a specific segment of the bandwidth for a specific type of traffic. Using quality of service policies, the UCS assigns a system class to the outgoing traffic and then matches a quality of service policy to the class of service (CoS) value marked by the Nexus 1000V Series switch for each virtual machine.
UCS quality of service configuration can help achieve service assurance for multiple tenants. A best
practice to ensure guaranteed quality of service throughout a multi-tenant environment is to configure
quality of service for different service levels on the UCS.
Figure 29 shows different quality of service weight values configured for different class of service
values that correspond to Gold, Silver, and Bronze service levels. This helps ensure traffic priority for
tenants associated with those service levels.
Quality of service policies assign a system class to the outgoing traffic for a vNIC or vHBA. Therefore,
to configure the vNIC or vHBA, include a quality of service policy in a vNIC or vHBA policy and then
include that policy in a service profile. Figure 30 shows how to create quality of service policies.
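As a worked example of how class weights translate into guaranteed shares, the sketch below divides each class weight by the sum of enabled class weights; this matches how weighted bandwidth allocation behaves on a congested link. The weight values shown are illustrative, chosen so the resulting Gold/Silver/Bronze shares line up with the 40/30/20 percent tiers used earlier in this guide.

```python
# Worked example: a system class's guaranteed share of a congested link is
# its weight divided by the sum of all enabled class weights.

def bandwidth_shares(weights):
    total = sum(weights.values())
    return {cls: round(100 * w / total, 1) for cls, w in weights.items()}

shares = bandwidth_shares({"gold": 8, "silver": 6, "bronze": 4, "best-effort": 2})
print(shares)
# {'gold': 40.0, 'silver': 30.0, 'bronze': 20.0, 'best-effort': 10.0}
```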
Description
Pay as you go
Resources are reserved and committed for vApps only as vApps are created. There is no upfront reservation of resources.
Allocation
A baseline amount (guarantee) of resources from the provider virtual data center is reserved for the organization virtual data center's exclusive use. An additional percentage of resources are available to oversubscribe CPU and memory, but this taps into compute resources that are shared by other organization virtual data centers drawing from the provider virtual data center.
Reservation
All resources assigned to the organization virtual data center are reserved exclusively for the organization virtual data center's use.
With all of the above models, the organization can be set to deploy an unlimited or limited number of virtual machines. In selecting the appropriate allocation model, consider the service definition and the organization's use case workloads.
Although all tenants use the shared infrastructure, the resources for each tenant are guaranteed based on the allocation model in place. The service provider can set the parameters for CPU, memory, storage, and network for each tenant's organization virtual data center, as shown in Figure 31, Figure 32, and Figure 33.
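A small worked example makes the Allocation model concrete. In this hypothetical sketch, a guarantee percentage of the allocated capacity is reserved exclusively for the organization virtual data center, and the remainder can burst into capacity contended with other organization virtual data centers. The numbers are illustrative only.

```python
# Worked example of the Allocation model: reserved versus oversubscribable
# capacity for an organization virtual data center.

def allocation_pool(allocated_ghz, guarantee_pct):
    reserved = allocated_ghz * guarantee_pct / 100
    burst = allocated_ghz - reserved   # contended with other org vDCs
    return reserved, burst

reserved, burst = allocation_pool(allocated_ghz=20, guarantee_pct=75)
print(f"reserved: {reserved} GHz, oversubscribable: {burst} GHz")
# reserved: 15.0 GHz, oversubscribable: 5.0 GHz
```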
Figure 33. Organization virtual data center network pool allocation
Cisco UCS
The UCS Role-Based Access Control (RBAC) feature helps ensure security by providing granular
administrative access control to the UCS system resources based on administrative roles, tenant
organization, and locale.
The RBAC function of the Cisco UCS allows you to control service provider user access to the actions
and resources in the UCS. RBAC is a security mechanism that can greatly lower the cost and
complexity of Vblock System security administration. RBAC simplifies security administration by using
roles, hierarchies, and constraints to organize privileges. Cisco UCS Manager offers flexible RBAC to
define the roles and privileges for different administrators within the Cisco UCS environment.
The UCS RBAC allows access to be controlled based on the roles assigned to individuals. The
following table lists the elements of the UCS RBAC model.
Element
Description
Role
A job function within the context of a locale, along with the authority and responsibility given to the user assigned to the role
User
A person using the UCS; users are assigned to one or more roles
Action
Any task a user can perform in the UCS that is subject to access control; an action is performed on a resource
Privilege
Locale
The UCS RBAC feature can help service providers segregate roles to manage multiple tenants. One example is using UCS RBAC with LDAP integration to ensure all roles are defined and have specific accesses as per their roles. A service provider can leverage this feature in a multi-tenant environment to ensure a high level of centralized security control. LDAP groups can be created for different administration roles, such as network, storage, server profiles, security, and operations. This helps providers keep security and compliance in place by having designated roles to configure different parts of the Vblock System.
Figure 34 shows an LDAP group mapped to a specific role in UCS. An Active Directory group called ucsnetwork is mapped to a predefined network role in UCS. This means that anyone belonging to the ucsnetwork group in Active Directory can perform network tasks in UCS; other features are shown as read-only.
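The group-to-role-to-privilege chain can be sketched as a simple authorization check. This is an illustrative model only; the group names, role names, and privileges below are examples, not a complete UCS RBAC model.

```python
# Illustrative sketch of LDAP-group-to-UCS-role mapping: a user's directory
# groups resolve to roles, and roles carry the privileges that gate actions.

ldap_role_map = {"ucsnetwork": "network", "ucsstorage": "storage"}
role_privileges = {"network": {"create_vlan", "assign_vlan"},
                   "storage": {"create_vsan"}}

def authorize(user_groups, action):
    roles = {ldap_role_map[g] for g in user_groups if g in ldap_role_map}
    return any(action in role_privileges[r] for r in roles)

assert authorize({"ucsnetwork"}, "create_vlan")      # network admin: allowed
assert not authorize({"ucsnetwork"}, "create_vsan")  # storage task: denied
```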
Figure 35 illustrates how UCS groups provide hierarchy. It shows how the group ucsnetwork is laid out in an Active Directory domain.
HTTPS provides authenticated and encrypted access to the Cisco UCS Manager GUI. HTTPS uses components of the Public Key Infrastructure (PKI), such as digital certificates, to establish secure communications between the client's browser and Cisco UCS Manager.
Each tenant has its own user and group management and provides role-based security access, as
shown in Figure 38. The users are shown only the vApps that they can access. vApps that users do
not have access to are not visible, even if they reside within the same organization.
In Figure 39, in the trusted multi-tenancy framework, there is a VMware Admins group created in Active Directory. This group has access to the trusted multi-tenancy vCenter data center. A member of this group can perform the administration of vCenter.
Cisco UCS
Virtualization
Cisco UCS
Fabric interconnect clustering allows each fabric interconnect to continuously monitor the other's status. If one fabric interconnect becomes unavailable, the other takes over automatically.
Figure 40 shows how Cisco UCS is deployed as a high availability cluster for management layer
redundancy. It is configured as two Cisco UCS 6100 Series fabric interconnects directly connected
with Ethernet cables between the L1 (L1-to-L1) and L2 (L2-to-L2) ports.
Service profile dynamic mobility provides another layer of protection. When a physical blade server fails, UCS automatically transfers its service profile to an available server in the pool.
Virtual port channel in UCS
With virtual port channel uplinks, physical link failures and upstream switch failures have minimal impact. With more physical member links in one larger logical uplink, overall uplink load balancing improves and high availability increases.
Figure 41 shows how port channels 101 and 102 are configured with four uplink members.
Virtualization
Enable overall cloud availability design for tenants using the following features:
VMware vSphere HA
VMware vMotion
In the trusted multi-tenancy framework, all VMware High Availability clusters are deployed with
identical server hardware. Using identical hardware provides a number of key advantages, including
the following:
Automatically optimize and allocate entire pools of resources for optimal hardware utilization and
alignment with business priorities
Availability options
ESXi hosts
Configure the ESXi host with a minimum of two physical paths to each required network (port group) to ensure that a single link failure does not impact platform or virtual machine connectivity. This should include management and vMotion networks. The Load Based Teaming mechanism is used to avoid oversubscribed network links.
Configure ESXi hosts with a minimum of two physical paths to each LUN or NFS share to ensure that a single storage path failure does not impact service.
vCenter Server
vCenter database
vShield Manager
vCenter Chargeback
Deploy vCenter Chargeback virtual machines as a two-node, load-balanced cluster. Deploy multiple Chargeback data collectors remotely to avoid a single point of failure.
vCloud Director
Virtualization
A service provider has access to the entire VMware vSphere and VMware vCloud environment and can flexibly manage and monitor it. A service provider can access and manage the following:
Cisco UCS
vCloud with a Web browser pointing to the vCloud Director cell address
For example, in vCloud Director, the service provider is in complete control of the physical
infrastructure. The service provider can:
Enable or disable ESXi hosts and data stores for cloud usage
Create and remove the external networks that are needed for communicating with the Internet,
backup networks, IP-based storage networks, VPNs, and MPLS networks, as well as the
organization networks and network pools
Create and remove the organization, administration users, provider virtual data center, and
organization virtual data centers
Figure 46 shows how a service provider views the complete physical infrastructure in vCloud Director.
Data Collectors:
- Chargeback Data Collector: responsible for vCenter Server data collection
- vCloud Director (vCD) and vShield Manager (vSM) data collectors: responsible for utilization/allocation collection on the new abstraction layer created by vCloud Director
Load Balancer (embedded in vCenter Chargeback): receives and routes all user requests to the application; needs to be installed only once for the Chargeback cluster
How will the metrics be aggregated and correlated to formulate meaningful business value?
Within a Vblock System virtualized computing environment, the infrastructure chargeback details can
be modeled as fully loaded measurements per virtual machine. The virtual machine essentially
becomes the point resource allocated back to users/customers. Below are the some of the key
metrics to collect when measuring virtual machine resource utilization:
Resource | Chargeback metrics | Unit of measurement
CPU | CPU usage | GHz
CPU | CPU count | Count
Memory | Memory usage | GB
Memory | Memory size | GB
Network | Network usage | GB
Disk | Storage usage | GB
For more information, see Guidelines for Metering and Chargeback Using VMware vCenter
Chargeback on www.vce.com.
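As a minimal illustration of a fully loaded per-virtual-machine chargeback, the sketch below multiplies each metric by a unit rate and sums the result. The rates and usage figures are hypothetical examples, not values from the guideline document.

```python
# Hypothetical per-VM chargeback calculation using the metrics above.
# All rates (currency per unit per month) and usage figures are examples.

rates = {"cpu_ghz": 20.0, "memory_gb": 10.0, "network_gb": 0.10, "storage_gb": 0.25}

def monthly_charge(vm_usage):
    return sum(vm_usage[metric] * rate for metric, rate in rates.items())

vm = {"cpu_ghz": 4.0, "memory_gb": 16, "network_gb": 120, "storage_gb": 500}
print(f"monthly charge: ${monthly_charge(vm):.2f}")  # monthly charge: $377.00
```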
VSANs are first created as isolated fabrics within a common physical topology. Once VSANs are created, apply individual unique zone sets as necessary within each VSAN. The following table summarizes the primary differences between VSANs and zones.
Characteristic | VSANs | Zoning
Membership criteria | Hardware | Hardware
Traffic accounting | Yes, per VSAN | No
Traffic engineering | Yes, per VSAN | No
Note: UIM supports only one VSAN for each fabric.
Figure 49 is a graphical representation of how VMware vSphere can be used to separate each tenant's address space.
Move a VDM, along with its NFS and CIFS exports and configuration data (LDAP, netgroups, and so forth), to another data mover
Back up the VDM, along with its NFS and CIFS exports and configuration data
This feature supports at least 50 NFS VDMs per physical data mover and up to 25 LDAP domains.
Refer to Configuring Virtual Data Movers on VNX for more information (Powerlink access required).
CIFS
NFS
iSCSI
Figure 51 displays the access protocols and the respective protocol stack that can be used to access
data residing on a unified system.
CIFS stack
The following table summarizes how tenant data traffic flows inside EMC VNX for the CIFS stack.
Secure separation is maintained at each layer throughout the CIFS stack.
CIFS stack component
Description
VLAN
The secure separation of data access starts at the bottom of the CIFS stack on the IP network with the use of virtual local area networks (VLANs) to separate individual tenants. The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used. IP packet reflection guarantees that any traffic sent from the storage system in response to a client request will go out over the same physical connection and VLAN on which the request was received.
Virtual Data Mover (VDM)
The virtual data mover is a logical configuration container that wraps around a CIFS file-sharing instance.
CIFS Server
Description
CIFS Share
ABE
NFS stack
The following table summarizes how tenant data traffic flows inside EMC VNX for the NFS stack.
NFS stack component
Description
VLAN
The secure separation of data access starts at the bottom of the NFS stack on the IP network, using VLANs to separate individual tenants. The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used. IP packet reflection guarantees that any traffic sent from the storage system in response to a client request will go out over the same physical connection and VLAN on which the request was received.
NFS Export
NFS export hiding tightly controls which users access the NFS exports. It enhances standard NFS server behavior by preventing users from seeing NFS exports for which they do not have access-level permission. It will appear to each tenant that they have their own individual NFS server.
Figure 52 shows an NFS export and how a specific subnet has access to the NFS share. In this example, the VLAN 112 and VLAN 111 subnets have access to the /nfs1 share. VNX also provides granular access to the NFS share. An NFS export can be presented to a specific tenant subnet, a specific host, or a group of hosts in the network.
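The access decision described above can be modeled in a few lines. The export table and addresses below are illustrative assumptions, not VNX export syntax.

```python
# Illustrative model of subnet- and host-scoped NFS export access.
import ipaddress

EXPORTS = {
    "/nfs1": ["10.1.111.0/24", "10.1.112.0/24"],  # two tenant subnets
    "/nfs2": ["10.1.113.50/32"],                  # a single authorized host
}

def may_access(export: str, client_ip: str) -> bool:
    """True if the client falls inside a subnet allowed on the export."""
    client = ipaddress.ip_address(client_ip)
    return any(client in ipaddress.ip_network(net) for net in EXPORTS.get(export, []))

print(may_access("/nfs1", "10.1.112.17"))  # True: inside an allowed subnet
print(may_access("/nfs1", "10.1.99.8"))    # False: export stays hidden
```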
iSCSI stack
The following table summarizes how tenant data traffic flows inside EMC VNX for the iSCSI stack.
iSCSI stack component | Description
VLAN | The secure separation of data access starts at the bottom of the iSCSI stack on the IP network with the use of VLANs to separate individual tenants. The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used.
iSCSI Portal, Target, LUN | Access then flows through an iSCSI portal to a target device, where it is ultimately addressed to a LUN.
LUN Masking | LUN masking ensures that each LUN is visible only to the hosts explicitly granted access to it.
Set up multiple virtual ports on the VNX and segregate hosts into different VLANs based on your security policy.
VLANs make traffic sniffing more difficult, as an attacker would have to sniff across multiple networks. This provides extra security.
Figure 53 shows the iSCSI port properties for a port with VLANs enabled and two virtual ports
configured.
FC stack
FC stack component | Description
FC Zone | Zoning controls which initiators and targets are permitted to communicate across the fabric.
VSAN | VSANs isolate each tenant's traffic into a separate virtual fabric on the shared physical SAN.
Target, LUN | Access flows to the FC target, where it is ultimately addressed to a LUN.
LUN Masking | LUN masking ensures that each LUN is visible only to the hosts explicitly granted access to it.
Figure 54 and Figure 55 show how a 20 GB FC boot LUN and a 2 TB data LUN map to each host in VNX. Each LUN presented to an ESXi host is properly masked, so the host is granted access only to its specific LUNs, which are spread across different RAID groups.
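Conceptually, LUN masking is a per-host visibility filter. The sketch below models that with a simple storage-group mapping; host names and LUN labels are illustrative assumptions.

```python
# Sketch of LUN masking as a per-host visibility filter.
STORAGE_GROUPS = {
    "esxi-host-01": {"boot_lun_20gb_01", "data_lun_2tb_01"},
    "esxi-host-02": {"boot_lun_20gb_02", "data_lun_2tb_02"},
}

def visible_luns(host: str) -> set:
    """A host sees only the LUNs masked into its own storage group."""
    return STORAGE_GROUPS.get(host, set())

assert "data_lun_2tb_02" not in visible_luns("esxi-host-01")  # masked away
print(visible_luns("esxi-host-01"))
```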
EMC Unisphere Quality of Service Manager (UQM) lets the service provider define I/O classes around the following performance metrics:
Response time
Bandwidth
Throughput
UQM provides a simple user interface for service providers to control policies. This control is invisible
to tenants and can ensure that the activity of one tenant does not impact that of another. For example,
if a tenant requests a dedicated disk, storage groups, and spindles for its storage resources, apply
these control policies to get optimum storage I/O performance.
Figure 56 shows how you can create policies with a specific set of I/O classes to ensure that SLAs are maintained.
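As a rough model of what such a policy expresses, the sketch below represents an I/O class as a goal on one of the metrics listed above. This is a conceptual sketch, not the UQM interface.

```python
# Conceptual sketch of an I/O class policy; not the UQM interface.
from dataclasses import dataclass

@dataclass
class IOClassPolicy:
    name: str      # I/O class, for example one tenant's storage group
    metric: str    # "response_time_ms", "bandwidth_mbps", or "throughput_iops"
    goal: float    # the service level the provider commits to

gold = IOClassPolicy("tenant-gold", "response_time_ms", goal=5.0)
bronze = IOClassPolicy("tenant-bronze", "throughput_iops", goal=2000.0)
print(gold, bronze, sep="\n")
```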
EMC VNX FAST VP
With standard storage tiering in a non-FAST VP enabled array, multiple storage tiers are typically
presented to the vCloud environment, and each offering is abstracted out into separate provider virtual
data centers (vDC). A provider may choose to provision an EFD [SSD/Flash] tier, an FC/SAS tier, and
a SATA/NL-SAS tier, and then abstract these into Gold, Silver, and Bronze provider virtual data
centers. The customer then chooses resources from these for use in their organizational virtual data
center.
This provisioning model is limited for a number of reasons, including the following:
VMware vCloud Director does not allow for a non-disruptive way to move virtual machines from
one provider virtual data center to another. This means the customer must provide for downtime
if the vApp needs to be moved to a more appropriate tier.
For workloads with a variable I/O personality, there is no mechanism to automatically migrate
those workloads to a more appropriate disk tier.
With the cost of enterprise flash drives (EFD) still significant, creating an entire tier of them can
be prohibitively expensive, especially with few workloads having an I/O pattern that takes full
advantage of this particular storage medium.
One way in which the standard storage tiering model can be beneficial is when multiple arrays are
used to provide different kinds of storage to support different I/O workloads.
FAST VP tiering recommendations distinguish between a production tier and an archive tier.
Tiering policies
EMC FAST VP offers a number of policy settings to determine how data is placed, how often it is
promoted, and how data movement is managed. In a VMware vCloud Director environment, the
following policy settings are recommended to best accommodate the types of I/O workloads
produced.
Policy | Default setting | Recommended setting
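The mechanism these policies steer can be sketched simply: slices of a pool LUN are promoted toward flash as they become hot and demoted toward NL-SAS as they cool. The tier names below are the usual drive classes; the temperature thresholds are illustrative assumptions, not FAST VP defaults.

```python
# Sketch of tier promotion/demotion driven by slice activity ("temperature").
TIERS = ["flash", "fc_sas", "nl_sas"]  # fastest to slowest

def retier(slice_temp: float, current: str) -> str:
    """Promote hot slices toward flash, demote cold slices toward NL-SAS."""
    idx = TIERS.index(current)
    if slice_temp > 0.8 and idx > 0:
        return TIERS[idx - 1]           # promote one tier
    if slice_temp < 0.2 and idx < len(TIERS) - 1:
        return TIERS[idx + 1]           # demote one tier
    return current                      # activity is unremarkable; stay put

print(retier(0.9, "fc_sas"))  # -> flash
print(retier(0.1, "fc_sas"))  # -> nl_sas
```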
EMC FAST Cache
In a VMware vCloud Director environment, VCE recommends a minimum of 100 GB of EMC FAST
Cache, with the amount of FAST Cache increasing as the number of virtual machines increases.
The combination of FAST VP and FAST Cache allows the vCloud environment to scale better,
support more virtual machines and a wider variety of service offerings, and protect against I/O spikes
and bursting workloads in a way that is unique in the industry. These two technologies in tandem are
a significant differentiator for the Vblock System.
As storage is provisioned to organization virtual data centers, the shared storage pool for the provider virtual data center is seen as a single pool of storage, with no distinction of storage characteristics, protocol, or other attributes; it presents as one large address space.
If a provider virtual data center contains more than one data store, it is considered best practice that those data stores have equal performance capability, protocol, and quality of service. Otherwise, the slower storage in the collective pool will impact the performance of that provider virtual data storage pool, and some virtual data centers might end up with faster storage than others.
To gain the benefits of different storage tiers or protocols, define separate provider virtual data centers, where each provider virtual data center has storage of a different protocol or quality of service. For example, provision the following:
A provider virtual data center built on a data store backed by 15K RPM FC disks with a large cache in the array for the highest disk performance tier
A second provider virtual data center built on a data store backed by SATA drives and less cache in the array for a lower tier
When a provider virtual data center shares a data store with another provider virtual data center, the performance of one provider virtual data center may impact the performance of the other. Therefore, it is considered best practice to give each provider virtual data center a dedicated data store; isolating the storage in this way reduces the chance of mixing storage resources of different quality of service in one provider virtual data center.
Role mapping
Once communications are established with the LDAP service, give specific LDAP users or groups access to Unisphere by mapping them to Unisphere roles. The LDAP service merely performs the authentication. Once authenticated, a user's authorization is determined by the assigned Unisphere role. The most flexible configuration is to create LDAP groups that correspond to Unisphere roles. This allows you to control access to Unisphere by managing the members of the LDAP groups.
For example, Figure 58 shows two LDAP groups: Storage Admins and Storage Monitors. It shows
how you can map specific LDAP groups into specific roles.
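A minimal sketch of the pattern: LDAP answers who the user is, and the mapping decides what the user may do. The group names follow the Figure 58 example; the role identifiers are illustrative assumptions.

```python
# Sketch of LDAP-group-to-role mapping: authentication comes from LDAP,
# authorization from the mapped Unisphere role.
LDAP_GROUP_TO_ROLE = {
    "Storage Admins": "administrator",  # full management control
    "Storage Monitors": "monitor",      # read-only visibility
}

def unisphere_role(ldap_groups: list) -> str:
    """Authorization is decided by the mapped role, not by LDAP itself."""
    for group in ldap_groups:
        if group in LDAP_GROUP_TO_ROLE:
            return LDAP_GROUP_TO_ROLE[group]
    return "no-access"

print(unisphere_role(["Domain Users", "Storage Monitors"]))  # -> monitor
```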
Audit logs are especially important for financial institutions that are monitored by regulators.
Audit information for VNX storage systems is contained within the event log on each storage
processor. The log also contains hardware and software debugging information and a time-stamped
record for each event. Each record contains the following information:
Event code
Description of event
Function | Description
Collects logs | Can collect event log data from over 130 event sources, from firewalls to databases. RSA enVision can also collect data from custom, proprietary sources using standard transports such as Syslog, ODBC, SNMP, SFTP, OPSEC, or WMI.
Stores logs | Compresses and encrypts log data so it can be stored for later analysis, while maintaining log confidentiality and integrity.
Analyzes logs | Analyzes data in real time to check for anomalous behavior requiring an immediate alert and response. RSA enVision proprietary logs are also optimized for later reporting and forensic analysis. Built-in reports and alerts allow administrators and auditors quick and easy access to log data.
High availability
In the storage layer, the high availability design is consistent with the high availability model implemented at other layers in the Vblock System, comprising physical redundancy and path redundancy. This is achieved through the following types of redundancy:
Link redundancy
Hardware and node redundancy

Link redundancy
Pending the availability of FC port channels on UCS FC ports and FC port trunking, multiple individual FC links from the 6120 fabric interconnects are connected to each SAN fabric, and the VSAN membership of each link is explicitly configured in the UCS. In the event of an FC (NP) port link failure, affected hosts log on again in a round-robin manner using the available ports. FC port channel support, when available, means that redundant links in the port channel will provide active/active failover in the event of a link failure.
Multipathing software from VMware or EMC PowerPath software further enhances high availability,
optimizing use of the available link bandwidth and enhancing load balancing across multiple active
host adapter ports and links with minimal disruption in service.
Hardware and node redundancy
The Vblock System trusted multi-tenancy design leverages best practice methodologies for SAN high availability, prescribing full hardware redundancy at each device in the I/O path from host to SAN. Hardware redundancy begins at the server, with dual port adapters per host. Redundant paths from the hosts feed into dual, redundant MDS SAN switches (that is, with dual supervisors) and then into redundant SAN arrays with tiered RAID protection. RAID 1 and RAID 5 were deployed in this particular design as two of the more commonly used levels; however, the selection of a RAID protection level depends on balancing cost against how critical the stored data is.
The ESXi hosts are protected by the VMware vSphere High Availability feature. Storage paths can be protected using EMC PowerPath/VE. Figure 60 shows the storage path protection.
Virtual machines and application data can be protected using EMC Avamar, EMC Data Domain, and EMC Replication Manager. However, these are not within the scope of this guide.
Single point of failure
High availability (HA) systems are the foundation upon which any enterprise-class multi-tenancy environment is built. They are designed to be fully redundant, with no single point of failure (SPOF). Additional availability features can be leveraged to address single points of failure in the trusted multi-tenancy design. The following are some of the high-level SPOF entities to consider:
Dual-ported drives
Redundant FC loops
Fail-safe network
MirrorView
When mirroring a thin LUN to another thin LUN, only consumed capacity is replicated between the
storage systems. This is most beneficial for initial synchronizations. Steady state replication is similar,
since only new writes are written from the primary storage system to the secondary system.
When mirroring from a thin LUN to a traditional or thick LUN, the thin LUN's host-visible capacity must be equal to the traditional LUN's capacity or the thick LUN's user capacity. Any failback scenario that requires a full synchronization from the secondary to the thin primary image causes the thin LUN to become fully allocated. When mirroring from a thick LUN or traditional LUN to a thin LUN, the secondary thin LUN is fully allocated.
With MirrorView, if the secondary image LUN is added with the no initial synchronization option, the
secondary image retains its thin attributes. However, any subsequent full synchronization from the
traditional LUN or thick LUN to the thin LUN, as a result of a recovery operation, causes the thin LUN
to become fully allocated.
For more information on using pool LUNs with MirrorView, see MirrorView Knowledgebook (Powerlink
access required).
PowerPath Migration Enabler
EMC PowerPath Migration Enabler (PPME) is a host-based migration tool that enables non-disruptive
or minimally disruptive data migration between storage systems or between logical units within a
single storage system. The Host Copy technology in PPME works with the host operating system to
migrate data from the source logical unit to the target. With PPME 5.3, the Host Copy technology supports migrating virtually provisioned devices. When migrating to a thin target, the target's thin-device capability is maintained.
VLANs
VLANs provide a Layer 2 option to scale virtual machine connectivity, providing application tier
separation and multitenant isolation. In general, Vblock Systems have two types of VLANs:
Routed: Include management VLANs, virtual machine VLANs, and data VLANs; these pass through Layer 2 trunks and are routed to the external network
Internal: Carry VMkernel traffic, such as vMotion, service console, NFS, DRS/HA, and so forth
This design guide uses three tenants: Tenant Orange, Tenant Vanilla, and Tenant Grape. Each tenant has multiple virtual machines for different applications (such as Web server, email server, and database), which are associated with different VLANs. It is always recommended to separate data and management VLANs.
The following table lists example VLANs used in the Vblock System trusted multi-tenancy design framework.
VLAN name | VLAN number
C200_ESX_mgt | 101
Vblock_ESX_mgt | 104
Vblock_ESX_build | 106
Vblock_N1k_pkg | 107
Vblock_N1k_control | 108
Fcoe_UCS_to_storageA | 109
Fcoe_UCS_to_storageB | 110
Vblock_NFS | 111
Vblock_VMNetwork | 112
Additional VLANs | 100, 102, 103, 105, 113, 118, 123
Configure VLANs (both Layer 2 and Layer 3) in all network devices supported in the trusted multi-tenancy infrastructure to ensure that management, tenant, and Vblock System internal VLANs are isolated from each other.
Note: Service providers may need additional VLANs for scalability, depending on size requirements.
The following table summarizes the benefits that the Cisco VRF Lite technology provides a trusted
multi-tenancy environment.
Benefit | Description
Dedicated data and control planes | Dedicated data and control planes are defined to handle traffic belonging to groups with various requirements or policies. These groups represent an additional level of segregation and security, as no communication is allowed among devices belonging to different VRFs unless explicitly configured.
Network separation at Layer 2 is accomplished using VLANs. Figure 62 shows how the VLANs
defined on each access layer device for each tenant are mapped to the same tenant VRF at the
distribution layer.
Use VLANs to achieve network separation at Layer 2. While VRFs are used to identify a tenant,
VLAN-IDs provide isolation at Layer 2.
Tenant VRFs are applied on the Cisco Nexus 7000 Series Switch at the aggregation and core layer,
which are mapped with unique VLANs. All VLANs are carried over the 802.1Q trunking ports.
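The resulting isolation rule can be sketched as a small lookup: a tenant is identified by its VRF, and only VLANs mapped into the same VRF may be routed together. Tenant names follow this design's examples; the VLAN assignments are illustrative assumptions.

```python
# Sketch: routing between two VLANs is possible only within one tenant's VRF.
TENANT_VRFS = {
    "tenant-orange": {"vrf": "vrf-orange", "vlans": {112, 118}},
    "tenant-vanilla": {"vrf": "vrf-vanilla", "vlans": {113, 123}},
}

def same_vrf(vlan_a: int, vlan_b: int) -> bool:
    """True if both VLAN-IDs are mapped into the same tenant VRF."""
    return any(vlan_a in t["vlans"] and vlan_b in t["vlans"]
               for t in TENANT_VRFS.values())

print(same_vrf(112, 118))  # True: both VLANs live in vrf-orange
print(same_vrf(112, 113))  # False: crossing VRFs requires explicit configuration
```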
ACL supported | Yes | Yes | Yes
The traffic flow types break down into three traffic categories, as shown in the following table.
Traffic category | Description
Infrastructure |
Tenant | Differentiated into Gold, Silver, and Bronze service levels; may include virtual machine-to-virtual machine, virtual machine-to-storage, and/or virtual machine-to-tenant traffic. Gold tenant traffic is the highest priority, requiring low latency and high bandwidth guarantees.
Storage | The Vblock System trusted multi-tenancy design incorporates both FC and IP-attached storage. Since these traffic types are treated differently throughout the network, storage requires two subcategories: FC storage traffic and IP-attached storage traffic.
QoS service assurance for Vblock Systems has been introduced at each layer. Consider the following features for service assurance at the network layer:
Network-attached devices
Consider traffic classification, bandwidth guarantees with queuing, and rate limiting based on tenant traffic priority for network service assurance.
To successfully achieve trusted multi-tenancy, a service provider needs to adopt each key component discussed below. As shown in Figure 3, the trusted multi-tenancy framework has the following key components:
Component | Description
Core | Provides a Layer 3 routing module for all traffic in and out of the service provider data center.
Aggregation | Serves as the Layer 2 and Layer 3 boundary for the data center infrastructure. In this design, the aggregation layer also serves as the connection point for the primary data center firewalls.
Services | Deploys services such as server load balancers, intrusion prevention systems, application-based firewalls, network analysis modules, and additional firewall services.
Access | The data center access layer serves as a connection point for the server farm. The virtual access layer refers to the virtual network that resides in the physical servers when configured for virtualization.
With this framework, you can add components as demand and load increase.
The high-level security functions for each layer of the data center are described below, covering the aggregation, services, access, and virtual access layers and the security components deployed at each.
The firewalls are configured in an active-active design, which allows load sharing across the
infrastructure based on the active Layer 2 and Layer 3 traffic paths. Each firewall is configured for two
virtual contexts:
This corresponds to the active Layer 2 spanning tree path and the Layer 3 Hot Standby Routing
Protocol (HSRP) configuration.
Figure 64 shows an example of each firewall connection.
Figure 64. Cisco ASA virtual contexts and Cisco Nexus 7000 virtual device contexts
Use out-of-band management and limit the types of traffic allowed over the management
interface(s).
Depending on traffic types and policies, the goal might not be to send all traffic flows to the services
layer. Some incoming application connections, such as those from a DMZ or client batch jobs (such
as backup), might not need load balancing or additional services. An alternative is to deploy another
context on the firewall to support the VLANs that are not forwarded to the services switches.
Caveats
Using transparent mode on the Cisco ASA firewalls requires that an IP address be configured for each
context. This is required to bridge traffic from one interface to another and to manage each Cisco ASA
context. While in transparent mode, you cannot allocate the same VLAN across multiple interfaces for
management purposes. A separate VLAN is used to manage each context. The VLANs created for
each context can be bridged back to the primary management VLAN on an upstream switch if
desired.
Note: This provides a workaround and does not require allocating new network-wide management VLANs and IP subnets to manage each context.
Services layer
Data center security services can be deployed in a variety of combinations. The goal of these designs
is to provide a modular approach to deploying security by allowing additional capacity to be added
easily for each service. Additional Web application firewalls, intrusion prevention systems (IPS),
firewalls, and monitoring services can all be scaled without requiring an overall redesign of the data
center.
Figure 65 illustrates how the services layer fits into the data center security environment.
Cisco ACE provides a highly available and scalable data center solution from which the VMware
vCloud Director environment can benefit. Use Cisco ACE to apply a different context and associated
policies, interfaces, and resources for one vCloud Director cell and a completely different context for
another vCloud Director cell.
In this design, Cisco ACE is terminating incoming HTTPS requests and decrypting the traffic prior to
forwarding it to the Web application firewall farm. The Web application firewall and subsequent Cisco
IPS devices can now view the traffic in clear text for inspection purposes.
Note: Some compliance standards and security policies dictate that traffic be encrypted from client to server. It is possible to modify the design so traffic is re-encrypted on Cisco ACE after inspection, prior to being forwarded to the server.
The IPS deployment in the data center leverages EtherChannel load balancing from the service
switch. This method is recommended for the data center because it allows the IPS services to scale to
meet the data center requirements. This is shown in Figure 67.
A port channel is configured on the services switch to forward traffic over each 10 Gb link to the receiving IPS. Since Cisco IPS does not support Link Aggregation Control Protocol (LACP) or Port Aggregation Protocol (PAgP), the port channel is set to "on" to ensure no negotiation is necessary for the channel to become operational.
It is very important to ensure that all traffic for a specific flow goes to the same Cisco IPS. To accomplish this, set the port channel hash to source and destination IP address. Each EtherChannel supports up to eight ports per channel.
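The sketch below shows why hashing on source and destination IP keeps every packet of a flow on one member link, and therefore on one Cisco IPS. The hash function and addresses are illustrative, not the switch's actual algorithm.

```python
# Sketch: a deterministic hash on (source, destination) pins a flow to one
# member link of the port channel, so one IPS sees the whole flow.
import zlib

MEMBER_LINKS = 8  # an EtherChannel supports up to eight ports per channel

def member_for_flow(src_ip: str, dst_ip: str) -> int:
    """Every packet of the same flow hashes to the same member link."""
    return zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % MEMBER_LINKS

# The same flow always lands on the same link (and the same IPS).
assert member_for_flow("10.7.54.34", "10.8.162.200") == \
       member_for_flow("10.7.54.34", "10.8.162.200")
```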
This design can scale up to eight Cisco IPS 4270s per channel. Figure 68 illustrates Cisco IPS
EtherChannel load balancing.
Caveats
Spanning tree plays an important role in IPS redundancy in this design. Under normal operating conditions, traffic in a VLAN always follows the same active Layer 2 path. If a failure occurs (a service switch failure or a service switch link failure), spanning tree converges, and the active Layer 2 traffic path changes to the redundant service switch and Cisco IPS appliances.
Cisco ACE, Cisco ACE Web Application Firewall, and Cisco IPS traffic flows
The security services in this design reside between VDC1 and VDC2 on the Cisco Nexus 7000 Series Switch. All security services run in a Layer 2 transparent configuration. As traffic flows from VDC1 to the outside Cisco ASA context, it is bridged across VLANs and forwarded through each security service until it reaches the inside VDC2, where it is routed directly to the correct server or application.
Figure 69 shows the service flow for client-to-server traffic through the security services in the red
traffic path. In this example, the client is making a Web request to a virtual IP address (VIP) defined on
the Cisco ACE virtual context.
The following table describes the stages associated with Figure 69.
Stage | What happens
1 | Client is directed through Cisco Nexus 7000-1 VDC1 to the active Cisco ASA virtual context, which transparently bridges traffic between VDC1 and VDC2 on the Cisco Nexus 7000.
2 | The transparent Cisco ASA virtual context forwards traffic from VLAN 161 to VLAN 162 towards Cisco Nexus 7000-1 VDC2.
3 | VDC2 sees the spanning tree root for VLAN 162 through its connection to services switch SS1. SS1 sees the spanning tree root for VLAN 162 through the Cisco ACE transparent virtual context.
4 | The Cisco ACE transparent virtual context applies an input service policy on VLAN 162. This service policy, named AGGREGATE_SLB, holds the virtual IP definition. The virtual IP rules associated with this policy enforce SSL-termination services and load-balancing services to a Web application firewall server farm.
5 | HTTP-based probes determine the state of the Web application firewall server farm. The request is forwarded to a specific Web application firewall appliance defined in the Cisco ACE server farm. The client IP address is inserted as an HTTP header by Cisco ACE to maintain the integrity of server-based logging within the farm. The source IP address of the request forwarded to the Web application firewall is that of the originating client; in this example, 10.7.54.34.
6 | In this example, the Web application firewall has a virtual Web application defined, named Crack Me. The Web application firewall appliance receives on port 81 the HTTP request that was forwarded from Cisco ACE. The Web application firewall applies all relevant security policies for this traffic and proxies the request back to a VIP (10.8.162.200) located on the same virtual Cisco ACE context on VLAN interface 190.
7 | Traffic is forwarded from the Web application firewall on VLAN 163. A port channel is configured to carry VLAN 163 and VLAN 164 on each member trunk interface. Cisco IPS receives all traffic on VLAN 163, performs inline inspection, and forwards the traffic back over the port channel on VLAN 164.
Access layer
In this design, the data center access layer provides Layer 2 connectivity for the server farm. In most
cases the primary role of the access layer is to provide port density for scaling the server farm. Figure
70 shows the data center access layer.
Recommendations
Security at the access layer is primarily focused on securing Layer 2 flows. Best practices include:
Additional security mechanisms that can be deployed at the access layer include:
Catalyst Integrated Security features, which include Dynamic Address Resolution Protocol (ARP) inspection, Dynamic Host Configuration Protocol (DHCP) Snooping, and IP Source Guard
Port security, which can be used to lock down a critical server to a specific port
The access layer and virtual access layer serve the same logical purpose. The virtual access layer is
a new location and a new footprint of the traditional physical data center access layer. These features
are also applicable to the traditional physical access layer.
When a network policy is defined on the Cisco Nexus 1000V, it is updated in the virtual data center
and displayed as a port group. The network and security teams can configure a predefined policy and
make it available to the server administrators using the same methods they use to apply policies
today. Cisco Nexus 1000V policies are defined through a feature called port profiles.
Policy enforcement
Use port profiles to configure network and security features under a single profile that can be applied to multiple interfaces. Once you define a port profile, one or more interfaces can inherit that profile and any settings defined in it. You can define multiple profiles, each assigned to different interfaces.
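As a rough illustration of this inheritance model (a sketch, not Nexus 1000V syntax), the settings are defined once in a profile and then inherited by every interface the profile is applied to.

```python
# Sketch of port-profile inheritance: define settings once, apply to many
# interfaces. Field and profile names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PortProfile:
    name: str
    vlan: int
    acl: str = "none"
    interfaces: list = field(default_factory=list)

    def assign(self, interface: str) -> None:
        """Each assigned interface inherits the profile's settings."""
        self.interfaces.append(interface)

web = PortProfile("tenant-orange-web", vlan=112, acl="permit-web-only")
for veth in ("veth1", "veth2", "veth3"):
    web.assign(veth)  # the server admin just picks the port group
print(web)
```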
This feature provides multiple security benefits:
Network security policies are still defined by network and security administrators and are applied
to the virtual switch in the same way as on physical access switches.
Once the features are defined in a port profile and assigned to an interface, the server administrator need only pick the available port group and assign it to the virtual machine. This reduces the chance of misconfiguration, overlapping policies, or non-compliant security policies being applied.
Visibility
Server virtualization brings new challenges for visibility into what is occurring at the virtual network
level. Traffic flows can occur within the server between virtual machines without needing to traverse a
physical access switch. Although vCloud Director and vShield Edge restrict vApp traffic inside the organization, in a situation where dedicated tenant virtual machines exist and a tenant-specific virtual machine is infected or compromised, it may be difficult for administrators to spot the problem, because the traffic does not pass through security appliances.
Encapsulated Remote Switched Port Analyzer (ERSPAN) is a useful tool for gaining visibility into
network traffic flows. This feature is supported on the Cisco Nexus 1000V. ERSPAN can be enabled
on the Cisco Nexus 1000V and traffic flows can be exported from the server to external devices. See
Figure 72.
Figure 72. Cisco Nexus 1000V and ERSPAN: IDS and NAM at services switch
What happens
ERSPAN forwards copies of the virtual machine traffic to the Cisco IPS appliance and the Cisco Network Analysis Module (NAM). Both the Cisco IPS and Cisco NAM are located at the service layer in the service switch.
A new virtual sensor (VS1) has been created on the existing Cisco IPS appliances to provide monitoring for only the ERSPAN session from the server. Up to four virtual sensors can be configured on a single Cisco IPS appliance, and they can be configured in either intrusion prevention system (IPS) or intrusion detection system (IDS) mode. In this case, the new virtual sensor VS1 has been set to IDS, or monitor, mode. It receives a copy of the virtual machine traffic over the ERSPAN session from the Cisco Nexus 1000V.
Two ERSPAN sessions have been created on the Cisco Nexus 1000V:
Using a different ERSPAN-id for each session provides isolation. A maximum of 66 source and
destination ERSPAN sessions can be configured per switch.
Caveats
ERSPAN can affect overall system performance, depending on the number of ports sending data and
the amount of traffic being generated. It is always a good idea to monitor system performance when
you enable ERSPAN to verify the overall effects on the system.
Note: You must permit protocol type header 0x88BE for ERSPAN Generic Routing Encapsulation (GRE) connections.
Security recommendations
The following are some best practice security recommendations:
Harden data center infrastructure devices and use authentication, authorization, and accounting
for role-based access control and logging.
Authenticate and authorize device access using TACACS+ to a Cisco Access Control Server
(ACS).
Define local usernames and secrets for user accounts in the ADMIN group. The local username
and secret should match that defined in the TACACS server.
Define the ACLs to limit the type of traffic to and from the device from the out-of-band
management network.
Enable network time protocol (NTP) on all devices. NTP synchronizes timestamps for all logging
across the infrastructure, which makes it an invaluable tool for troubleshooting.
For detailed infrastructure security recommendations and best practices, see the Cisco Network
Security Baseline and the following URL:
www.cisco.com/en/US/docs/solutions/Enterprise/Security/Baseline_Security/securebasebook.html
Cisco IPS, Cisco ACE, Cisco ACE WAF, and RSA enVision together provide authorized access control, visibility, and infrastructure protection, complemented by port security and ACLs.
Port security
Cisco Nexus 5000 Series switches provide port security features that reject intrusion attempts and
report these intrusions to the administrator.
Typically, any fibre channel device in a SAN can attach to any SAN switch port and access SAN
services based on zone membership. Port security features prevent unauthorized access to a switch
port in the Cisco Nexus 5000 Series switch.
ACLs
A router ACL (RACL) is an ACL that is applied to an interface with a Layer 3 address assigned to it. It
can be applied to any port that has an IP address, including the following:
Routed interfaces
Loopback interfaces
VLAN interfaces
The security boundary is to permit or deny traffic moving between subnets or networks. The RACL is
supported in hardware and has no effect on performance.
A VLAN access control list (VACL) is an ACL that is applied to a VLAN. It can be applied only to a VLAN, not to any other type of interface. The security boundary is to permit or deny traffic moving between VLANs and to permit or deny traffic within a VLAN. The VACL is supported in hardware.
A port access control list (PACL) is an ACL applied to a Layer 2 switch port interface. It cannot be applied to any other type of interface, and it works only in the ingress direction. The security boundary is to permit or deny traffic moving within a VLAN. The PACL is supported in hardware and has no effect on performance.
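All three ACL types share the same first-match evaluation model, sketched below with an illustrative, simplified rule format.

```python
# Sketch of first-match ACL evaluation, the model behind RACLs, VACLs, and PACLs.
import ipaddress

ACL = [
    ("permit", "10.1.112.0/24"),  # tenant subnet allowed
    ("deny", "0.0.0.0/0"),        # the implicit deny, written explicitly
]

def evaluate(src_ip: str) -> str:
    """Return the action of the first entry that matches the source address."""
    src = ipaddress.ip_address(src_ip)
    for action, prefix in ACL:
        if src in ipaddress.ip_network(prefix):
            return action
    return "deny"  # every ACL ends in an implicit deny

print(evaluate("10.1.112.9"))  # permit
print(evaluate("10.1.99.9"))   # deny
```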
Link redundancy
In addition to physical layer redundancy, the following logical redundancy features help provide a highly reliable and robust environment that keeps the customer's service running with minimal interruption during network failure or maintenance:
Virtual port channel is used across the trusted multi-tenancy network between the different layers.
HSRP is configured at the Nexus 7000 sub-aggregation layer, which provides the backup default
gateway if the primary default gateway fails.
Cisco Nexus 1000V and MAC pinning
The Cisco Nexus 1000V Series Switch uses the MAC pinning feature to provide more granular load-balancing methods and redundancy. Virtual machine NICs can be pinned to an uplink path using port profile definitions. Using port profiles, an administrator defines the preferred uplink path to use. If
these uplinks fail, another uplink is dynamically chosen. If an active physical link goes down, the Cisco
Nexus 1000V Series Switch sends notification packets upstream of a surviving link to inform upstream
switches of the new path required to reach these virtual machines. These notifications are sent to the
Cisco UCS 6100 Series fabric interconnect, which updates its MAC address tables and sends
gratuitous ARP messages on the uplink ports so the data center access layer network can learn the
new path.
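The failover behavior reads naturally as a small sketch; uplink and vNIC names are illustrative assumptions.

```python
# Sketch of MAC pinning with dynamic failover to a surviving uplink.
UPLINKS = {"uplink1": True, "uplink2": True}   # True = link is up
PINNING = {"vm-web-nic": "uplink1", "vm-db-nic": "uplink2"}

def active_uplink(vnic: str) -> str:
    """Use the pinned uplink while it is up, else fail over dynamically."""
    preferred = PINNING[vnic]
    if UPLINKS.get(preferred):
        return preferred
    return next(u for u, up in UPLINKS.items() if up)  # surviving link

UPLINKS["uplink1"] = False            # the active physical link goes down
print(active_uplink("vm-web-nic"))    # -> uplink2
```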
Nexus 1000V VSM redundancy
Define one Virtual Supervisor Module (VSM) as the primary module and the other as the secondary module. The two VSMs run as an active-standby pair, similar to supervisors in a physical chassis, and provide high-availability switch management. The Cisco Nexus 1000V Series VSM is not in the data path, so even if both VSMs are powered down, the Virtual Ethernet Module (VEM) is not affected and continues to forward traffic. Each VSM in an active-standby pair must run on a separate VMware ESXi host. This setup helps ensure high availability even if one VMware ESXi server fails.
SSHv2
Creating and distributing policies and controls and mapping them to regulations and internal
compliance requirements
Assessing whether the controls are actually in place and working, and remediating those that
are not
Figure 76. RSA Solution for Cloud Security and Compliance
Using this solution gives the service provider a means to ensure and, very importantly, prove the
compliance of the virtualized infrastructure to authoritative sources such as PCI-DSS, COBIT, NIST,
HIPAA, and NERC.
In addition to this information classification, RSA Archer integrates with RSA enVision as its collection entity, pulling data from sources such as data loss prevention, anti-virus, and intrusion detection/prevention systems to bring these data points into the centralized governance dashboards.
Conclusion
The six foundational elements of secure separation, service assurance, security and compliance,
availability and data protection, tenant management and control, and service provider management
and control form the basis of the Vblock System trusted multi-tenancy design framework.
The following table summarizes the technologies used to ensure trusted multi-tenancy at each layer of
the Vblock System.
Trusted multi-tenancy element | Technologies (across the compute, storage, network, and security layers)
Secure separation | UCS organizational groups; UCS VLANs; UCS VSANs; VLAN segmentation; VSAN segmentation; zoning; VRF; VMware vCloud Director; VMware vShield Apps and Edge; discrete, separate instances of RSA Archer eGRC and RSA enVision for the service provider and for each tenant as needed
Service assurance | UCS quality of service; port channels; server pools; VMware vCloud Director; VMware High Availability; VMware Fault Tolerance; VMware Distributed Resource Scheduler (DRS); VMware vSphere resource pools; EMC Unisphere Quality of Service Manager; EMC Fully Automated Storage Tiering (FAST); pools; Nexus 1000/5000/7000 quality of service
Security and compliance | UCS RBAC; LDAP; vCenter Administrator group; RADIUS or TACACS+; authentication with LDAP or Active Directory; ASA firewalls; Cisco Intrusion Prevention System (IPS); Cisco Application Control Engine; port security; ACLs
Availability and data protection | Fabric interconnect clustering; service profile dynamic mobility; VMware vSphere High Availability; VMware vMotion; VMware vCenter Heartbeat; VMware vCloud Director cells; VMware vCenter Site Recovery Manager (SRM); EMC PowerPath Migration Enabler; Cisco Nexus OS virtual port channels (vPC); real-time correlation and alerting through integration of systems with RSA enVision
Tenant management and control | VMware vCloud Director; VMware vCenter; VMware vShield Manager; VMware vCenter Chargeback; EMC Unisphere; Cisco Nexus 1000V; RSA enVision
Next steps
To learn more about this and other solutions, contact a VCE representative or visit www.vce.com.
Acronym glossary
The following table defines acronyms used throughout this guide.
Acronym | Definition
ABE | Access Based Enumeration
ACE | Application Control Engine
ACL | access control list
ACS | Access Control Server
AD | Active Directory
AMP | Advanced Management Pod
API | application programming interface
CDP | continuous data protection
CHAP | Challenge Handshake Authentication Protocol
CLI | command line interface
CNA | converged network adapter
CoS | class of service
CRR | continuous remote replication
DR | disaster recovery
DRS | Distributed Resource Scheduler
EFD | enterprise flash drive
ERSPAN | Encapsulated Remote Switched Port Analyzer
FAST | Fully Automated Storage Tiering
FC | Fibre Channel
FCoE | Fibre Channel over Ethernet
FWSM | Firewall Services Module
GbE | Gigabit Ethernet
HA | high availability
HBA | host bus adapter
HSRP | Hot Standby Routing Protocol
IaaS | Infrastructure as a Service
IDS | intrusion detection system
IPS | intrusion prevention system
IPsec | Internet Protocol Security
LACP | Link Aggregation Control Protocol
LUN | logical unit number
MAC | Media Access Control
NAM | Network Analysis Module
NAT | network address translation
NDMP | Network Data Management Protocol
NPV | N Port Virtualization
NTP | Network Time Protocol
PAgP | Port Aggregation Protocol
PACL | port access control list
PCI-DSS | Payment Card Industry Data Security Standard
PPME | PowerPath Migration Enabler
QoS | quality of service
RACL | router access control list
RBAC | role-based access control
SAN | storage area network
SLA | service level agreement
SPOF | single point of failure
SRM | Site Recovery Manager
SSH | Secure Shell
SSL | Secure Sockets Layer
TMT | trusted multi-tenancy
UIM/P | Unified Infrastructure Manager/Provisioning
UCS | Unified Computing System
UQM | Unisphere Quality of Service Manager
VACL | VLAN access control list
vCD | vCloud Director
vDC | virtual data center
VDC | virtual device context
VDM | virtual data mover
VEM | Virtual Ethernet Module
vHBA | virtual host bus adapter
VIC | virtual interface card
VIP | virtual IP
VLAN | virtual local area network
VM | virtual machine
VMDK | Virtual Machine Disk
VMFS | Virtual Machine File System
vNIC | virtual network interface card
vPC | virtual port channel
VRF | virtual routing and forwarding
VSAN | virtual storage area network
vSM | vShield Manager
VSM | Virtual Supervisor Module
WAF | Web application firewall
ABOUT VCE
VCE, formed by Cisco and EMC with investments from VMware and Intel, accelerates the adoption of converged infrastructure and
cloud-based computing models that dramatically reduce the cost of IT while improving time to market for our customers. VCE,
through the Vblock Systems, delivers the industry's only fully integrated and fully virtualized cloud infrastructure system. VCE
solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and
application development environments, allowing customers to focus on business innovation instead of integrating, validating, and
managing IT infrastructure.
For more information, go to www.vce.com.
Copyright 2013 VCE Company, LLC. All Rights Reserved. Vblock and the VCE logo are registered trademarks or trademarks of VCE Company, LLC and/or its
affiliates in the United States or other countries. All other trademarks used herein are the property of their respective owners.