www.vce.com

VBLOCK SOLUTION FOR TRUSTED MULTI-TENANCY: DESIGN GUIDE
Version 2.0
March 2013


Copyright 2013 VCE Company, LLC. All Rights Reserved.

VCE believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.


Contents

Introduction
  About this guide
  Audience
  Scope
  Feedback
Trusted multi-tenancy foundational elements
  Secure separation
  Service assurance
  Security and compliance
  Availability and data protection
  Tenant management and control
  Service provider management and control
Technology overview
  Management
    Advanced Management Pod
    EMC Ionix Unified Infrastructure Manager/Provisioning
  Compute technologies
    Cisco Unified Computing System
    VMware vSphere
    VMware vCenter Server
    VMware vCloud Director
    VMware vCenter Chargeback
    VMware vShield
  Storage technologies
    EMC Fully Automated Storage Tiering
    EMC FAST Cache
    EMC PowerPath/VE
    EMC Unified Storage
    EMC Unisphere Management Suite
    EMC Unisphere Quality of Service Manager
  Network technologies
    Cisco Nexus 1000V Series
    Cisco Nexus 5000 Series
    Cisco Nexus 7000 Series
    Cisco MDS
    Cisco Data Center Network Manager
  Security technologies
    RSA Archer eGRC
    RSA enVision
Design framework
  End-to-end topology
    Virtual machine and cloud resources layer
    Virtual access layer/vSwitch
    Storage and SAN layer
    Compute layer
    Network layers
  Logical topology
    Tenant traffic flow representation
    VMware vSphere logical framework overview
  Logical design
    Cloud management cluster logical design
    vSphere cluster specifications
    Host logical design specifications for cloud management cluster
    Host logical configuration for resource groups
    VMware vSphere cluster host design specification for resource groups
    Security
  Tenant anatomy overview
Design considerations for management and orchestration
  Configuration
  Enabling services
    Creating a service offering
    Provisioning a service
Design considerations for compute
  Design considerations for secure separation
    Cisco UCS
    VMware vCloud Director
  Design considerations for service assurance
    Cisco UCS
    VMware vCloud Director
  Design considerations for security and compliance
    Cisco UCS
    VMware vCloud Director
    VMware vCenter Server
  Design considerations for availability and data protection
    Cisco UCS
    Virtualization
  Design considerations for tenant management and control
    VMware vCloud Director
  Design considerations for service provider management and control
    Virtualization
Design considerations for storage
  Design considerations for secure separation
    Segmentation by VSAN and zoning
    Separation of data at rest
    Address space separation
    Separation of data access
  Design considerations for service assurance
    Dedication of runtime resources
    Quality of service control
    EMC VNX FAST VP
    EMC FAST Cache
    EMC Unisphere Management Suite
    VMware vCloud Director
  Design considerations for security and compliance
    Authentication with LDAP or Active Directory
    VNX and RSA enVision
  Design considerations for availability and data protection
    High availability
    Local and remote data protection
  Design considerations for service provider management and control
Design considerations for networking
  Design considerations for secure separation
    VLANs
    Virtual routing and forwarding
    Virtual device context
    Access control list
  Design considerations for service assurance
  Design considerations for security and compliance
    Data center firewalls
    Services layer
    Cisco Application Control Engine
    Cisco Intrusion Prevention System
    Cisco ACE, Cisco ACE Web Application Firewall, Cisco IPS traffic flows
    Access layer
    Security recommendations
    Threats mitigated
    Vblock Systems security features
  Design considerations for availability and data protection
    Physical redundancy design consideration
  Design considerations for service provider management and control
Design considerations for additional security technologies
  Design considerations for secure separation
    RSA Archer eGRC
    RSA enVision
  Design considerations for service assurance
    RSA Archer eGRC
    RSA enVision
  Design considerations for security and compliance
    RSA Archer eGRC
    RSA enVision
  Design considerations for availability and data protection
    RSA Archer eGRC
    RSA enVision
  Design considerations for tenant management and control
    RSA Archer eGRC
    RSA enVision
  Design considerations for service provider management and control
    RSA Archer eGRC
    RSA enVision
Conclusion
Next steps
Acronym glossary


Introduction
The Vblock Solution for Trusted Multi-Tenancy (TMT) Design Guide describes how Vblock
Systems allow enterprises and service providers to rapidly build virtualized data centers that support
the unique challenges of provisioning Infrastructure as a Service (IaaS) to multiple tenants.
The trusted multi-tenancy solution comprises six foundational elements that address the unique
requirements of the IaaS cloud service model:

Secure separation

Service assurance

Security and compliance

Availability and data protection

Tenant management and control

Service provider management and control

The trusted multi-tenancy solution deploys compute, storage, network, security, and management
Vblock System components that address each element while offering service providers and tenants
numerous benefits. The following table summarizes these benefits.
Provider benefits | Tenant benefits
Lower cost-to-serve | Cost savings transferred to tenants
Standardized offerings | Faster incident resolution with standardized services
Easier growth and scale using standard infrastructures | Secure isolation of resources and data
More predictable planning around capacity and workloads | Usage-based services model, such as backup and storage

About this guide


This design guide explains how service providers can use specific products in the compute, network,
storage, security, and management component layers of Vblock Systems to support the six
foundational elements of trusted multi-tenancy. By meeting these objectives, Vblock Systems offer
service providers and enterprises an ideal business model and IT infrastructure to securely provision
IaaS to multiple tenants.
This guide demonstrates processes for:

Designing and managing Vblock Systems to deliver infrastructure multi-tenancy and service
multi-tenancy

Managing and operating Vblock Systems securely and reliably


The specific goal of this guide is to describe the design of and rationale behind the solution. The guide
looks at each layer of the Vblock System and shows how to achieve trusted multi-tenancy at each
layer. The design highlights many issues that must be addressed prior to deployment, as no two
environments are alike.

Audience
The target audience for this guide is highly technical, including technical consultants, professional
services personnel, IT managers, infrastructure architects, partner engineers, sales engineers, and
service providers deploying a trusted multi-tenancy environment with leading technologies from VCE.

Scope
Trusted multi-tenancy can be used to offer dedicated IaaS (compute, storage, network, management,
and virtualization resources) or leverage single instances of services and applications for multiple
consumers. This guide only addresses design considerations for offering dedicated IaaS to multiple
tenants.
While this design guide describes how Vblock Systems can be designed, operated, and managed to
support trusted multi-tenancy, it does not provide specific configuration information, which must be
specifically considered for each unique deployment.
In this guide, the terms Tenant and Consumer refer to the consumers of the services provided by a
service provider.

Feedback
To suggest documentation changes and provide feedback on this paper, send email to
docfeedback@vce.com. Include the title of this paper, the name of the topic to which your comment
applies, and your feedback.


Trusted multi-tenancy foundational elements


The trusted multi-tenancy solution comprises six foundational elements that address the unique
requirements of the IaaS cloud service model:

Secure separation

Service assurance

Security and compliance

Availability and data protection

Tenant management and control

Service provider management and control

Figure 1. Six elements of the Vblock Solution for Trusted Multi-Tenancy


Secure separation
Secure separation refers to the effective segmentation and isolation of tenants and their assets within
the multi-tenant environment. Adequate secure separation ensures that the resources of existing
tenants remain untouched and the integrity of the applications, workloads, and data remains
uncompromised when the service provider provisions new tenants. Each tenant might have access to
different amounts of network, compute, and storage resources in the converged stack. The tenant
sees only those resources allocated to them.
From the standpoint of the service provider, secure separation requires the systematic deployment of
various security control mechanisms throughout the infrastructure to ensure the confidentiality,
integrity, and availability of tenant data, services, and applications. The logical segmentation and
isolation of tenant assets and information is essential for providing confidentiality in a multi-tenant
environment. In fact, ensuring the privacy and security of each tenant becomes a key design
requirement in the decision to adopt cloud services.

Service assurance
Service assurance plays a vital role in providing tenants with consistent, enforceable, and reliable
service levels. Unlike physical resources, virtual resources are highly scalable and easy to allocate
and reallocate on demand. In a multi-tenant virtualized environment, the service provider prioritizes
virtual resources to accommodate the growth and changing business needs of tenants. Service level
agreements (SLA) define the level of service agreed to by the tenant and service provider. The
service assurance element of trusted multi-tenancy provides technologies and methods to ensure that
tenants receive the agreed-upon level of service.
Various methods are available to deliver consistent SLAs across the network, compute, and storage
components of the Vblock System, including:

Quality of service in the Cisco Unified Computing System (UCS) and Cisco Nexus platforms

EMC Symmetrix Quality of Service tools

EMC Unisphere Quality of Service Manager (UQM)

VMware Distributed Resource Scheduler (DRS)

Without the correct mix of service assurance features and capabilities, it can be difficult to maintain
uptime, throughput, quality of service, and availability SLAs.
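
For example, a minimal quality of service sketch in Cisco NX-OS syntax (the class name, CoS value, and qos-group number are illustrative assumptions, not configuration taken from this guide) might classify a premium tenant's traffic so that a corresponding queuing policy can guarantee it bandwidth:

    class-map type qos match-any GOLD-TENANT
      match cos 5
    policy-map type qos GOLD-TENANT-POLICY
      class GOLD-TENANT
        set qos-group 3

A matching queuing policy on the Nexus switches would then reserve a bandwidth percentage for that qos-group; combined with mechanisms such as VMware DRS on the compute side, this is one way the uptime and throughput targets described above can be enforced.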


Security and compliance


Security and compliance refers to the confidentiality, integrity, and availability of each tenant's
environment at every layer of the trusted multi-tenancy stack. Trusted multi-tenancy ensures security
and compliance using technologies like identity management and access control, encryption and key
management, firewalls, malware protection, and intrusion prevention. This is a primary concern for
both service provider and tenant.
The trusted multi-tenancy solution ensures that all activities performed in the provisioning,
configuration, and management of the multi-tenant environment, as well as day-to-day activities and
events for individual tenants, are verified and continuously monitored. It is also important that all
operational events are recorded and that these records are available as evidence during audits.
As regulatory requirements expand, the private cloud environment will become increasingly subject to
security and compliance standards, such as the Payment Card Industry Data Security Standard (PCI DSS), HIPAA, Sarbanes-Oxley (SOX), and the Gramm-Leach-Bliley Act (GLBA). With the proper tools,
achieving and demonstrating compliance is not only possible, but it can often become easier than in a
non-virtualized environment.

Availability and data protection


Resources and data must be available for use by the tenant. High availability means that resources
such as network bandwidth, memory, CPU, or data storage are always online and available to users
when needed. Redundant systems, configurations, and architecture can minimize or eliminate points
of failure that adversely affect availability to the tenant.
Data protection is a key ingredient in a resilient architecture. Cloud computing imposes a trade-off
between resource cost and high performance, and increasingly robust security and data classification
requirements are essential tools for balancing that equation. Enterprises need to know what data is important and
where it is located as prerequisites to making performance cost-benefit decisions, as well as ensuring
focus on the most critical areas for data loss prevention procedures.

Tenant management and control


In every cloud services model there are elements of control that the service provider delegates to the
tenant. The tenant's administrative, management, monitoring, and reporting capabilities need to be
restricted to the delegated resources. Reasons for delegating control include convenience, new
revenue opportunities, security, compliance, or tenant requirements. In all cases, the goal of the trusted
multi-tenancy model is to allow for and simplify the management, visibility, and reporting of this
delegation.


Tenants should have control over relevant portions of their service. Specifically, tenants should be
able to:

Provision allocated resources

Manage the state of all virtualized objects

View change management status for the infrastructure component

Add and remove administrative contacts

Request more services as needed

In addition, tenants taking advantage of data protection or data backup services should be able to
manage this capability on their own, including setting schedules and backup types, initiating jobs, and
running reports.
This tenant-in-control model allows tenants to dynamically change the environment to suit their
workloads as resource requirements change.

Service provider management and control


Another goal of trusted multi-tenancy is to simplify management of resources at every level of the
infrastructure and to provide the functionality to provision, monitor, troubleshoot, and charge back the
resources used by tenants. Management of multi-tenant environments comes with challenges, from
reporting and alerting to capacity management and tenant control delegation. The Vblock System
helps address these challenges by providing scalable, integrated management solutions inherent to
the infrastructure, and a rich, fully developed application programming interface (API) stack for adding
additional service provider value.
Providers of infrastructure services in a multi-tenant environment require comprehensive control and
complete visibility of the shared infrastructure to provide the availability, data protection, security, and
service levels expected by tenants. The ability to control, manage, and monitor resources at all levels
of the infrastructure requires a dynamic, efficient, and flexible design that allows the service provider to
access, provision, and then release computing resources from a shared pool quickly, easily, and
with minimal effort.


Technology overview
The Vblock System from VCE is the world's most advanced converged infrastructure: one that
optimizes infrastructure, lowers costs, secures the environment, simplifies management, speeds
deployment, and promotes innovation. The Vblock System is designed as one architecture that spans
the entire portfolio, includes best-in-class components, offers a single point of contact from initiation
through support, and provides the industry's most robust range of configurations.
Vblock Systems provide production ready (fully tested) virtualized infrastructure components, including
industry-leading technologies from Cisco, EMC, and VMware. Vblock Systems are designed and built
to satisfy a broad range of specific customer implementation requirements. To design trusted multi-tenancy, you need to understand each layer (compute, network, and storage) of the Vblock System
architecture. Figure 2 provides an example of Vblock System architecture.

Figure 2. Example of Vblock System architecture


Note: Cisco Nexus 7000 is not part of the Vblock System architecture.

This section describes the technologies at each layer of the Vblock System addressed in this guide to
achieve trusted multi-tenancy.


Management
Management technologies include Advanced Management Pod (AMP) and EMC Ionix Unified
Infrastructure Manager/Provisioning (UIM/P) (optional).

Advanced Management Pod


Vblock Systems include an AMP that provides a single management point for the Vblock System. It
enables the following benefits:

Monitors and manages Vblock System health, performance, and capacity

Provides fault isolation for management

Eliminates resource overhead on the Vblock System

Provides a clear demarcation point for remote operations

Two versions of the AMP are available: a mini-AMP and a high-availability version (HA AMP). A high-availability AMP is recommended.
For more information on AMP, refer to the Vblock Systems Architecture Overview documentation
located at www.vce.com/vblock.

EMC Ionix Unified Infrastructure Manager/Provisioning


EMC Ionix UIM/P can be used to provide automated provisioning capabilities for the Vblock System in
a trusted multi-tenancy environment by combining provisioning with configuration, change, and
compliance management. With UIM/P, you can speed service delivery and reduce errors with policy-based, automated converged infrastructure provisioning. Key features include the ability to:

Easily define and create infrastructure service profiles to match business requirements

Separate planning from execution to optimize senior IT technical staff

Respond to dynamic business needs with infrastructure service life cycle management

Maintain Vblock System compliance through policy-based management

Integrate with VMware vCenter and VMware vCloud Director for extended management
capabilities


Compute technologies
Within the computing infrastructure of the Vblock System, multi-tenancy concerns at multiple levels
must be addressed, including the UCS server infrastructure and the VMware vSphere Hypervisor.

Cisco Unified Computing System


The Cisco UCS is a next-generation data center platform that unites network, compute, storage, and
virtualization into a cohesive system designed to reduce total cost of ownership and increase business
agility. The system integrates a low-latency, lossless, 10 Gb Ethernet (GbE) unified network fabric with
enterprise class x86 architecture servers. The system is an integrated, scalable, multi-chassis platform
in which all resources participate in a unified management domain. Whether it has only one server or
many servers with thousands of virtual machines (VM), the Cisco UCS is managed as a single
system, thereby decoupling scale from complexity.
Cisco UCS Manager provides unified, centralized, embedded management of all software and
hardware components of the Cisco UCS across multiple chassis and thousands of virtual machines.
The entire UCS is managed as a single logical entity through an intuitive graphical user interface
(GUI), a command-line interface (CLI), or an XML API. UCS Manager delivers greater agility and
scale for server operations while reducing complexity and risk. It provides flexible role- and policy-based management using service profiles and templates, and it facilitates processes based on IT
Infrastructure Library (ITIL) concepts.
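
As an illustration of the XML API mentioned above, the following minimal Python sketch logs in to UCS Manager, lists the service profiles (class lsServer) defined in the management domain, and logs out. The manager address and credentials are illustrative assumptions, and error handling is omitted; equivalent operations are available through the UCS Manager GUI and CLI.

    import requests
    import xml.etree.ElementTree as ET

    UCSM = "https://ucsm.example.com/nuova"   # UCS Manager XML API endpoint (address is an assumption)

    # Log in and obtain a session cookie
    login = requests.post(UCSM, data='<aaaLogin inName="admin" inPassword="password"/>', verify=False)
    cookie = ET.fromstring(login.text).attrib["outCookie"]

    # Resolve all service-profile (lsServer) objects in the management domain
    query = '<configResolveClass cookie="%s" classId="lsServer" inHierarchical="false"/>' % cookie
    resp = requests.post(UCSM, data=query, verify=False)
    for sp in ET.fromstring(resp.text).iter("lsServer"):
        print(sp.attrib.get("dn"), sp.attrib.get("assocState"))

    # Close the session
    requests.post(UCSM, data='<aaaLogout inCookie="%s"/>' % cookie, verify=False)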

VMware vSphere


VMware vSphere is a complete, scalable, and powerful virtualization platform, delivering the
infrastructure and application services that organizations need to transform their information
technology and deliver IT as a service. VMware vSphere is a host operating system that runs directly
on the Cisco UCS infrastructure and fully virtualizes the underlying hardware, allowing multiple virtual
machine guest operating systems to share the UCS physical resources.

VMware vCenter Server


VMware vCenter Server is a simple and efficient way to manage VMware vSphere. It provides unified
management of all the hosts and virtual machines in your data center from a single console with
aggregate performance monitoring of clusters, hosts and virtual machines. VMware vCenter Server
gives administrators deep insight into the status and configuration of clusters, hosts, virtual machines,
storage, the guest operating system, and other critical components of a virtual infrastructure. It plays a
key role in helping achieve secure separation, availability, tenant management and control, and
service provider management and control.

VMware vCloud Director


VMware vCloud Director gives customers the ability to build secure private clouds that dramatically
increase data center efficiency and business agility. With VMware vSphere, VMware vCloud Director
delivers cloud computing for existing data centers by pooling virtual infrastructure resources and
delivering them to users as catalog-based services.


VMware vCenter Chargeback


VMware vCenter Chargeback is an end-to-end metering and cost reporting solution for virtual
environments that enables accurate cost measurement, analysis, and reporting of virtual machines
using VMware vSphere. Virtual machine resource consumption data is collected from VMware
vCenter Server. Integration with VMware vCloud Director also enables automated chargeback for
private cloud environments.

VMware vShield


The VMware vShield family of security solutions provides virtualization-aware protection for virtual
data centers and cloud environments. VMware vShield products strengthen application and data
security, enable trusted multi-tenancy, improve visibility and control, and accelerate IT compliance
efforts across the organization.
VMware vShield products include vShield App and vShield Edge. vShield App provides firewall
capability between virtual machines by placing a firewall filter on every virtual network adapter. It
allows for easy application of firewall policies. vShield Edge virtualizes data center perimeters and
offers firewall, VPN, Web load balancer, NAT, and DHCP services.

Storage technologies
The features of multi-tenancy offerings can be combined with standard security methods such as
storage area network (SAN) zoning and Ethernet virtual local area networks (VLAN) to segregate,
control, and manage storage resources among the infrastructure tenants.

EMC Fully Automated Storage Tiering


EMC Fully Automated Storage Tiering (FAST) automates the movement and placement of data
across storage resources as needed. FAST enables continuous optimization of your applications by
eliminating trade-offs between capacity and performance, while simultaneously lowering cost and
delivering higher service levels.
EMC VNX FAST VP
EMC VNX FAST VP is a policy-based auto-tiering solution that efficiently utilizes storage tiers by
moving slices of colder data to high-capacity disks. It increases performance by keeping hotter slices
of data on performance drives.
In a VMware vCloud environment, FAST VP enables providers to offer a blended storage offering,
reducing the cost of a traditional single-type offering while allowing for a wider range of customer use
cases. This helps accommodate a larger cross-section of virtual machines with different performance
characteristics.


EMC FAST Cache
EMC FAST Cache is an industry-leading feature supported by Vblock Systems. It extends the EMC
VNX array's read-write cache and ensures that unpredictable I/O spikes are serviced at enterprise
flash drive (EFD) speeds, which is of particular benefit in a VMware vCloud Director environment.
Multiple virtual machines on multiple virtual machine file system (VMFS) data stores spread across
multiple hosts can generate a very random I/O pattern, placing stress on both the storage processors
as well as the DRAM cache. FAST Cache, a standard feature on all Vblock Systems, mitigates the
effects of this kind of I/O by extending the DRAM cache for reads and writes, increasing the overall
cache performance of the array, improving I/O during usage spikes, and dramatically reducing the
overall number of dirty pages and cache misses.
Because FAST Cache is aware of EFD disk tiers available in the array, FAST VP and FAST Cache
work together to improve array performance. Data that has been promoted to an EFD tier is never
cached inside FAST Cache, ensuring that both options are leveraged in the most efficient way.

EMC PowerPath/VE


EMC PowerPath/VE delivers PowerPath multipathing features to optimize storage access in VMware
vSphere virtual environments by removing the administrative overhead associated with load balancing
and failover. Use PowerPath/VE to standardize path management across heterogeneous physical
and virtual environments. PowerPath/VE enables you to automate optimal server, storage, and path
utilization in a dynamic virtual environment.
PowerPath/VE works with VMware vSphere ESXi as a multipathing plug-in that provides enhanced
path management capabilities to ESXi hosts. It installs as a kernel module on the vSphere host and
plugs in to the vSphere I/O stack framework to bring the advanced multipathing capabilities of
PowerPath (dynamic load balancing and automatic failover) to the VMware vSphere platform.

EMC Unified Storage


The EMC Unified Storage system is a highly available architecture capable of five nines availability.
The Unified Storage arrays achieve five nines availability by eliminating single points of failure
throughout the physical storage stack, using technologies such as dual-ported drives, hot spares,
redundant back-end loops, redundant front-end and back-end ports, dual storage processors,
redundant fans and power supplies, and cache battery backup.

EMC Unisphere Management Suite


EMC Unisphere provides a simple, integrated experience for managing EMC Unified Storage through
both a storage and VMware lens. Key features include a Web-based management interface to
discover, monitor, and configure EMC Unified Storage; self-service support ecosystem to gain quick
access to realtime online support tools; automatic event notification to proactively manage critical
status changes; and customizable dashboard views and reporting.


EMC Unisphere Quality of Service Manager


EMC Unisphere Quality of Service (QoS) Manager enables dynamic allocation of storage resources to
meet service level requirements for critical applications. QoS Manager monitors storage system
performance on an appliance-by-application basis, providing a logical view of application performance
on the storage system. In addition to displaying real-time data, performance data can be archived for
offline trending and data analysis.

Network technologies
Multi-tenancy concerns must be addressed at multiple levels within the network infrastructure of the
Vblock System. Various methods, including zoning and VLANs, can enforce network separation.
Internet Protocol Security (IPsec) also provides application-independent network encryption at the IP
layer for additional security.

Cisco Nexus 1000V Series


The Cisco Nexus 1000V is a software switch embedded in the software kernel of VMware vSphere.
The Nexus 1000V provides virtual machine-level network visibility, isolation, and security for VMware
server virtualization. With the Nexus 1000V Series, virtual machines can leverage the same network
configuration, security policy, diagnostic tools, and operational models as their physical server
counterparts attached to dedicated physical network ports. Virtualization administrators can access
predefined network policies that follow mobile virtual machines to ensure proper connectivity, saving
valuable resources for virtual machine administration.
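
The sketch below (the VLAN number and profile name are illustrative assumptions, not taken from this guide) shows the kind of predefined vEthernet port profile a network administrator might publish for a tenant's virtual machines. Because the profile is pushed to VMware vCenter as a port group, its policy follows a virtual machine when it migrates between hosts.

    port-profile type vethernet Tenant-A-Web
      vmware port-group
      switchport mode access
      switchport access vlan 101
      no shutdown
      state enabled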

Cisco Nexus 5000 Series


Cisco Nexus 5000 Series switches are data center class, high performance, standards-based
Ethernet and Fibre Channel over Ethernet (FCoE) switches that enable the consolidation of LAN,
SAN, and cluster network environments onto a single unified fabric.

Cisco Nexus 7000 Series


Cisco Nexus 7000 Series switches are modular switching systems designed for use in the data
center. Nexus 7000 switches deliver the scalability, continuous systems operation, and transport
flexibility required for 10 Gb/s Ethernet networks today. In addition, the system architecture is capable
of supporting future 40 Gb/s Ethernet, 100 Gb/s Ethernet, and unified I/O modules.

Cisco MDS
The Cisco MDS 9000 Series helps build highly available, scalable storage networks with advanced
security and unified management. The Cisco MDS 9000 family facilitates secure separation at the
network layer with virtual storage area networks (VSAN) and zoning. VSANs help achieve higher
security and greater stability in fibre channel (FC) fabrics by providing isolation among devices that are
physically connected to the same fabric. The zoning service within a fibre channel fabric provides
security between devices sharing the same fabric.
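
A minimal sketch of this separation on an MDS switch is shown below; the VSAN number, zone name, and WWPNs are illustrative assumptions, not values from this guide. Each tenant fabric gets its own VSAN, and zones restrict which initiator and target ports can communicate within it.

    vsan database
      vsan 101 name TENANT-A
      vsan 101 interface fc1/1
    zone name TENANT-A-ESX-VNX vsan 101
      member pwwn 20:00:00:25:b5:aa:00:01
      member pwwn 50:06:01:60:3e:a0:00:11
    zoneset name FABRIC-A vsan 101
      member TENANT-A-ESX-VNX
    zoneset activate name FABRIC-A vsan 101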


Cisco Data Center Network Manager


Cisco Data Center Network Manager provides an effective tool to manage the Cisco data center
infrastructure and actively monitor the SAN and LAN.

Security technologies
RSA Archer eGRC and RSA enVision security technologies can be used to achieve security and
compliance.

RSA Archer eGRC


The RSA Archer eGRC Platform for enterprise governance, risk, and compliance has the industry's
most comprehensive library of policies, control standards, procedures, and assessments mapped to
current global regulations and industry guidelines. The flexibility of the RSA Archer framework,
coupled with this library, provides the service providers and tenants in a trusted multi-tenant
environment the mechanism to successfully implement a governance, risk, and compliance program
over the Vblock System. This addresses both the components and technologies comprising the
Vblock System and the virtualized services and resources it hosts.
Organizations can deploy the RSA Archer eGRC Platform in a variety of configurations, based on the
expected user load, utilization, and availability requirements. As business needs evolve, the
environment can adapt and scale to meet the new demands. Regardless of the size and solution
architecture, the RSA Archer eGRC Platform consists of three logical layers: a .NET Web-enabled
interface, the application layer, and a Microsoft SQL database backend.

RSA enVision


The RSA enVision platform is a security information and event management (SIEM) solution that
offers a scalable, distributed architecture to collect, store, manage, and correlate event logs generated
from all the components comprising the Vblock System, from the physical devices and software
products to the management and orchestration and security solutions.
By seamlessly integrating with RSA Archer eGRC, RSA enVision provides both service providers and
tenants a powerful solution to collect and correlate raw data into actionable information. Not only does
RSA enVision satisfy regulatory compliance requirements, it helps ensure stability and integrity
through robust incident management capabilities.


Design framework
This section provides the following information:

End-to-end topology

Logical topology

Logical design details

Overview of tenant anatomy

End-to-end topology
Secure separation creates trusted zones that shield each tenant's applications, virtual machines,
compute, network, and storage from compromise and resource effects caused by adjacent tenants
and external threats. The solution framework presented in this guide considers additional technologies
that comprehensively provide appropriate in-depth defense. A combination of protective, detective,
and reactive controls and solid operational processes are required to deliver protection against
internal and external threats.
Key layers include:

Virtual machine and cloud resources (VMware vSphere and VMware vCloud Director)

Virtual access/vSwitch (Cisco Nexus 1000V)

Storage and SAN (Cisco MDS and EMC storage)

Compute (Cisco UCS)

Access and aggregation (Nexus 5000 and Nexus 7000)

Figure 3 illustrates the design framework.


Figure 3. Trusted multi-tenancy design framework

Virtual machine and cloud resources layer


VMware vSphere and VMware vCloud Director are used in the cloud layer to accelerate the delivery
and consumption of IT services while maintaining the security and control of the data center.
VMware vCloud Director enables the consolidation of virtual infrastructure across multiple clusters, the
encapsulation of application services as portable vApps, and the deployment of those services on demand with isolation and control.


Virtual access layer/vSwitch


Cisco Nexus 1000V distributed virtual switch acts as the virtual network access layer for the virtual
machines. Edge LAN policies such as quality of service marking and vNIC ACLs are implemented at
this layer in Nexus 1000V port-profiles.
The following table describes the virtual access layer.
Component | Description
One data center | One primary Nexus 1000V Virtual Supervisor Module (VSM) and one secondary Nexus 1000V Virtual Supervisor Module
VMware ESXi servers | Each running an instance of the Nexus 1000V Virtual Ethernet Module (VEM)
Tenant | Multiple virtual machines, which have different applications such as Web server, database, and so forth, for each tenant

Storage and SAN layer


The trusted multi-tenancy design framework is based on the use of storage arrays supporting fibre
channel connectivity. The storage arrays connect through MDS SAN switches to the UCS 6120
switches in the access layer. Several layers of security (including zoning, access controls at the guest
operating system and ESXi level, and logical unit number (LUN) masking within the VNX) tightly
control access to data on the storage system.
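
As a hypothetical illustration of the LUN masking step (using the classic VNX naviseccli storage-group commands; the storage processor address, host, LUN, and group names are assumptions and should be verified against the array documentation), a dedicated tenant storage group might be built as follows:

    naviseccli -h <sp-a-address> storagegroup -create -gname Tenant_A_SG
    naviseccli -h <sp-a-address> storagegroup -addhlu -gname Tenant_A_SG -hlu 0 -alu 25
    naviseccli -h <sp-a-address> storagegroup -connecthost -host esxi-host-01 -gname Tenant_A_SG -o

Only the hosts connected to the storage group can see the LUNs it contains, which keeps one tenant's data invisible to the ESXi hosts serving other tenants.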

Compute layer
The following table provides an example of the components of a multi-tenant environment virtual
compute farm.
Note: A Vblock System may have more resources than what is described in the following table.

Component | Description
Three UCS 5108 chassis | 11 UCS B200 servers (dual quad-core Intel Xeon X5570 CPU at 2.93 GHz and 96 GB RAM) and four UCS B440 servers (four Intel Xeon 7500 series processors and 32 dual in-line memory module slots with 256 GB memory)
15 servers (4 clusters) | Organized into VMware ESXi clusters. Each server has two 10 GbE Cisco VIC converged network adapters (CNA) and is dual-attached to the UCS 6100 fabric interconnects. The CNAs provide LAN and SAN connectivity to the servers, which run the VMware ESXi 5.0 hypervisor, and LAN and SAN services to the hypervisor.

Network layers


Access layer
Nexus 5000 is used at the access layer and connects to the Cisco UCS 6120s. In the Layer 2 access
layer, redundant pairs of Cisco UCS 6120 switches aggregate VLANs from the Nexus 1000V
distributed virtual switch. FCoE SAN traffic from virtual machines is handed off as FC traffic to a pair of
MDS SAN switches, and then to a pair of storage array controllers. FC expansion modules in the UCS
6120 switch provide SAN interconnects to dual SAN fabrics. The UCS 6120 switches are in N Port
virtualization (NPV) mode to interoperate with the SAN fabric.
Aggregation layer
Nexus 7000 is used at the aggregation layer. The virtual device context (VDC) feature in the Nexus
7000 separates it into sub-aggregation and aggregation virtual device contexts for Layer 3 routing.
The aggregation virtual device context connects to the core network to route the internal data center
traffic to the Internet and from the Internet back to the internal data center.
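
A minimal sketch of this carving (context names and interface ranges are illustrative assumptions, not configuration from this guide) from the default VDC of the Nexus 7000 might look like the following:

    vdc aggregation id 2
      allocate interface Ethernet1/1-8
    vdc sub-aggregation id 3
      allocate interface Ethernet1/9-16

An administrator then enters a context with the switchto vdc command and configures its routing and virtual port channels independently of the other contexts.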

Logical topology
Figure 4 shows the logical topology for the trusted multi-tenancy design framework.


Figure 4. Trusted multi-tenancy logical topology


The logical topology represents the virtual components and virtual connections that exist within the
physical topology. The following table describes the topology.
Component | Details
Nexus 7000 | Virtualized aggregation layer switch. Provides redundant paths to the Nexus 5000 access layer; virtual port channel provides a logically loopless topology with convergence times based on EtherChannel. Creates three virtual device contexts (VDC): a WAN edge virtual device context, a sub-aggregation virtual device context, and an aggregation virtual device context. The sub-aggregation virtual device context connects to the Nexus 5000 and to the aggregation virtual device context by virtual port channel.
Nexus 5000 | Unified access layer switch. Provides 10 GbE IP connectivity between the Vblock System and the outside world. In a unified storage configuration, the switches also connect the fabric interconnects in the compute layer to the data movers in the storage layer. The switches also provide connectivity to the AMP.
Two UCS 6120 fabric interconnects | Provide a robust compute layer platform. Virtual port channel provides a topology with redundant chassis, cards, and links to the Nexus 5000 and Nexus 7000. Each fabric interconnect connects to one MDS 9148 to form its own fabric, with four 4 Gb/s FC links between the UCS 6120 and the MDS 9148. The MDS 9148 switches connect to the storage controllers; in this example, the storage array has two controllers, and each MDS 9148 has two connections to each FC storage controller so that the failure of an FC controller does not isolate the MDS 9148. The fabric interconnects connect to the Nexus 5000 access switch through EtherChannel with dual 10 GbE.
Three UCS chassis | Each chassis is populated with blade servers and Fabric Extenders for redundancy or aggregation of bandwidth.
UCS blade servers | Connect to the SAN fabric through the Cisco UCS 6120XP fabric interconnect, which uses an 8-port 8 Gb fibre channel expansion module to access the SAN. Connect to the LAN through the Cisco UCS 6120XP fabric interconnects; these ports require SFP+ adapters. The server ports of the fabric interconnects can operate at 10 Gb/s, and the Fibre Channel ports of the fabric interconnects can operate at 2/4/8 Gb/s.
EMC VNX storage | Connects to the fabric interconnect with 8 Gb fibre channel for block. Connects to the Nexus 5000 access switch through EtherChannel with dual 10 GbE for file.


Tenant traffic flow representation


Figure 5 depicts the traffic flow through each layer of the solution, from the virtual machine level to the
storage layer.

Figure 5. Tenant traffic flow


Traffic flow in the data center is classified into the following categories:

Front-end: User to data center, Web, GUI

Back-end: Within data center, multi-tier application, storage, backup

Management: Virtual machine access, application administration, monitoring, and so forth

Note: Front-end traffic, also called client-to-server traffic, traverses the Nexus 7000 aggregation layer and a
select number of network-based services.

At the application layer, each tenant may have multiple vApps with applications and have different
virtual machines for different workloads. The Cisco Nexus 1000V distributed virtual switch acts as the
virtual access layer for the virtual machines. Edge LAN policies, such as quality of service marking
and vNIC ACLs, can be implemented at the Nexus 1000V. Each ESXi server becomes a virtual
Ethernet blade of Nexus 1000V, called Virtual Ethernet Module (VEM). Each vNIC connects to Nexus
1000V through a port group; each port group specifies one or more VLANs used by a virtual machine
NIC. The port group can also specify other network attributes, such as rate limit and port security. The
VM uplink port profile forwards VLANs belonging to virtual machines. The system uplink port profile
forwards VLANs belonging to management traffic. The virtual machine traffic for different tenants
traverses the network through different uplink port profiles, where port security, rate limiting, and
quality of service apply to guarantee secure separation and assurance.
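
The following sketch (VLAN ranges and profile names are illustrative assumptions, not taken from this guide) shows how a system uplink port profile for management traffic and a per-tenant VM uplink port profile might be defined on the Nexus 1000V:

    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10-20
      system vlan 10,11
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled

    port-profile type ethernet tenant-a-vm-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 101-110
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled

Because each uplink forwards only its own allowed VLANs, management traffic and each tenant's virtual machine traffic leave the host over separate, policy-controlled paths.
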
VMware vSphere virtual machine NICs are associated to the Cisco Nexus 1000V to be used as the
uplinks. The network interface virtualization capabilities of the Cisco adapter enable the use of
VMware multi-NIC design on a server that has two 10 Gb physical interfaces with complete quality of
service, bandwidth sharing, and VLAN portability among the virtual adapters. vShield Edge controls all
network traffic to and from the virtual data center and helps provide an abstraction of the separation in
the cloud environment.
Virtual machine traffic goes through the UCS FEX (I/O module) to the fabric interconnect 6120.
If the traffic is destined for FC storage, it passes over an FC port on the fabric interconnect and the
Cisco MDS to the storage array, and through a storage processor, to reach the specific storage pool
or storage groups. For example, if a tenant is using a
dedicated storage resource with specific disks inside a storage array, traffic is routed to the assigned
LUN with a dedicated storage group, RAID group, and disks. If there is NFS traffic, it passes over a
network port on the fabric interconnect and Cisco Nexus 5000, through a virtual port channel to the
storage array, and over a data mover, to reach the NFS data store. The NFS export LUN is tagged
with a VLAN to ensure the security and isolation with a dedicated storage group, RAID group, and
disks. Figure 5 shows an example of a few dedicated tenant storage resources. However, if the
storage is designed for a shared traffic pool, traffic is routed to a specific storage pool to pull
resources.
ESXi hosts for different tenants pass the server-client and management traffic over a server port and
reach the access layer of the Nexus 5000 through virtual port channel.
Server blades on UCS chassis are allocated for the different tenants. The resource on UCS can be
dedicated or shared. For example, if using dedicated servers for each tenant, VLANs are assigned for
different tenants and are carried over the dot1Q trunk to the aggregation layer of the Nexus 7000,
where each tenant is mapped to the Virtual Routing and Forwarding (VRF). Traffic is routed to the
external network over the core.


VMware vSphere logical framework overview


Figure 6 shows the virtual VMware vSphere layer on top of the physical server infrastructure.

Figure 6. vSphere logical framework

The diagram shows blade server technology with three chassis initially dedicated to the VMware
vCloud environment. The physical design represents the networking and storage connectivity from the
blade chassis to the fabric and SAN, as well as the physical networking infrastructure. (Connectivity
between the blade servers and the chassis switching is different and is not shown here.) Two chassis
are initially populated with eight blades each for the cloud resource clusters, with an even distribution
between the two chassis of blades belonging to each resource cluster.
In this scenario, VMware vSphere resources are organized and separated into management and
resource clusters with three resource groups (Gold, Silver, and Bronze). Figure 7 illustrates the
management cluster and resource groups.


Figure 7. Management cluster and resource groups

Cloud management clusters


A cloud management cluster contains all core components and services needed to run the cloud. A
resource group (or compute cluster) represents dedicated resources for cloud consumption. It is best
to use a separate cluster, outside the Vblock System resources, for cloud management.
Each resource group is a cluster of VMware ESXi hosts managed by a VMware vCenter Server, and
is under the control of VMware vCloud Director. VMware vCloud Director can manage the resources
of multiple resource groups or multiple compute clusters.
Cloud management components
The following components run as minimum-requirement virtual machines on the management cluster
hosts:
Components

 vCenter Server
 vCenter Database
 vCenter Update Manager
 vCenter Update Manager Database
 vCloud Director Cells (two virtual machines, for multi-cell operation)
 vCloud Director Database
 vCenter Chargeback Server
 vCenter Chargeback Database
 vShield Manager

Note: A vCloud Director cluster contains one or more vCloud Director servers; these servers are referred to as
cells and form the basis of the VMware cloud. A cloud can be formed from multiple cells. The number of
vCloud Director cells depends on the size of the vCloud environment and the level of redundancy.

Figure 8 highlights the cloud management cluster.

Figure 8. Cloud management cluster

Resources allocated for cloud use have little overhead reserved. For example, cloud resource groups
would not host vCenter management virtual machines. Best practices encourage separating the cloud
management cluster from the cloud resource group(s) in order to:

 Facilitate quicker troubleshooting and problem resolution. Management components are strictly
contained in a specified, manageable management cluster.

 Keep cloud management components separate from the resources they are managing.

 Consistently and transparently manage and carve up resource groups.

 Provide an additional step for high availability and redundancy for the trusted multi-tenancy
infrastructure.


Resource groups
A resource group is a set of resources dedicated to user workloads and managed by VMware vCenter
Server. vCloud Director manages the resources of all attached resource groups within vCenter
Servers. All cloud-provisioning tasks are initiated through VMware vCloud Director and passed down
to the appropriate vCenter Server instance.
Figure 9 highlights cloud resource groups.

Figure 9. Cloud resource groups

Provisioning resources in standardized groupings promotes a consistent approach for scaling vCloud
environments. For consistent workload experience, place each resource group on a separate
resource cluster.
The resource group design represents three VMware vSphere High Availability (HA) and Distributed
Resource Scheduler (DRS) clusters and the infrastructure used to run the vApps that are provisioned
and managed by VMware vCloud Director.


Logical design
This section provides information about the logical design, including:

 Cloud management cluster logical design
 VMware vSphere cluster specifications
 Host logical design specifications
 Host logical configurations for resource groups
 VMware vSphere cluster host design specifications for resource groups
 Security

Cloud management cluster logical design


The compute design encompasses the VMware ESXi hosts contained in the management cluster.
Specifications are listed below.
Attribute                                                 Specification
Number of ESXi hosts
vSphere datacenter
VMware Distributed Resource Scheduler configuration       Fully automated
VMware High Availability (HA) Enable Host Monitoring      Yes
VMware High Availability Admission Control Policy         Cluster tolerates 1 host failure (percentage based)
VMware High Availability percentage                       67%
VMware High Availability Admission Control Response       Prevent virtual machines from being powered on if they violate availability constraints
VMware High Availability Default VM Restart Priority      N/A
VMware High Availability Host Isolation Response          Leave virtual machine powered on
VMware High Availability Enable VM Monitoring             Yes
VMware High Availability VM Monitoring Sensitivity        Medium

Note: In this section, the scope is limited to only the Vblock System supporting the management component
workloads.


vSphere cluster specifications


Each VMware ESXi host in the management cluster has the following specifications.
Attribute                Specification
Host type and version    VMware ESXi installable version 5.0
Processors               x86 compatible
Storage presented        SAN boot for ESXi: 20 GB
                         SAN LUN for virtual machines: 2 TB
                         NFS shared LUN for vCloud Director cells: 1 TB
Networking               Connectivity to all needed VLANs
Memory                   Sized to support all management virtual machines; in this case, 96 GB of memory in each host

Note: VMware vCloud Director deployment requires storage for several elements of the overall framework. The
first is the storage needed to house the vCloud Director management cluster. This includes the repository
for configuration information, organizations, and allocations that are stored in an Oracle database. The
second is the vSphere storage objects presented to vCloud Director as data stores accessed by ESXi
servers in the vCloud Director configuration. This storage is managed by the vSphere administrator and
consumed by vCloud Director users depending on vCloud Director configuration. The third is a single NFS
data store that serves as a staging area for vApps to be uploaded to a catalog.

Host logical design specifications for cloud management cluster


The following table identifies management components that rely on high availability and fault tolerance
for redundancy.
Management component         High availability enabled?
vCenter Server               Yes
vCloud Director              Yes
vCenter Chargeback Server    Yes
vShield Manager              Yes


Host logical configuration for resource groups


The following table identifies the specifications for each VMware ESXi host in the resource cluster.
Attribute                Specification
Host type and version    VMware ESXi installable version 5.0
Processors               x86 compatible
Storage presented        SAN boot for ESXi: 20 GB
                         SAN LUN for virtual machines: 2 TB
Networking               Connectivity to all needed VLANs
Memory                   Sized to support virtual machine workloads

VMware vSphere cluster host design specifications for resource groups
All VMware vSphere resource clusters are configured similarly with the following specifications.

Attribute                                                  Specification
VMware Distributed Resource Scheduler configuration        Fully automated
VMware Distributed Resource Scheduler Migration Threshold  3 stars
VMware High Availability Enable Host Monitoring            Yes
VMware High Availability Admission Control Policy          Cluster tolerates 1 host failure (percentage based)
VMware High Availability percentage                        83%
VMware High Availability Admission Control Response        Prevent virtual machines from being powered on if they violate availability constraints
VMware High Availability Default VM Restart Priority       N/A
VMware High Availability Host Isolation Response           Leave virtual machine powered on
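For illustration only, the following minimal pyVmomi sketch shows how these resource-cluster HA and DRS settings could be applied programmatically. It is not part of the guide's procedure: the vCenter hostname, credentials, cluster name, and the relaxed SSL context are placeholders, and the comment notes the assumption made when mapping the design's HA percentage onto the API's reserved-failover percentage.

# Minimal sketch, assuming pyVmomi is installed and a cluster with this name exists.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                      # lab-only convenience
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Gold-Resource-Cluster")  # hypothetical name

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        enabled=True,
        hostMonitoring="enabled",
        admissionControlEnabled=True,
        # The API expects the percentage RESERVED for failover; if the design's "83%"
        # means usable capacity, the reserved share would be 100 - 83 = 17.
        admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
            cpuFailoverResourcesPercent=17, memoryFailoverResourcesPercent=17),
    ),
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True, defaultVmBehavior="fullyAutomated"),   # Fully automated DRS
)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
print("Reconfigure task:", task.info.key)
Disconnect(si)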


Security
The RSA Archer eGRC Platform can be run on a single server, with the application and database
components running on the same server. This configuration is suitable for organizations:

 With fewer than 50 concurrent users
 That do not require a high-performance or high-availability solution

For the trusted multi-tenancy framework, RSA enVision can be deployed as a virtual appliance in the
AMP. Each Vblock System component can be configured to use it as its centralized event manager
through its identified collection method. RSA enVision can then be integrated with RSA Archer eGRC
per the RSA Security Incident Management Solution configuration guidelines.

Tenant anatomy overview


This design guide uses three tenants as examples: Orange (tenant 1), Vanilla (tenant 2), and Grape
(tenant 3). All tenants share the same infrastructure and resources. Each tenant has its own virtual
compute, network, and storage resources. Resources are allocated for each tenant based on their
business model, requirements, and priorities. Traffic between tenants is restricted, separated, and
protected for the trusted multi-tenancy environment.

Figure 10. Trusted multi-tenancy tenant anatomy


In this design guide (and associated configurations), three levels of services are provided in the cloud:
Bronze, Silver, and Gold. These tiers define service levels for compute, storage, and network
performance. The following table provides sample network and data differentiations by service tier.
                   Bronze                    Silver                            Gold
Services           No additional services    Firewall services                 Firewall and load-balancing services
Bandwidth          20%                       30%                               40%
Segmentation       One VLAN per client,      Multiple VLANs per client,        Multiple VLANs per client,
                   single Virtual Routing    single VRF                        single VRF
                   and Forwarding (VRF)
Data Protection    None                      Snap virtual copy (local site)    Clone mirror copy (local site)
Disaster Recovery  None                      Remote replication (with a        Remote replication (any
                                             specific recovery point           point-in-time recovery)
                                             objective (RPO) / recovery
                                             time objective (RTO))

Using this tiered model, you can do the following:

 Offer service tiers with well-defined and distinct SLAs
 Support customer segmentation based on desired service levels and functionality
 Allow for differentiated application support based on service tiers


Design considerations for management and orchestration


Service providers can leverage Unified Infrastructure Manager/Provisioning to provision the Vblock
System in a trusted multi-tenancy environment. The AMP cluster of hosts holds UIM/P, which is
accessed through a Web browser.
Use UIM/P as a domain manager to provision Vblock Systems as a single entity. UIM/P interacts with
the individual element managers for compute, storage, SAN, and virtualization to automate the most
common and repetitive operational tasks required to provision services. It also interacts with VMware
vCloud Director to automate cloud operations, such as the creation of a virtual data center.
For provisioning, this guide focuses on the functional capabilities provided by UIM/P in a trusted multi-tenancy environment.
As shown in Figure 11, the UIM/P dashboard gives service provider administrators a quick summary
of available infrastructure resources. This eliminates the need to perform manual discovery and
documentation, thereby reducing the time it takes to begin deploying resources. Once administrators
have resource availability information, they can begin to provision existing service offerings or create
new ones.

Figure 11. UIM/P dashboard


Figure 12. UIM/P service offerings


Configuration
While UIM/P automates the operational tasks involved in building services on Vblock Systems,
administrators need to perform initial task sets on each domain manager before beginning service
provisioning. This section describes both key initial tasks to perform on the individual domain
managers and operational tasks managed through UIM/P.
The following table shows what is configured as part of initial device configuration and what is
configured through UIM/P.
Device manager: UCS Manager
 Initial configuration: management configuration (IP and credentials), chassis discovery, enable ports,
  create VLANs, assign VLANs
 Operational configuration completed with UIM/P: LAN (MAC pool, KVM IP pool), SAN (WWN pool,
  WWPN pool, VSANs), Server (UUID pool, boot policies, service templates, select pools, select boot
  policy, create service profile, associate profile to server, install vSphere ESXi)

Device manager: Unisphere and MDS/Nexus
 Initial configuration: management configuration (IP and credentials); RAID group, storage pool, or both
 Operational configuration completed with UIM/P: create storage group, associate host and LUN,
  create LUNs, zones, aliases, zone sets

Device manager: vCenter
 Initial configuration: create Windows virtual machine, create database, install vCenter software,
  create data center
 Operational configuration completed with UIM/P: create clusters, high availability policy, DRS policy,
  distributed power management (DPM) policy, add hosts to cluster, create data stores, create networks

Enabling services
After completing the initial configurations, use the following high-level workflow to enable services.
1. Vblock System discovery: Gather data for Vblock System devices, interconnectivity, and external
   networks, and populate the data in the UIM database.

2. Service planning: Collect service resource requirements, including:
    The number of servers and server attributes
    Amount of boot and data storage and storage attributes
    Networks to be used for connectivity between the service resources and external networks
    vCenter Server and ESXi cluster information

3. Service provisioning: Reserve resources based on the server and storage requirements defined for
   the service during service planning. Install ESXi on the servers. Configure connectivity between the
   cluster and external networks.

4. Service activation: Turn on the system, start up Cisco UCS service profiles, activate network paths,
   and make resources available for use. The workflow separates provisioning and activation, to allow
   activation of the service as needed.

5. vCenter synchronization: Synchronize the ESXi clusters with the vCenter Server. Once you provision
   and activate a service, the synchronization process includes adding the ESXi cluster to the vCenter
   Server data store and registering the provisioned cluster hosts with vCenter Server.

6. vCloud synchronization: Discover vCloud and build a connection to the vCenter servers. The clusters
   created in vCenter Server are pushed to the appropriate vCloud. UIM/P integrates with vCloud
   Director in the same way it integrates with vCenter Server.


Figure 13 describes the provisioning, activation, and synchronization process, including key sub-steps
during the provisioning process.

Figure 13. Provisioning, activation, and synchronization process flow

Creating a service offering


To create a service offering:
1. Select the operating system.
2. Define server characteristics.
3. Define storage characteristics for startup.
4. Define storage characteristics for application data.
5. Create network profile.

Pr ovisioning a service
To provision a service:
1. Select the service offering.
2. Select Vblock System.
3. Select servers.
4. Configure IP and provide DNS hostname for operating system installation.
5. Select storage.
6. Select and configure network profile and vNICs.
7. Configure vCenter cluster settings.
8. Configure vCloud Director settings.

Design considerations for compute


Within the computing infrastructure of Vblock Systems, multi-tenancy concerns can be managed at
multiple levels, from the central processing unit (CPU), through the Cisco Unified Computing System
(UCS) server infrastructure, and within the VMware solution elements.
This section describes the design of and rationale behind the trusted multi-tenancy framework. The
design includes many issues that must be addressed prior to deployment, as no two environments are
alike. Design considerations are provided for the components listed in the following table.
Component                   Version    Description

Cisco UCS                   2.0        Core component of the Vblock System that provides compute
                                       resources in the cloud. It helps achieve secure separation,
                                       service assurance, security, availability, and service provider
                                       management in the trusted multi-tenancy framework.

VMware vSphere              5.0        Foundation of the underlying cloud infrastructure and components.
                                       Includes:
                                        VMware ESXi hosts
                                        VMware vCenter Server
                                        Resource pools
                                        VMware High Availability and Distributed Resource Scheduler
                                        VMware vMotion

VMware vCloud Director      1.5        Builds on VMware vSphere to provide a complete multi-tenant
                                       infrastructure. It delivers on-demand cloud infrastructure so
                                       users can consume virtual resources with maximum agility. It
                                       consolidates data centers and deploys workloads on shared
                                       infrastructure with built-in security and role-based access
                                       control. Includes:
                                        VMware vCloud Director Server (two instances, each installed
                                         on a Red Hat Linux virtual machine and referred to as a cell)
                                        VMware vCloud Director Database (one instance per clustered
                                         set of VMware vCloud Director cells)

VMware vShield              5.0        Provides network security services, including NAT and firewall.
                                       Includes:
                                        vShield Edge (deployed automatically on hosts as virtual
                                         appliances by VMware vCloud Director to separate tenants)
                                        vShield App (deployed on the ESXi host layer to zone and
                                         secure virtual machine traffic)
                                        vShield Manager (one instance per vCenter Server in the
                                         cloud resource groups to manage vShield Edge and vShield App)

VMware vCenter Chargeback   1.6.2      Provides resource metering and chargeback models. Includes:
                                        VMware vCenter Chargeback Server
                                        VMware Chargeback Data Collector
                                        VMware vCloud Data Collector
                                        VMware vShield Manager Data Collector

Design considerations for secure separation


This section discusses using the following technologies to achieve secure separation at the compute
layer:

 Cisco UCS
 VMware vCloud Director

Cisco UCS
The UCS blade servers contain a pair of Cisco Virtual Interface Card (VIC) Ethernet uplinks. Cisco
VIC presents virtual interfaces (UCS vNIC) to the VMware ESXi host, which allow for further traffic
segmentation and categorization across all traffic types based on vNIC network policies.
Using port aggregation between the fabric interconnect vNIC pairs enhances the availability and
capacity of each traffic category. All inbound traffic is stripped of its VLAN header and switched to the
appropriate destination's virtual Ethernet interface. In addition, the Cisco VIC allows for the creation of
multiple virtual host bus adapters (vHBA), permitting FC-enabled startup across the same physical
infrastructure.
Each VMware virtual interface type, VMkernel, and individual virtual machine interface connects
directly to the Cisco Nexus 1000V software distributed virtual switch. At this layer, packets are tagged
with the appropriate VLAN header and all outbound traffic is aggregated to the two Cisco fabric
interconnects.
This section contains information about the high-level UCS features that help achieve secure
separation in the trusted multi-tenancy framework:

 UCS service profiles
 UCS organizations
 VLAN considerations
 VSAN considerations

UCS service profiles


Use UCS service profiles to ensure secure separation at the compute layer. Hardware can be
presented in a stateless manner that is completely transparent to the operating system and the
applications that run on it. A service profile creates a hardware overlay that contains specific
information sensitive to the operating system:

 MAC addresses
 WWN values
 UUID
 BIOS
 Firmware versions


In a multi-tenant environment, the service provider can define a service profile giving access to any
server in a predefined server resource with specific processor, memory, or other administrator-defined
characteristics. The service provider can then provision one or more servers through service profiles,
which can be used for an organization or a tenant. Service profiles are particularly useful when
deployed with UCS Role-Based Access Control (RBAC), which provides granular administrative
access control to UCS system resources based on administrative roles in a service provider
environment.
Servers instantiated by service profiles start up from a LUN that is tied to the specified WWPN,
allowing an installed operating system instance to be locked with the service profile. The
independence from server hardware allows installed systems to be re-deployed between blades.
Through the use of pools and templates, UCS hardware can be quickly deployed and scaled.
The trusted multi-tenancy framework uses three distinct server roles to segregate and classify UCS
blade servers. This helps identify and associate specific service profiles depending on their purpose
and policy. The following table describes these roles.
Role          Description

Management    These servers can be associated with a service profile that is meant only for cloud
              management or any type of service provider infrastructure workload.

Dedicated     These servers can be associated with different service profiles, server pools, and
              roles with VLAN policy; for example, a specific tenant VLAN allowed access to
              those servers that are meant only for specific tenants.
              The trusted multi-tenancy framework considers tenants who require a dedicated
              UCS cluster to further segregate workloads in the virtualization layer as needed. It
              also considers tenants who want dedicated workload throughput from the
              underlying compute infrastructure, which maps to the VMware Distributed Resource
              Scheduler cluster.

Mixed         These servers can be associated with a different service profile meant for shared
              resource clusters for the VMware Distributed Resource Scheduler cluster.
              Depending on tenant requirements, UCS can be designed to use a dedicated
              compute resource or a shared resource. The trusted multi-tenancy framework uses
              mixed servers for shared resource clusters as an example.

These servers can be spread across the UCS fabric to minimize the impact of a single point of failure
or a single chassis failure.


Figure 14 shows an example of how the three servers are designed in the trusted multi-tenancy
framework.

Figure 14. Trusted multi-tenancy framework server design


Figure 15 shows an example of three tenants (Orange, Vanilla, and Grape) using three service
profiles on three different physical blades to ensure secure separation at the blade level.

Figure 15. Secure separation at the blade level


UCS organizations
The Cisco UCS organizations feature helps with multi-tenancy by logically segmenting physical
system resources. Organizations are logically isolated in the UCS fabric. UCS hardware and policies
can be assigned to different organizations so that the appropriate tenant or organizational unit can
access the assigned compute resources. A rich set of policies in UCS can be applied per organization
to ensure that the right sets of attributes and I/O policies are assigned to the correct organization.
Each organization can have its own pool of resources, including the following:

 Resource pools (server, MAC, UUID, WWPN, and so forth)
 Policies
 Service profiles
 Service profile templates

UCS organizations are hierarchical. Root is the top-level organization. System-wide policies and pools
in root are available to all organizations in the system. Any policies and pools created in other
organizations are available only to organizations below them in the same hierarchy.
The functional isolation provided by UCS is helpful for a multi-tenant environment. Use the UCS
features of RBAC and locales (a UCS feature to isolate tenant compute resources) on top of
organizations to assign or restrict user privileges and roles by organization.
Figure 16 shows the hierarchical organization of UCS clusters starting from Root. It shows three types
of cluster configurations (Management, Dedicated, and Mixed). Below that are the three tenants
(Orange, Vanilla, and Grape) with their service levels (Gold, Silver, and Bronze).

Figure 16. UCS cluster hierarchical organization
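A minimal sketch of this hierarchy, assuming the Cisco UCS Python SDK (ucsmsdk) is available, is shown below. It is illustrative only: the UCS Manager address, credentials, and organization names are placeholders, and a production design would additionally create pools and policies under each organization.

# Minimal sketch (assumptions noted above), mirroring the hierarchy in Figure 16.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.org.OrgOrg import OrgOrg

handle = UcsHandle("ucsm.example.local", "admin", "changeme")
handle.login()

# Cluster-type organizations directly under org-root.
for cluster_type in ("Management", "Dedicated", "Mixed"):
    handle.add_mo(OrgOrg(parent_mo_or_dn="org-root", name=cluster_type), modify_present=True)

# Tenant organizations nested under the Dedicated organization; pools, policies, and
# service profiles scoped here are visible only within each tenant's subtree.
for tenant in ("Orange", "Vanilla", "Grape"):
    handle.add_mo(OrgOrg(parent_mo_or_dn="org-root/org-Dedicated", name=tenant),
                  modify_present=True)

handle.commit()
handle.logout()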


UCS allows the creation of resource pools to ensure secure separation between tenants. Use the
following:

 LAN resources
  - IP pool
  - MAC pool
  - VLAN pool

 Management resources
  - KVM addresses pool
  - VLAN pool

 SAN resources
  - WWN addresses pool
  - VSANs

 Identity resources
  - UUID pool

 Compute resources
  - Server pools

Figure 17 illustrates how creating separate resource pools for the three tenants helps with secure
separation at the compute layer.

Figure 17. Resource pools


Figure 18 is an example of a UCS Service Profile workflow diagram for three tenants.

Figure 18. UCS service profile workflow

VLAN considerations
In Cisco UCS, a named VLAN creates a connection to a specific management LAN or tenant-specific
LAN. The VLAN isolates traffic, including broadcast traffic, to that external LAN. The name
assigned to a VLAN ID adds a layer of abstraction that you can use to globally update all servers
associated with service profiles using the named VLAN. You do not need to reconfigure servers
individually to maintain communication with the external LAN. For example, if a service provider
wants to isolate a group of compute clusters for a specific tenant, the specific tenant VLAN needs to
be allowed in the service profile of that tenant. This provides another layer of abstraction in secure
separation.


To illustrate, if Tenant Orange has dedicated UCS blades, it is recommended to allow only Tenant
Orange-specific VLANs to ensure that only Tenant Orange has access to those blades. Figure 19
shows a dedicated service profile for Tenant Orange that uses a vNIC template named Orange. Tenant
Orange VLANs are allowed to use that specific vNIC template. However, a global vNIC template can
still be used for all blades, providing the ability to allow or disallow specific VLANs from updating
service profile templates.

Figure 19. Dedicated service profile for Tenant Orange

VSAN considerations in UCS


A named VSAN creates a connection to a specific external SAN. The VSAN isolates traffic, including
broadcast traffic, to that external SAN. The traffic on one named VSAN knows that the traffic on
another named VSAN exists, but it cannot read or access that traffic.
The name assigned to a VSAN ID adds a layer of abstraction that allows you to globally update all
servers associated with service profiles that use the named VSAN. You do not need to individually
reconfigure servers to maintain communication with the external SAN. You can create more than one
named VSAN with the same VSAN ID.
In a cluster configuration, a named VSAN is configured to be accessible only to the FC uplinks on
both fabric interconnects.


Figure 20 shows that VSAN 10 and VSAN 11 are configured in UCS SAN Cloud and uplinked to an
FC port.

Figure 20. VSAN configuration in UCS

Figure 21 shows how an FC port is assigned to a VSAN ID in UCS. In this case, uplink FC Port 1 is
assigned to VSAN 10.

Figure 21. Assigning a VSAN to FC ports


VMware vCloud Director


VMware vCloud Director introduces logical constructs to facilitate multi-tenancy and provide
interoperability between vCloud instances built to the vCloud API standard.
VMware vCloud Director helps administer tenants, such as a business unit, organization, or
division, by policy. In the trusted multi-tenancy framework, each organization has isolated virtual
resources, independent LDAP-based authentication, specific policy controls, and unique catalogs. To
ensure secure separation in a trusted multi-tenancy environment where multiple organizations share
Vblock System resources, the framework includes VMware vCloud Director along with VMware
vShield perimeter protection, port-level firewall, and NAT and DHCP services.
Figure 22 shows a logical separation of organizations in VMware vCloud Director.

Figure 22. Organization separation


A service provider may want to view all the listed tenants or organizations in vCloud Director to easily
manage them. Figure 23 shows the service provider's tenant view in VMware vCloud Director.

Figure 23. Tenant view in vCloud Director

Organizations are the unit of multi-tenancy within vCloud Director. They represent a single logical
security boundary. Each organization contains a collection of users, computing resources, catalogs,
and vApp workloads. Organization users can be local users or imported from an LDAP server. LDAP
integration can be specific to an organization, or it can leverage an organizational unit within the
system LDAP configuration, as defined by the vCloud system administrator. The name of the
organization, specified at creation time, maps to a unique URL that allows access to the GUI for
that organization. For example, Figure 24 shows that Tenant Orange maps to a specific default
organization URL. Each tenant accesses the resource using its own URL and authentication.

Figure 24. Organization unique identifier URL
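The per-organization URL convention also applies to the vCloud API. The sketch below is illustrative only (the hostname, organization name, credentials, and API version are placeholders): it logs in as a tenant user with the user@organization convention and shows that the returned session token is scoped to that organization only.

# Minimal sketch of an organization-scoped vCloud Director API login.
import requests

VCD = "https://vcd1.example.local"
session = requests.post(
    f"{VCD}/api/sessions",
    auth=("tenantadmin@Orange", "changeme"),          # user@organization
    headers={"Accept": "application/*+xml;version=5.1"},
    verify=False,                                     # lab-only; use proper certificates in production
)
session.raise_for_status()

# The returned x-vcloud-authorization token scopes every subsequent call to the
# Orange organization; objects belonging to other tenants are not visible with it.
token = session.headers["x-vcloud-authorization"]
orgs = requests.get(f"{VCD}/api/org",
                    headers={"Accept": "application/*+xml;version=5.1",
                             "x-vcloud-authorization": token},
                    verify=False)
print(orgs.status_code, orgs.text[:200])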


The vCloud Director network provides an extra layer of separation. vCloud Director has three different
types of networking, each with a specific purpose:

 External network
 Organization network
 vApp network

External network


The external network is the connection to the outside world. An external network always needs a port
group, meaning that a port group needs to be available within VMware vSphere and the distributed
switch.
Tenants commonly require direct connections from inside the vCloud environment into the service
provider networking backbone. This is analogous to extending a wire from the network switch
containing the network or VLAN to be used, all the way through the vCloud layers into the vApp. Each
organization in the trusted multi-tenancy environment has an internal organization network and a
direct connect external organization network.

Organization network
An organization network provides network connectivity to vApp workloads within an organization.
Users in an organization have no visibility into external networks and connect to outside networks
through external organization networks. This is analogous to users in an organization connecting to a
corporate network that is uplinked to a service provider for Internet access.
The following table lists connectivity options for organization networks.
Network type             Connectivity
External organization    Direct connection
External organization    NAT/routed
Internal organization    Isolated

A directly connected external organization network places the vApp virtual machines in the port group
of the external network. IP address assignments for vApps follow the external network IP addressing.
Internal and routed external organization networks are instantiated through network pools by vCloud
sy stem administrators. Organization administrators do not have the ability to provision organization
networks but can configure network services such as firewall, NAT, DHCP, VPN, and static routing.
Note: Organization network is meant only for the intra-organization network and is specific to an organization.


Figure 25 shows an example of an internal and external network configuration.

Figure 25. Internal and external organization networks

Service providers provision organization networks using network pools. Figure 26 shows the service
provider's administrator view of the organization networks.

Figure 26. Administrator view of organization networks


vApp network


A vApp network is similar to an organization network. It is meant for a vApp internal network. It acts as
a boundary for isolating specific virtual machines within a vApp. A vApp network is an isolated
segment created for a particular application stack within an organization's network to enable multi-tier
applications to communicate with each other and, at the same time, isolate the intra-vApp traffic from
other applications within the organization. The resources to create the isolation are managed by the
organization administrator and allocated from a pool provided by the vCloud administrator.
Figure 27 shows a vApp configuration for Tenant Grape.

Figure 27. Micro-segmentation of virtual workloads

Network pools


All three network classes can be backed using the virtual network features of the Nexus 1000V. It is
important to understand the relationship between the virtual networking features of the Nexus 1000V
and the classes of networks defined and implemented in a vCloud Director environment. Typically, a
network class (specifically, organization and vApp networks) is described as being backed by an
allocation of isolated networks. For an organization administrator to create an isolated vApp network,
the administrator must have a free isolation resource to consume and use in order to provide that
isolated network for the vApp.


To deploy an organization or vApp network, you need a network pool in vCloud Director. Network
pools contain network definitions used to instantiate private/routed organization and vApp networks.
Networks created from network pools are isolated at Layer 2. You can create three types of network
pools in vCloud Director, as shown in the following table.
Network Pool Type                    Description

vSphere port group backed            Network pools are backed by pre-provisioned port groups in Cisco Nexus
                                     1000V or VMware distributed switch.

VLAN backed                          A range of pre-provisioned VLAN IDs backs the network pools. This assumes
                                     all VLANs specified are trunked.

vCloud Director network isolation    Network pools are backed by vCloud isolated networks, which are an
backed                               overlay network uniquely identified by a fence ID implemented through
                                     encapsulation techniques that span hosts and provide traffic isolation from
                                     other networks. It requires a distributed switch. vCloud Director creates
                                     port groups automatically on distributed switches as needed.

Figure 28 shows how network pool types are presented in VMware vCloud Director.

Figure 28. Network pools


Each pool has specific requirements, limitations, and recommendations. The trusted multi-tenancy
framework uses a port group backed network pool with a Cisco Nexus 1000V distributed switch. Each
port group is isolated to its own VLAN ID. Each tenant (network, in this case) is associated with its
own network pool, each backed by a set of port groups.
VMware vCloud Director automatically deploys vShield Edge devices to facilitate routed network
connections. vShield Edge uses MAC encapsulation for NAT routing, which helps prevent Layer 2
network information from being seen by other organizations in the environment. vShield Edge also
provides a firewall service that can be configured to block inbound traffic to virtual machines
connected to a public access organization network.

Design considerations for service assurance


This section discusses using the following technologies to achieve service assurance at the compute
layer:

 Cisco UCS
 VMware vCloud Director

Cisco UCS
The following UCS features support service assurance:

 Quality of service
 Port channels
 Server pools
 Redundant UCS fabrics

Compute, storage, and network resources need to be categorized in order to provide a differentiated
service model for a multi-tenant environment. The following table shows an example of Gold, Silver,
and Bronze service levels for compute resources.
Level     Compute resource
Gold      UCS B440 blades
Silver    UCS B200 and B440 blades
Bronze    UCS B200 blades

System classes in the UCS specify the bandwidth allocated for traffic types across the entire system.
Each system class reserves a specific segment of the bandwidth for a specific type of traffic. Using
quality of service policies, the UCS assigns a system class to the outgoing traffic and then matches a
quality of service policy to the class of service (CoS) value marked by the Nexus 1000V Series switch
for each virtual machine.
UCS quality of service configuration can help achieve service assurance for multiple tenants. A best
practice to ensure guaranteed quality of service throughout a multi-tenant environment is to configure
quality of service for different service levels on the UCS.
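As a simple illustration of how such a configuration behaves, the sketch below computes the bandwidth share that a set of relative system-class weights would yield. The tier-to-CoS mapping and the weight values are hypothetical examples, not values prescribed by this guide.

# Illustrative arithmetic: UCS derives each system class's guaranteed bandwidth
# share from its weight relative to the sum of all weights.
TIERS = {
    # tier        : (CoS marked by the Nexus 1000V, UCS system-class weight)
    "gold"        : (5, 9),
    "silver"      : (4, 6),
    "bronze"      : (2, 3),
    "best-effort" : (0, 2),
}

total_weight = sum(weight for _, weight in TIERS.values())
for tier, (cos, weight) in TIERS.items():
    share = 100 * weight / total_weight
    print(f"{tier:<12} CoS {cos}  weight {weight}  ~{share:.0f}% of link bandwidth")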

Figure 29 shows different quality of service weight values configured for different class of service
values that correspond to Gold, Silver, and Bronze service levels. This helps ensure traffic priority for
tenants associated with those service levels.

Figure 29. Quality of service configuration

Quality of service policies assign a system class to the outgoing traffic for a vNIC or vHBA. Therefore,
to configure the vNIC or vHBA, include a quality of service policy in a vNIC or vHBA policy and then
include that policy in a service profile. Figure 30 shows how to create quality of service policies.

Figure 30. Creating quality of service policy


VMware vCloud Director


VMware vCloud Director provides several allocation models to achieve service levels in the trusted
multi-tenancy framework. An organization virtual data center allocates resources from a provider
virtual data center and makes them available for use by a given organization. Multiple organization
virtual data centers can take from the same provider virtual data center. One organization can have
multiple organization virtual data centers.
Resources are taken from a provider virtual data center and allocated to an organization virtual data
center using one of three resource allocation models, as shown in the following table.
Model            Description

Pay as you go    Resources are reserved and committed for vApps only as vApps are created.
                 There is no upfront reservation of resources.

Allocation       A baseline amount (guarantee) of resources from the provider virtual data
                 center is reserved for the organization virtual data center's exclusive use. An
                 additional percentage of resources is available to oversubscribe CPU and
                 memory, but this taps into compute resources that are shared by other
                 organization virtual data centers drawing from the provider virtual data center.

Reservation      All resources assigned to the organization virtual data center are reserved
                 exclusively for the organization virtual data center's use.

With all the above models, the organization can be set to deploy an unlimited or limited number of
virtual machines. In selecting the appropriate allocation model, consider the service definition and the
organization's use case workloads.
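As a simple worked example of the Allocation model, the sketch below (hypothetical numbers only) shows how a guarantee percentage translates into the resources reserved exclusively for an organization virtual data center.

# Illustrative arithmetic only; the allocation and guarantee values are made up.
def guaranteed(allocated: float, guarantee_pct: float) -> float:
    """Portion of the allocation reserved exclusively for the organization VDC."""
    return allocated * guarantee_pct / 100.0

# Example: 100 GHz CPU and 256 GB RAM allocated with a 75% guarantee.
cpu_reserved = guaranteed(100.0, 75)    # 75 GHz reserved from the provider VDC
mem_reserved = guaranteed(256.0, 75)    # 192 GB reserved from the provider VDC
print(f"CPU reserved: {cpu_reserved} GHz, memory reserved: {mem_reserved} GB")
# The remaining 25% can be consumed opportunistically from capacity shared with
# other organization VDCs drawing on the same provider VDC.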
Although all tenants use the shared infrastructure, the resources for each tenant are guaranteed
based on the allocation model in place. The service provider can set the parameters for CPU,
memory, storage, and network for each tenant's organization virtual data center, as shown in Figure
31, Figure 32, and Figure 33.


Figure 31. Organization virtual data center allocation configuration

Figure 32. Organization virtual data center storage allocation


Figure 33. Organization virtual data center network pool allocation

Design considerations for security and compliance


This section discusses using the following technologies to achieve security and compliance at the
compute layer:

 Cisco UCS
 VMware vCloud Director
 VMware vCenter Server

Cisco UCS
The UCS Role-Based Access Control (RBAC) feature helps ensure security by providing granular
administrative access control to the UCS system resources based on administrative roles, tenant
organization, and locale.
The RBAC function of the Cisco UCS allows you to control service provider user access to the actions
and resources in the UCS. RBAC is a security mechanism that can greatly lower the cost and
complexity of Vblock System security administration. RBAC simplifies security administration by using
roles, hierarchies, and constraints to organize privileges. Cisco UCS Manager offers flexible RBAC to
define the roles and privileges for different administrators within the Cisco UCS environment.
The UCS RBAC allows access to be controlled based on the roles assigned to individuals. The
following table lists the elements of the UCS RBAC model.


Element      Description

Role         A job function within the context of a locale, along with the authority and
             responsibility given to the user assigned to the role

User         A person using the UCS; users are assigned to one or more roles

Action       Any task a user can perform in the UCS that is subject to access control; an
             action is performed on a resource

Privilege    Permission granted or denied to a role to perform an action

Locale       A logical object created to manage organizations and determine which users
             have privileges to use the resources in organizations

The UCS RBAC feature can help service providers segregate roles to manage multiple tenants. One
example is using UCS RBAC with LDAP integration to ensure all roles are defined and have specific
accesses as per their roles. A service provider can leverage this feature in a multi-tenant environment
to ensure a high level of centralized security control. LDAP groups can be created for different
administration roles, such as network, storage, server profiles, security, and operations. This helps
providers keep security and compliance in place by having designated roles to configure different
parts of the Vblock System.
Figure 34 shows an LDAP group mapped to a specific role in a UCS. An Active Directory group called
ucsnetwork is mapped to a predefined network role in UCS. This means that anyone belonging to
the ucsnetwork group in Active Directory can perform a network task in UCS; other features are
shown as read-only.

Figure 34. LDAP group mapping in UCS


Figure 35 illustrates how UCS groups provide hierarchy. It shows how the group ucsnetwork is laid out
in an Active Directory domain.

Figure 35. Active Directory groups for UCS LDAP

Additional UCS security control features include the following:

 Administrative access to the Cisco UCS is authenticated by using either:
  - A remote protocol such as LDAP, RADIUS, or TACACS+
  - A combination of local database and remote protocols

 HTTPS provides authenticated and encrypted access to the Cisco UCS Manager GUI. HTTPS
uses components of the Public Key Infrastructure (PKI), such as digital certificates, to establish
secure communications between the client's browser and Cisco UCS Manager.


VMware vCloud Director


Role-based and centralized user authentication through multi-party Active Directory/LDAP integration
is the best way to manage the cloud. In VMware vCloud Director, each organization represents a
collection of end users, groups, and computing resources. Users authenticate at the organization
level, using credentials validated through LDAP. Set this up based on the cloud organization's
requirements.
For example, the service provider (VCE) can have its own Active Directory infrastructure for users and
groups that authenticate to the vCloud environment. Tenant Orange can have its own Active Directory
to manage authentication to the vCloud environment. Having each organization use its own Active
Directory improves security by providing ease of integration with organization identity and access
management processes and controls, and it ensures that only authorized users have access to the
tenant cloud infrastructure. Figure 36 and Figure 37 show both the service provider and organization
LDAP integration and the difference in LDAP server settings.

Figure 36. Service provider LDAP integration


Figure 37. Organization LDAP integration

Each tenant has its own user and group management and provides role-based security access, as
shown in Figure 38. The users are shown only the vApps that they can access. vApps that users do
not have access to are not visible, even if they reside within the same organization.

Figure 38. User role management


VMware vCenter Server


VMware vCenter Server is installed using a local administrator account. When vCenter Server is
joined to a domain, any domain administrator gains administrative privileges to vCenter. To remove
this potential security risk, it is recommended to always create a vCenter Administrator group in
Active Directory and assign it to the vCenter Server Administrator role, making it possible to remove
the local administrators group from this role.
Note: Refer to the vSphere Security Hardening Guide at www.vmware.com for more information.

In Figure 39, in the trusted multi-tenancy framework there is a VMware Admins group created in
Active Directory. This group has access to the trusted multi-tenancy vCenter data center. A member
of this group can perform the administration of vCenter.

Figure 39. vCenter administration
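A small read-only pyVmomi sketch of this kind of check follows. It is illustrative only, not the guide's procedure: the hostname, credentials, and group name are placeholders, and it simply lists which principals hold roles at the top of the inventory so an administrator can confirm that the Active Directory group, rather than the local administrators group, holds the Administrator role.

# Minimal read-only sketch, assuming pyVmomi is installed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                      # lab-only convenience
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
auth = content.authorizationManager

# Map role IDs to names so the built-in Administrator role can be identified.
role_names = {role.roleId: role.name for role in auth.roleList}

# Inspect permissions defined at the top of the inventory (propagated downward).
for perm in auth.RetrieveEntityPermissions(content.rootFolder, True):
    kind = "group" if perm.group else "user"
    print(f"{perm.principal:<40} {kind:<6} role={role_names.get(perm.roleId, perm.roleId)}")
    # A hardening check might flag any local administrators group still holding the
    # Administrator role once the AD "VMware Admins" group has been added.

Disconnect(si)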

Design considerations for availability and data protection


Availability and Disaster Recovery (DR) focus on the recovery of systems and infrastructure after an
incident interrupts normal operations. A disaster can be defined as partial or complete unavailability of
resources and services, including applications, the virtualization layer, the cloud layer, or the
workloads running in the resource groups.
Good practices at the infrastructure level lead to easier disaster recovery of the cloud management
cluster. This includes technologies such as high availability, DRS, and vMotion for reactive and
proactive protection of your infrastructure.
This section discusses using the following technologies to achieve availability and data protection at
the compute layer:

 Cisco UCS
 Virtualization


Cisco UCS
Fabric interconnect clustering allows each fabric interconnect to continuously monitor the other's
status. If one fabric interconnect becomes unavailable, the other takes over automatically.
Figure 40 shows how Cisco UCS is deployed as a high availability cluster for management layer
redundancy. It is configured as two Cisco UCS 6100 Series fabric interconnects directly connected
with Ethernet cables between the L1 (L1-to-L1) and L2 (L2-to-L2) ports.

Figure 40. Fabric interconnect clustering

Service profile dynamic mobility provides another layer of protection. When a physical blade server
fails, the service profile is automatically transferred to an available server in the pool.
Virtual port channel in UCS
With virtual port channel uplinks, both physical link failures and upstream switch failures have minimal
impact. With more physical member links in one larger logical uplink, there is the potential for even
better overall uplink load balancing and better high availability.


Figure 41 shows how port channels 101 and 102 are configured with four uplink members.

Figure 41. Virtual port channel in UCS

Virtualization
Enable overall cloud availability design for tenants using the following features:

 VMware vSphere HA
 VMware vCenter Heartbeat
 VMware vMotion
 VMware vCloud Director cells

VMware vSphere High Availability


VMware High Availability clusters enable a collection of VMware ESXi hosts to work together to
provide, as a group, higher levels of availability for virtual machines than each ESXi host could provide
individually. When planning the creation and use of a new VMware High Availability cluster, the
options you select affect how that cluster responds to failures of hosts or virtual machines.
VMware High Availability provides high availability for virtual machines by pooling the machines and
the hosts on which they reside into a cluster. Hosts in the cluster are monitored and, in the event of a
failure, the virtual machines on the failed host are restarted on alternate hosts.

In the trusted multi-tenancy framework, all VMware High Availability clusters are deployed with
identical server hardware. Using identical hardware provides a number of key advantages, including
the following:

 Simplified configuration and management of the servers using host profiles
 Increased ability to handle server failures and reduced resource fragmentation

VMware vMotion


VMware vMotion enables the live migration of running virtual machines from one physical server to
another with zero downtime, continuous service availability, and complete transaction integrity. Use
VMware vMotion to:

 Perform hardware maintenance without scheduled downtime
 Proactively migrate virtual machines away from failing or underperforming servers
 Automatically optimize and allocate entire pools of resources for optimal hardware utilization and
alignment with business priorities

VMware vCenter Heartbeat


Use VMware vCenter Heartbeat to protect vCenter Server in order to provide an additional layer of
resiliency. The vCenter Heartbeat server works by replicating all vCenter configuration and data to a
secondary passive server using a dedicated network channel. The secondary server is up all the time,
with the live configuration of the active server, but an IP packet filter masks it from the active network.
Figure 42 shows a scenario in which the complete hardware goes down, the operating system
crashes, or the active vCenter link is down.

Figure 42. vCenter Heartbeat scenario


VMware vCloud Director cells


VMware vCloud Director cells are stateless front-end processors for the vCloud. Each cell has a
variety of purposes and self-manages various functions among cells while connecting to a central
database. The cell manages connectivity to the cloud and provides both API and GUI endpoints/clients.
Figure 43 shows the trusted multi-tenancy framework using multiple cells (a load-balanced group) to
address availability and scale. This is typically achieved by load balancing or content switching this
front-end layer. Load balancers present a consistent address for services regardless of the underlying
node responding. They can spread session load across cells, monitor cell health, and add or remove
cells from the active service pool.

Figure 43. vCloud Director multi-cell
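A minimal sketch of the kind of health probe a load balancer or operator script might run against each cell is shown below. The cell hostnames are placeholders; the unauthenticated /api/versions endpoint is used here only as a liveness indicator.

# Illustrative cell health check for a load-balanced vCloud Director front end.
import requests

CELLS = ["https://vcd-cell1.example.local", "https://vcd-cell2.example.local"]

def cell_is_healthy(base_url: str) -> bool:
    try:
        resp = requests.get(f"{base_url}/api/versions", timeout=5, verify=False)
        return resp.status_code == 200
    except requests.RequestException:
        return False

for cell in CELLS:
    state = "in service" if cell_is_healthy(cell) else "remove from pool"
    print(f"{cell}: {state}")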


Single point of failure


To ensure successful implementation of availability, which is a crucial part of the trusted multi-tenancy
design, carefully consider each component listed in the following table.
Component                        Availability options

ESXi hosts                       Configure all VMware ESXi hosts in highly available clusters with a
                                 minimum of N+1 redundancy. This provides protection not only for the
                                 virtual machines, but also for the virtual machines hosting the platform
                                 portal/management applications and all of the vShield Edge appliances.

ESXi host network connectivity   Configure the ESXi host with a minimum of two physical paths to each
                                 required network (port group) to ensure that a single link failure does
                                 not impact platform or virtual machine connectivity. This should include
                                 management and vMotion networks. The Load Based Teaming
                                 mechanism is used to avoid oversubscribed network links.

ESXi host storage connectivity   Configure ESXi hosts with a minimum of two physical paths to each
                                 LUN or NFS share to ensure that a single storage path failure does not
                                 impact service.

vCenter Server                   Run vCenter Server as a virtual machine and make use of vCenter
                                 Server Heartbeat.

vCenter database                 vCenter Heartbeat provides vCenter database resiliency.

vShield Manager                  vShield Manager receives the additional protection of VMware FT,
                                 resulting in seamless failover between hosts in the event of a host
                                 failure.

vCenter Chargeback               Deploy vCenter Chargeback virtual machines as a two-node, load-balanced
                                 cluster. Deploy multiple Chargeback data collectors remotely to avoid a
                                 single point of failure.

vCloud Director                  Deploy the vCloud Director virtual machines as a load-balanced, highly
                                 available clustered pair in an N+1 redundancy setup, with the option to
                                 scale out when the environment requires it.

VMware Site Recovery Manager


In addition to other components, you can use VMware Site Recovery Manager (SRM) for disaster
recovery and availability. Site Recovery Manager accelerates recovery by automating the recovery
process, and it simplifies the management of disaster recovery plans by making disaster recovery an
integrated element of the management of your VMware virtual infrastructure. VMware Site Recovery
Manager is fully supported on the Vblock System; however, it is not supported with VMware vCloud
Director and is not within the scope of this design guide.


Design considerations for tenant management and control


This section discusses using VMware vCloud Director to achieve tenant management and control at
the compute layer.

VMware vCloud Director


VMware vCloud Director provides an intuitive Web portal (the vCloud Self Service Portal) that organization users use to manage their compute, storage, and network resources. In general, a dedicated group of users in a tenant manages the organization resources, such as creating or assigning networks and catalogs and allocating memory, CPU, or storage resources to an organization.
As shown in Figure 44, tenants can create vApps or deploy them from templates. Tenants can create vApp networks as needed from the network pool; use the browser plug-in to upload media and access the console of the virtual machines in the vApp; and start and stop the virtual machines as needed. For example, when Tenant Orange wants to access its virtual environment, it points to the URL https://vcd1.pluto.vcelab.net/cloud/org/orange.

Figure 44. vApp administration


Tenant in-control configuration


The tenants can manage users and groups, policies, and the catalogs for their environment, as shown
in Figure 45.

Figure 45. Environment administration

Design considerations for service provider management and control


This section discusses using virtualization technologies to achieve service provider management and control at the compute layer.

Virtualization
A service provider has access to the entire VMware vSphere and VMware vCloud environment and can flexibly manage and monitor it. A service provider can access and manage the following:

vCenter with a virtual infrastructure (VI) client

Cisco UCS

vCloud with a Web browser pointing to the vCloud Director cell address

vShield Manager with a Web browser pointing to the IP or hostname

vCenter Chargeback with a Web browser pointing to the IP or hostname

Cisco Nexus 1000V with SSH to Virtual Supervisor Module


For example, in vCloud Director, the service provider is in complete control of the physical
infrastructure. The service provider can:

Enable or disable ESXi hosts and data stores for cloud usage

Create and remove the external networks that are needed for communicating with the Internet,
backup networks, IP-based storage networks, VPNs, and MPLS networks, as well as the
organization networks and network pools

Create and remove the organization, administration users, provider virtual data center, and
organization virtual data centers

Determine which organization can share the catalog with others

Figure 46 shows how a service provider views the complete physical infrastructure in vCloud Director.

Figure 46. Service provider view

VMware vCenter Chargeback


VMware vCenter Chargeback is an end-to-end metering and cost reporting solution for virtual
environments using VMware vSphere. It has the following core components:

Data Collectors:
- Chargeback Data Collector: responsible for vCenter Server data collection
- vCloud Director (vCD) and vShield Manager (vSM) data collectors: responsible for utilization/allocation collection on the new abstraction layer created by vCloud Director

Load Balancer (embedded in vCenter Chargeback): receives and routes all user requests to the application; needs to be installed only once for the Chargeback cluster

Chargeback Server and Chargeback database


Figure 47 shows a Vblock System chargeback deployment architecture model.

Figure 47. Vblock System chargeback deployment architecture


Key Vblock System metrics


When determining a metering methodology for trusted multi-tenancy, consider the following:

What metrics (units, components, or attributes) will be monitored?

How will the metrics be obtained?

What sampling frequency will be used for each metric?

How will the metrics be aggregated and correlated to formulate meaningful business value?

Within a Vblock System virtualized computing environment, the infrastructure chargeback details can be modeled as fully loaded measurements per virtual machine. The virtual machine essentially becomes the point resource allocated back to users/customers. Below are some of the key metrics to collect when measuring virtual machine resource utilization:
Resource | Chargeback metrics | Unit of measurement
CPU | CPU usage | GHz
CPU | Virtual CPU (vCPU) | Count
Memory | Memory usage | GB
Memory | Memory size | GB
Network | Network received/transmitted usage | GB
Disk | Storage usage | GB
Disk | Disk read/write usage | GB
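To show how these metrics can roll up into a fully loaded, per-virtual-machine charge, the short Python sketch below multiplies collected values by per-unit rates. The rate card and the sample metric values are assumptions for illustration only; actual rates come from the service provider's cost model in vCenter Chargeback.

    # Hypothetical per-unit monthly rates; real rates come from the provider's cost model.
    RATES = {
        "vcpu_count": 20.00,      # per vCPU
        "cpu_usage_ghz": 5.00,    # per GHz consumed
        "memory_gb": 8.00,        # per GB of configured memory
        "storage_gb": 0.25,       # per GB of allocated storage
        "network_gb": 0.05,       # per GB received/transmitted
        "disk_io_gb": 0.02,       # per GB read/written
    }

    def vm_monthly_cost(metrics):
        """Fully loaded monthly cost for one virtual machine from its collected metrics."""
        return sum(RATES[key] * metrics.get(key, 0) for key in RATES)

    # Example usage with sample metrics for a single tenant virtual machine.
    sample = {"vcpu_count": 2, "cpu_usage_ghz": 1.6, "memory_gb": 8,
              "storage_gb": 100, "network_gb": 40, "disk_io_gb": 250}
    print(f"Monthly charge: ${vm_monthly_cost(sample):.2f}")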

For more information, see Guidelines for Metering and Chargeback Using VMware vCenter
Chargeback on www.vce.com.


Design considerations for storage


Multi-tenancy features can be combined with standard security methods, such as storage area network (SAN) zoning and Ethernet VLANs, to segregate, control, and manage storage resources among the infrastructure's tenants. Multi-tenancy offerings include data-at-rest encryption; secure transmission of data; and bandwidth, cache, CPU, and disk drive isolation.
This section describes the design of and rationale behind storage technologies in the trusted multi-tenancy framework. The design addresses many issues that must be considered prior to deployment.

Design considerations for secure separation


The fundamental principle that makes multi-tenancy secure is that no tenant can access another's data. Secure separation is essential to reaching this goal. At the storage layer, secure separation can be divided into the following basic requirements:

Segmentation of path by VSAN and zoning

Separation of data at rest

Address space separation

Separation of data access

Segmentation by VSAN and zoning


To extend secure separation to the storage layer, consider the isolation mechanisms available in a
SAN environment.
Cisco MDS storage area networks (SAN) offer true segmentation mechanisms, similar to VLANs in
Ethernet. These mechanisms, called VSANs, work with fibre channel zones; however, VSANs do not
tie into the virtual host bus adapter (HBA) of a virtual machine. VSANs and zones associate to a host
rather than a virtual machine. All virtual machines running on a particular host belong to the same
VSAN or zone. Since it is not possible to extend SAN isolation to the virtual machine, VSANs or FC
zones are used to isolate hosts from each other in the SAN fabric.
To keep management overhead low, we do not recommend deploying a large number of VSANs.
Instead, the trusted multi-tenancy design leverages fibre channel soft zone configuration to isolate the
storage layer on a per-host basis. It combines this method with zoning through WWN/device alias for
administrative flexibility.
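The following Python sketch models the isolation rule just described: two fabric ports can exchange traffic only when they belong to the same VSAN and share at least one soft zone. The WWNs, VSAN ID, and zone names are made-up examples, not values from this design.

    # Hypothetical fabric model: VSAN membership plus WWN-based soft zones.
    VSAN_MEMBERSHIP = {
        "20:00:00:25:b5:aa:00:01": 10,   # host A vHBA
        "20:00:00:25:b5:aa:00:02": 10,   # host B vHBA
        "50:06:01:60:3e:a0:12:34": 10,   # VNX front-end port
    }

    ZONES = {
        "z_hostA_vnx": {"20:00:00:25:b5:aa:00:01", "50:06:01:60:3e:a0:12:34"},
        "z_hostB_vnx": {"20:00:00:25:b5:aa:00:02", "50:06:01:60:3e:a0:12:34"},
    }

    def can_communicate(wwn_a, wwn_b):
        """True only if both ports sit in the same VSAN and share at least one zone."""
        vsan_a, vsan_b = VSAN_MEMBERSHIP.get(wwn_a), VSAN_MEMBERSHIP.get(wwn_b)
        same_vsan = vsan_a is not None and vsan_a == vsan_b
        shared_zone = any({wwn_a, wwn_b} <= members for members in ZONES.values())
        return same_vsan and shared_zone

    # Each host reaches the array, but the hosts remain isolated from each other.
    print(can_communicate("20:00:00:25:b5:aa:00:01", "50:06:01:60:3e:a0:12:34"))  # True
    print(can_communicate("20:00:00:25:b5:aa:00:01", "20:00:00:25:b5:aa:00:02"))  # False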
Fibre channel zones
SAN zoning can restrict visibility and connectivity between devices connected to a common fibre
channel SAN. It is a built-in security mechanism available in an FC switch that prevents traffic leaking
between zones.


Design scenarios of VSAN and zoning


VSANs and zoning are two powerful tools within the Cisco MDS 9000 family of products that aid the
cloud administrator in building robust, secure, and manageable storage networking environments
while optimizing the use and cost of storage switching hardware. In general, VSANs are used to divide
a redundant physical SAN infrastructure into separate virtual SAN islands, each with its own set of
fibre channel fabric services. Having each VSAN support an independent set of fibre channel services
enables a VSAN-enabled infrastructure to house numerous applications without risk of fabric resource
or event conflicts between the virtual environments. Once the physical fabric is divided, use zoning to
implement a security layout that is tuned to the needs of each application within each VSAN. Figure
48 illustrates the VSAN physical topology.

Figure 48. VSAN physical topology


VSANs are first created as isolated fabrics within a common physical topology. Once VSANs are
created, apply individual unique zone sets as necessary within each VSAN. The following table
summarizes the primary differences between VSANs and zones.
Characteristic | VSANs | Zoning
Maximum per switch/fabric | 1024 per switch | 1000+ zones per fabric (VSAN)
Membership criteria | Physical port | Physical port, WWN
Isolation enforcement method | Hardware | Hardware
Fibre channel service model | New set of services per VSAN | Same set of services for entire fabric
Traffic isolation method | Hardware-based tagging | Implicit using hardware ACLs
Traffic accounting | Yes, per VSAN | No
Separate manageability | Yes, per VSAN (future) | No
Traffic engineering | Yes, per VSAN | No

Note: UIM supports only one VSAN for each fabric.

Separation of data at rest


Today, most deployments treat physical storage as a shared infrastructure. However, in multi-tenancy,
it is sometimes necessary to ensure that a specific dataset does not share spindles with any other
dataset. This separation could be required between tenants or even within a single tenant's dataset.
Business reasons for this include competitive companies using the same shared service, and
governance/regulatory requirements.
EMC VNX provides flexible RAID and volume configurations that allow spindles to be dedicated to
LUNs or storage pools. VNX allows the creation of tenant-specific storage pools that can be used to
dedicate specified spindles to particular tenants.

Address space separation


In some situations, each tenant is completely unaware of the other tenants. However, without proper
mitigation there is the potential for address space overlap. Fibre channel World Wide Names (WWN)
and iSCSI device names are globally unique, with no possibility of contention in either area. IP
addresses, however, are not globally unique and may conflict.
To remedy this situation, the service provider can assign infrastructure-wide IP addresses within a
service offering. Each X-Blade or VNX storage processor supports one IP address space. However,
an X-Blade can support multiple logical IP interfaces and both storage processors and X-Blades
support VLAN tagging. VLAN tagging allows multiple networks to access resources without the risk of
traversing address spaces. In the event of an IP address conflict, the server log file reports any
duplicate address warnings. IP addressing conflicts can be addressed in higher layers of the stack.
This is most easily accomplished at the compute layer.
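A minimal sketch of this idea, assuming hypothetical tenant interfaces: because addresses are scoped per VLAN, an IP address only counts as a duplicate when it appears twice within the same VLAN.

    from collections import defaultdict

    # Hypothetical tenant interfaces: (tenant, VLAN ID, IP address).
    interfaces = [
        ("orange",  113, "10.0.0.10"),
        ("vanilla", 118, "10.0.0.10"),   # same IP on a different VLAN: no conflict
        ("grape",   123, "10.0.0.20"),
        ("grape",   123, "10.0.0.20"),   # duplicate inside one VLAN: conflict
    ]

    def find_conflicts(entries):
        """Report addresses that appear more than once within a single VLAN."""
        seen = defaultdict(list)
        for tenant, vlan, ip in entries:
            seen[(vlan, ip)].append(tenant)
        return {key: owners for key, owners in seen.items() if len(owners) > 1}

    for (vlan, ip), owners in find_conflicts(interfaces).items():
        print(f"Duplicate address warning: {ip} on VLAN {vlan} used by {owners}")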


Figure 49 is a graphical representation of how VMware vSphere can be used to separate each
tenants address space.

Figure 49. Address space separation with VMware vSphere


Virtual machine data store separation


VMware uses a cluster file system called the Virtual Machine File System (VMFS). An ESXi host associates a VMFS volume with one or more larger logical units. Each virtual machine's directory, containing its Virtual Machine Disk (VMDK) files, is stored in the VMFS volume. While a virtual machine is in operation, the VMFS volume locks those files to prevent other ESXi servers from updating them. One VMDK directory is associated with a single virtual machine; multiple virtual machines cannot access the same VMDK directory.
We recommend implementing LUN masking (that is, storage groups) to assign storage to ESXi
servers. LUN masking is an authorization process that makes a LUN available only to specific hosts
on the EMC SAN as further protection against misbehaving servers corrupting disks belonging to
other servers. This complements the use of zoning on the MDS, effectively extending zoning from the
front-end port on the array to the device on which the physical disk resides.
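The sketch below illustrates the LUN masking concept with hypothetical storage groups: a host is presented only the LUNs that belong to a storage group it is a member of. It models the behavior described above and is not the VNX or Unisphere API.

    # Hypothetical storage groups: each maps a set of ESXi hosts to the LUNs they may see.
    STORAGE_GROUPS = {
        "sg_tenant_orange": {"hosts": {"esxi01", "esxi02"}, "luns": {0, 1}},
        "sg_tenant_grape":  {"hosts": {"esxi03"},           "luns": {2}},
    }

    def visible_luns(host):
        """Return the union of LUNs presented to a host across its storage groups."""
        luns = set()
        for group in STORAGE_GROUPS.values():
            if host in group["hosts"]:
                luns |= group["luns"]
        return luns

    print(visible_luns("esxi01"))   # {0, 1}
    print(visible_luns("esxi03"))   # {2}  -- cannot see Tenant Orange's LUNs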
Virtual data mov er on VNX
VNX provides a multinaming domain solution for a data mover in the UNIX environment by
implementing an NFS server per virtual data mover (VDM). A data mover hosting several VDMs can
serve UNIX clients that are members of different LDAP or NIS domains, assuming that each VDM
works for a unique naming domain. Several NFS servers are emulated on the data mover in order to
serve the file system resources of the data mover for different naming domains. Each NFS server is
assigned to one or more data mover network interfaces.
The VDMs loaded on a data mover use the network interfaces configured on the data mover. You
cannot duplicate an IP address for two VDM interfaces configured on the same data mover. Once a
VDM interface is assigned, you can manage NFS exports on a VDM. CIFS and NFS protocols can
share the same network interface; however, only one NFS endpoint and CIFS server is addressed
through a particular logical network interface.
The multinaming domain solution implements an NFS server per VDM-named NFS endpoint. The
VDM acts as a container that includes the file systems exported by the NFS endpoint and/or the CIFS
server. These VDM file systems are visible through a subset of data mover network interfaces
attached to the VDM. The same network interface can be shared by both CIFS and NFS protocols on
that VDM. The NFS endpoint and CIFS server are addressed through the network interfaces attached
to that particular VDM. This allows users to perform either of the following:

Move a VDM, along with its NFS and CIFS exports and configuration data (LDAP, net groups,
and so forth), to another data mover

Back up the VDM, along with its NFS and CIFS exports and configuration data

This feature supports at least 50 NFS VDMs per physical data mover and up to 25 LDAP domains.


Figure 50 shows a physical data mover with VDM implementation.

Figure 50. Physical data mover with VDM implementation


Note: VDM for NFS is available on VNX OE for File Version 7.0.50.2. You cannot use Unisphere to configure VDM for NFS.

Refer to Configuring Virtual Data Movers on VNX for more information (Powerlink access required).

Separation of data access


Separation of data access ensures that a tenant cannot see or access any other tenant's data. The data access protocol in use determines how this is accomplished. Protocols for how tenant data traffic flows inside EMC VNX are:

CIFS

NFS

iSCSI

Fibre Channel over Ethernet/Fibre Channel (FCoE/FC)


Figure 51 displays the access protocols and the respective protocol stack that can be used to access
data residing on a unified system.

Figure 51. Protocol stack

CIFS stack
The following table summarizes how tenant data traffic flows inside EMC VNX for the CIFS stack.
Secure separation is maintained at each layer throughout the CIFS stack.
CIFS stack component | Description
VLAN | The secure separation of data access starts at the bottom of the CIFS stack on the IP network with the use of virtual local area networks (VLANs) to separate individual tenants.
IP interface (VLAN tagged) | The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used.
IP packet reflection | IP packet reflection guarantees that any traffic sent from the storage system in response to a client request goes out over the same physical connection and VLAN on which the request was received.
Virtual data mover | The virtual data mover is a logical configuration container that wraps around a CIFS file-sharing instance.
CIFS server | The CIFS server resides on the virtual data mover.
CIFS share | CIFS shares are built upon the CIFS servers.
ABE | At the top of the stack is a Windows feature called Access Based Enumeration (ABE). ABE shows a user only the files that he/she has permission to access, thus extending the separation all the way to end users if desired.

NFS stack
The following table summarizes how tenant data traffic flows inside EMC VNX for the NFS stack.
NFS stack component | Description
VLAN | The secure separation of data access starts at the bottom of the NFS stack on the IP network, using VLANs to separate individual tenants.
IP interface (VLAN tagged) | The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used.
IP packet reflection | IP packet reflection guarantees that any traffic sent from the storage system in response to a client request goes out over the same physical connection and VLAN on which the request was received.
NFS export (VLAN tagged) | NFS exports can be associated with specific VLANs.
NFS export hiding | NFS export hiding tightly controls which users access the NFS exports. It enhances standard NFS server behavior by preventing users from seeing NFS exports for which they do not have access-level permission. It appears to each tenant that they have their own individual NFS server.


Figure 52 shows an NFS export and how a specific subnet has access to the NFS share.

Figure 52. NFS export configuration

In this example, the VLAN 112 and VLAN 111 subnets have access to the /nfs1 share. VNX also provides granular access control for NFS shares: an NFS export can be presented to a specific tenant subnet, a specific host, or a group of hosts in the network.
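The per-subnet export access shown in Figure 52 can be expressed as a simple membership check, as in the Python sketch below. The subnet-to-VLAN mapping is an assumption for illustration; the real access list is configured on the VNX export itself.

    import ipaddress

    # Hypothetical export access list: export path -> subnets allowed to mount it.
    EXPORTS = {
        "/nfs1": [ipaddress.ip_network("192.168.111.0/24"),   # assumed VLAN 111 subnet
                  ipaddress.ip_network("192.168.112.0/24")],  # assumed VLAN 112 subnet
    }

    def may_mount(client_ip, export):
        """True if the client address falls inside any subnet allowed on the export."""
        address = ipaddress.ip_address(client_ip)
        return any(address in subnet for subnet in EXPORTS.get(export, []))

    print(may_mount("192.168.112.25", "/nfs1"))  # True
    print(may_mount("192.168.200.5", "/nfs1"))   # False: export stays hidden from this client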
iSCSI stack
The following table summarizes how tenant data traffic flows inside EMC VNX for the iSCSI stack.
iSCSI stack component | Description
VLAN | The secure separation of data access starts at the bottom of the iSCSI stack on the IP network with the use of VLANs to separate individual tenants.
IP interface (VLAN tagged) | The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used.
iSCSI portal / target / LUN | Access then flows through an iSCSI portal to a target device, where it is ultimately addressed to a LUN.
LUN masking | LUN masking is a feature for block-based protocols that ensures that LUNs are viewed and accessed only by those SAN clients with the appropriate permissions.


Support for VLAN tagging in iSCSI


VLAN is supported for iSCSI data ports and management ports on VNX storage systems. In addition
to better performance, ease of management, and cost benefits, VLANs provide security advantages
since devices configured with VLAN tags can see and communicate with each other only if they
belong to the same VLAN. Therefore, you can:

Set up multiple virtual ports on the VNX and segregate hosts into different VLANs based on your
security policy

Restrict sensitive data to one VLAN

VLANs make it more difficult to sniff traffic, as they require sniffing across multiple networks. This
provides extra security.
Figure 53 shows the iSCSI port properties for a port with VLANs enabled and two virtual ports
configured.

Figure 53. iSCSI Port Properties with VLAN tagging enabled


Fibre Channel over Ethernet/Fibre Channel stack


The lower layers of the fibre channel stack look quite different because it is not an IP-based protocol.
The following table summarizes how tenant data traffic flows inside EMC VNX for the FCoE/FC stack.
FCoE/FC stack component | Description
FC zone | FC zoning controls which FC/Fibre Channel over Ethernet (FCoE) interfaces can communicate with each other within the fabric.
VSAN | Virtual storage area networks can be used to further subdivide individual zones without the need for physical separation.
Target / LUN | Access flows to a target device, where it is ultimately addressed to a LUN.
LUN masking | LUN masking is a feature for block-based protocols that ensures that LUNs are viewed and accessed only by those SAN clients with the appropriate permissions.

Figure 54 and Figure 55 show how a 20 GB FC boot LUN and a 2 TB data LUN map to each host in VNX. This ensures that each LUN presented to an ESXi host is properly masked, that the host is granted access only to its specific LUNs, and that the LUNs are spread across different RAID groups.

Figure 54. Boot LUN and host mapping

Figure 55. Data LUN and host mapping


Design considerations for service assurance


Once you achieve secure separation of each tenant's data and path to that data, the next priority is predictable and reliable access that meets the tenant's SLA. Furthermore, in a service provider chargeback environment, it may be important that tenants do not receive more performance than they paid for simply because there is no contention for shared storage resources.
Service assurance ensures that SLAs are met at appropriate levels through the dedication of runtime
resources and quality of service control.
Additionally, storage tiering with FAST lowers overall storage costs and simplifies management while
allowing different applications to meet different service-level requirements on distinct pools of storage
within the same storage infrastructure. FAST technology automates the dynamic allocation and
relocation of data across tiers for a given FAST policy, based on changing application performance
requirements. FAST helps maximize the benefits of preconfigured tiered storage by optimizing cost
and performance requirements to put the right data on the right tier at the right time.

Dedication of runtime resources


Each VNX data mover has dedicated CPUs, memory, front-end, and back-end networks. A data
mover can be dedicated to a single tenant or shared among several tenants. To further ensure the
dedication of runtime resources, data movers can be clustered into active/standby groupings. From a
hardware perspective, dedicating pools, spindles, and network ports to a specific tenant or application
can further ensure adherence to SLAs.

Quality of service control


EMC has several software tools available that organize the dedication of runtime resources. At the
storage layer, the most powerful of these is Unisphere Quality of Service Manager (UQM), which
allows VNX resources to be managed based on service levels.
UQM uses policies to set performance goals for high-priority applications, set limits on lower-priority applications, and schedule policies to run on predefined timetables. These policies direct the management of any or all of the following performance aspects:

Response time

Bandwidth

Throughput

UQM provides a simple user interface for service providers to control policies. This control is invisible
to tenants and can ensure that the activity of one tenant does not impact that of another. For example,
if a tenant requests a dedicated disk, storage groups, and spindles for its storage resources, apply
these control policies to get optimum storage I/O performance.
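As a rough illustration of this kind of policy control, the following Python sketch evaluates observed I/O statistics against response-time goals and bandwidth limits per I/O class. The class names and thresholds are assumptions; this is not the Unisphere Quality of Service Manager interface.

    # Hypothetical I/O class policies: response-time goals (ms) and bandwidth limits (MB/s).
    POLICIES = {
        "gold_tenant":   {"response_ms_goal": 5,  "bandwidth_limit_mbps": None},
        "bronze_tenant": {"response_ms_goal": 30, "bandwidth_limit_mbps": 50},
    }

    def evaluate(io_class, observed_ms, observed_mbps):
        """Flag SLA breaches and over-limit consumption for one I/O class."""
        policy = POLICIES[io_class]
        findings = []
        if observed_ms > policy["response_ms_goal"]:
            findings.append(f"response time {observed_ms} ms exceeds goal")
        limit = policy["bandwidth_limit_mbps"]
        if limit is not None and observed_mbps > limit:
            findings.append(f"bandwidth {observed_mbps} MB/s exceeds limit of {limit}")
        return findings or ["within policy"]

    print(evaluate("gold_tenant", 7, 400))    # breach to alert on
    print(evaluate("bronze_tenant", 12, 40))  # within policy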


Figure 56 shows how you can create policies with a specific set of I/O classes to ensure that SLAs are maintained.

Figure 56. EMC VNX QoS configuration

EMC VNX FAST VP
With standard storage tiering in a non-FAST VP enabled array, multiple storage tiers are typically
presented to the vCloud environment, and each offering is abstracted out into separate provider virtual
data centers (vDC). A provider may choose to provision an EFD [SSD/Flash] tier, an FC/SAS tier, and
a SATA/NL-SAS tier, and then abstract these into Gold, Silver, and Bronze provider virtual data
centers. The customer then chooses resources from these for use in their organizational virtual data
center.
This provisioning model is limited for a number of reasons, including the following:

VMware vCloud Director does not allow for a non-disruptive way to move virtual machines from
one provider virtual data center to another. This means the customer must provide for downtime
if the vApp needs to be moved to a more appropriate tier.

For workloads with a variable I/O personality, there is no mechanism to automatically migrate
those workloads to a more appropriate disk tier.

With the cost of enterprise flash drives (EFD) still significant, creating an entire tier of them can
be prohibitively expensive, especially with few workloads having an I/O pattern that takes full
advantage of this particular storage medium.

One way in which the standard storage tiering model can be beneficial is when multiple arrays are
used to provide different kinds of storage to support different I/O workloads.


EMC FAST VP storage tiering


There are ways to provide more flexibility and a more cost-effective platform when compared with a
standard tiering model. Instead of using a single disk type per provider virtual data center,
organizations can blend both the cost and performance characteristics of multiple disk types. The
following table shows examples of this approach.
Create a FAST VP pool containing | As this type of tier | For
20% EFD and 80% FC/SAS disks | Performance tier | Customers who might need the performance of EFD at certain times, but do not want to pay for that performance all the time
50% FC/SAS disks and 50% SATA disks | Production tier | Most standard enterprise applications, to take advantage of the standard FC/SAS performance yet have the ability to de-stage cold data to SATA disk to lower the overall cost of storage per GB
90% SATA disks and 10% FC/SAS disks | Archive tier | Storing mostly nearline data, with the FC/SAS disks used for those instances where the customer needs to go to the archive to recover data, or for customers who are dumping a significant amount of data into the tier

Tiering policies
EMC FAST VP offers a number of policy settings to determine how data is placed, how often it is
promoted, and how data movement is managed. In a VMware vCloud Director environment, the
following policy settings are recommended to best accommodate the types of I/O workloads
produced.
Policy | Default setting | Recommended setting
Data Relocation Schedule | Set to migrate data seven days a week, between 11pm and 6am, reflecting the standard business day. Set to use a Data Relocation Rate of Medium, which can relocate 300-400 GB of data per hour. | In a vCloud Director environment, open up the Data Relocation window to run 24 hours a day. Reduce the Data Relocation Rate to Low. This allows for constant promotion and demotion of data, yet limits the impact on host I/O.
FAST VP-enabled LUNs/Pools | Set to use Auto-Tier, spreading data evenly across all tiers of disks. | In a vCloud Director environment, where customers are generally paying for the lower tier of storage but leveraging the ability to promote workloads to higher-performing disk when needed, the recommendation is to use the Lowest Available Tier policy. This places all data onto the lower tier of disk initially, keeping the higher tier of disk free for data that needs it.

EMC FAST Cache
In a VMware vCloud Director environment, VCE recommends a minimum of 100 GB of EMC FAST
Cache, with the amount of FAST Cache increasing as the number of virtual machines increases.
The combination of FAST VP and FAST Cache allows the vCloud environment to scale better,
support more virtual machines and a wider variety of service offerings, and protect against I/O spikes
and bursting workloads in a way that is unique in the industry. These two technologies in tandem are
a significant differentiator for the Vblock System.

EMC Unisphere Management Suite


EMC Unisphere provides a simple, integrated experience for managing EMC Unified Storage through
both a storage and VMware lens. It is designed to provide simplicity, flexibility, and automation, which
are all key requirements for using private clouds.
Unisphere includes a unique self-service support ecosystem that is accessible with one-click,
task-based navigation and controls for intuitive, context-based management. It provides customizable
dashboard views and reporting capabilities that present users with valuable storage management
information.

VMware vCloud Director


A provider virtual data center is a resource pool consisting of a cluster of VMware ESXi servers that
access a shared storage resource. The provider virtual data center can contain one of the following:

Part of a data store (shared by other provider virtual data centers)

All of a data store

Multiple data stores

As storage is provisioned to organization virtual data centers, the shared storage pool for the provider
virtual data center is seen as a single pool of storage with no distinction of storage characteristics,
protocol, or other characteristics differentiating it from being a single large address space.
If a provider virtual data center contains more than one data store, it is considered best practice that
those data stores have equal performance capability, protocol, and quality of service. Otherwise, the
slower storage in the collective pool will impact the performance of that provider virtual data storage
pool. Some virtual data centers might end up with faster storage than others.
To gain the benefits of different storage tiers or protocols, define separate provider virtual data centers, where each provider virtual data center has storage of different protocols or differing quality-of-service storage. For example, provision the following:

A provider virtual data center built on a data store backed by 15K RPM FC disks with loads of
cache in the disk for the highest disk performance tier

A second provider virtual data center built on a data store backed by SATA drives and not much
cache in the array for a lower tier


When a provider virtual data center shares a data store with another provider virtual data center, the
performance of one provider virtual data center may impact performance of the other provider virtual
data center. Therefore, it is considered best practice to have a provider virtual data center that has a
dedicated data store such that isolation of the storage reduces the chances of introducing different
quality-of-service storage resources in a provider virtual data center.

Design considerations for security and compliance


This section provides information about:

Authentication with LDAP or Active Directory

EMC VNX and RSA enVision

Authentication with LDAP or Active Directory


VNX can authenticate users against an LDAP directory, such as Active Directory. Authentication
against an LDAP server simplifies management because you do not need a separate set of
credentials to manage VNX storage systems. It is also more secure, as enterprise password policies
can be enforced for the storage environment.
Figure 57 shows LDAP integration in VNX.

Figure 57. LDAP configuration in VNX


Role mapping
Once communications are established with the LDAP service, give specific LDAP users or groups
access to Unisphere by mapping them to Unisphere roles. The LDAP service merely performs the
authentication. Once authenticated, a user's authorization is determined by the assigned Unisphere
role. The most flexible configuration is to create LDAP groups that correspond to Unisphere roles. This
allows you to control access to Unisphere by managing the members of the LDAP groups.
For example, Figure 58 shows two LDAP groups: Storage Admins and Storage Monitors. It shows
how you can map specific LDAP groups into specific roles.

Figure 58. Mapping LDAP groups
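Conceptually, the group-to-role resolution is a simple lookup, as in the Python sketch below. The group names follow the Figure 58 example and the role names are assumptions; actual role assignment is configured in Unisphere, not in code.

    # Hypothetical LDAP group to Unisphere role mapping, following the Figure 58 example.
    GROUP_ROLE_MAP = {
        "Storage Admins":   "administrator",
        "Storage Monitors": "monitor",
    }

    def resolve_role(user_groups):
        """Return the most privileged role granted by any of the user's LDAP groups."""
        precedence = ["administrator", "monitor"]       # highest privilege first
        granted = {GROUP_ROLE_MAP[g] for g in user_groups if g in GROUP_ROLE_MAP}
        for role in precedence:
            if role in granted:
                return role
        return None                                     # authenticated but not authorized

    print(resolve_role(["Storage Monitors"]))                    # monitor
    print(resolve_role(["Storage Admins", "Storage Monitors"]))  # administrator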

Component access control


Component access control settings define access to a product by external and internal systems or
components.
CHAP component authentication
The primary authentication mechanism for iSCSI initiators is the Challenge Handshake Authentication Protocol (CHAP). CHAP is an authentication protocol used to authenticate iSCSI initiators at target login and at various random times during a connection. CHAP security consists of a username and password. You can configure and enable CHAP security for initiators and for targets. The CHAP protocol requires initiator authentication; target authentication (mutual CHAP) is optional.
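CHAP is a standard challenge-response scheme (RFC 1994): the responder proves knowledge of the shared secret by returning the MD5 hash of the identifier, the secret, and the challenge. The Python sketch below shows the computation for one-way CHAP; the secret value is a placeholder, and mutual CHAP would simply repeat the exchange in the other direction with a second secret.

    import hashlib
    import os

    def chap_response(identifier, secret, challenge):
        """RFC 1994 CHAP response: MD5 over identifier || secret || challenge."""
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # One-way CHAP: the target challenges the initiator.
    secret = b"tenant-orange-chap-secret"    # placeholder shared secret (CHAP password)
    challenge = os.urandom(16)               # random challenge issued by the target
    identifier = 1

    initiator_answer = chap_response(identifier, secret, challenge)
    target_expected = chap_response(identifier, secret, challenge)
    print("initiator authenticated:", initiator_answer == target_expected)

    # Mutual CHAP (optional) works the same way in reverse: the initiator issues its
    # own challenge and verifies the target's response using a second shared secret.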


LUN masking component authorization


A storage group is an access control mechanism for LUNs that restricts access to a group of LUNs to specific hosts. When you configure a storage group, you identify a set of LUNs that will be used by only one or more specific hosts. The storage system then enforces access to the LUNs from those hosts: the LUNs are presented only to the hosts in the storage group, and the hosts can see only the LUNs in the group.
IP filtering
IP filtering adds another layer of security by allowing administrators and security administrators to configure the storage system to restrict administrative access to specified IP addresses. These settings can be applied to the local storage system or to the entire domain of storage systems.
Audit logging
Audit logging is intended to provide a record of all activities, so that the following can occur:

Checks for suspicious activity can be performed periodically.

The scope of suspicious activity can be determined.

Audit logs are especially important for financial institutions that are monitored by regulators.
Audit information for VNX storage systems is contained within the event log on each storage
processor. The log also contains hardware and software debugging information and a time-stamped
record for each event. Each record contains the following information:

Event code

Description of event

Name of the storage system

Name of the corresponding storage processor

Hostname associated with the storage processor


VNX and RSA enVision


VNX storage systems are made even more secure by leveraging the continuous collecting,
monitoring, and analyzing capabilities of RSA enVision. RSA enVision performs the functions listed in
the following table.
RSA function | Description
Collects logs | Can collect event log data from over 130 event sources, from firewalls to databases. RSA enVision can also collect data from custom, proprietary sources using standard transports such as Syslog, ODBC, SNMP, SFTP, OPSEC, or WMI.
Securely stores logs | Compresses and encrypts log data so it can be stored for later analysis, while maintaining log confidentiality and integrity.
Analyzes logs | Analyzes data in real time to check for anomalous behavior requiring an immediate alert and response. RSA enVision proprietary logs are also optimized for later reporting and forensic analysis. Built-in reports and alerts allow administrators and auditors quick and easy access to log data.

Figure 59 provides a detailed look at storage behavior in RSA enVision.

Figure 59. RSA enVision storage behavior


Network encryption


The Storage Management server provides 256-bit symmetric encryption of all data passed between it and the administrative client components that communicate with it, as listed under Port Usage (Web browser, Secure CLI), as well as all data passed between Storage Management servers. The encryption is provided through SSL/TLS and uses the RSA encryption algorithm, providing the same level of cryptographic strength as is employed in e-commerce. Encryption protects the transferred data from prying eyes, whether on the local LANs behind the corporate firewalls or when the storage systems are being remotely managed over the Internet.

Design considerations for availability and data protection


Availability goes hand in hand with service assurance. While service assurance directs resources at the tenant level, availability secures resources at the service provider level. Availability ensures that resources are available for all tenants utilizing a service provider's infrastructure, by meeting the requirements of high availability and local and remote data protection.

High availability
In the storage layer, the high availability design is consistent with the high availability model implemented at other layers in the Vblock System, comprising physical redundancy and path redundancy. This translates into the following types of redundancy:

Link redundancy

Hardware and node redundancy

Link redundancy
Pending the availability of FC port channels on UCS FC ports and FC port trunking, multiple individual
FC links from the 6120 fabric interconnects are connected to each SAN fabric, and VSAN
membership of each link is explicitly configured in the UCS. In the event of an FC (NP) port link failure,
affected hosts will re-logon in a round-robin manner using available ports. FC port channel support,
when available, means that redundant links in the port channel will provide active/active failover
support in the event of a link failure.
Multipathing software from VMware or EMC PowerPath software further enhances high availability,
optimizing use of the available link bandwidth and enhancing load balancing across multiple active
host adapter ports and links with minimal disruption in service.
Hardware and node redundancy
The Vblock System trusted multi-tenancy design leverages best practice methodologies for SAN high
availability, prescribing full hardware redundancy at each device in the I/O path from host to SAN. In
terms of hardware redundancy this begins at the server, with dual port adapters per host. Redundant
paths from the hosts feed into dual, redundant MDS SAN switches (that is, with dual supervisors) and
then into redundant SAN arrays with tiered RAID protection. RAID 1 and RAID 5 were deployed in this particular design as two of the more commonly used levels; however, the selection of a RAID protection level depends on balancing cost against the criticality of the data to be stored.

The ESXi hosts are protected by the VMware vCenter high availability feature. Storage paths can be
protected using EMC PowerPath/VE. Figure 60 shows the storage path protection.

Figure 60. Storage path protection

Virtual machines and application data can be protected using EMC Avamar, EMC Data Domain, and
EMC Replication Manager. However these are not within the scope of this guide.
Single point of failure
High availability (HA) systems are the foundation upon which any enterprise-class multi-tenancy environment is built. High availability systems are designed to be fully redundant with no single point of failure (SPOF). Additional availability features can be leveraged to address single points of failure in the trusted multi-tenancy design. The following are some high-level SPOF considerations:

Dual-ported drives

Redundant FC loops

Battery-backed, mirrored write cache across dual storage processors

Asymmetric Logical Unit Access (ALUA) dual paths to storage

N+M X-Blade failover clustering

Network link aggregation

Fail-safe network


Local and remote data protection


It is important to ensure that data is protected for the entirety of its lifecycle. Local replication
technologies, such as snapshots and clones, allow users to roll back to recent points in time in the
event of corruption or accidental deletion. Local replication technologies include SnapSure and
SnapView for VNX. Use Network Data Management Protocol (NDMP) backup to highly efficient deduplication storage platforms, such as Data Domain, for restoration of data from a point further back in time.
Remote replication is key to protecting user data from site failures. EMC RecoverPoint and MirrorView
software enable remote replication between EMC's Unified Storage systems. Use Replication
Manager to ease the management of replication and ensure consistency between replicas.
Below are some key points for each of these products; however, they are not within the scope of this
guide.
SnapSure
Use SnapSure to create and manage checkpoints on thin and thick file systems. Checkpoints are
point-in-time, logical images of a file system. Checkpoints can be created on file systems that use pool
LUNs or traditional LUNs.
SnapView
For local replication, SnapView snapshots and clones are supported on thin and thick LUNs.
SnapView clones support replication between thick, thin, and traditional LUNs. When cloning from a
thin LUN to a traditional LUN or thick LUN, the physical space of the traditional/thick LUN must equal
the host-visible capacity of the thin LUN. This results in a fully allocated thin LUN if the traditional
LUN/thick LUN is reverse-synchronized. Cloning from traditional/thick to thin LUN results in a fully
allocated thin LUN as the initial synchronization will force the initialization of all the subscribed
capacity.
For more information, refer to EMC SnapView for VNX (Powerlink access required).
Recov erPoint
Replication is also supported through RecoverPoint. Continuous data protection (CDP) and
continuous remote replication (CRR) support replication for thin LUNs, thick LUNs, and traditional
LUNs. When using RecoverPoint to replicate to a thin LUN, only data is copied; unused space is
ignored so the target LUN is thin after the replication. This can provide significant space savings when
replicating from a non-thin volume to a thin volume. When using RecoverPoint, we recommend that
you not use journal and repository volumes on thin LUNs.


MirrorView
When mirroring a thin LUN to another thin LUN, only consumed capacity is replicated between the
storage systems. This is most beneficial for initial synchronizations. Steady state replication is similar,
since only new writes are written from the primary storage system to the secondary system.
When mirroring from a thin LUN to a traditional or thick LUN, the thin LUN's host-visible capacity must be equal to the traditional LUN's capacity or the thick LUN's user capacity. Any failback scenario that
requires a full synchronization from the secondary to the thin primary image causes the thin LUN to
become fully allocated. When mirroring from a thick LUN or traditional LUN to a thin LUN, the
secondary thin LUN is fully allocated.
With MirrorView, if the secondary image LUN is added with the no initial synchronization option, the
secondary image retains its thin attributes. However, any subsequent full synchronization from the
traditional LUN or thick LUN to the thin LUN, as a result of a recovery operation, causes the thin LUN
to become fully allocated.
For more information on using pool LUNs with MirrorView, see MirrorView Knowledgebook (Powerlink
access required).
Pow erPath Migration Enabler
EMC PowerPath Migration Enabler (PPME) is a host-based migration tool that enables non-disruptive
or minimally disruptive data migration between storage systems or between logical units within a
single storage system. The Host Copy technology in PPME works with the host operating system to
migrate data from the source logical unit to the target. With PPME 5.3, the Host Copy technology
supports migrating virtually provisioned devices. When migrating to a thin target, the target's thin-device capability is maintained.


Design considerations for service provider management and control


EMC Unisphere includes a unique self-service support ecosystem that is accessible through one-click,
task-based navigation and controls for intuitive, context-based management. It provides customizable
dashboard views and reporting capabilities that present users with valuable storage management
information.
EMC Unisphere, a unified element management interface for NAS, SAN, replication, and more, offers
a single point of control from which a service provider can manage all aspects of the storage layer.
Service providers can use Unified Infrastructure Manager/Provisioning to manage the entire stack
(compute, network, and storage).
These two products mark a paradigm shift in the way infrastructure is managed.
Figure 61 shows a service provider view of the Unisphere dashboard and shows a connected vCenter
with all the ESXi hosts.

Figure 61. EMC Unisphere dashboard


Design considerations for networking


Various methods, including zoning and VLANs, can enforce network separation. Internet Protocol Security (IPsec) provides application-independent network encryption at the IP layer for additional security.
This section describes the design of and rationale behind the trusted multi-tenancy framework for
Vblock System network technologies. The design includes many issues that must be addressed prior
to deployment, as no two environments are alike. Design considerations are provided for each trusted
multi-tenancy element.

Design considerations for secure separation


This section discusses using the following technologies to achieve secure separation at the network layer:

VLANs

Virtual Routing and Forwarding

Virtual Device Context

Access Control List

VLANs
VLANs provide a Layer 2 option to scale virtual machine connectivity, providing application tier
separation and multitenant isolation. In general, Vblock Systems have two types of VLANs:

Routed: include management VLANs, virtual machine VLANs, and data VLANs; will pass through Layer 2 trunks and be routed to the external network

Internal: carry VMkernel traffic, such as vMotion, service console, NFS, DRS/HA, and so forth

This design guide uses three tenants: Tenant Orange, Tenant Vanilla and Tenant Grape. Each tenant
has multiple virtual machines for different applications (such as Web server, email server, and
database), which are associated with different VLANs. It is always recommended to separate data
and management VLANs.


The following table lists example VLANs used in the Vblock System trusted multi-tenancy design framework. They fall into three VLAN types: management VLANs (routed), internal VLANs (local to the Vblock System), and data VLANs (routed).

VLAN name | VLAN number
Core Infra management | 100
C200_ESX_mgt | 101
C299_ESX_vmotion | 102
UCS_mgt and KVM | 103
Vblock_ESX_mgt | 104
Vblock_ESX_vmotion | 105
Vblock_ESX_build | 106
Vblock_N1k_pkg | 107
Vblock_N1k_control | 108
Fcoe_UCS_to_storageA | 109
Fcoe_UCS_to_storageB | 110
Vblock_NFS | 111
Vblock_VMNetwork | 112
Tenant 1_VMNetwork | 113
Tenant-2_VMNetwork | 118
Tenant-3_VMNetwork | 123

Configure VLANs (both Layer 2 and Layer 3) in all network devices supported in the trusted multi-tenancy infrastructure to ensure that management, tenant, and Vblock System internal VLANs are isolated from each other.
Note: Service providers may need additional VLANs for scalability, depending on size requirements.

Virtual routing and forwarding


Use Virtual Routing and Forwarding (VRF) to virtualize each network device and all its physical
interconnects. From a data plane perspective, the VLAN tags can provide logical isolation on each
point-to-point Ethernet link that connects the virtualized Layer 3 network device.
Cisco VRF Lite uses a Layer 2 separation method to provide path isolation for each tenant across a
shared network link. Using VRF Lite in the core and aggregation layers enables segmentation of
tenants hosted on the common physical infrastructure. VRF Lite completely isolates the Layer 2 and
Layer 3 control and forwarding planes of each tenant, allowing flexibility in defining an optimum
network topology for each tenant.


The following table summarizes the benefits that the Cisco VRF Lite technology provides a trusted multi-tenancy environment.

Benefit | Description
Virtual replication of physical infrastructure | Each virtual network represents an exact replica of the underlying physical infrastructure. This effect results from VRF Lite's per-hop technique, which requires every network device and its interconnections to be virtualized.
True routing and forwarding separation | Dedicated data and control planes are defined to handle traffic belonging to groups with various requirements or policies. These groups represent an additional level of segregation and security, as no communication is allowed among devices belonging to different VRFs unless explicitly configured.

Network separation at Layer 2 is accomplished using VLANs. Figure 62 shows how the VLANs
defined on each access layer device for each tenant are mapped to the same tenant VRF at the
distribution layer.

Figure 62. VLAN to VRF mapping


Use VLANs to achieve network separation at Layer 2. While VRFs are used to identify a tenant, VLAN IDs provide isolation at Layer 2.
Tenant VRFs are applied on the Cisco Nexus 7000 Series Switches at the aggregation and core layers and are mapped to unique VLANs. All VLANs are carried over 802.1Q trunk ports.
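The effect of the VLAN-to-VRF mapping can be sketched as follows: traffic between two VLANs is routed only when both resolve to the same tenant VRF. The mapping values loosely follow the earlier VLAN example, and the second Tenant Orange VLAN (114) is an assumption added for illustration.

    # Hypothetical per-tenant VLAN-to-VRF mapping applied at the aggregation layer.
    VLAN_TO_VRF = {
        113: "vrf-tenant-orange",
        114: "vrf-tenant-orange",   # assumed second VLAN owned by the same tenant
        118: "vrf-tenant-vanilla",
        123: "vrf-tenant-grape",
    }

    def routed_together(vlan_a, vlan_b):
        """Layer 3 forwarding is shared only when both VLANs map to the same VRF."""
        vrf_a, vrf_b = VLAN_TO_VRF.get(vlan_a), VLAN_TO_VRF.get(vlan_b)
        return vrf_a is not None and vrf_a == vrf_b

    print(routed_together(113, 114))  # True: both VLANs resolve to Tenant Orange's VRF
    print(routed_together(113, 118))  # False: separate VRFs, so no inter-tenant routing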

Virtual device context


The Layer 2 VLANs and Layer 3 VRF features help ensure trusted multi-tenancy secure separation at
the network layer. You can also use the Virtual Device Context (VDC) feature on the Nexus 7000
Series Switch to virtualize the device itself, presenting the physical switch as multiple logical devices.
A virtual device context can contain its own unique and independent set of VLANs and VRFs. Each
virtual device context can be assigned to its physical ports, allowing for the hardware data plane to be
virtualized as well.

Access control list


Access control lists (ACLs), VLAN access control lists (VACLs), and port security can be applied at Layer 2 and Layer 3 in the trusted multi-tenancy design to allow only the desired traffic for an expected destination within the same tenant domain or among different tenants. ACL support is shown in the following table.
Device name | ACL supported
Cisco Nexus 1000V Series Switch | Yes
Cisco Nexus 5000 Series Switch | Yes
Cisco Nexus 7000 Series Switch | Yes


Design considerations for service assurance


Service assurance is a core requirement for shared resources and their protection. Network, compute,
and storage resources are guaranteed based on service level agreements. Quality of service enables
differential treatment of specific traffic flows, helping to ensure that in the event of congestion or failure
conditions, critical traffic is provided with a sufficient amount of available bandwidth to meet throughput
requirements.
Figure 63 shows the traffic flow types defined in the Vblock System trusted multi-tenancy design.

Figure 63. Traffic flow types


The traffic flow types break down into three traffic categories, as shown in the following table.

Traffic category | Description
Infrastructure | Comprises management and control traffic and vMotion communication. This is typically set to the highest priority to maintain administrative communications during periods of instability or high CPU utilization.
Tenant | Differentiated into Gold, Silver, and Bronze service levels; may include virtual machine-to-virtual machine, virtual machine-to-storage, and/or virtual machine-to-tenant traffic. Gold tenant traffic is highest priority, requiring low latency and high bandwidth guarantees. Silver traffic requires medium latency and bandwidth guarantees. Bronze traffic is delay-tolerant, requiring low bandwidth guarantees.
Storage | The Vblock System trusted multi-tenancy design incorporates both FC and IP-attached storage. Since these traffic types are treated differently throughout the network, storage requires two subcategories: FC traffic requires a no-drop policy, and NFS data store traffic is sensitive to delay and loss.
QoS service assurance for Vblock Systems has been introduced at each layer. Consider the following
features for service assurance at the network layer:

Quality of service tenant marking at the edge

Traffic flow matching

Quality of service bandwidth guarantee

Quality of service rate limit

Traffic originates from three sources:

ESXi hosts and virtual machines

External to data center

Network-attached devices

Consider traffic classification, bandwidth guarantee with queuing, and rate limiting based on tenant
traffic priority for networking service assurance.
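To make the classification and marking step concrete, the sketch below maps flows into the categories above and attaches illustrative DSCP markings and minimum bandwidth shares. The specific DSCP values and percentages are assumptions, not values prescribed by this design; FC/FCoE traffic would instead rely on a no-drop class of service.

    # Illustrative QoS classes: DSCP marking and minimum bandwidth share per class.
    QOS_CLASSES = {
        "infrastructure": {"dscp": 48, "min_bandwidth_pct": 10},  # management, control, vMotion
        "tenant_gold":    {"dscp": 46, "min_bandwidth_pct": 40},
        "tenant_silver":  {"dscp": 26, "min_bandwidth_pct": 25},
        "tenant_bronze":  {"dscp": 10, "min_bandwidth_pct": 10},
        "storage_nfs":    {"dscp": 34, "min_bandwidth_pct": 15},
    }

    def classify(flow):
        """Map a flow description to one of the QoS classes defined above."""
        if flow.get("infrastructure"):
            return "infrastructure"
        if flow.get("storage"):
            return "storage_nfs"
        return "tenant_" + flow.get("service_level", "bronze")

    # Example: a Gold tenant's VM-to-VM flow gets marked and guaranteed at the edge.
    flow = {"service_level": "gold", "src": "vm-web01", "dst": "vm-db01"}
    qos_class = classify(flow)
    print(qos_class, QOS_CLASSES[qos_class])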


Design considerations for security and compliance


Trusted multi-tenancy infrastructure networks require intelligent services, such as firewall and load
balancing of servers and hosted applications. This design guide focuses on the Vblock System trusted
multi-tenancy framework, in which a firewall module and other load balancers are the external devices
connected to the Vblock System. A multi-tenant environment consists of numerous service and
infrastructure devices, depending on the business model of the organization. Often, servers, firewalls,
network intrusion prevention systems (IPS), host IPSs, switches, routers, application firewalls, and
server load balancers are used in various combinations within a multi-tenant environment.
The Cisco Firewall Services Module (FWSM) provides Layer 2 and Layer 3 firewall inspection,
protocol inspection, and network address translation (NAT). The Cisco Application Control Engine
(ACE) module provides server load balancing and protocol (IPSec, SSL) off-loading. Both the FWSM
and ACE module can be easily integrated into existing Cisco 6500 Series switches, which are widely
deployed in data center environments.
Note: To use the Cisco ACE module, you must add a Cisco 6500 Series switch.

To successfully achieve trusted multi-tenancy, a service provider needs to adopt each key component discussed below. As shown in Figure 3, the trusted multi-tenancy framework has the following key components:
Component | Description
Core | Provides a Layer 3 routing module for all traffic in and out of the service provider data center.
Aggregation | Serves as the Layer 2 and Layer 3 boundary for the data center infrastructure. In this design, the aggregation layer also serves as the connection point for the primary data center firewalls.
Services | Deploys services such as server load balancers, intrusion prevention systems, application-based firewalls, network analysis modules, and additional firewall services.
Access | The data center access layer serves as a connection point for the server farm. The virtual access layer refers to the virtual network that resides in the physical servers when configured for virtualization.

With this framework, you can add components as demand and load increase.


The following table describes the high-level security functions for each layer of the data center.

Data center layer | Security components and purpose
Aggregation | Data center firewalls: initial filter for data center ingress and egress traffic; virtual contexts are used to split policies for server-to-server filtering. Infrastructure security: infrastructure security features are enabled to protect the device, traffic plane, and control plane; the virtual data center provides internal/external segmentation.
Services | Security services: additional firewall services for server farm-specific protection; server load balancing masks servers and applications; the application firewall mitigates XSS-, HTTP-, SQL-, and XML-based attacks. Data center services: IPS/IDS provide traffic analysis and forensics; network analysis provides traffic monitoring and data analysis; the XML gateway protects and optimizes Web-based services.
Access | ACLs, CISF, port security, quality of service, CoPP, VN tag. Virtual access: Layer 2 security features are available within the physical server for each virtual machine, including ACLs, CISF, port security, NetFlow, ERSPAN, quality of service, CoPP, and VN tag.

Data center firewalls


The aggregation layer provides an excellent filtering point and the first layer of protection for the data
center. It provides a building block for deploying firewall services for ingress and egress filtering. The
Layer 2 and Layer 3 recommendations for the aggregation layer also provide symmetric traffic
patterns to support stateful packet filtering.
Because of the performance requirements, this design uses a pair of Cisco ASA firewalls connected directly to the aggregation switches. The Cisco ASA firewalls meet the high-performance data center firewall requirements by providing 10 Gbps of stateful packet inspection.
The Cisco ASA firewalls are configured in transparent mode, which means the firewalls are configured
in a Layer 2 mode and will bridge traffic between interfaces. The Cisco ASA firewalls are configured
for multiple contexts using the virtual context feature, which allows the firewall to be divided into
multiple logical firewalls, each supporting different interfaces and policies.
Note: The modular aspect of this design allows additional firewalls to be deploy ed at the aggregation lay er as the
serv er f arm grows and perf ormance requirements increase.


The firewalls are configured in an active-active design, which allows load sharing across the infrastructure based on the active Layer 2 and Layer 3 traffic paths. Each firewall is configured for two virtual contexts:
- Virtual context 1 is active on ASA1
- Virtual context 2 is active on ASA2
This corresponds to the active Layer 2 spanning tree path and the Layer 3 Hot Standby Routing Protocol (HSRP) configuration.
Figure 64 shows an example of each firewall connection.

Figure 64. Cisco ASA virtual contexts and Cisco Nexus 7000 virtual device contexts

Virtual context details

The context details on the firewall provide different forwarding paths and policy enforcement, depending on the traffic type and destination. Incoming traffic that is destined for the data center services layer (ACE, WAF, IPS, and so on) is forwarded over VLAN 161 from VDC1 on the Cisco Nexus 7000 to virtual context 1 on the Cisco ASA. The inside interface of virtual context 1 is configured on VLAN 162. The Cisco ASA filters the incoming traffic and then, in this case, bridges the traffic to the inside interface on VLAN 162. VLAN 162 is carried to the services switch, where additional services are applied to the traffic. The same applies to virtual context 2 on VLANs 151 and 152. This context is active on ASA2.
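A minimal Cisco ASA sketch of this transparent, multi-context arrangement is shown below. The context name, physical interface numbering, and management address are illustrative assumptions rather than values taken from this design; on older ASA software the transparent firewall mode is set globally in the system execution space, as shown, while newer releases set it per context.

    ! System execution space: multiple-context mode, transparent firewalls
    mode multiple
    firewall transparent
    interface TenGigabitEthernet0/8.161
      vlan 161
    interface TenGigabitEthernet0/8.162
      vlan 162
    context tenant-ctx1
      allocate-interface TenGigabitEthernet0/8.161
      allocate-interface TenGigabitEthernet0/8.162
      config-url disk0:/tenant-ctx1.cfg
    !
    ! Inside context tenant-ctx1: bridge VLAN 161 (outside) to VLAN 162 (inside)
    interface TenGigabitEthernet0/8.161
      nameif outside
      security-level 0
    interface TenGigabitEthernet0/8.162
      nameif inside
      security-level 100
    ! Management address for the bridged context (applied to a BVI on newer ASA software)
    ip address 10.8.162.5 255.255.255.0

A second context would be defined the same way for VLANs 151 and 152 and made active on the other ASA.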


Deployment recommendations

Firewalls enforce access policies for the data center. A best practice is to create a multilayered security model to protect the data center from internal and external threats.
The firewall policy will differ based on the organizational security policy and the types of applications deployed.
Regardless of the number of ports and protocols allowed either to and from the data center, or from server to server, some baseline recommendations serve as a starting point for most deployments. The firewalls should be hardened in a similar fashion to the infrastructure devices. The following configuration notes apply (a minimal hardening sketch follows the list):
- Use HTTPS for device access. Disable HTTP access.
- Configure authentication, authorization, and accounting.
- Use out-of-band management and limit the types of traffic allowed over the management interface(s).
- Use Secure Shell (SSH). Disable Telnet.
- Use Network Time Protocol (NTP) servers.
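The sketch below illustrates these baseline settings on a Cisco ASA; the management subnet, AAA server address, key, and interface name are illustrative assumptions.

    ! HTTPS (ASDM) and SSH only, reachable from the out-of-band management network
    http server enable
    http 10.7.0.0 255.255.0.0 management
    ssh 10.7.0.0 255.255.0.0 management
    ! Telnet access is simply not configured (disabled by default)
    !
    ! Authentication, authorization, and accounting against an external TACACS+ group
    aaa-server MGMT-AAA protocol tacacs+
    aaa-server MGMT-AAA (management) host 10.7.10.20
      key S3cr3tKey
    aaa authentication ssh console MGMT-AAA LOCAL
    aaa authentication http console MGMT-AAA LOCAL
    aaa accounting command MGMT-AAA
    !
    ! Synchronized timestamps for logging and troubleshooting
    ntp server 10.7.10.5 source management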

Depending on traffic types and policies, the goal might not be to send all traffic flows to the services
layer. Some incoming application connections, such as those from a DMZ or client batch jobs (such
as backup), might not need load balancing or additional services. An alternative is to deploy another
context on the firewall to support the VLANs that are not forwarded to the services switches.
Caveats
Using transparent mode on the Cisco ASA firewalls requires that an IP address be configured for each context. This is required to bridge traffic from one interface to another and to manage each Cisco ASA context. While in transparent mode, you cannot allocate the same VLAN across multiple interfaces for management purposes. A separate VLAN is used to manage each context. The VLANs created for each context can be bridged back to the primary management VLAN on an upstream switch if desired.
Note: This provides a workaround and does not require allocating new network-wide management VLANs and IP subnets to manage each context.


Services layer
Data center security services can be deployed in a variety of combinations. The goal of these designs
is to provide a modular approach to deploying security by allowing additional capacity to be added
easily for each service. Additional Web application firewalls, intrusion prevention systems (IPS),
firewalls, and monitoring services can all be scaled without requiring an overall redesign of the data
center.
Figure 65 illustrates how the services layer fits into the data center security environment.

Figure 65. Data center security and the services layer

Cisco Application Control Engine

This design features the Cisco Application Control Engine (ACE) service module for the Cisco Catalyst 6500. Cisco ACE is designed as an application- and server-scaling tool, but it has security benefits as well. Cisco ACE can mask a server's real IP address and provide a single IP address for clients to connect to over one or more protocols, such as HTTP, HTTPS, and FTP.
This design uses Cisco ACE to scale the Web application firewall appliances, which are configured as a server farm. Cisco ACE distributes connections to the Web application firewall pool.
As an added benefit, Cisco ACE can store server certificates locally. This allows Cisco ACE to proxy Secure Sockets Layer (SSL) connections for client requests and forward the requests in clear text to the server.


Cisco ACE provides a highly available and scalable data center solution from which the VMware
vCloud Director environment can benefit. Use Cisco ACE to apply a different context and associated
policies, interfaces, and resources for one vCloud Director cell and a completely different context for
another vCloud Director cell.
In this design, Cisco ACE is terminating incoming HTTPS requests and decrypting the traffic prior to
forwarding it to the Web application firewall farm. The Web application firewall and subsequent Cisco
IPS devices can now view the traffic in clear text for inspection purposes.
Note: Some compliance standards and security policies dictate that traffic be encrypted from client to server. It is possible to modify the design so that traffic is re-encrypted on Cisco ACE after inspection, prior to being forwarded to the server.

Web Application Firewall

The Cisco ACE Web Application Firewall (WAF) provides firewall services for Web-based applications. It secures and protects Web applications from common attacks, such as identity theft, data theft, application disruption, fraud, and targeted attacks. These attacks can include cross-site scripting (XSS) attacks, SQL and command injection, privilege escalation, cross-site request forgeries (CSRF), buffer overflows, cookie tampering, and denial-of-service (DoS) attacks.
In the trusted multi-tenancy design, the two Web application firewall appliances are considered a cluster and are load balanced by Cisco ACE. Each Web application firewall cluster member can be seen in the Cisco ACE Web Application Firewall Management Dashboard.
The Cisco ACE Web Application Firewall acts as a reverse proxy for the Web servers it is configured to protect. The virtual Web application creates a virtual URL that intercepts incoming client connections. You can configure a virtual Web application based on the protocol and port as well as the policy you want applied.
The destination server IP address resides on Cisco ACE. Because the Web application firewall is load balanced by Cisco ACE, it is configured as a one-armed connection to Cisco ACE to send and receive traffic.
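A minimal Cisco ACE configuration sketch of this pattern (an SSL-terminating VIP load balancing a Web application firewall server farm) is shown below. The names, addresses, ports, and probe values are illustrative assumptions, not the exact configuration used in this design.

    rserver host waf-1
      ip address 10.8.162.21
      inservice
    rserver host waf-2
      ip address 10.8.162.22
      inservice
    ! HTTP-based probe determines the state of each WAF cluster member
    probe http WAF-PROBE
      port 81
      interval 5
      expect status 200 200
    serverfarm host WAF-FARM
      probe WAF-PROBE
      rserver waf-1 81
        inservice
      rserver waf-2 81
        inservice
    ! Server certificate and key stored locally so ACE can terminate SSL
    ssl-proxy service WEB-SSL
      key web-key.pem
      cert web-cert.pem
    class-map match-all WEB-VIP
      2 match virtual-address 10.8.162.100 tcp eq https
    policy-map type loadbalance first-match WEB-LB
      class class-default
        serverfarm WAF-FARM
        insert-http x-forwarded-for header-value "%is"   ! preserve the client IP for server logging
    policy-map multi-match AGGREGATE_SLB
      class WEB-VIP
        loadbalance vip inservice
        loadbalance policy WEB-LB
        ssl-proxy server WEB-SSL
    interface vlan 162
      service-policy input AGGREGATE_SLB
      no shutdown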


Cisco ACE and Web Application Firewall design

The Cisco ACE Web Application Firewall is deployed in a one-armed design and is connected to Cisco ACE over a single interface.

Figure 66. Cisco ACE module and Web Application Firewall integration

Cisco Intrusion Prevention System

The Cisco Intrusion Prevention System (IPS) provides deep packet and anomaly inspection to protect against both common and complex embedded attacks.
The IPS devices used in this design are Cisco IPS 4270 appliances with 10 GbE modules. Because of the nature of IPS and its intensive inspection capabilities, overall throughput varies depending on the active policy. Default IPS policies were used in the examples presented in this design guide.
In this design, the IPS appliances are configured for VLAN pairing. Each IPS is connected to the services switch with a single 10 GbE interface. In this example, VLAN 163 and VLAN 164 are configured as the VLAN pair.


The IPS deployment in the data center leverages EtherChannel load balancing (ECLB) from the services switch. This method is recommended for the data center because it allows the IPS services to scale to meet the data center requirements. This is shown in Figure 67.

Figure 67. IPS ECLB in the services layer

A port channel is configured on the services switch to forward traffic over each 10 GbE link to the receiving IPS. Because Cisco IPS does not support Link Aggregation Control Protocol (LACP) or Port Aggregation Protocol (PAgP), the port channel mode is set to on so that no negotiation is necessary for the channel to become operational.
It is very important to ensure that all traffic for a specific flow goes to the same Cisco IPS. To accomplish this, set the hash for the port channel to source and destination IP address. Each EtherChannel supports up to eight ports per channel.
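A minimal Catalyst 6500 services-switch sketch of this EtherChannel toward the IPS appliances follows; the interface and port-channel numbers are illustrative assumptions.

    ! Hash on source and destination IP so each flow always lands on the same IPS
    port-channel load-balance src-dst-ip
    !
    interface Port-channel20
     description EtherChannel toward Cisco IPS 4270 appliances (VLAN pair 163/164)
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 163,164
     switchport mode trunk
    !
    interface TenGigabitEthernet1/1
     description Link to IPS-1
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 163,164
     switchport mode trunk
     channel-group 20 mode on     ! no LACP/PAgP negotiation, as required by Cisco IPS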


This design can scale up to eight Cisco IPS 4270s per channel. Figure 68 illustrates Cisco IPS
EtherChannel load balancing.

Figure 68. Cisco IPS EtherChannel load balancing

Caveats
Spanning tree plays an important role in IPS redundancy in this design. Under normal operating conditions, traffic for a VLAN always follows the same active Layer 2 path. If a failure occurs (a services switch failure or a services switch link failure), spanning tree converges, and the active Layer 2 traffic path changes to the redundant services switch and Cisco IPS appliances.


Cisco ACE, Cisco ACE Web Application Firewall, and Cisco IPS traffic flows
The security services in this design reside between VDC1 and VDC2 on the Cisco Nexus 7000 Series Switch. All security services run in a Layer 2 transparent configuration. As traffic flows from VDC1 to the outside Cisco ASA context, it is bridged across VLANs and forwarded through each security service until it reaches the inside VDC2, where it is routed directly to the correct server or application.
Figure 69 shows the service flow for client-to-server traffic through the security services in the red traffic path. In this example, the client is making a Web request to a virtual IP address (VIP) defined on the Cisco ACE virtual context.

Figure 69. Security service traffic flow (client to server)


The following stages describe the traffic flow shown in Figure 69.

1. The client is directed through Cisco Nexus 7000-1 VDC1 to the active Cisco ASA virtual context, which transparently bridges traffic between VDC1 and VDC2 on the Cisco Nexus 7000.
2. The transparent Cisco ASA virtual context forwards traffic from VLAN 161 to VLAN 162 towards Cisco Nexus 7000-1 VDC2.
3. VDC2 sees the spanning tree root for VLAN 162 through its connection to services switch SS1. SS1 sees the spanning tree root for VLAN 162 through the Cisco ACE transparent virtual context.
4. The Cisco ACE transparent virtual context applies an input service policy on VLAN 162. This service policy, named AGGREGATE_SLB, holds the virtual IP definition. The virtual IP rules associated with this policy enforce SSL-termination services and load-balancing services to a Web application firewall server farm. HTTP-based probes determine the state of the Web application firewall server farm. The request is forwarded to a specific Web application firewall appliance defined in the Cisco ACE server farm. The client IP address is inserted as an HTTP header by Cisco ACE to maintain the integrity of server-based logging within the farm. The source IP address of the request forwarded to the Web application firewall is that of the originating client (in this example, 10.7.54.34).
5. In this example, the Web application firewall has a virtual Web application defined, named Crack Me. The Web application firewall appliance receives on port 81 the HTTP request that was forwarded from Cisco ACE. The Web application firewall applies all relevant security policies for this traffic and proxies the request back to a VIP (10.8.162.200) located on the same virtual Cisco ACE context on VLAN interface 190.
6. Traffic is forwarded from the Web application firewall on VLAN 163. A port channel is configured to carry VLAN 163 and VLAN 164 on each member trunk interface. Cisco IPS receives all traffic on VLAN 163, performs inline inspection, and forwards the traffic back over the port channel on VLAN 164.

Access layer
In this design, the data center access layer provides Layer 2 connectivity for the server farm. In most
cases the primary role of the access layer is to provide port density for scaling the server farm. Figure
70 shows the data center access layer.


Figure 70. Data center access layer

Recommendations
Security at the access layer is primarily focused on securing Layer 2 flows. Best practices include:
- Using VLANs to segment server traffic
- Associating access control lists (ACL) to prevent any undesired communication
Additional security mechanisms that can be deployed at the access layer include:
- Private VLANs (PVLAN)
- Catalyst Integrated Security Features, which include Dynamic Address Resolution Protocol (ARP) Inspection, Dynamic Host Configuration Protocol (DHCP) Snooping, and IP Source Guard
Port security can also be used to lock down a critical server to a specific port.
The access layer and virtual access layer serve the same logical purpose. The virtual access layer is a new location and a new footprint of the traditional physical data center access layer. These features are also applicable to the traditional physical access layer.


Virtual access layer security

Server virtualization is creating new challenges for security deployments. Visibility into virtual machine activity and isolation of server traffic become more difficult when virtual machine-sourced traffic can reach other virtual machines within the same server without being sent outside the physical server.
When applications reside on virtual machines and multiple virtual machines reside within the same physical server, it may not be necessary for traffic to leave the physical server and pass through a physical access switch for one virtual machine to communicate with another. Enforcing network policies in this type of environment can be a significant challenge. The goal remains to provide in this new virtual access layer many of the same security services and features that are used in the traditional access layer.
The virtual access layer resides in and across the physical servers running virtualization software. Virtual networking occurs within these servers to map virtual machine connectivity to that of the physical server. A virtual switch is configured within the server to provide virtual machine port connectivity. How each virtual machine connects, and to which physical server port it is mapped, are configured on this virtual switching component. While this new access layer resides within the server, it is really the same concept as the traditional physical access layer; it is just participating in a virtualized environment.
Figure 71 illustrates the deployment of a virtual switching platform in the context of this environment.

Figure 71. Cisco Nexus 1000V data center deployment


When a network policy is defined on the Cisco Nexus 1000V, it is updated in the virtual data center and displayed as a port group. The network and security teams can configure a predefined policy and make it available to the server administrators using the same methods they use to apply policies today. Cisco Nexus 1000V policies are defined through a feature called port profiles.
Policy enforcement
Use port profiles to configure network and security features under a single profile that can be applied to multiple interfaces. Once you define a port profile, that profile and any settings defined in it can be inherited by one or more interfaces. You can define multiple profiles, each assigned to different interfaces. A minimal port profile sketch follows the list below.
This feature provides multiple security benefits:
- Network security policies are still defined by network and security administrators and are applied to the virtual switch in the same way as on physical access switches.
- Once the features are defined in a port profile and assigned to an interface, the server administrator need only pick the available port group and assign it to the virtual machine. This reduces the chance of misconfigured, overlapping, or non-compliant security policies being applied.
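The following Cisco Nexus 1000V port profile is a minimal sketch; the profile name, VLAN, and ACL name are illustrative assumptions.

    ! Defined once by the network team; appears in vCenter as a port group
    port-profile type vethernet TENANT-A-WEB
      vmware port-group
      switchport mode access
      switchport access vlan 100
      ip port access-group TENANT-A-WEB-ACL in    ! optional per-vNIC Layer 3/4 filtering
      no shutdown
      state enabled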

Visibility
Server virtualization brings new challenges for visibility into what is occurring at the virtual network level. Traffic flows can occur within the server, between virtual machines, without needing to traverse a physical access switch. Although vCloud Director and vShield Edge restrict vApp traffic inside the organization, in a dedicated tenant environment an infected or compromised tenant virtual machine can be harder for administrators to spot when its traffic does not pass through security appliances.
Encapsulated Remote Switched Port Analyzer (ERSPAN) is a useful tool for gaining visibility into
network traffic flows. This feature is supported on the Cisco Nexus 1000V. ERSPAN can be enabled
on the Cisco Nexus 1000V and traffic flows can be exported from the server to external devices. See
Figure 72.


Figure 72. Cisco Nexus 1000V and ERSPAN; IDS and NAM at the services switch

The following stages describe what happens in Figure 72.

1. ERSPAN forwards copies of the virtual machine traffic to the Cisco IPS appliance and the Cisco Network Analysis Module (NAM). Both the Cisco IPS and Cisco NAM are located at the services layer in the services switch.
2. A new virtual sensor (VS1) has been created on the existing Cisco IPS appliances to provide monitoring for only the ERSPAN session from the server. Up to four virtual sensors can be configured on a single Cisco IPS appliance, and they can be configured in either intrusion prevention system or intrusion detection system (IDS) mode. In this case, the new virtual sensor VS1 has been set to IDS (monitor) mode. It receives a copy of the virtual machine traffic over the ERSPAN session from the Cisco Nexus 1000V.
3. Two ERSPAN sessions have been created on the Cisco Nexus 1000V (see the sketch after this list):
   - Session 1 has a destination of the Cisco NAM
   - Session 2 has a destination of the Cisco IPS appliance
   Each session terminates on the 6500 services switch.
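A minimal Cisco Nexus 1000V sketch of the two ERSPAN source sessions is shown below; the session IDs, source interfaces, and destination IP addresses are illustrative assumptions.

    ! Session 1: copy VM traffic to the Cisco NAM at the services switch
    monitor session 1 type erspan-source
      source interface vethernet 10 both
      destination ip 10.8.35.10        ! NAM ERSPAN destination (illustrative)
      erspan-id 51
      no shut
    ! Session 2: copy the same VM traffic to the Cisco IPS virtual sensor (IDS mode)
    monitor session 2 type erspan-source
      source interface vethernet 10 both
      destination ip 10.8.35.11        ! IPS ERSPAN destination (illustrative)
      erspan-id 52
      no shut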


Using a different ERSPAN-id for each session provides isolation. A maximum of 66 source and
destination ERSPAN sessions can be configured per switch.
Caveats
ERSPAN can affect overall system performance, depending on the number of ports sending data and the amount of traffic being generated. It is always a good idea to monitor system performance when you enable ERSPAN to verify the overall effects on the system.
Note: You must permit protocol type header 0x88BE for ERSPAN Generic Routing Encapsulation (GRE) connections.

Security recommendations
The following are some best practice security recommendations (a minimal configuration sketch is provided below):
- Harden data center infrastructure devices and use authentication, authorization, and accounting for role-based access control and logging.
- Authenticate and authorize device access using TACACS+ to a Cisco Access Control Server (ACS).
- Enable local fallback if the Cisco ACS is unreachable.
- Define local usernames and secrets for user accounts in the ADMIN group. The local username and secret should match those defined on the TACACS+ server.
- Define ACLs to limit the type of traffic to and from the device over the out-of-band management network.
- Enable Network Time Protocol (NTP) on all devices. NTP synchronizes timestamps for all logging across the infrastructure, which makes it an invaluable tool for troubleshooting.
For detailed infrastructure security recommendations and best practices, see the Cisco Network Security Baseline at the following URL:
www.cisco.com/en/US/docs/solutions/Enterprise/Security/Baseline_Security/securebasebook.html
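A minimal Cisco NX-OS sketch of these hardening recommendations (TACACS+ with local fallback, a management ACL, and NTP) is shown below; the server addresses, key, ACL contents, and names are illustrative assumptions.

    feature tacacs+
    tacacs-server host 10.7.10.20 key S3cr3tKey
    aaa group server tacacs+ ACS-SERVERS
      server 10.7.10.20
      use-vrf management
    ! TACACS+ first, with local fallback if the Cisco ACS is unreachable
    aaa authentication login default group ACS-SERVERS local
    username admin password Adm1nS3cret role network-admin
    !
    ! Restrict management-plane traffic arriving on the out-of-band interface
    ip access-list MGMT-ONLY
      permit tcp 10.7.0.0/16 any eq 22
      permit udp 10.7.10.0/24 any eq snmp
      deny ip any any
    interface mgmt0
      ip access-group MGMT-ONLY in
    !
    ! Synchronized timestamps for logging across the infrastructure
    ntp server 10.7.10.5 use-vrf management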


Threats mitigated

The data center security design described in this guide mitigates unauthorized access; malware, viruses, worms, and DoS; application attacks (XSS, SQL injection, directory traversal, and so forth); and tunneled attacks, and it provides visibility and infrastructure protection. These functions are delivered, in combination, by the Cisco ASA firewall, Cisco IPS, Cisco ACE, Cisco ACE WAF, and RSA enVision.

Vblock Systems security features

Within the Vblock System, the following security features can be applied to the trusted multi-tenancy design framework:
- Port security
- ACLs

Port security
Cisco Nexus 5000 Series switches provide port security features that reject intrusion attempts and report these intrusions to the administrator.
Typically, any Fibre Channel device in a SAN can attach to any SAN switch port and access SAN services based on zone membership. Port security features prevent unauthorized access to a switch port in the Cisco Nexus 5000 Series switch.
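A minimal NX-OS sketch of SAN port security, binding a device WWN to a specific switch port, is shown below; the VSAN number, pWWN, and interface are illustrative assumptions.

    feature port-security
    ! Bind a host pWWN to a specific Fibre Channel interface in VSAN 10
    port-security database vsan 10
      pwwn 20:00:00:25:b5:aa:00:01 interface fc2/1
    ! Activate the database so devices not in it are rejected at login
    port-security activate vsan 10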
ACLs
A router ACL (RACL) is an ACL applied to an interface that has a Layer 3 address assigned to it. It can be applied to any port that has an IP address, including:
- Routed interfaces
- Loopback interfaces
- VLAN interfaces
The security boundary permits or denies traffic moving between subnets or networks. RACLs are supported in hardware and have no effect on performance.
A VLAN access control list (VACL) is an ACL applied to a VLAN; it cannot be applied to any other type of interface. The security boundary permits or denies traffic moving between VLANs and permits or denies traffic within a VLAN. VACLs are supported in hardware.
A port access control list (PACL) is an ACL applied to a Layer 2 switch port interface; it cannot be applied to any other type of interface, and it works only in the ingress direction. The security boundary permits or denies traffic within a VLAN. PACLs are supported in hardware and have no effect on performance.
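The sketch below illustrates, with assumed names and addresses, how a RACL and a VACL might be applied on a Cisco Nexus or Catalyst switch in this design.

    ! RACL: filter routed traffic between tenant subnets on a VLAN interface
    ip access-list TENANT-A-RACL
      permit tcp 10.10.20.0/24 10.10.30.0/24 eq 443
      deny ip 10.10.20.0/24 10.10.30.0/24
      permit ip any any
    interface Vlan162
      ip access-group TENANT-A-RACL in
    !
    ! VACL: drop an undesired server-to-server flow within VLAN 162, forward the rest
    ip access-list BLOCK-SMB
      permit tcp 10.10.20.0/24 10.10.20.0/24 eq 445
    ip access-list ANY-IP
      permit ip any any
    vlan access-map TENANT-A-VACL 10
      match ip address BLOCK-SMB
      action drop
    vlan access-map TENANT-A-VACL 20
      match ip address ANY-IP
      action forward
    vlan filter TENANT-A-VACL vlan-list 162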

Design considerations for availability and data protection

Availability is defined as the probability that a service or network is operational and functional as needed at any point in time. Cloud data centers offer IaaS to either internal enterprise customers or external customers of service providers. The services are controlled using SLAs, which can be stricter in service provider deployments than in enterprise deployments. A highly available data center infrastructure is the foundation of SLA guarantees and a successful cloud deployment.

Physical redundancy design considerations

To build an end-to-end resilient design, hardware redundancy is the first layer of protection that provides rapid recovery from failures. Physical redundancy must be enabled at various layers of the infrastructure, as described below.

Node redundancy: Redundant pair of devices.
Hardware redundancy within the node: Dual supervisors; distributed port channel across line cards; redundant line cards per virtual device context.
Link redundancy: Distributed port channel across line cards; virtual port channel.

Figure 73 shows the overall network availability for each layer.


Figure 73. Network availability for each layer

In addition to physical layer redundancy, the following logical redundancy features help provide a highly reliable and robust environment that guarantees the customer's service with minimal interruption during network failures or maintenance:
- Virtual port channel
- Hot Standby Router Protocol
- Nexus 1000V and MAC pinning
- Nexus 1000V VSM redundancy


Virtual port channel

A virtual port channel (vPC) is a port-channeling concept that extends link aggregation to two separate physical switches. It allows links that are physically connected to two Cisco Nexus devices to appear as a single port channel to any other device, including a switch or server. This feature is transparent to neighboring devices. A virtual port channel can provide Layer 2 multipathing, which creates redundancy through increased bandwidth, enabling multiple active parallel paths between nodes and load balancing traffic where alternative paths exist. The following devices support virtual port channels (a minimal configuration sketch follows the list):
- Cisco Nexus 1000V Series Switch
- Cisco Nexus 5000 Series Switch
- Cisco Nexus 7000 Series Switch
- Cisco UCS 6120 fabric interconnect
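A minimal Cisco Nexus vPC sketch for one peer switch is shown below; the domain ID, peer-keepalive addresses, and interface numbers are illustrative assumptions, and a mirror-image configuration is applied on the vPC peer.

    feature vpc
    feature lacp
    vpc domain 10
      peer-keepalive destination 10.7.1.2 source 10.7.1.1 vrf management
    ! vPC peer link between the two Nexus switches
    interface port-channel10
      switchport mode trunk
      vpc peer-link
    ! Member port channel toward a downstream switch or fabric interconnect
    interface port-channel20
      switchport mode trunk
      vpc 20
    interface Ethernet1/1
      switchport mode trunk
      channel-group 20 mode active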

Hot Standby Router Protocol
Hot Standby Router Protocol (HSRP) is Cisco's standard method of providing high network availability
by providing first-hop redundancy for IP hosts on an IEEE 802 LAN configured with a default gateway
IP address. HSRP routes IP traffic without relying on the availability of any single router. It enables a
set of router interfaces to work together to present the appearance of a single virtual router or default
gateway to the hosts on a LAN.
When HSRP is configured on a network or segment, it provides a virtual Media Access Control (MAC)
address and an IP address that is shared among a group of configured routers. HSRP allows two or
more HSRP-configured routers to use the MAC address and IP network address of a virtual router.
The virtual router does not exist; it represents the common target for routers that are configured to
provide backup to each other. One of the routers is selected to be the active router and another to be
the standby router, which assumes control of the group MAC and IP address should the designated
active router fail.
Figure 74 shows active and standby HSRP routers configured on Switch 1 and Switch 2.


Figure 74. Active and standby HSRP routers

Virtual port channel is used across the trusted multi-tenancy network between the different layers.
HSRP is configured at the Nexus 7000 sub-aggregation layer, which provides the backup default
gateway if the primary default gateway fails.
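A minimal Cisco NX-OS HSRP sketch for the sub-aggregation switches is shown below; the VLAN, addresses, group number, and priority are illustrative assumptions.

    feature hsrp
    ! Switch 1 (active router for this VLAN)
    interface Vlan162
      ip address 10.8.162.2/24
      hsrp 162
        priority 110
        preempt
        ip 10.8.162.1        ! shared virtual gateway address
    ! Switch 2 uses the same group and virtual IP with a lower priority (for example, 90)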
Cisco Nexus 1000V and MAC pinning
The Cisco Nexus 1000V Series Switch uses the MAC pinning feature to provide more granular load-balancing methods and redundancy. Virtual machine NICs can be pinned to an uplink path using port profile definitions. Using port profiles, an administrator defines the preferred uplink path to use. If these uplinks fail, another uplink is dynamically chosen. If an active physical link goes down, the Cisco Nexus 1000V Series Switch sends notification packets upstream over a surviving link to inform upstream switches of the new path required to reach these virtual machines. These notifications are sent to the Cisco UCS 6100 Series fabric interconnect, which updates its MAC address tables and sends gratuitous ARP messages on the uplink ports so the data center access layer network can learn the new path.
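A minimal Cisco Nexus 1000V uplink port profile using MAC pinning is sketched below; the profile name, VLAN list, and system VLAN are illustrative assumptions.

    ! Uplink (type ethernet) port profile: each VM MAC is pinned to one uplink,
    ! with automatic repinning to a surviving uplink on failure
    port-profile type ethernet SYSTEM-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100-110
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 100
      state enabled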
Nexus 1000V VSM redundancy
Define one Virtual Supervisor Module (VSM) as the primary module and the other as the secondary module. The two VSMs run as an active-standby pair, similar to supervisors in a physical chassis, and provide high-availability switch management. The Cisco Nexus 1000V Series VSM is not in the data path, so even if both VSMs are powered down, the Virtual Ethernet Module (VEM) is not affected and continues to forward traffic. Each VSM in an active-standby pair is required to run on a separate VMware ESXi host. This setup helps ensure high availability even if one VMware ESXi server fails.


Design considerations for service provider management and control


The Cisco Data Center Network Manager (DCNM) infrastructure can actively monitor the SAN and LAN. With DCNM, many features of Cisco NX-OS, including Ethernet switching, physical ports and port channels, and ACLs, can be configured and monitored.
Integration of Cisco Data Center Network Manager and Cisco Fabric Manager improves the overall uptime and reliability of the cloud infrastructure and improves business continuity.
Nexus 5000 Series switches provide many management features to help provision and manage the device, including:
- A CLI-based console that provides detailed out-of-band management
- Virtual port channel configuration synchronization
- SSHv2
- Authentication, authorization, and accounting (AAA) with RBAC


Design considerations for additional security technologies


Security and compliance ensure the confidentiality, integrity, and availability of each tenant's environment at every layer of the trusted multi-tenancy stack using technologies like identity management and access control, encryption and key management, firewalls, malware protection, and intrusion prevention. This is a primary concern for both service provider and tenant. The ability to have an accurate, clear picture of the security and compliance posture of the Vblock System is vital to the success of the service provider in ensuring a trusted, multi-tenant environment, and for the tenants to adopt the converged resources in alignment with their business objectives.
The trusted multi-tenancy design ensures that all activities performed in the provisioning, configuration, and management of the multi-tenant environment, as well as day-to-day activities and events for individual tenants, are verified and continuously monitored. It is also important that all operational events are recorded and that these records are available as evidence during audits.
The security and compliance element of trusted multi-tenancy encircles the other elements. It is the "verify" component of the maxim "Trust, but verify," in that all configurations, technologies, and solutions must be auditable and their status verifiable in a timely manner. Governance, Risk, and Compliance (GRC), specifically IT GRC, is the foundation of this element.
The IT GRC domain focuses on the management of IT-related controls. This is vital to the converged infrastructure provider, as surveys indicate that security ranks highest among the concerns for using cloud-based solutions. The ability to ensure oversight of, and report on, security controls such as firewalls, hardening configurations, and identity and access management, as well as non-technical controls such as consistent use of processes, background checks for employees, and regular review of policies, is paramount to the success of the provider in meeting the security and compliance objectives demanded by their customers. Key benefits of a robust IT GRC solution include:
- Creating and distributing policies and controls and mapping them to regulations and internal compliance requirements
- Assessing whether the controls are actually in place and working, and remediating those that are not
- Easing risk assessment and mitigation


Design considerations for secure separation


This section discusses using RSA Archer eGRC and RSA enVision to achieve secure separation.

RSA Archer eGRC

With respect to secure separation, the RSA Archer eGRC Platform is a multi-tenant software platform, supporting the configuration of separate instances in provider-hosted environments. These individual instances support data segmentation, as well as discrete user experiences and branding. By utilizing inherited record permissions and role-based access controls built into the platform, both service providers and tenants are provided secure and separate spaces within a single installation of RSA Archer eGRC.
Based upon tenant requirements, it is also possible to provision a discrete RSA Archer eGRC instance per tenant. Unless a large number of concurrent users will be accessing the instance or a high-availability solution is required, this deployment can run within a single virtual machine with the application and database components running on the same server.

RSA enVision

Deploying separate instances of RSA enVision for the service provider and the tenants results in a discrete and secure separation of the collected and stored data. For the service provider, an RSA enVision instance centrally collects and stores event information from all the Vblock System components, separately from each tenant's data.

Design considerations for service assurance


This section discusses using RSA Archer eGRC and RSA enVision to achieve service assurance.

RSA Archer eGRC

The RSA Archer eGRC Platform supports the trusted multi-tenancy element of service assurance by providing a clear and consistent mechanism for delivering metric and service level agreement data to both service providers and tenants through robust reporting and dashboard views. Through integration with RSA enVision and engagements with RSA Professional Services, these reports and dashboards can be automated using data points from the element managers and products, collected by RSA enVision.
Figure 75 shows an example RSA Archer eGRC dashboard.


Figure 75. Sample RSA Archer eGRC dashboard

RSA enVision

RSA enVision integrates with RSA Archer eGRC in the RSA Security Incident Management Solution to complete and streamline the entire lifecycle of security incident management. By capturing all event and alert data from the Vblock System components, service providers are able to establish baselines and then be automatically alerted to anomalies, from both an operational and a security perspective.
The correlation capabilities allow seemingly innocent information from separate logs to identify real events when read holistically. This allows for quick responses to those events in the environment, their resolution, and subsequent root cause analysis and remediation. From the tenant point of view, this provides a more stable and reliable solution for business needs.


Design considerations for security and compliance


This section discusses using RSA Archer eGRC and RSA enVision to achieve security and compliance.

RSA Archer eGRC


The RSA Solution for Cloud Security and Compliance for RSA Archer eGRC enables user
organizations and service providers to orchestrate and visualize the security of their virtualization
infrastructure and physical infrastructure from a single console. The solution extends the Enterprise,
Compliance, and Policy modules within the RSA Archer eGRC Platform with content from the Archer
Library, dashboard views, and questionnaires to provide a solution based on cloud security and
compliance.
The RSA Solution for Cloud Security and Compliance provides the service provider the mechanism to
perform continuous monitoring of the VMware infrastructure against the more than 130 control
procedures in the library written specifically against the VMware vSphere 4.0 Security Hardening
Guide. In addition to providing the service provider the necessary means to oversee and govern the
security and compliance posture, the RSA Solution also allows for:
1. Discovery of new devices
2. Configuration measurement of new devices
3. Establishment of baselines using questionnaires
4. Remediation of compliance issues

Figure 76. RSA Solution for Cloud Security and Compliance


Using this solution gives the service provider a means to ensure and, very importantly, prove the
compliance of the virtualized infrastructure to authoritative sources such as PCI-DSS, COBIT, NIST,
HIPAA, and NERC.

RSA enVision
RSA enVision includes preconfigured integration with all Vblock System infrastructure components,
including the Cisco UCS and Nexus components, EMC storage, and VMware vSphere, vCenter,
vShield, and vCloud Director. This ensures a consistent and centralized means of collecting and
storing the events and alerts generated by the various Vblock System components.
From the service provider viewpoint, RSA enVision provides the means to ensure compliance with
regulatory requirements regarding secure logging and monitoring.

Design considerations for availability and data protection


This section discusses using RSA Archer eGRC and RSA enVision to achieve availability and data protection.

RSA Archer eGRC

The powerful and flexible nature of the RSA Archer eGRC Platform provides both service providers and tenants the mechanism to integrate business-critical data points and information into their governance program. A consistent understanding of where business-sensitive data is located, as well as its criticality rating, is fundamental to making provisioning and availability decisions. Through consultation with RSA Professional Services, it is possible to integrate workflow-managed questionnaires to ensure consistent capturing of this information. This captured information can then be used as data points for the creation of custom reporting dashboards and reports.

Figure 77. Workflow questionnaire

In addition to this information classification, RSA Archer integrates with RSA enVision as its collection entity from sources such as data loss prevention, anti-virus, and intrusion detection/prevention systems to bring these data points into the centralized governance dashboards.


RSA enVision
RSA enVision helps the service provider ensure the continued availability of the environment and the
protection of the data contained in the Vblock System. By centralizing and correlating alerts and
events, RSA enVision provides the service provider the visibility into the environment needed to
identify and act upon security events within the environment. Real-time notification provides the
means to prevent possible compromises and impact to the services and the tenants.

Design considerations for tenant management and control


This section discusses using RSA Archer eGRC and RSA enVision to achieve tenant management and control.

RSA Archer eGRC


The multi-tenant reporting capabilities of the RSA Archer eGRC Platform give each tenant a
comprehensive, real-time view of the eGRC program. Tenants can take advantage of prebuilt reports
to monitor activities and trends and generate ad hoc reports to access the information needed to
make decisions, address issues, and complete tasks. The cloud provider can build customizable
dashboards tailored by tenant or audience, so that users get exactly the information they need based
on their roles and responsibilities.

RSA enVision
For tenants requiring centralized event management for their virtualized systems, dedicated instances of RSA enVision are provisioned for their exclusive use. As a virtual appliance under the tenant's control, RSA enVision in this use case provides the mechanism for the virtualized operating systems, applications, and services to centralize their events and logs. The tenant can use the reports and dashboards within their RSA enVision instance, or integrate it with an instance of RSA Archer eGRC, to ensure transparency into the operational and security events within their hosted environment.


Design considerations for service provider management and control


This section discusses using RSA Archer eGRC and RSA enVision to achieve service provider management and control.

RSA Archer eGRC


Similar to providing the tenants with reporting capabilities, the RSA Archer eGRC Platform empowers the service provider with comprehensive, real-time visibility into their governance, risk, and compliance program. This transparency allows the provider to more effectively manage the risks to their environment and, in turn, manage the risks to their customers' hosted resources. Through the continuous monitoring of controls and the remediation workflow capabilities, service providers can ensure that the shared and dedicated infrastructure meets both the requirements set forth by regulatory authorities and those agreed upon with their tenants.

Figure 78. Sample report

RSA enVision
Service providers in a multi-tenant environment need the complete visibility that RSA enVision provides into their converged infrastructure environment. By consolidating the alerts and events from all the Vblock System components, service providers can efficiently and effectively monitor, manage, and control the environment. The real-time knowledge of what is happening in the Vblock System empowers the service provider in the facilitation of each of the VCE elements of trusted multi-tenancy.


Conclusion
The six foundational elements of secure separation, service assurance, security and compliance,
availability and data protection, tenant management and control, and service provider management
and control form the basis of the Vblock System trusted multi-tenancy design framework.
The following summarizes the technologies used to ensure trusted multi-tenancy at each layer of the Vblock System, organized by trusted multi-tenancy element.

Secure separation
- Compute: Use of service profiles for tenants; physical blade separation; UCS organizational groups; UCS RBAC, service profiles, and server pools; UCS VLANs; UCS VSANs; VMware vShield App and Edge; VMware vCloud Director
- Storage: VSAN segmentation; zoning; mapping and masking; RAID groups and pools; Virtual Data Mover
- Network: VLAN segmentation; VRF; Cisco Nexus 7000 Virtual Device Context (VDC); access control lists (ACL) and Nexus 1000V port profiles
- Security technologies: Discrete, separate instances of RSA Archer eGRC and RSA enVision for the service provider and for each tenant as needed

Service assurance
- Compute: UCS quality of service; port channels; server pools; VMware vCloud Director; VMware High Availability; VMware Fault Tolerance; VMware Distributed Resource Scheduler (DRS); VMware vSphere resource pools
- Storage: EMC Unisphere Quality of Service Manager; EMC Fully Automated Storage Tiering (FAST); pools
- Network: Nexus 1000/5000/7000 quality of service; quality of service bandwidth control; quality of service rate limiting; quality of service traffic classification; quality of service queuing
- Security technologies: Robust reports and dashboard views with RSA Archer eGRC; audit logging and alerting with RSA enVision integrated into the incident management lifecycle

Security and compliance
- Compute: UCS RBAC; LDAP; vCenter Administrator group; RADIUS or TACACS+
- Storage: Authentication with LDAP or Active Directory; VNX user account roles; VNX and RSA enVision; IP filtering
- Network: ASA firewalls; Cisco Intrusion Prevention System (IPS); Cisco Application Control Engine; port security; ACLs
- Security technologies: Lifecycle and reporting of automated and non-automated control compliance with RSA Archer eGRC; regulatory logging and auditing requirements met with RSA enVision

Availability and data protection
- Compute: Cisco UCS high availability (dual fabric interconnects); fabric interconnect clustering; service profile dynamic mobility; VMware vSphere High Availability; VMware vMotion; VMware vCenter Heartbeat; VMware vCloud Director cells; VMware vCenter Site Recovery Manager (SRM)
- Storage: High availability (link redundancy, hardware and node redundancy); local and remote data protection; EMC SnapSure; EMC SnapView; EMC MirrorView; EMC PowerPath Migration Enabler; EMC RecoverPoint
- Network: Cisco Nexus OS virtual port channels (vPC); Cisco Hot Standby Router Protocol; Cisco Nexus 1000V and MAC pinning; device/link redundancy; Nexus 1000V active/standby VSM
- Security technologies: Data classification questionnaires with RSA Archer eGRC; real-time correlation and alerting through integration of systems with RSA enVision

Tenant management and control
- Compute: VMware vCloud Director
- Storage: VMware vCloud Director
- Network: VMware vCloud Director; RSA enVision
- Security technologies: Tenant visibility into their security and compliance posture through discrete instances of RSA Archer eGRC; instances of RSA enVision to address specific tenant requirements and regulatory needs

Service provider management and control
- Compute: VMware vCenter; Cisco UCS Manager; VMware vCloud Director; VMware vShield Manager; VMware vCenter Chargeback; Cisco Nexus 1000V
- Storage: EMC Unisphere; EMC Ionix UIM/P
- Network: Cisco Data Center Network Manager (DCNM); Cisco Fabric Manager (FM); EMC Ionix Unified Infrastructure Manager
- Security technologies: Provider governance and insight over the entire security and compliance posture with RSA Archer eGRC; centralized logging and alerting to maximize efficiencies with RSA enVision

Next steps
To learn more about this and other solutions, contact a VCE representative or visit www.vce.com.


Acronym glossary
The following defines the acronyms used throughout this guide.

ABE: Access-based enumeration
ACE: Application Control Engine
ACL: Access control list
ACS: Access Control Server
AD: Active Directory
AMP: Advanced Management Pod
API: Application programming interface
CDP: Continuous data protection
CHAP: Challenge Handshake Authentication Protocol
CLI: Command-line interface
CNA: Converged network adapter
CoS: Class of service
CRR: Continuous remote replication
DR: Disaster recovery
DRS: Distributed Resource Scheduler
EFD: Enterprise flash drive
ERSPAN: Encapsulated Remote Switched Port Analyzer
FAST: Fully Automated Storage Tiering
FC: Fibre Channel
FCoE: Fibre Channel over Ethernet
FWSM: Firewall Services Module
GbE: Gigabit Ethernet
HA: High availability
HBA: Host bus adapter
HSRP: Hot Standby Router Protocol
IaaS: Infrastructure as a service
IDS: Intrusion detection system
IPS: Intrusion prevention system
IPsec: Internet Protocol Security
LACP: Link Aggregation Control Protocol
LUN: Logical unit number
MAC: Media access control
NAM: Network Analysis Module
NAT: Network address translation
NDMP: Network Data Management Protocol
NPV: N port virtualization
NTP: Network Time Protocol
PAgP: Port Aggregation Protocol
PACL: Port access control list
PCI-DSS: Payment Card Industry Data Security Standard
PPME: PowerPath Migration Enabler
QoS: Quality of service
RACL: Router access control list
RBAC: Role-based access control
SAN: Storage area network
SLA: Service level agreement
SPOF: Single point of failure
SRM: Site Recovery Manager
SSH: Secure Shell
SSL: Secure Sockets Layer
TMT: Trusted multi-tenancy
UIM/P: Unified Infrastructure Manager/Provisioning
UCS: Unified Computing System
UQM: Unisphere Quality of Service Manager
VACL: VLAN access control list
vCD: vCloud Director
vDC: Virtual data center
VDC: Virtual device context
VDM: Virtual Data Mover
VEM: Virtual Ethernet Module
vHBA: Virtual host bus adapter
VIC: Virtual interface card
VIP: Virtual IP
VLAN: Virtual local area network
VM: Virtual machine
VMDK: Virtual machine disk
VMFS: Virtual machine file system
vNIC: Virtual network interface card
vPC: Virtual port channel
VRF: Virtual routing and forwarding
VSAN: Virtual storage area network
vSM: vShield Manager
VSM: Virtual Supervisor Module
WAF: Web application firewall

ABOUT VCE
VCE, formed by Cisco and EMC with investments from VMware and Intel, accelerates the adoption of converged infrastructure and
cloud-based computing models that dramatically reduce the cost of IT while improving time to market for our customers. VCE,
through the Vblock Systems, delivers the industry's only fully integrated and fully virtualized cloud infrastructure system. VCE
solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and
application development environments, allowing customers to focus on business innovation instead of integrating, validating, and
managing IT infrastructure.
For more information, go to www.vce.com.

Copyright 2013 VCE Company, LLC. All Rights Reserved. Vblock and the VCE logo are registered trademarks or trademarks of VCE Company, LLC and/or its
affiliates in the United States or other countries. All other trademarks used herein are the property of their respective owners.
