
Dell EMC

VxBlock™ and Vblock® Systems 740


Architecture Overview

Document revision 1.19


December 2018
Revision history
Date              Document revision    Description of changes

December 2018     1.19                 Added support for VMware DVS switch and VxBlock Central.
September 2018    1.18                 Editorial update.
August 2018       1.17                 Added support for AMP-3S.
April 2018        1.16                 Removed vCHA.
December 2017     1.15                 Added Cisco UCS B-Series M5 servers.
August 2017       1.14                 Added support for VMware vSphere 6.5.
August 2017       1.13                 Added support for VMAX 950F and 950FX storage arrays.
July 2017         1.12                 Added support for Symmetrix Remote Data Facility (SRDF).
March 2017        1.11                 Added support for the Cisco Nexus 93180YC-EX Switch.
February 2017     1.10                 Added the 256 GB cache option for VMAX 250F and VMAX 250FX.
December 2016     1.9                  Added support for AMP-2 on Cisco UCS C2x0 M4 servers with VMware vSphere 5.5.
November 2016     1.8                  • Added support for Dell EMC network attached storage (eNAS).
                                       • Added support for VMAX 250F and VMAX 250FX.
                                       • Removed IPI Appliance from elevations.
September 2016    1.7                  • Added support for AMP-2S and AMP enhancements.
                                       • Added support for the Cisco MDS 9396S Multilayer Fabric Switch.
August 2016       1.6                  Added support for Dell EMC embedded management (eMGMT).
July 2016         1.5                  Updated the VMAX configuration information to indicate support for a single engine.
April 2016        1.4                  • Moved physical planning information from this book to the Converged Systems Physical Planning Guide.
                                       • Added the VMAX All Flash option.
                                       • Added support for the Cisco Nexus 3172TQ Switch.
October 2015      1.3                  Added support for vSphere 6.0 with Cisco Nexus 1000V switches.
August 2015       1.2                  Added support for VxBlock Systems. Added support for vSphere 6.0 with VDS.
February 2015     1.1                  Updated Intelligent Physical Infrastructure appliance information.
December 2014     1.0                  Initial release.
Contents
Introduction................................................................................................................................................. 6

System overview.........................................................................................................................................7
Base configurations ...............................................................................................................................8
Scaling up compute resources.......................................................................................................11
Network topology..................................................................................................................................12

Compute layer...........................................................................................................................................14
Cisco UCS............................................................................................................................................14
Compute connectivity........................................................................................................................... 14
Cisco UCS fabric interconnects............................................................................................................15
Cisco Trusted Platform Module............................................................................................................ 15
Disjoint layer 2 configuration................................................................................................................ 15
Bare metal support policy.....................................................................................................................16

Storage layer............................................................................................................................................. 18
VMAX3 storage arrays......................................................................................................................... 19
VMAX3 storage array features.......................................................................................................22
VMAX All Flash storage arrays...................................................................................................... 22
Embedded Network Attached Storage...........................................................................................23
Symmetrix Remote Data Facility ...................................................................................................24

Network layer overview............................................................................................................................26


LAN layer..............................................................................................................................................26
Cisco Nexus 3064-T Switch - management networking................................................................ 26
Cisco Nexus 3172TQ Switch - management networking...............................................................27
Cisco Nexus 5548UP Switch......................................................................................................... 28
Cisco Nexus 5596UP Switch......................................................................................................... 28
Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switch - segregated networking.................. 29
SAN layer............................................................................................................................................. 30
Cisco MDS 9148S Multilayer Fabric Switch...................................................................................30
Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director..........31

Virtualization layer overview....................................................................................................................32


Virtualization components.................................................................................................................... 32
VMware vSphere Hypervisor ESXi.......................................................................................................32
VMware vCenter Server (VMware vSphere 6.5).................................................................................. 33
VMware vCenter Server (VMware vSphere 5.5 and 6.0)..................................................................... 35

Management..............................................................................................................................................38
VxBlock Central options....................................................................................................................... 38
VxBlock Central architecture................................................................................................................ 42
Datacenter architecture........................................................................................................................ 48
AMP overview...................................................................................................................................... 50
AMP hardware components...........................................................................................................51
Management software components (vSphere 5.5 and 6.0)........................................................... 52
Management software components (VMware vSphere 6.5)..........................................................53
AMP-2 management network connectivity.....................................................................................54
AMP-3S management network connectivity ................................................................................. 61

Sample configurations............................................................................................................................. 64
Sample VxBlock System 740 and Vblock System 740 with VMAX 400K............................................ 64
Sample VxBlock System 740 and Vblock System 740 with VMAX 200K............................................ 65
Sample VxBlock System 740 and Vblock System 740 with VMAX 100K............................................ 67

Additional references............................................................................................................................... 70
Virtualization components.................................................................................................................... 70
Compute components.......................................................................................................................... 70
Network components............................................................................................................................71
Storage components............................................................................................................................ 72

Introduction
This document describes the high-level design of the Converged System and the hardware and software
components.

In this document, the VxBlock System and Vblock System are referred to as Converged Systems.

Refer to the Glossary for a description of terms specific to Converged Systems.

System overview
Converged Systems are modular platforms with defined scale points that meet the higher performance
and availability requirements of business-critical applications.

SAN storage is used for deployments that involve large numbers of VMs and users, and provides
the following features:

• Multicontroller, scale-out architecture with consolidation and efficiency for the enterprise.

• Scaling of resources through common and fully redundant building blocks.

Local boot disks are optional and available only for bare metal blades.

Components

Converged Systems contain the following key hardware and software components:

Resource Components

Converged Systems Management Vision Intelligent Operations for Converged Systems or VxBlock Central for
VxBlock Systems.
The following options are available for VxBlock Central:
• The Base option provides the VxBlock Central user interface.
• The Advanced option adds VxBlock Central Orchestration, which provides:
— VxBlock Central Orchestration Services
— VxBlock Central Orchestration Workflows
• The Advanced Plus option adds VxBlock Central Operations and VxBlock Central Orchestration.

Virtualization and management • VMware vSphere Server Enterprise Plus


• VMware vSphere ESXi
• VMware vCenter Server
• VMware vSphere Web Client
• VMware Single Sign-On (SSO) Service
• Cisco UCS Servers for AMP-2 and AMP-3S
• PowerPath/VE
• Cisco UCS Manager
• Unisphere for VMAX
• Secure Remote Services (ESRS)
• PowerPath Management Appliance
• Cisco Data Center Network Manager (DCNM) for SAN

Compute • Cisco UCS 5108 Blade Server Chassis


• Cisco UCS B-Series M3 Blade Servers with Cisco UCS VIC 1240, optional
port expander or Cisco UCS VIC 1280
• Cisco UCS B-Series M4 or M5 Blade Servers with one of the following:
— Cisco UCS VIC 1340, with optional port expander


— Cisco UCS VIC 1380


• Cisco UCS 2204XP Fabric Extenders or Cisco UCS 2208XP Fabric
Extenders
• Cisco UCS 6248UP Fabric Interconnects or Cisco UCS 6296UP Fabric
Interconnects

Network • Cisco Nexus 5548UP Switches, Cisco Nexus 5596UP Switches, Cisco
Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches
• Cisco MDS 9148S, Cisco MDS 9396S 16G Multilayer Fabric Switches, or
Cisco MDS 9706 Multilayer Directors
• Cisco Nexus 3064-T Switches or Cisco Nexus 3172TQ Switches
• Optional Cisco Nexus 1000V Series Switches
• Optional VMware vSphere Distributed Switch (VDS) for VxBlock Systems
• Optional VMware NSX Virtual Networking for VxBlock Systems

Storage • VMAX 400K


• VMAX 200K
• VMAX 100K
• VMAX All Flash 950F and 950FX
• VMAX All Flash 850F and 850FX
• VMAX All Flash 450F and 450FX
• VMAX All Flash 250F and 250FX

Base configurations
Converged Systems have a base configuration that is a minimum set of compute and storage
components, and fixed network resources.

These components are integrated in one or more 28-inch 42 U cabinets. In the base configuration, you
can customize the following hardware aspects:

Hardware How it can be customized

Compute • Cisco UCS B-Series M4 and M5 Servers


• Up to 16 chassis per Cisco UCS domain
• Up to 4 Cisco UCS domains (4 pairs of FIs)
— Supports up to 32 double-height Cisco UCS blade servers per domain
— Supports up to 64 full-width Cisco UCS blade servers per domain
— Supports up to 128 half-width Cisco UCS blade servers per domain

Edge servers (with optional VMware NSX): Refer to the Dell EMC for VMware NSX Architecture
Overview.

Network • 1 pair of Cisco MDS 9148S Multilayer Fabric Switches, or one pair of Cisco
MDS 9396S Multilayer Fabric Switches, or one pair of Cisco MDS 9706
Multilayer Directors
• One pair of Cisco Nexus 93180YC-EX Switches or one pair of Cisco Nexus
9396PX Switches
• 1 pair of Cisco Nexus 3172TQ Switches

Storage Supports 2.5-inch drives, 3.5-inch drives, and both 2.5 and 3.5 inch drives
(VMAX 400K, VMAX 200K, and VMAX 100K only)
• VMAX 400K
— Contains 1–8 engines
— Contains a maximum of 256 front-end ports
— Supports 10–5760 drives
• VMAX 200K
— Contains 1–4 engines
— Contains a maximum of 128 front-end ports
— Supports 10–2880 drives
• VMAX 100K
— Contains 1–2 engines
— Contains a maximum of 128 front-end ports
— Supports 10–1440 drives

Supports 2.5-inch drives (VMAX All Flash models only)


• VMAX All Flash 950F and 950FX
— Contains 1–8 engines
— Contains a maximum of 192 front-end ports
— Supports 17–1920 drives
• VMAX All Flash 850F and 850FX
— Contains 1–8 engines
— Contains a maximum of 192 front-end ports
— Supports 17–1920 drives
• VMAX All Flash 450F and 450FX
— Contains 1–4 engines
— Contains a maximum of 96 front-end ports
— Supports 17–960 drives
• VMAX All Flash 250F and 250FX
— Contains 1 or 2 engines
— Contains a maximum of 64 front-end ports
— Supports 17–100 drives
A single-cabinet configuration of the VMAX All Flash 250F and 250FX is
available with a single engine. Expansion of the storage array requires the
addition of a storage technology extension.


Storage policies Policy levels are applied at the storage group level of array masking.
Array storage is organized by the following service level objectives (VMAX
400K, VMAX 200K, and VMAX 100K only):
• Optimized (default) - system optimized
• Bronze - 12 ms response time, emulating 7.2 K drive performance
• Silver - 8 ms response time, emulating 10K RPM drives
• Gold - 5 ms response time, emulating 15 K RPM drives
• Platinum - 3 ms response time, emulating 15 K RPM drives and enterprise
flash drive (EFD)
• Diamond - < 1 ms response time, emulating EFD
Array storage is organized by the following service level objective (VMAX All
Flash models only):
• Diamond - <1 ms response time, emulating EFD

Supported disk drives VMAX 400K, 200K, and 100K


Disk drive maximums:
• VMAX 400K = 5760
• VMAX 200K = 2880
• VMAX 100K = 1440
Tier 1 drives:
• Solid state: 200/400/800/1600 GB
Tier 2 drives:
• 15 K RPM: 300 GB
• 10K RPM: 300/600/1200 GB
Tier 3 drives:
• 7.2 K RPM: 2/4 TB

VMAX All Flash models


Disk drive maximums:
• VMAX All Flash 950F and 950FX = 1920
• VMAX All Flash 850F and 850FX = 1920
• VMAX All Flash 450F and 450FX = 960
• VMAX All Flash 250F and 250FX = 100
Tier 1 drives:
• Solid state: 960/1920/3840 GB (VMAX All-Flash arrays)
• Solid state: 960/1920/3840/7680/15360 GB (VMAX All Flash 250F and
250FX and 950F and 950FX models only)

Management hardware options • AMP-2 is available in multiple configurations that use their own resources
to run workloads without consuming resources on the Converged System.
• AMP-3S is available in a single configuration that uses its own resources to
run workloads without consuming resources on the Converged System.

Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the
compute and storage arrays in the Converged System. All components have N+N or N+1 redundancy.

Depending upon the configuration, the following maximums apply:

Cisco UCS 62xxUP Fabric Interconnects
The maximum number of Cisco UCS Blade Server Chassis with 4 Cisco UCS domains is:
• 32 for Cisco UCS 6248UP Fabric Interconnects
• 64 for Cisco UCS 6296UP Fabric Interconnects
Maximum blades are as follows:
• Half width = 512
• Full width = 256
• Double height = 128
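
The blade maximums above follow directly from the per-chassis and per-domain limits listed in the base configuration table. The following Python sketch is illustrative only; the constants are taken from this section, and the script simply reproduces the arithmetic:

# Illustrative only: derive the Converged System blade maximums from the
# per-chassis and per-domain limits stated in this section.
BLADES_PER_CHASSIS = {"half_width": 8, "full_width": 4, "double_height": 2}
MAX_CHASSIS_PER_DOMAIN = 16
MAX_DOMAINS = 4

def max_blades(form_factor: str) -> int:
    """Maximum blade count for a fully populated system of one form factor."""
    return MAX_DOMAINS * MAX_CHASSIS_PER_DOMAIN * BLADES_PER_CHASSIS[form_factor]

for form in ("half_width", "full_width", "double_height"):
    print(form, max_blades(form))
# Prints 512 half-width, 256 full-width, and 128 double-height blades,
# matching the maximums in the table above.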

Scaling up compute resources


To scale up compute resources, you can add blade packs and chassis activation kits when Converged
Systems are built or after they are deployed.

Blade packs

Cisco UCS blades are sold in packs of two identical blades.

The base configuration of Converged Systems includes two blade packs. The maximum number of blade
packs depends on the selected scale point.

Each blade type must have a minimum of two blade packs as a base configuration and can be increased
in single blade pack increments thereafter. Each blade pack is added along with license packs for the
following software:

• Cisco UCS Manager (UCSM)

• VMware vSphere ESXi

• Cisco Nexus 1000V Series Switch (Cisco Nexus 1000V Advanced Edition only)

• PowerPath/VE

License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switch, and
PowerPath/VE are not available for bare metal blades.

Additional chassis

The power supplies and fabric extenders for all chassis are pre-populated and cabled, and all required
Twinax cables and transceivers are populated. However, in base Converged Systems configurations,
there is a minimum of two Cisco UCS 5108 Blade Server Chassis. There are no unpopulated server
chassis unless they are ordered that way. This limited licensing reduces the entry cost for Converged
Systems.

As more blades are added and additional chassis are required, additional chassis are added
automatically to an order. The kit contains software licenses to enable additional fabric interconnect ports.

Only enough port licenses are ordered to cover the minimum number of chassis needed to hold the
blades. Additional chassis can be added up front to allow for flexibility in the field or to spread the
blades across a larger number of chassis initially.
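
As a rough planning aid, the minimum chassis count for a given blade order follows from the per-chassis capacities noted earlier. The following Python sketch is illustrative only and is not a sizing tool; the blade count and form factor are example inputs:

import math

# Illustrative only: estimate the minimum number of Cisco UCS 5108 Blade
# Server Chassis needed to hold an ordered blade count, using the per-chassis
# capacities described in this overview.
BLADES_PER_CHASSIS = {"half_width": 8, "full_width": 4, "double_height": 2}

def minimum_chassis(blade_count: int, form_factor: str) -> int:
    """Smallest chassis count that can hold blade_count blades of one form factor."""
    return math.ceil(blade_count / BLADES_PER_CHASSIS[form_factor])

# Example: 20 half-width blades (10 blade packs of 2) need at least 3 chassis.
print(minimum_chassis(20, "half_width"))  # 3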
Network topology
In the network topology for Converged Systems, LAN and SAN connectivity is segregated into separate
Cisco Nexus switches.

LAN switching uses the Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switch and the Cisco Nexus
55xxUP Switch. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, the Cisco MDS
9396S 16G Multilayer Fabric Switch, or the Cisco MDS 9706 Multilayer Director.

The optional VMware NSX feature uses the Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX switches
for LAN switching. For more information, refer to the Converged Systems for VMware NSX Architecture
Overview.

The compute layer connects to both the Ethernet and FC components of the network layer. Cisco UCS
fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through 10 GbE port
channels and to the Cisco MDS switches through port channels made up of multiple 8 Gb links.

The front-end IO modules in the storage array connect to the Cisco MDS switches in the network layer
over 16 Gb FC links.

The following illustration shows a network block storage configuration for Converged Systems:

SAN boot storage configuration

VMware vSphere ESXi hosts always boot over the FC SAN from a 10 GB boot LUN (VMware vSphere
5.5 and 6.0), which contains the hypervisor's locker for persistent storage of logs and other diagnostic
files. The remainder of the storage can be presented as virtual machine file system (VMFS) data stores
or as raw device mappings (RDMs).

VMware vSphere ESXi hosts always boot over the FC SAN from a 15 GB boot LUN (VMware vSphere
6.5), which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The
remainder of the storage can be presented as VMFS data stores or as raw device mappings.

Compute layer
Cisco UCS B-Series Blade Servers installed in the Cisco UCS server chassis provide computing power
within Converged Systems.

FEX within the Cisco UCS server chassis connect to FIs over converged Ethernet. Up to eight 10-GbE
ports on each FEX connect northbound to the FIs regardless of the number of blades in the server
chassis. These connections carry IP and FC traffic.

Dell EMC has reserved some of the FI ports to connect to upstream access switches within the
Converged System. These connections are formed into a port channel to the Cisco Nexus switches and
carry IP traffic destined for the external network links. In a unified storage configuration, this port channel
can also carry NAS traffic to the storage layer.

Each FI also has multiple ports reserved by Dell EMC for FC ports. These ports connect to Cisco SAN
switches. These connections carry FC traffic between the compute layer and the storage layer. SAN port
channels carrying FC traffic are configured between the FIs and upstream Cisco MDS switches.

Cisco UCS
The Cisco UCS data center platform unites compute, network, and storage access. Optimized for
virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb/s Ethernet unified network fabric
with enterprise-class, x86-based Cisco UCS B-Series Servers.

Converged Systems contain a number of Cisco UCS 5108 Server Chassis. Each chassis can contain up
to eight half-width Cisco UCS B-Series M4 and M5 Blade Servers, four full-width, or two double-height
blades. The full-width, double-height blades must be installed at the bottom of the chassis.

In a Converged System, each chassis also includes Cisco UCS fabric extenders and Cisco UCS B-Series
Converged Network Adapters.

Converged Systems powered by Cisco UCS offer the following features:

• Built-in redundancy for high availability

• Hot-swappable components for serviceability, upgrade, or expansion

• Fewer physical components than in a comparable system built piece by piece

• Reduced cabling

• Improved energy efficiency over traditional blade server chassis

Compute connectivity
Cisco UCS B-Series Blades installed in the Cisco UCS chassis, along with C-series Compute Technology
Extension Servers, provide computing power in a Converged System.

Fabric extenders (FEX) in the Cisco UCS chassis connect to Cisco fabric interconnects (FIs) over
converged Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to
the fabric interconnects, regardless of the number of blades in the chassis. These connections carry IP
and storage traffic.

Dell EMC uses multiple ports for each fabric interconnect for 8 Gbps FC. These ports connect to Cisco
MDS storage switches and the connections carry FC traffic between the compute layer and the storage
layer. These connections also enable SAN booting of the Cisco UCS blades.

Cisco UCS fabric interconnects


Cisco UCS fabric interconnects provide network connectivity and management capability to the Cisco
UCS blades and chassis.

Northbound, the FIs connect directly to Cisco network switches for Ethernet access into the external
network. They also connect directly to Cisco MDS switches for FC access of the attached Converged
System storage. These connections are currently 8 Gbps FC. VMAX storage arrays have 16 Gbps FC
connections into the Cisco MDS switches.

VMware NSX

This VMware NSX feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the required
port count for VMware NSX external connectivity (edges).

Cisco Trusted Platform Module


Cisco Trusted Platform Module (TPM) provides authentication and attestation services that provide safer
computing in all environments.

Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption
keys that are used to authenticate remote and local server sessions. Cisco TPM is available by default as
a component in the Cisco UCS B- and C-Series blade servers, and is shipped disabled.

Only the Cisco TPM hardware is supported; Cisco TPM functionality is not supported. Because making
effective use of the Cisco TPM involves the use of a software stack from a vendor with significant
experience in trusted computing, defer to the software stack vendor for configuration and operational
considerations relating to the Cisco TPM.

Related information

www.cisco.com

Disjoint Layer 2 configuration


Traffic is split between two or more networks at the FI in a Disjoint Layer 2 configuration to support two or
more discrete Ethernet clouds.

Cisco UCS servers connect to two different clouds. Upstream Disjoint Layer 2 networks enable two or
more Ethernet clouds that never connect to be accessed by VMs located in the same Cisco UCS domain.

The following illustration provides an example implementation of Disjoint Layer 2 networking into a Cisco
UCS domain:

vPCs 101 and 102 are production uplinks that connect to the network layer of the Converged System.
vPCs 105 and 106 are external uplinks that connect to other switches. If using Ethernet performance port
channels (103 and 104, by default), port channels 101 through 104 are assigned to the same VLANs.
Disjoint Layer 2 network connectivity can be configured with an individual uplink on each FI.
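
The defining property of the example above is that the two Ethernet clouds never share a VLAN, even though both are reachable from the same Cisco UCS domain. The Python sketch below is purely illustrative; the port channel roles (101/102 production, 105/106 external) come from this section, while the VLAN IDs are hypothetical placeholders:

# Illustrative only: in a Disjoint Layer 2 design, each VLAN is carried by
# exactly one upstream Ethernet cloud.  VLAN IDs below are hypothetical.
clouds = {
    "production": {"port_channels": [101, 102], "vlans": {100, 110, 120}},
    "external":   {"port_channels": [105, 106], "vlans": {200, 210}},
}

def clouds_are_disjoint(clouds: dict) -> bool:
    """Return True if no VLAN appears in more than one Ethernet cloud."""
    seen = set()
    for cloud in clouds.values():
        if seen & cloud["vlans"]:
            return False
        seen |= cloud["vlans"]
    return True

print(clouds_are_disjoint(clouds))  # True: the two clouds never connect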

Bare metal support policy


Since many applications cannot be virtualized due to technical and commercial reasons, Converged
Systems support bare metal deployments, such as non-virtualized operating systems and applications.

While it is possible for Converged Systems to support these workloads (with the following caveats), due to
the nature of bare metal deployments, Dell EMC can only provide reasonable effort support for systems
that comply with the following requirements:

• Converged Systems contain only Dell EMC published, tested, and validated hardware and
software components. The Release Certification Matrix provides a list of the certified versions of
components for Converged Systems.

• The operating systems used on bare metal deployments for compute components must comply
with the published hardware and software compatibility guides from Cisco and Dell EMC.

• For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on),
those hypervisor technologies are not supported by Dell EMC. Dell EMC support is provided only
on VMware hypervisors.

Dell EMC reasonable effort support includes Dell EMC acceptance of customer calls, a determination of
whether a Converged System is operating correctly, and assistance in problem resolution to the extent
possible.

Dell EMC is unable to reproduce problems or provide support on the operating systems and applications
installed on bare metal deployments. In addition, Dell EMC does not provide updates to or test those
operating systems or applications. The OEM support vendor should be contacted directly for issues and
patches related to those operating systems and applications.

Storage layer
VMAX3 storage arrays are high-end storage systems built for the virtual data center.

Architected for reliability, availability, and scalability, these storage arrays use specialized engines, each
of which includes two redundant director modules providing parallel access and replicated copies of all
critical data.

The following table shows the software components that are supported by the VMAX3 storage array:

Virtual Provisioning (Virtual Pools): Virtual Provisioning, based on thin provisioning, is the ability to
present an application with more capacity than is physically allocated in the storage array. The physical
storage is allocated to the application on demand as it is needed from the shared pool of capacity. Each
disk group in the VMAX3 (all-inclusive) is carved into a separate virtual pool.

Storage Resource Pool (SRP): An SRP is a collection of virtual pools that make up a FAST domain. A
virtual pool can only be included in one SRP. Each VMAX initially contains a single SRP that contains all
virtual pools in the array.

Fully Automated Storage Tiering for Virtual Pools (FAST VP): FAST is employed to migrate sub-LUN
chunks of data between the various virtual pools in the SRP. Tiering is automatically optimized by
dynamically allocating and relocating application workloads based on the defined service level objective
(SLO).

SLO: An SLO defines the ideal performance operating range of an application. Each SLO contains an
expected maximum response time range. An SLO uses multiple virtual pools in the SRP to achieve its
response time objective. SLOs are predefined with the array and are not customizable.

Embedded Network Attached Storage (eNAS): eNAS consists of virtual instances of the VNX NAS
hardware incorporated into the HYPERMAX OS architecture. The software X-Blades and control stations
run on VMs embedded in a VMAX engine.

Embedded Management (eMGMT): eMGMT is the management model for VMAX3, which is a
combination of Solutions Enabler and Unisphere for VMAX running locally on the VMAX using virtual
servers.

HYPERMAX OS: The storage operating environment for VMAX3, delivering performance, array tiering,
availability, and data integrity.

Unisphere for VMAX: Browser-based GUI for creating, managing, and monitoring devices on storage
arrays.

VMAX All Flash Inline Compression: Inline Compression is only for the VMAX All Flash models, and
enables customers to compress data for increased effective capacity.

Symmetrix Remote Data Facility (SRDF): A VMAX native replication technology that enables a VMAX
system to copy data to one or more VMAX systems.

D@RE: Data at Rest Encryption.

VMAX3 storage arrays
VMAX3 storage arrays have characteristics that are common across all models.

VMAX3 hybrid storage arrays (400K, 200K, and 100K) include the following features:

• Two 16 Gb multimode (MM), FC, four-port IO modules per director (four per engine) - two slots for
additional front-end connectivity are available per director.

• Minimum of five drives with a maximum of 360 3.5 inch drives or 720 2.5 inch drives per engine.

• Option of 2.5 inch, 3.5 inch or a combination of 2.5 inch and 3.5 inch drives.

• Racks may be dispersed, however, each rack must be within 25 meters of the first rack.

• Number of supported Cisco UCS domains and servers depends on the number of array engines.

VMAX3 All Flash storage arrays include the following features:

• Two 16 Gb multimode (MM), FC, four-port IO modules per director (four per engine). The VMAX
All Flash 250F and 250FX have two additional slots per director for front-end connectivity. All
other VMAX All Flash arrays have one slot for front-end connectivity.

• Minimum of 17 drives per V-Brick. VMAX 250F and 250FX have a minimum of 9 drives per V-
Brick.

• Maximum of 240 drives per V-Brick. VMAX 250F and 250FX have a maximum of 50 drives per V-
Brick.

• Racks may be dispersed, however, each rack must be within 25 meters of the first rack.

• Number of supported Cisco UCS domains and servers depends on the number of array engines.

Only 2.5 inch drives are supported for VMAX All Flash models.
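
The hybrid array drive maximums quoted in the base configuration section are consistent with the 720 2.5-inch drives-per-engine limit above. The following Python sketch is illustrative only and simply cross-checks that arithmetic with values taken from this document:

# Illustrative cross-check: maximum 2.5-inch drive counts for the VMAX3 hybrid
# arrays equal the engine count multiplied by 720 drives per engine.
DRIVES_PER_ENGINE_2_5_INCH = 720
MAX_ENGINES = {"VMAX 400K": 8, "VMAX 200K": 4, "VMAX 100K": 2}

for model, engines in MAX_ENGINES.items():
    print(model, engines * DRIVES_PER_ENGINE_2_5_INCH)
# VMAX 400K 5760, VMAX 200K 2880, VMAX 100K 1440, matching the drive
# maximums listed in the base configuration tables.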

The following illustration shows the interconnection of the VMAX3 in Converged Systems:

The following table shows the engines with the maximum blades:

Domain count depends on LAN switch connectivity.

Engines    Maximum blades (half-width)
1          128
2          256
3          384
4          512
5          512
6          512
7          512
8          512

Supported drives

The following table shows the supported drives and RAID protection levels for VMAX3 hybrid models:

Drive                            Flash                 SAS                   NL-SAS
3.5 inch drive capacity/speed    200 GB                300 GB - 10K          2 TB - 7.2K
                                 400 GB                300 GB - 15K          4 TB - 7.2K
                                 800 GB                600 GB - 10K
                                                       1200 GB - 10K
2.5 inch drive capacity/speed    200 GB                300 GB - 10K
                                 400 GB                300 GB - 15K
                                 800 GB                600 GB - 10K
                                                       1200 GB - 10K
RAID protection                  R5 (3+1) (default)    Mirrored (default)    R6 (6+2) (default)
                                 R5 (7+1)              R5 (3+1)              R6 (14+2)
                                                       R5 (7+1)

The following table shows the supported drives and RAID protection levels for VMAX All Flash models:

                   250F/FX      450F/FX      850F/FX      950F/FX
Flash drives       960 GB       960 GB       960 GB       960 GB
                   1.92 TB      1.92 TB      1.92 TB      1.92 TB
                   3.84 TB      3.84 TB      3.84 TB      3.84 TB
                   7.68 TB                                7.68 TB
                   15.36 TB                               15.36 TB
RAID protection    R5 (3+1)     R5 (7+1)     R5 (7+1)     R5 (7+1)
                   R5 (7+1)     R6 (14+2)    R6 (14+2)    R6 (14+2)
                   R6 (6+2)

The following table shows the supported drives on VMAX:

Drive types              VMAX3 (VMAX 100K, 200K, 400K)    VMAX AFA (VMAX 250F, 450F, 850F, 950F)
SSD 2.5 in.              960 GB, 1.9 TB                   960 GB, 1.9, 3.8, 7.6, and 15.3 TB
SSD 3.5 in.              Not offered                      N/A
15K 2.5 in.              Not offered                      N/A
15K 3.5 in.              Not offered                      N/A
10K 2.5 in.              600, 1200 GB                     N/A
10K 3.5 in.              Not offered                      N/A
7.2K 3.5 in. (NL-SAS)    Not offered                      N/A

VMAX3 storage array features


Each VMAX 100K, 200K, and 400K storage array has minimum and maximum configurations, engines
with processors and cache options.

VMAX 400K storage arrays

VMAX 400K storage arrays include the following features:

• Minimum configuration contains one engine and the maximum contains eight engines

• VMAX 400K engines contain Intel 2.7 GHz Ivy Bridge processors with 48 cores

• Available cache options are: 512 GB, 1024 GB, or 2048 GB (per engine)

VMAX 200K storage arrays

VMAX 200K storage arrays include the following features:

• Minimum configuration contains one engine, and the maximum contains four engines

• VMAX 200K engines contain Intel 2.6 GHz Ivy Bridge processors with 32 cores

• Available cache options are: 512 GB, 1024 GB, or 2048 GB (per engine)

VMAX 100K storage arrays

VMAX 100K contains the following features:

• Minimum configuration contains a single engine

• Maximum configuration contains two engines

• VMAX 100K engines contain Intel 2.1 GHz Ivy Bridge processors with 24 cores

• Available cache options are: 512 GB and 1024 GB per engine

VMAX All Flash storage arrays


Availability of the VMAX All Flash option is based on drive capacity and available software of the VMAX
All Flash arrays.

Overview

VMAX All Flash models include the following features:

                                250F/FX                   450F/FX                   850F/FX                   950F/FX
Number of V-Bricks              1-2                       1-4                       1-8                       1-8
Cache per V-Brick               512 GB, 1024 GB,          1024 GB, 2048 GB          1024 GB, 2048 GB          1024 GB, 2048 GB
                                or 2048 GB
Initial capacity per V-Brick    11.3 TBu                  52.6 TBu                  52.6 TBu                  56.6 TBu
Incremental capacity            11.3 TBu                  13.2 TBu                  13.2 TBu                  13.2 TBu
CPU                             Intel Xeon E5-2650-v4,    Intel Xeon E5-2650-v2,    Intel Xeon E5-2697-v2,    Intel Xeon E5-2697-v4,
                                2.2 GHz 12 core           2.6 GHz 8 core            2.7 GHz 12 core           2.3 GHz 18 core

Note the following best practices and options:

• All V-Bricks in the array must contain identical cache types.

• A single cabinet Vblock configuration of the VMAX All Flash 250F and 250FX is available. It has a
single engine and expansion of the storage array requires the addition of a Converged
Technology Extension for Dell EMC Storage.

• By default data devices have compression enabled at the Storage Group level. Disable
compression if the application is not designed for compression.

The following table describes the VMAX All Flash versions:

Software package: VMAX All Flash 950F, 850F, 450F, 250F
Base software (included):
• HYPERMAX operating system
• Thin provisioning
• QoS or host I/O limits
• eNAS
• Embedded Management
• Inline Backend Compression
Optional software:
• D@RE (disabled by default)
• SRDF (disabled by default)

Software package: VMAX All Flash 950FX, 850FX, 450FX, 250FX
Base software (included):
• HYPERMAX operating system
• Thin provisioning
• QoS or host I/O limits
• eNAS
• Embedded Management
• Inline Backend Compression
• D@RE
Optional software: By default, all software is included.

Embedded Network Attached Storage


VMAX3 arrays support Unified storage by consolidating Block and File data into a single array.

Embedded Network Attached Storage (eNAS) uses the hypervisor to create and run a set of VMs on
VMAX3 controllers. The VMs host software X-Blades and control stations, which are two major elements
of eNAS. The virtual elements are distributed across the VMAX3 system to evenly consume VMAX3
resources for both performance and capacity.

The following table shows the eNAS elements for the VMAX3 and VMAX All Flash models:

VMAX 100K
• Maximum number of X-Blades: 2 (1 active + 1 standby)
• Usable capacity: 256 TBu
• Maximum eNAS I/O modules per software X-Blade: 2
• Supported I/O modules: 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)

VMAX 200K
• Maximum number of X-Blades: 4 (3 active + 1 standby)
• Usable capacity: 1536 TBu
• Maximum eNAS I/O modules per software X-Blade: 2*
• Supported I/O modules: 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)

VMAX 400K
• Maximum number of X-Blades: 8 (7 active + 1 standby)
• Usable capacity: 3584 TBu
• Maximum eNAS I/O modules per software X-Blade: 2*
• Supported I/O modules: 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)

250F/250FX (requires 2 V-Bricks)
• Maximum number of X-Blades: 4 (3 active + 1 standby)
• Usable capacity: 1.1 PBu
• Maximum eNAS I/O modules per software X-Blade: 2*
• Supported I/O modules: 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)

450F/450FX (requires a minimum of 2 V-Bricks)
• Maximum number of X-Blades: 4 (3 active + 1 standby)
• Usable capacity: 1.5 PBu
• Maximum eNAS I/O modules per software X-Blade: 1*
• Supported I/O modules: 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 1 GbE Cu, 4 x 8 Gb FC Opt (NDMP)

850F/850FX (requires a minimum of 4 V-Bricks)
• Maximum number of X-Blades: 8 (7 active + 1 standby)
• Usable capacity: 3.5 PBu
• Maximum eNAS I/O modules per software X-Blade: 1*
• Supported I/O modules: 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 1 GbE Cu, 4 x 8 Gb FC Opt (NDMP)

950F/950FX (requires a minimum of 4 V-Bricks)
• Maximum number of X-Blades: 8 (7 active + 1 standby)
• Usable capacity: 3.5 PBu
• Maximum eNAS I/O modules per software X-Blade: 1*
• Supported I/O modules: 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)

*Converged Systems require at least two of the available slots to have 16 Gb FC IOMs per director for
host connectivity.

Symmetrix Remote Data Facility


VMAX3 arrays support remote replication using Symmetrix Remote Data Facility (SRDF).

SRDF requires:

• CPU cores dedicated to the RDF protocol (CPU Slice running RDF emulation)

• Dedicated front-end ports assigned to the RDF over FC (RF) or RDF over Ethernet (RE)
emulation

SRDF ports must be on dedicated IO Modules (SLICs). Front-end ports used for host
connectivity should not share 16 Gb FC SLICs with SRDF.

SRDF is not configured in the factory. Dell EMC Professional Services configures SRDF after the VxBlock
System is installed at the customer site.

Converged Systems support the following SRDF connectivity models. All participating VMAX arrays
connect as follows:

• To the customer SAN/LAN switches (default connectivity)

• To the same VxBlock SAN/LAN switches

• To the same fabric technology SAN/LAN switches

VxBlock System SAN switches should never connect directly to a customer SAN. It is
permissible for a VxBlock System SAN fabric to connect to a different VxBlock System for Data
Protection purposes. Refer to the Integrated Data Protection Design Guide for additional
information.

The following table shows the supported SRDF I/O modules for the VMAX3 and VMAX All Flash storage
arrays:

I/O module               VMAX 400K/200K/100K    VMAX 850F/FX, 450F/FX    VMAX 950F/FX, 250F/FX
4 x 16 Gb FC             Yes                    Yes                      Yes
4 x 8 Gb FC              Yes                    Yes                      Yes
4 x 10 Gb Ethernet       No                     Yes                      Yes
2 x 10 Gb Ethernet       Yes                    Yes                      No
2 x 1 Gb Ethernet Cu     Yes                    Yes                      Yes
2 x 1 Gb Ethernet Opt    Yes                    Yes                      Yes
Network layer
LAN and SAN make up the network layer.

LAN layer
The LAN layer includes a pair of Cisco Nexus switches.

The Converged System includes a pair of Cisco Nexus 55xxUP, Cisco Nexus 3172TQ, and Cisco Nexus
93180YC-EX or Cisco Nexus 9396PX Switches.

The Cisco Nexus switches provide 10-GbE connectivity:

• Between internal components

• To the site network

• To the AMP-2 through redundant connections between AMP-2 and the Cisco Nexus 9000 Series
Switches

• To the AMP-3S through redundant connections between AMP-3S and the Cisco Nexus 9000
Series Switches. AMP-3S with VMware vSphere 6.5 is supported only with the Cisco Nexus
93180YC-EX.

The following table shows LAN layer components:

Cisco Nexus 93180YC-EX Switch
• 1 U appliance
• Supports 48 fixed 10/25-Gbps SFP+ ports and 6 fixed 40/100-Gbps QSFP+ ports
• No expansion modules available

Cisco Nexus 9396PX Switch
• 2 U appliance
• Supports 48 fixed, 10-Gbps SFP+ ports and 12 fixed, 40-Gbps QSFP+ ports
• No expansion modules available

Cisco Nexus 3172TQ Switch
• 1 U appliance
• Supports 48 fixed, 100 Mbps/1000 Mbps/10 Gbps twisted pair connectivity ports, and 6 fixed,
40-Gbps QSFP+ ports for the management layer of the Converged System

Cisco Nexus 3064-T Switch - management networking


The base Cisco Nexus 3064-T Switch provides 48 100 Mbps/1 GbE/10 GbE Base-T fixed ports and
4 QSFP+ ports to provide 40 GbE connections.

The following table shows core connectivity for the Cisco Nexus 3064-T Switch for management
networking and reflects the AMP-2 HA base for two servers:

Feature                                              Used ports    Port speeds      Media
Management uplinks from fabric interconnect (FI)     2             1 GbE            Cat6
Uplinks to customer core                             2             Up to 10 G       Cat6
vPC peer links                                       2 QSFP+       10 GbE/40 GbE    Cat6/MMF 50µ/125 LC/LC
Uplinks to management                                1             1 GbE            Cat6
Cisco Nexus management ports                         1             1 GbE            Cat6
Cisco MDS management ports                           2             1 GbE            Cat6
VMAX3 management ports                               4             1 GbE            Cat6
AMP-2-CIMC ports                                     1             1 GbE            Cat6
AMP-2 ports                                          2             1 GbE            Cat6
AMP-2-10G ports                                      2             10 GbE           Cat6
VNXe management ports                                1             1 GbE            Cat6
VNXe_NAS ports                                       4             10 GbE           Cat6
Gateways                                             14            100 Mb/1 GbE     Cat6

The remaining ports in the Cisco Nexus 3064-T Switch provide support for additional domains and their
necessary management connections.

Cisco Nexus 3172TQ Switch - management networking


Each Cisco Nexus 3172TQ Switch provides 48 100 Mbps/1000 Mbps/10 Gbps twisted pair connectivity
and six 40 GbE QSFP+ ports.

Cisco Nexus 3172TQ Switch on AMP

The following table shows core connectivity for the Cisco Nexus 3172TQ Switches for management
networking and reflects the base configurations for two servers and for six servers. The port count is
divided between the two switches.

Feature                                       Used ports    Port speeds      Media
Management uplinks from FI                    2             1 GbE            Cat6
Uplinks to customer core                      2             Up to 10 G       Cat6
vPC peer links                                2 QSFP+       10 GbE/40 GbE    Cat6/MMF 50µ/125 LC/LC
Uplinks to management                         1             1 GbE            Cat6
Cisco Nexus management ports                  2             1 GbE            Cat6
Cisco MDS management ports                    2             1 GbE            Cat6
VMAX3 management ports                        4
AMP-2 or AMP-3S CIMC ports                    16            1 GbE            Cat6
Dell EMC AMP-2S NAS/iSCSI ports               8             10 GbE           Cat6
Gateways                                      14            100 Mb/1 GbE     Cat6
Dell EMC AMP-3S Unity management ports        2             1 GbE            Cat6
Dell EMC AMP-3S Unity iSCSI ports             12            10 GbE           Cat6

The remaining ports in the Cisco Nexus 3172TQ Switch provide support for additional domains and their
necessary management connections.

Cisco Nexus 5548UP Switch


The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1 Gbps or 10 Gbps connectivity
for all Converged System production traffic.

The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module):

Feature                                          Used ports    Port speeds          Media
Uplinks from fabric interconnect (FI)            8             10 Gbps              Twinax
Uplinks to customer core                         8             Up to 10 Gbps        SFP+
Uplinks to other Cisco Nexus 55xxUP Switches     2             10 Gbps              Twinax
Uplinks to management                            3             10 Gbps              Twinax
Customer IP backup                               4             1 Gbps or 10 Gbps    SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, 28 additional ports
are available to provide additional network connectivity.

Cisco Nexus 5596UP Switch


The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity
for LAN traffic.

The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module):

Feature Used ports Port speeds Media

Uplinks from Cisco UCS fabric interconnect 8 10 Gbps Twinax


Uplinks to customer core 8 Up to 10 Gbps SFP+

Uplinks to other Cisco Nexus 55xxUP Switches 2 10 Gbps Twinax

Uplinks to management 2 10 Gbps Twinax

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the
following additional connectivity option:

Feature               Used ports    Port speeds          Media
Customer IP backup    4             1 Gbps or 10 Gbps    SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5596UP Switch, additional ports are
available to provide additional network connectivity.

Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch - segregated networking
The Cisco Nexus 93180YC-EX Switch provides 48 10/25 Gbps SFP+ ports and six 40/100 Gbps QSFP+
uplink ports. The Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps
connectivity and 12 40 Gbps QSFP+ ports.

The following table shows core connectivity for the Cisco Nexus 93180YC-EX Switch or Cisco Nexus
9396PX Switch with segregated networking for six servers:

Feature                        Used ports               Port speeds     Media
Uplinks from FI                8                        10 GbE          Twinax
Uplinks to customer core       8 (10 GbE)/2 (40 GbE)    Up to 40 GbE    SFP+/QSFP+
vPC peer links                 2                        40 GbE          Twinax
AMP-3S ESXi management**       12                       10 GbE          SFP+

**Only supported on Cisco Nexus 93180YC-EX.

The remaining ports in the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch provide
support for a combination of the following additional connectivity options:

Feature                                                    Available ports    Port speeds        Media
RecoverPoint WAN links (one per appliance pair)            4                  1 GbE              GE T SFP+
Customer IP backup                                         8                  1 GbE or 10 GbE    SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement     8                  10 GbE             Twinax

SAN layer
Two Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9706 Multilayer Directors, or Cisco MDS
9396S 16G Multilayer Fabric Switches make up two separate fabrics to provide 16 Gbps of FC
connectivity between the compute and storage layer components.

Connections from the storage components are over 16 Gbps connections.

With 10 Gbps connectivity, Cisco UCS FIs provide an FC port channel of four 8 Gbps connections (32
Gbps of bandwidth) to each fabric on the Cisco MDS 9148S Multilayer Fabric Switches. This can be
increased to eight connections for 64 Gbps of bandwidth. The Cisco MDS 9396S 16G Multilayer Fabric
Switch and Cisco MDS 9706 Multilayer Directors also support 16 connections for 128 Gbps of bandwidth
per fabric.
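
The port channel bandwidth figures above follow from multiplying the number of member links by the 8 Gbps link speed. A minimal Python sketch of that arithmetic, for illustration only:

# Illustrative arithmetic for the SAN port channel figures above.
FC_LINK_GBPS = 8

def port_channel_bandwidth_gbps(member_links: int) -> int:
    """Aggregate bandwidth of a SAN port channel built from 8 Gbps FC links."""
    return member_links * FC_LINK_GBPS

print(port_channel_bandwidth_gbps(4))   # 32 Gbps (base configuration)
print(port_channel_bandwidth_gbps(8))   # 64 Gbps
print(port_channel_bandwidth_gbps(16))  # 128 Gbps (Cisco MDS 9396S/9706 only)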

The Cisco MDS switches provide:

• FC connectivity between compute and storage layer components

• Connectivity for backup and business continuity requirements (if configured)

InterSwitch Links (ISLs) to the existing SAN or between switches are not permitted.

The following table shows SAN network layer components:

Cisco MDS 9148S Multilayer Fabric Switch
• 1 U appliance
• Provides 12–48 line-rate ports for nonblocking 16 Gbps throughput
• 12 ports are licensed - more ports can be licensed

Cisco MDS 9396S 16G Multilayer Fabric Switch
• 2 U appliance
• Provides 48–96 line-rate ports for nonblocking 16 Gbps throughput
• 48 ports are licensed - more ports can be licensed in 12-port increments

Cisco MDS 9706 Multilayer Director
• 9 U appliance
• Provides up to 12 Tbps front panel FC line rate nonblocking, system level switching
• Dell EMC uses the advanced 48-port line cards at a line rate of 16 Gbps for all ports
• Consists of two 48-port line cards per director - up to two more 48-port line cards can be added
• Dell EMC requires that 4 fabric modules are included with all Cisco MDS 9706 Multilayer Directors
for an N+1 configuration
• 4 PDUs
• 2 supervisors

Cisco MDS 9148S Multilayer Fabric Switch


Converged Systems incorporate the Cisco MDS 9148S Multilayer Fabric Switch, which provides 12-48
line-rate ports for nonblocking 16 Gbps throughput. In the base configuration, 24 ports are licensed.
Additional ports can be licensed as needed.

The following table provides core connectivity for the Cisco MDS 9148S Multilayer Fabric Switch:

Feature Used ports Port speeds Media

FI uplinks 4 or 8 8 Gb SFP+

Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706
Multilayer Director
Converged Systems incorporate the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS
9706 Multilayer Director to provide FC connectivity from storage to compute.

Cisco MDS 9706 Multilayer Directors provide 48 to 192 line-rate ports for nonblocking 16 Gbps
throughput. Port licenses are not required for the Cisco MDS 9706 Multilayer Director. The Cisco MDS
9706 Multilayer Director is a director-class SAN switch with four IOM expansion slots for 48-port 16 Gb
FC line cards. It deploys two supervisor modules for redundancy.

Cisco MDS 9396S 16G Multilayer Fabric Switches provide 48 to 96 line-rate ports for nonblocking, 16
Gbps throughput. The base license includes 48 ports. More ports can be licensed in 12-port increments.

The Cisco MDS 9396S 16G Multilayer Fabric Switch is a 96-port fixed switch with no IOM modules for
port expansion.

The following table provides core connectivity for the Cisco MDS 9396S 16G Multilayer Fabric Switch and
the Cisco MDS 9706 Multilayer Director:

Feature Used ports Port speeds Media

Cisco UCS 6248UP 48-Port FI 4 or 8 8 Gb SFP+

Cisco UCS 6296UP 96-Port FI 8 or 16 8 Gb SFP+

Virtualization layer

Virtualization components
VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core
VMware vSphere components are the VMware vSphere ESXi and VMware vCenter Server for
management.

VMware vSphere 5.5 includes a Single Sign-on (SSO) component as a standalone Windows server or as
an embedded service on the vCenter server. Only VMware vSphere vCenter server on Windows is
supported.

VMware vSphere 6.0 includes a pair of Platform Service Controller Linux appliances to provide the SSO
service. Either the VMware vCenter Service Appliance or the VMware vCenter Server for Windows can
be deployed.

VMware vSphere 6.5 includes a pair of Platform Service Controller Linux appliances to provide the SSO
service. For VMware vSphere 6.5 and later releases, VMware vCenter Server Appliance is the default
deployment model for vCenter Server.

The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of
resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility
with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi


The VMware vSphere Hypervisor ESXi runs in the management servers and Converged Systems using
VMware vSphere Server Enterprise Plus.

The lightweight hypervisor requires very little space to run (less than 6 GB of storage required to install)
with minimal management overhead.

In some instances, the hypervisor may be installed on a 32 GB or larger Cisco FlexFlash SD Card
(mirrored HV partition). Beginning with VMware vSphere 6.x, all Cisco FlexFlash (boot) capable hosts are
configured with a minimum of two 32 GB or larger SD cards.

The compute hypervisor supports four to six 10 GbE physical NICs (pNICs) on the Converged Systems VICs.

VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor
ESXi boots from the SAN through an independent FC LUN presented from the storage array to the
compute blades. The FC LUN also contains the hypervisor's locker for persistent storage of logs and
other diagnostic files to provide stateless computing in Converged Systems. The stateless hypervisor
(PXE boot into memory) is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters
contain the CPU, memory, network, and storage resources available for allocation to VMs. Clusters can
scale up to a maximum of 32 hosts for VMware vSphere 5.5 and 64 hosts for VMware vSphere 6.0.
Clusters can support thousands of VMs.

The clusters can also support a variety of Cisco UCS blades running inside the same cluster. Some
advanced CPU functionality might be unavailable if more than one blade model is running in a given
cluster.

Datastores

Converged Systems support a mixture of data store types: block level storage using VMFS or file level
storage using NFS. The maximum size per VMFS volume is 64 TB (50 TB VMFS3 @ 1 MB). Beginning
with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a
maximum of 255 volumes.

Advanced settings are optimized for VMware vSphere ESXi hosts deployed in Converged Systems to
maximize the throughput and scalability of NFS data stores. Converged Systems currently support a
maximum of 256 NFS data stores per host.
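
The datastore limits above can be captured as simple constants for a sanity check. The following Python sketch is illustrative only; the limits are the figures quoted in this section, and the function name is a hypothetical example:

# Illustrative only: check a proposed datastore layout against the limits
# quoted above for Converged Systems.
MAX_VMFS_VOLUME_TB = 64
MAX_VOLUMES_PER_HOST = 255
MAX_NFS_DATASTORES_PER_HOST = 256

def layout_is_supported(vmfs_volume_sizes_tb, nfs_datastore_count):
    """Return True if the VMFS volume sizes and NFS datastore count fit the limits."""
    if len(vmfs_volume_sizes_tb) > MAX_VOLUMES_PER_HOST:
        return False
    if nfs_datastore_count > MAX_NFS_DATASTORES_PER_HOST:
        return False
    return all(size <= MAX_VMFS_VOLUME_TB for size in vmfs_volume_sizes_tb)

print(layout_is_supported([16, 32, 64], nfs_datastore_count=10))  # True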

Virtual networks

Virtual networking in the AMP-2 uses the VMware Virtual Standard Switch. Virtual networking is managed
by either the Cisco Nexus 1000V distributed virtual switch or VMware vSphere Distributed Switch (VDS).
The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities to all
servers in the data center by allowing policies to move with a VM during live migration. This provides
persistent network, security, and storage compliance.

Alternatively, virtual networking in Converged Systems is managed by VMware VDS with comparable
features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a
VMware Virtual Standard Switch and a VMware VDS and uses a minimum of four uplinks presented to
the hypervisor.

The implementation of Cisco Nexus 1000V for VMware vSphere 5.5 and VMware VDS for VMware
vSphere 5.5 use intelligent network CoS marking and QoS policies to appropriately shape network traffic
according to workload type and priority. With VMware vSphere 6.0, QoS is set to Default (Trust Host). The
vNICs are equally distributed across all available physical adapter ports to ensure redundancy and
maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco
UCS blade models, regardless of the Cisco UCS VIC hardware. Thus, VMware vSphere ESXi has a
predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies
are assigned to the vNICs to ensure consistency in case the uplinks need to be migrated to the VMware
VDS after manufacturing.
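
The statement that vNICs are distributed equally across the available physical adapter ports can be pictured as a simple round-robin assignment. The sketch below is illustrative only; the vNIC and port names are hypothetical examples, not values from this document:

# Illustrative only: round-robin distribution of vNICs across physical
# adapter ports, as described above.  Names are hypothetical.
def distribute_vnics(vnics, physical_ports):
    """Assign each vNIC to a physical adapter port in round-robin order."""
    return {vnic: physical_ports[i % len(physical_ports)]
            for i, vnic in enumerate(vnics)}

vnics = ["vnic-mgmt-a", "vnic-mgmt-b", "vnic-vm-a", "vnic-vm-b"]
ports = ["vic-port-0", "vic-port-1"]
print(distribute_vnics(vnics, ports))
# Each physical port carries the same number of vNICs, preserving redundancy
# and balanced bandwidth.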

VMware vCenter Server (VMware vSphere 6.5)


VMware vCenter Server is a central management point for the hypervisors and VMs. VMware vCenter
Server 6.5 resides on the VMware vCenter Server Appliance (vCSA).

By default, VMware vCenter Server is deployed using the VMware vCSA. VMware Update Manager
(VUM) is fully integrated with the VMware vCSA and runs as a service to assist with host patch
management.

AMP

AMP and the Converged System have a single VMware vCSA instance.

VMware vCenter Server provides the following functionality:

• Cloning of VMs

• Creating templates

• VMware vMotion and VMware Storage vMotion

• Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere
high-availability clusters

VMware vCenter Server provides monitoring and alerting capabilities for hosts and VMs. Converged
System administrators can create and apply the following alarms to all managed objects in VMware
vCenter Server:

• Data center, cluster and host health, inventory, and performance

• Data store health and capacity

• VM usage, performance, and health

• Virtual network usage and health

Databases

The VMware vCSA uses the embedded PostgreSQL database. The VMware Update Manager and
VMware vCSA share the same PostgreSQL database server, but use separate PostgreSQL database
instances.

Authentication

Converged Systems support the VMware Single Sign-On (SSO) Service capable of the integration of
multiple identity sources including AD, Open LDAP, and local accounts for authentication. VMware
vSphere 6.5 includes a pair of VMware Platform Service Controller (PSC) Linux appliances to provide the
VMware SSO service. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and
Update Manager run as separate services. Each service can be configured to use a dedicated service
account depending on the security and directory services requirements.

Supported features

Dell EMC supports the following VMware vCenter Server features:

• VMware SSO Service

• VMware vSphere Platform Service Controller

• VMware vSphere Web Client (used with Vision Intelligent Operations or VxBlock Central)

• VMware vSphere Distributed Switch (VDS)

• VMware vSphere High Availability

• VMware DRS

• VMware Fault Tolerance

• VMware vMotion

• VMware Storage vMotion - Layer 3 capability available for compute resources, version 6.0 and
higher

• Raw Device Mappings

• Resource Pools

• Storage DRS (capacity only)

• Storage-driven profiles (user-defined only)

• Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)

• VMware Syslog Service

• VMware Core Dump Collector

• VMware vCenter Web Client

VMware vCenter Server (VMware vSphere 5.5 and 6.0)


VMware vCenter Server is the central management point for the hypervisors and VMs.

VMware vCenter is installed on a 64-bit Windows Server. VMware Update Manager is installed on a 64-bit
Windows Server and runs as a service to assist with host patch management.

AMP-2

VMware vCenter Server provides the following functionality:

• Cloning of VMs

• Template creation

• VMware vMotion and VMware Storage vMotion

• Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere
high-availability clusters

VMware vCenter Server also provides monitoring and alerting capabilities for hosts and VMs. System
administrators can create and apply alarms to all managed objects in VMware vCenter Server, including:

• Data center, cluster, and host health, inventory, and performance

• Data store health and capacity

• VM usage, performance, and health

• Virtual network usage and health

Databases

The back-end database that supports VMware vCenter Server and VUM is a remote Microsoft SQL
Server 2008 database (VMware vSphere 5.1) or Microsoft SQL Server 2012 database (VMware vSphere 5.5/6.0). The SQL Server
service can be configured to use a dedicated service account.

Authentication

VMware Single Sign-On (SSO) Service integrates multiple identity sources including Active Directory,
Open LDAP, and local accounts for authentication. VMware SSO is available in VMware vSphere 5.x and
later. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and VUM run as
separate Windows services, which can be configured to use a dedicated service account depending on
security and directory services requirements.

Supported features

Dell EMC supports the following VMware vCenter Server features:

• VMware SSO Service (version 5.x and later)

• VMware vSphere Web Client (used with Vision Intelligent Operations or VxBlock Central)

• VMware vSphere Distributed Switch (VDS)

• VMware vSphere High Availability

• VMware DRS

• VMware Fault Tolerance

• VMware vMotion: Layer 3 capability available for compute resources (version 6.0 and later)

• VMware Storage vMotion

• Raw Device Mappings

• Resource Pools

• Storage DRS (capacity only)

• Storage-driven profiles (user-defined only)

• Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)

• VMware Syslog Service

• VMware Core Dump Collector

• VMware vCenter Web Services

Management
Use VxBlock Central to manage and monitor VxBlock Systems in a data center.

VxBlock Central provides the ability to:

• View the health and RCM compliance of multiple VxBlock Systems.

• View charts of key performance indicators (KPI) for one or more components or elements.

• Download software and firmware components to maintain compliance with the current RCM.

• Track real-time information regarding critical faults, errors, and issues affecting VxBlock Systems.

• Configure multisystem Active Directory (AD) integration and map AD Groups to VxBlock Central
roles.

• Set up compute, storage, networks, and PXE services, manage credentials, and upload ISO
images for server installation.

• Monitor VxBlock System analytics and manage capacity through integration with VMware
vRealize Operations (vROPs).

VxBlock Central options


VxBlock Central is available in Base, Advanced, and Advanced Plus options to manage your VxBlock
System.

Base option

The Base option enables you to monitor the health and compliance of VxBlock Systems through a central
dashboard.

VxBlock System health is a bottom-up calculation that monitors health or operational status of the
following:

• The VxBlock System as a whole system.

• The physical components such as a chassis, disk array enclosure, fan, storage processor, or X-Blade.

• The compute, network, storage, and management components that logically group the physical
components.

The landing page of VxBlock Central provides a view of the health and compliance of multiple VxBlock
Systems. You can run a compliance scan on one or more VxBlock Systems. You can view key
performance indicators (KPI) for one or more components.

VxBlock Central contains dashboards that allow you to:

• View all components for selected VxBlock Systems, including detailed information such as serial
numbers, IP address, firmware version, and location.

• View compliance scores and security and technical scan risks.

• View and compare RCMs on different systems.

• View real-time alerts for your system including severity, time, the system where the alert
occurred, the ID, message, and status.

• Configure roles with AD integration.

The following table describes each dashboard:

Dashboard Description

Inventory Provides high-level details of the components that are configured in a single view, including the
name, IP address, type, element manager, RCM scan results, and alert count for each
component.
An inventory item can be selected to suppress or enable alerts. When alerts are
suppressed for a specific component, real-time alert notifications are suspended.
You can search VxBlock Systems for specific components or subcomponents and export a
spreadsheet of your search results.

RCM Provides the compliance score, security, and technical risks associated with each VxBlock
System. From the dashboard, you can:
• View noncompliant components and the security and technical risks associated with them.
• Download software and firmware for your VxBlock System components to upgrade to a new
RCM or remediate drift from your current RCM.
• Run compliance scans and download and assess the results.
• Check the base profile to determine whether components have the correct firmware
versions.
• Upload and install the latest compliance content.
• Customize the compliance profile.

Alerts Provides real-time alerting to monitor and receive alerts for critical failures on compute, storage,
and network components. Administrators and Dell EMC Support can respond faster to incidents
to minimize any impact of failures. Using the predefined alert notification templates to create
alert notification profiles, you can specify how you want to be notified for a critical alert.

Roles When VxBlock Central is integrated with Active Directory (AD), VxBlock Central authenticates
AD users and supports mapping between AD groups and roles.
Role mappings control the actions that a user is authorized to perform. By mapping an AD group
to a role, you can control user permissions. When an AD user logs in to VxBlock Central, role
mappings are checked for AD groups to which the user is assigned. The set of available
permissions depends on the roles mapped to the groups in which the user is a member.

Advanced Management Provides access to Advanced and Advanced Plus.

Advanced

The Advanced option provides automation and orchestration for daily provisioning tasks through the
following features:

• VxBlock Central Orchestration Services

• VxBlock Central Orchestration Workflows

VxBlock Central Orchestration provides automation and orchestration for daily provisioning tasks through
integration with VMware vRealize Orchestrator (vRO).

VxBlock Central Orchestration Services

VxBlock Central Orchestration Services sets up compute, storage, network, and PXE services, manages
credentials, and uploads ISO images for server installation.

The VxBlock Central Orchestration vRO Adapter provides supported workflows for VxBlock System
compute expansion.

VxBlock Central Orchestration Workflows

VxBlock Central Orchestration Workflows simplify complex compute, storage, and network provisioning
tasks using automated workflows for VMware vRO.

Automated VxBlock Central Orchestration Workflows enable you to concurrently provision multiple
VMware vSphere ESXi hosts and add these hosts to the VMware vCenter cluster. The workflows
implement Dell EMC best practices for VxBlock Systems and provide the validation and resilience that is
required for enterprise-grade operations. Once hosts are provisioned, workflows trigger an RCM
compliance scan to ensure compliance with RCM standards. The VMware vRO workflows also support
bare-metal server provisioning.
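Workflows published in VMware vRO can also be started programmatically through the standard vRO REST API, which is how external tooling typically drives this kind of automation. The following Python sketch is a minimal illustration that assumes the requests library, a placeholder vRO address and workflow ID, and an example input parameter name; the supported inputs are defined by the individual VxBlock Central Orchestration workflow.

import requests

VRO = "https://vro.example.local:8281"                       # placeholder vRO address
WORKFLOW_ID = "00000000-0000-0000-0000-000000000000"         # placeholder workflow ID

# The parameter name and value below are examples only; the workflow presentation
# defines the actual inputs it accepts.
payload = {
    "parameters": [
        {"name": "clusterName", "type": "string",
         "value": {"string": {"value": "Cluster01"}}}
    ]
}

resp = requests.post(
    f"{VRO}/vco/api/workflows/{WORKFLOW_ID}/executions",
    json=payload,
    auth=("vro-user", "vro-password"),
    verify=False,            # lab illustration only; validate certificates in production
)
resp.raise_for_status()
print("Workflow execution started:", resp.headers.get("Location"))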

The following describes the available workflows and their tasks:

Workflow: Configuration
Description: Establishes the connection between VMware vRO with automation workflows and VxBlock Central Orchestration Services to run workflow automation.
Available workflow tasks:
• Add VxBlock Central Orchestration Services API gateway
• Update VxBlock Central Orchestration Services API gateway
• Add a vCenter Server instance

Workflow: Service
Description: Provides a presentation layer for user input and data validation. Service workflows create multiple instances of fulfillment workflows to run concurrently.
Available workflow tasks:
• Provision a host (bare metal)
• Provision a host (bare metal) - VMAX3/PowerMax boot
• Provision a host (ESXi) and add to a cluster - VMAX3/PowerMax boot
• Add a host to a cluster

Workflow: Fulfillment
Description: Performs overall orchestration of resources and runs automation tasks. You can run multiple fulfillment workflows concurrently based on user input.
Available workflow tasks:
• Provision a bare metal server (UCS)
• Provision bare metal servers (optional - with VMAX boot LUN)
• Provision an ESXi host using VMAX boot LUN

Advanced Plus

The Advanced Plus option contains VxBlock Central Orchestration and VxBlock Central Operations.
VxBlock Central Operations provides features that simplify the operations you must perform for VxBlock
Systems through advanced monitoring, system analytics, and simplified capacity management.

VMware vRealize Operations (vROps) Manager integration with VxBlock Central presents the topology
and relationship of VxBlock Systems with compute, storage, network, virtualization, and management
components. VxBlock Central Operations provides advanced monitoring, system analytics, and simplified
capacity management through integration with VMware vROps Manager.

VxBlock Central Operations allows you to:

• Monitor health, performance, and capacity through predictive analytics.

• Troubleshoot and optimize your environment through alerts and recommended actions.

• Manage inventory and create reports.

• Define custom alerts for performance and capacity metrics.

In addition, VxBlock Central Operations provides the following:

— Collect data from VxBlock Systems every 15 minutes by default.

— Collect real-time alerts from VxBlock Systems every three minutes, by default.

— View VxBlock Central VM relationships to physical infrastructure. Core VM, MSM VM, and
MSP VM resource monitoring enables you to identify and monitor a collection of resources
associated with a VM.

The following illustration provides an overview of how VxBlock Central uses VMware vRealize:

VxBlock Central architecture
VxBlock Central uses VMs to provide services.

The following table provides an overview of VxBlock Central VMs:

VM Description

Core Discovers and gathers information about the inventory, location, and health of the VxBlock System.

MSM Provides functions to manage multiple VxBlock Systems. In a data center environment, one MSM VM can be associated with up to eight Core VMs.

MSP (optional) Provides functions for RCM content prepositioning.

VMware vRO Provides workflow engine and workflow designer capabilities.

VxBlock Central Orchestration Services Provides the firmware repository management, credentials management, log management, and PXE management that VxBlock System workflows require.

VxBlock Central includes the Core VM and the multisystem management (MSM) VM as a minimum
configuration. The multisystem prepositioning (MSP) VM deployment is optional for prepositioning.

Discovery

The discovery model resides within a database and is exposed through REST and SNMP interfaces.
Initial discovery is performed during manufacturing of the VxBlock System and relies on an XML file that
contains build and configuration information. Core VM uses the XML file to populate basic information
about the VxBlock System and establish communication with components.

After initial discovery, Core VM uses the following methods to discover the VxBlock System, including
physical components and logical entities:

• XML API

• SNMP

• SMI-S

• Vendor CLIs, such as Unisphere CLI

• Platform Management Interface

Core VM performs discovery every 15 minutes, by default. This setting can be changed as desired.
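The discovered model can then be read back over the REST interface. The following Python sketch, using the requests library, shows the general pattern only; the resource path, port, and credentials are placeholder assumptions, so refer to the VxBlock Central API documentation for the actual endpoint names.

import requests

CORE_VM = "https://core-vm.example.local"     # placeholder Core VM address

# Hypothetical resource path shown for illustration; the real paths are defined
# by the VxBlock Central REST API documentation.
resp = requests.get(
    f"{CORE_VM}/api/components",
    auth=("admin", "password"),
    verify=False,                             # example only
)
resp.raise_for_status()
for component in resp.json():
    print(component.get("name"), component.get("type"), component.get("health"))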

The following illustration is a high-level overview of integration between Core VM and various products
and protocols:

Data collection

VxBlock Central uses data collectors to gather the required data from various web services.

The following table describes the data collectors:

Data collector Description

VxBlock Central collector Uses the VxBlock Central REST API to collect the VxBlock System
configuration data and key performance indicators (KPI) already
discovered in Core VM. The configuration and KPI data from Core VM
are stored in the Cassandra and Elasticsearch databases.

SMI-S collector Works with the CIM Object Manager (ECOM) service that runs on SMI
components to discover metrics for VMAX:
• Storage array
• Storage processor
• Storage volume
• Storage pool
• Storage tier
• Disk

SNMP collector Collects information from SNMP enabled devices such as Cisco Nexus
and MDS switches to discover metrics. Information can be collected
from the following network components:
• Switches
• Network chassis
• Container
• Fan
• Expansion module
• Power supply bay
• PSU
• Network temperature sensor
• SFP
• IPI appliance

vSphere API collector Works with VMware vCenter Server using the VMware vSphere API to
discover metrics for datastores, disk partitions, and clusters.

Dell EMC Unity REST collector Collects configuration data from a Dell EMC Unity storage array and its
components.

XIO REST collector Collects metrics for storage array, storage volume, disk, and port.
VxBlock Central collects all other configuration information with the
VxBlock Central collector.

XML API collector Collects information from the Cisco UCS using the XML API to discover
metrics.

VMware NSX collector Collects information about VMware NSX components, such as Virtual
Appliance Management and the NSX controllers. The NSX collector
interfaces with the NSX Manager APIs.
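To illustrate the kind of polling the SNMP collector performs against the switches listed above, the following Python sketch uses the pysnmp library to read a switch's system name over SNMPv2c; the address and community string are placeholders, and the real collector gathers a far broader set of metrics than this single OID.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Placeholder switch address and community string for illustration only.
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),                 # SNMPv2c
           UdpTransportTarget(("10.1.139.22", 161)),
           ContextData(),
           ObjectType(ObjectIdentity("SNMPv2-MIB", "sysName", 0)))
)

if error_indication:
    print("SNMP error:", error_indication)
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())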

VxBlock Central Shell

The VxBlock Central Shell removes the complexity of working with individual component interfaces and
provides a plug-in structure that can be extended to include more functionality. VxBlock Central Shell
creates an abstraction layer that removes the burden of having to use different login credentials, IP
addresses, and syntax to make configuration changes across multiple components. VxBlock Central Shell
can help manage multiple VxBlock Systems.
For example, to update the NTP server IP addresses for all switches on a VxBlock System, you can issue
a single command without having to log on to each component.

>%ntp switch set ['10.1.139.235', '10.1.219.13']


[Switch 'N5B' at 10.1.139.23:, result: ['10.1.139.235', '10.1.219.13'],
Switch 'N5A' at 10.1.139.22:, result: ['10.1.139.235', '10.1.219.13'],
Switch 'MGMT-N3B' at 10.1.139.2:, result: ['10.1.139.235', '10.1.219.13'],
Switch 'MGMT-N3A' at 10.1.139.1:, result: ['10.1.139.235', '10.1.219.13'],
Switch 'N1A' at 10.1.140.235:, result: ['10.1.139.235', '10.1.219.13'],
Switch 'M9A' at 10.1.139.20:, result: ['10.1.139.235', '10.1.219.13'],
Switch 'M9B' at 10.1.139.21:, result: ['10.1.139.235', '10.1.219.13']]

The shell is a framework layer built on top of Python and the VxBlock Central API bindings. In addition to the
commands provided, any valid Python command can be run in the shell.
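Because the shell accepts ordinary Python, small ad hoc calculations can be mixed with the built-in commands. The fragment below is a contrived illustration using only standard Python, shown in the same prompt style as the example above; it does not rely on any particular shell binding.

>%ntp_servers = ['10.1.139.235', '10.1.219.13']
>%print("Configured %d NTP servers: %s" % (len(ntp_servers), ", ".join(ntp_servers)))
Configured 2 NTP servers: 10.1.139.235, 10.1.219.13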

Developers writing extensions for the VxBlock Central Shell can provide a single interface for all
components and enable users to:

• Perform operations on each VxBlock System as a single logical entity rather than a collection of
components.

• Configure and manage settings at the individual VxBlock System component level.

Secure Remote Services

VxBlock Central can connect to Secure Remote Services to automatically send system inventory, real-
time alerts, and RCM fitness information through the Secure Remote Services connection for collection
and analysis.

Use the VxBlock Central Shell Secure Remote Services Extension Pack to perform the following
functions:

• Configure VxBlock Central to use Secure Remote Services

• Deregister VxBlock Central with Secure Remote Services

• Update a Secure Remote Services gateway configuration or VxBlock Central ID (SWID)

• Upload information to Secure Remote Services about your VxBlock System:

— Release Certification Matrix (RCM) compliance scan results (ZIP file containing CSV, XLS,
PDF, and XML files) (if you have installed RCM content and selected a default profile)

— VxBlock System inventory files (JSON)

— VxBlock System real-time alerts (sent automatically if Secure Remote Services notification is
configured)

• Modify the schedule VxBlock Central uses to regularly send RCM and inventory information to
Secure Remote Services

Key performance indicators

Access key performance indicator (KPI) information using VxBlock Central or MSM VM. VxBlock Central
displays charts and graphs of KPI information for the selected element type.

The following table provides examples of KPI information:

Element type Examples of KPI information

storagearray Remaining raw capacity

Total space available for user data

Remaining managed space

Total IO per second

storagepool User capacity

disk Disk raw capacity

Bandwidth

switch Current bandwidth

Number of error inbound packets

rack Monitor total energy.

Monitor average power consumption.

computeserver Total memory

Temperature

The MSM VM API for multisystem services retrieves the following KPI data:

• All existing KPI definitions in the VxBlock System.

• Existing KPI definitions for a particular element type and/or component category.

• Time series KPI data:

— A particular time resolution.

— A start time for time series queries.

— An end time for time series queries.
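The sketch below shows how such a time series query might be issued with the Python requests library; the resource path and query parameter names are illustrative assumptions rather than documented values, so consult the MSM VM API documentation for the exact syntax.

import requests
from datetime import datetime, timedelta

MSM_VM = "https://msm-vm.example.local"       # placeholder MSM VM address
end = datetime.utcnow()
start = end - timedelta(hours=24)

# Path and parameter names are hypothetical examples of the query pattern described above.
resp = requests.get(
    f"{MSM_VM}/api/kpis",
    params={
        "elementType": "storagearray",
        "resolution": "1h",
        "startTime": start.isoformat() + "Z",
        "endTime": end.isoformat() + "Z",
    },
    auth=("admin", "password"),
    verify=False,                             # example only
)
resp.raise_for_status()
print(resp.json())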

The following illustration shows VxBlock Central Orchestration with VMware vRealize Orchestrator (vRO):

The following illustration shows components and services for VxBlock Central:

Data center architecture


VxBlock Central supports a clustered environment that includes multiple MSM VMs configured to run in a
single physical data center or in multiple, geographically separate physical data centers.

In a data center environment, one MSM VM can be associated with up to eight Core VMs:

The following illustration shows a single-site environment consisting of three MSM VMs, each associated
with a single Core VM:

MSM VMs are configured to form a cluster. Capabilities and functionality are exposed after deployment
and configuration.

In a single-site configuration with one data center, VxBlock Central supports up to three MSM VMs running
within the data center. Up to eight Core VMs are supported.

VxBlock Central supports a multisite clustering configuration that includes a maximum of three data
centers. Up to two MSM VMs are supported.

AMP overview
The AMP provides a single management point for a Converged System.

The AMP provides the ability to:

• Run core and Dell EMC optional management workloads

• Monitor and manage health, performance, and capacity

• Provide network and fault isolation for management

• Eliminate resource overhead

The core management workload is the minimum required management software to install, operate, and
support the Converged System. This includes all hypervisor management, element managers, virtual
networking components, and Vision Intelligent Operations or VxBlock Central.

The Dell EMC optional management workload consists of the non-core management workloads that Dell
EMC supports and installs to manage components in the Converged System. The list includes, but is not
limited to:

• Data protection

• Security or storage management tools such as:

— Unisphere for RecoverPoint

— Unisphere for VPLEX

— Avamar Administrator

— InsightIQ for Isilon

AMP hardware components


AMPs are available in multiple configurations that use their own resources to run workloads without
consuming resources on the Converged System.

The following operational relationships apply between Cisco UCS Servers and VMware vSphere versions
for Converged Systems:

• Cisco UCS C240 M3 servers are configured with VMware vSphere 6.0.

• Cisco UCS C2x0 M4 servers are configured with VMware vSphere 6.x.

• Cisco UCS C2x0 M4 servers are configured with VMware vSphere 6.5 only with AMP-3S.

The following describes the available AMP options:

AMP-2HA Baseline
• Number of Cisco UCS servers: 2
• Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache for VM data stores
• Description: Provides HA/DRS functionality and shared storage using the VNXe3200.

AMP-2HA Performance
• Number of Cisco UCS servers: 3
• Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache for VM data stores
• Description: Adds additional compute capacity with a third server and storage performance with the inclusion of FAST VP.

AMP-2S*
• Number of Cisco UCS servers: 2 to 12
• Storage: FlexFlash SD for VMware vSphere ESXi boot; VNXe3200 with FAST Cache and FAST VP for VM data stores
• Description: Provides a scalable configuration using Cisco UCS C220 servers and additional storage expansion capacity.

AMP-3S**
• Number of Cisco UCS servers: 2 to 6
• Storage: FlexFlash SD for VMware vSphere ESXi boot; Dell EMC Unity 300 with FAST Cache and FAST VP
• Description: Provides a scalable configuration using Cisco UCS C220 servers and additional storage expansion capacity.

*AMP-2S is supported on Cisco UCS C220 M4 servers with VMware vSphere 6.x.

**AMP-3S is supported only on Cisco UCS C220 M4 servers with VMware vSphere 6.5. The AMP-3S
management platform can be configured with two to six servers. A minimum of three servers is
strongly recommended to build a viable AMP-3S VMware vSphere HA and DRS cluster based on the
core and optional workload, because some management applications require memory or CPU
reservations, which adversely affect the AMP-3S vSphere HA cluster memory and CPU slot size.
Memory and CPU reservations reduce the number of available slots on a VMware vSphere ESXi host and
limit the number of VMs that each host can support. If AMP-3S is configured with two servers, depending
on the workload, vSphere HA admission control may disallow VM migration due to a lack of available
slots on a VMware vSphere ESXi host. If an AMP-2S is built with two servers, it is imperative that the
Dell EMC vArchitect determine whether the proposed configuration can support vSphere HA and DRS.

With the deployment of the standard core workload for VxBlock System management, VMware vSphere
HA is not supported with two AMP-3S servers.
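The effect of reservations on HA slot sizing can be illustrated with a short calculation. The figures in the following Python sketch are assumptions chosen only for illustration, not AMP sizing guidance; the slot-size behavior itself is standard VMware vSphere HA admission control.

# Illustrative VMware vSphere HA slot-size arithmetic (assumed figures, not sizing guidance).
largest_mem_reservation_gb = 16        # largest memory reservation among the management VMs
host_memory_gb = 256                   # memory per AMP-3S server
hosts = 2                              # two-server AMP-3S configuration
failover_capacity_hosts = 1            # capacity HA admission control reserves for failover

slots_per_host = host_memory_gb // largest_mem_reservation_gb
usable_slots = slots_per_host * (hosts - failover_capacity_hosts)
print("%d slots per host, %d usable slots for management VMs" % (slots_per_host, usable_slots))
# With two servers only 16 slots remain for the whole management workload; adding a third
# server doubles the usable slots and keeps admission control satisfied.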

AMP software components (VMware vSphere 5.5 and 6.0)


AMPs are delivered with specific installed software components that depend on the selected Release
Certification Matrix (RCM).

AMP-2 software components

The following components are installed:

• Microsoft Windows Server 2008 R2 SP1 Standard x64

• Microsoft Windows Server 2012 R2 Standard x64

• VMware vSphere Enterprise Plus

• VMware vSphere Hypervisor ESXi

• VMware Single Sign-On (SSO) Service

• VMware vSphere Platform Services Controller

• VMware vSphere Web Client Service

• VMware vSphere Inventory Service

• VMware vCenter Server Appliance

For VMware vSphere 6.0, the preferred instance is created using the VMware vCenter
Server Appliance. An alternate instance may be created using the Windows
version. Only one of these options can be implemented. For VMware vSphere 5.5,
only VMware vCenter Server on Windows is supported.

• VMware vCenter Database using Microsoft SQL Server 2012 Standard Edition

• VMware vCenter Update Manager (VUM) - Integrated with VMware vCenter Server Appliance

For VMware vSphere 6.0, the preferred configuration (with VMware vSphere vCenter
Server Appliance) embeds the SQL server on the same VM as the VUM. The alternate
configuration leverages the remote SQL server with VMware vCenter Server on
Windows. Only one of these options can be implemented.

• VMware vSphere client

• VMware vSphere Syslog Service (optional)

• VMware vSphere Core Dump Service (optional)

• VMware vSphere Distributed Switch (VDS)

• PowerPath/VE Management Appliance (PPMA)

• Secure Remote Support (SRS)

• Array management modules, including but not limited to Unisphere for VMAX

• Cisco Prime Data Center Network Manager and Device Manager

• (Optional) RecoverPoint management software that includes the management application and
deployment manager

AMP software components (VMware vSphere 6.5)


AMPs are delivered with specific installed software components dependent on the selected Release
Certification Matrix (RCM).

The following components are installed dependent on the selected RCM:

• Microsoft Windows Server 2008 R2 SP1 Standard x64 (AMP-2 or AMP-3S)

• Microsoft Windows Server 2012 R2 Standard x64

• VMware vSphere Enterprise Plus

• VMware vSphere Hypervisor ESXi

• VMware Single Sign-On (SSO) Service

• VMware vSphere Platform Services Controller (PSC)

• VMware vSphere Web Client Service

• VMware vSphere Inventory Service

• VMware vCenter Server Appliance (vCSA)

For VMware vSphere 6.5, only the VMware vSphere vCenter Server Appliance
deployment model is offered.

• VMware vCenter Update Manager (VUM – Integrated with VMware vCenter Server Appliance)

• VMware vSphere client

• VMware vSphere Web Client (Flash/Flex client)

• VMware Host client (HTML5 based)

• VMware vSphere Syslog Service (optional)

• VMware vSphere Core Dump Service (optional)

• VMware vSphere Distributed Switch (VDS)

• PowerPath/VE Management Appliance (PPMA)

• Secure Remote Support (ESRS)

• Array management modules

• Cisco Prime Data Center Network Manager and Device Manager (DCNM)

• (Optional) RecoverPoint management software that includes the management application and
deployment manager

AMP-2 management network connectivity


Converged Systems offer several types of AMP network connectivity and server assignments.

AMP-2S network connectivity with Cisco UCS C220 M4 servers (VMware vSphere 6.0)

The following illustration shows the network connectivity for the AMP-2S on the Cisco UCS C220 M4
servers:

AMP-2S server assignments with Cisco UCS C220 M4 servers (VMware vSphere 6.0)

The following illustration shows the VM server assignments for AMP-2S on the Cisco UCS C220 M4
servers. This configuration implements the default VMware vCenter Server configuration using the
VMware 6.0 vCenter Server Appliance and VMware Update Manager with embedded MS SQL Server
database:

The following illustration shows the VM server assignments for AMP-2S on the Cisco UCS C220 M4
servers, which implements the alternate VMware vCenter Servers configuration, using the VMware 6.0
vCenter Server, Database Servers, and VMware Update Manager:

Converged Systems that use VMware vSphere Distributed Switch (VDS) do not include the Cisco Nexus
1000V VSM VMs.

The AMP-2S option leverages the DRS functionality of the VMware vCenter to optimize resource usage
(CPU and memory) so that VM assignment to a VMware vSphere ESXi host is managed automatically.

AMP-2S on Cisco UCS C220 M4 servers (VMware vSphere 6.5)

The following illustration provides an overview of the network connectivity for the AMP-2S on the Cisco
C220 M4 servers:

* No default gateway

The default VMware vCenter Server configuration contains the VMware vCenter Server 6.5 Appliance
with integrated VMware Update Manager.

Beginning with VMware vSphere 6.5, Microsoft SQL Server is no longer used, because VMware vCenter
Server and VUM use the PostgreSQL database embedded within the vCSA.

The following illustration provides an overview of the VM server assignment for AMP-2S on C220 M4 with
the default configuration:

AMP-2HA network connectivity on Cisco UCS C240 M3 servers

The following illustration shows the network connectivity for AMP-2HA on the Cisco UCS C240 M3
servers:

AMP-2HA server assignments with Cisco UCS C240 M3 servers

The following illustration shows the VM server assignment for AMP-2HA with Cisco UCS C240 M3
servers:

AMP-3S management network connectivity


Network connectivity and server assignment illustrations on Cisco UCS C220 M4 servers are provided for
VxBlock Systems.

AMP-3S on Cisco UCS C220 M4 servers (VMware vSphere 6.5)

The following illustration provides an overview of network connectivity on Cisco C220 M4 servers:

AMP-3S uses VMware vSphere Distributed Switches with Network I/O Control (NIOC) in place
of VMware standard switches.

The following illustration shows VM placement with two AMP-3S servers:

The following illustration shows VM placement with three AMP-3S servers:

Sample configurations
Cabinet elevations vary based on the specific configuration requirements.

Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.

Sample VxBlock and Vblock Systems 740 with VMAX 400K


Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.

VMAX array cabinets are excluded from the sample elevations.

Cabinet 1

Cabinet 2

Sample VxBlock and Vblock Systems 740 with VMAX 200K


Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.

VMAX array cabinets are excluded from the sample elevations.

Cabinet 1

Cabinet 2

Sample VxBlock and Vblock Systems 740 with VMAX 100K


Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.

VMAX array cabinets are excluded from the sample elevations.

Cabinet 1

Cabinet 2

Additional references
References to related documentation for virtualization, compute, network and storage components are
provided.

Virtualization components
Virtualization component information and links to documentation are provided.

VMware vCenter Server: Provides a scalable and extensible platform that forms the foundation for virtualization management. (http://www.vmware.com/products/vcenter-server/)

VMware vSphere ESXi: Virtualizes all application servers and provides VMware high availability (HA) and dynamic resource scheduling (DRS). (http://www.vmware.com/products/vsphere/)

Compute components
Compute component information and links to documentation are provided.

Cisco UCS C-Series Rack Servers: Servers that provide unified computing in an industry-standard form factor to reduce TCO and increase agility. (www.cisco.com/c/en/us/products/servers-unified-computing/ucs-c-series-rack-servers/index.html)

Cisco UCS B-Series Blade Servers: Servers that adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization. (www.cisco.com/en/US/products/ps10280/index.html)

Cisco UCS Manager: Provides centralized management capabilities for the Cisco UCS. (www.cisco.com/en/US/products/ps10281/index.html)

Cisco UCS 2200 Series Fabric Extenders: Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect. (www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-series-home.html)

Cisco UCS 5108 Series Blade Server Chassis: Chassis that supports up to eight blade servers and up to two fabric extenders in a six-rack-unit (6U) enclosure. (www.cisco.com/en/US/products/ps10279/index.html)

Cisco UCS 6200 Series Fabric Interconnects: Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. (www.cisco.com/en/US/products/ps11544/index.html)

Network components
Network component information and links to documentation are provided.

Cisco Nexus 1000V Series Switches: A software switch on a server that delivers Cisco VN-Link services to VMs hosted on that server. (www.cisco.com/en/US/products/ps9902/index.html)

VMware vSphere Distributed Switch (VDS): A VMware vCenter-managed software switch that delivers advanced network services to VMs hosted on that server. (http://www.vmware.com/products/vsphere/features/distributed-switch.html)

Cisco Nexus 5000 Series Switches: Simplifies data center transformation by enabling a standards-based, high-performance unified fabric. (http://www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html)

Cisco MDS 9706 Multilayer Director: Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. (http://www.cisco.com/c/en/us/products/storage-networking/mds-9706-multilayer-director/index.html)

Cisco MDS 9148S Multilayer Fabric Switch: Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. (http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9148s-16g-multilayer-fabric-switch/datasheet-c78-731523.html)

Cisco Nexus 3064-T Switch: Provides management access to all Converged System components using vPC technology to increase redundancy and scalability. (http://www.cisco.com/c/en/us/support/switches/nexus-3064-t-switch/model.html)

Cisco Nexus 3172TQ: Provides management access to all Converged System components using vPC technology to increase redundancy and scalability. (http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/data_sheet_c78-729483.html)

Cisco MDS 9396S 16G Multilayer Fabric Switch: Provides up to 96 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. (http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9396s-16g-multilayer-fabric-switch/datasheet-c78-734525.html)

Cisco Nexus 9396PX Switch: Provides high scalability, performance, and exceptional energy efficiency in a compact form factor. (http://www.cisco.com/c/en/us/support/switches/nexus-9396px-switch/model.html)

Cisco Nexus 93180YC-EX Switch: Provides high scalability, performance, and exceptional energy efficiency in a compact form factor. (http://www.cisco.com/c/en/us/support/switches/nexus-93180yc-ex-switch/model.html)

Storage components
Storage component information and links to documentation are provided.

VMAX 400K: Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments. (http://www.emc.com/collateral/hardware/specification-sheet/h13217-vmax3-ss.pdf)

VMAX 200K: Dell EMC high-end storage array that delivers infrastructure services in the next generation data center. Built for reliability, availability, and scalability, VMAX 200K uses specialized engines, each of which includes two redundant director modules providing parallel access and replicated copies of all critical data. (http://www.emc.com/collateral/hardware/specification-sheet/h13217-vmax3-ss.pdf)

VMAX 100K: Delivers full enterprise-class availability, data integrity, and security. (http://www.emc.com/collateral/hardware/specification-sheet/h13217-vmax3-ss.pdf)

VMAX All Flash: Provides high-density flash storage. (https://support.emc.com/products/40306_VMAX-All-Flash)

Dell EMC Unity: AMP-3S shared storage array. (http://www.dellemc.com/en-us/storage/unity.htm#collapse%3D&collapse)

The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness
for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2014-2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are
trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published
in the USA in December 2018.

Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to
change without notice.

