
Dell EMC

VxBlock™ and Vblock® Systems 740 Architecture Overview

Document revision 1.19

December 2018

Revision history

Date | Document revision | Description of changes
December 2018 | 1.19 | Added support for the VMware DVS and VxBlock Central.
September 2018 | 1.18 | Editorial update.
August 2018 | 1.17 | Added support for AMP-3S.
April 2018 | 1.16 | Removed vCHA.
December 2017 | 1.15 | Added Cisco UCS B-Series M5 servers.
August 2017 | 1.14 | Added support for VMware vSphere 6.5.
August 2017 | 1.13 | Added support for VMAX 950F and 950FX storage arrays.
July 2017 | 1.12 | Added support for Symmetrix Remote Data Facility (SRDF).
March 2017 | 1.11 | Added support for the Cisco Nexus 93180YC-EX Switch.
February 2017 | 1.10 | Added the 256 GB cache option for VMAX 250F and VMAX 250FX.
December 2016 | 1.9 | Added support for AMP-2 on Cisco UCS C2x0 M4 servers with VMware vSphere 5.5.
November 2016 | 1.8 | Added support for Dell EMC embedded network attached storage (eNAS). Added support for VMAX 250F and VMAX 250FX. Removed the IPI Appliance from elevations.
September 2016 | 1.7 | Added support for AMP-2S and AMP enhancements. Added support for the Cisco MDS 9396S Multilayer Fabric Switch.
August 2016 | 1.6 | Added support for Dell EMC embedded management (eMGMT).
July 2016 | 1.5 | Updated the VMAX configuration information to indicate support for a single engine.
April 2016 | 1.4 | Moved physical planning information from this document to the Converged Systems Physical Planning Guide. Added the VMAX All Flash option. Added support for the Cisco Nexus 3172TQ Switch.
October 2015 | 1.3 | Added support for vSphere 6.0 with Cisco Nexus 1000V switches.
August 2015 | 1.2 | Added support for VxBlock Systems. Added support for vSphere 6.0 with VDS.
February 2015 | 1.1 | Updated Intelligent Physical Infrastructure appliance information.
December 2014 | 1.0 | Initial release.

Contents

Introduction

System overview
  Base configurations
  Scaling up compute resources
  Network topology

Compute layer
  Cisco UCS
  Compute connectivity
  Cisco UCS fabric interconnects
  Cisco Trusted Platform Module
  Disjoint Layer 2 configuration
  Bare metal support policy

Storage layer
  VMAX3 storage arrays
  VMAX3 storage array features
  VMAX All Flash storage arrays
  Embedded Network Attached Storage
  Symmetrix Remote Data Facility

Network layer overview
  LAN layer
    Cisco Nexus 3064-T Switch - management networking
    Cisco Nexus 3172TQ Switch - management networking
    Cisco Nexus 5548UP Switch
    Cisco Nexus 5596UP Switch
    Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switch - segregated networking
  SAN layer
    Cisco MDS 9148S Multilayer Fabric Switch
    Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director

Virtualization layer overview
  Virtualization components
  VMware vSphere Hypervisor ESXi
  VMware vCenter Server (VMware vSphere 6.5)
  VMware vCenter Server (VMware vSphere 5.5 and 6.0)

Management
  VxBlock Central options
  VxBlock Central architecture
  Datacenter architecture
  AMP overview
  AMP hardware components
  Management software components (VMware vSphere 5.5 and 6.0)
  Management software components (VMware vSphere 6.5)
  AMP-2 management network connectivity
  AMP-3S management network connectivity

Sample configurations
  Sample VxBlock System 740 and Vblock System 740 with VMAX 400K
  Sample VxBlock System 740 and Vblock System 740 with VMAX 200K
  Sample VxBlock System 740 and Vblock System 740 with VMAX 100K

Additional references
  Virtualization components
  Compute components
  Network components
  Storage components

Introduction

This document describes the high-level design of the Converged System and the hardware and software components.

In this document, the VxBlock System and Vblock System are referred to as Converged Systems.

Refer to the Glossary for a description of terms specific to Converged Systems.


System overview

Converged Systems are modular platforms with defined scale points that meet the higher performance and availability requirements of business-critical applications.

SAN storage is used for deployments involving large numbers of VMs and users, and provides the following features:

Multicontroller, scale-out architecture with consolidation and efficiency for the enterprise.

Scaling of resources through common and fully redundant building blocks.

Local boot disks are optional and available only for bare metal blades.

Components

Converged Systems contain the following key hardware and software components:

Resource

Components

Converged Systems Management

Vision Intelligent Operations for Converged Systems or VxBlock Central for VxBlock Systems.

The following options are available for VxBlock Central:

The Base option provides the VxBlock Central user interface.

The Advanced option adds VxBlock Central Orchestration, which provides:

VxBlock Central Orchestration Services

VxBlock Central Orchestration Workflows

The Advanced Plus option adds VxBlock Central Operations and VxBlock Central Orchestration.

Virtualization and management

VMware vSphere Server Enterprise Plus

VMware vSphere ESXi

VMware vCenter Server

VMware vSphere Web Client

VMware Single Sign-On (SSO) Service

Cisco UCS Servers for AMP-2 and AMP-3S

PowerPath/VE

Cisco UCS Manager

Unisphere for VMAX

Secure Remote Services (ESRS)

PowerPath Management Appliance

Cisco Data Center Network Manager (DCNM) for SAN

Compute

Cisco UCS 5108 Blade Server Chassis

Cisco UCS B-Series M3 Blade Servers with Cisco UCS VIC 1240, optional port expander or Cisco UCS VIC 1280

Cisco UCS B-Series M4 or M5 Blade Servers with one of the following:

Cisco UCS VIC 1340, with optional port expander


Cisco UCS VIC 1380

Cisco UCS 2204XP Fabric Extenders or Cisco UCS 2208XP Fabric Extenders

Cisco UCS 6248UP Fabric Interconnects or Cisco UCS 6296UP Fabric Interconnects

Network

Cisco Nexus 5548UP Switches, Cisco Nexus 5596UP Switches, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches

Cisco MDS 9148S, Cisco MDS 9396S 16G Multilayer Fabric Switches, or Cisco MDS 9706 Multilayer Directors

Cisco Nexus 3064-T Switches or Cisco Nexus 3172TQ Switches

Optional Cisco Nexus 1000V Series Switches

Optional VMware vSphere Distributed Switch (VDS) for VxBlock Systems

Optional VMware NSX Virtual Networking for VxBlock Systems

Storage

VMAX 400K

VMAX 200K

VMAX 100K

VMAX All Flash 950F and 950FX

VMAX All Flash 850F and 850FX

VMAX All Flash 450F and 450FX

VMAX All Flash 250F and 250FX

Base configurations

Converged Systems have a base configuration that is a minimum set of compute and storage components, and fixed network resources.

These components are integrated in one or more 28-inch 42 U cabinets. In the base configuration, you can customize the following hardware aspects:

Hardware

How it can be customized

Compute

Cisco UCS B-Series M4 and M5 Servers

Up to 16 chassis per Cisco UCS domain

Up to 4 Cisco UCS domains (4 pairs of FIs)

Supports up to 32 double-height Cisco UCS blade servers per domain

Supports up to 64 full-width Cisco UCS blade servers per domain

Supports up to 128 half-width Cisco UCS blade servers per domain

Edge servers (with optional VMware NSX)

Refer to the Dell EMC Converged Systems for VMware NSX Architecture Overview.

Network

One pair of Cisco MDS 9148S Multilayer Fabric Switches, one pair of Cisco MDS 9396S Multilayer Fabric Switches, or one pair of Cisco MDS 9706 Multilayer Directors

One pair of Cisco Nexus 93180YC-EX Switches or one pair of Cisco Nexus 9396PX Switches

One pair of Cisco Nexus 3172TQ Switches


Storage

Supports 2.5-inch drives, 3.5-inch drives, and both 2.5 and 3.5 inch drives (VMAX 400K, VMAX 200K, and VMAX 100K only)

VMAX 400K

Contains 1–8 engines

Contains a maximum of 256 front-end ports

Supports 10–5760 drives

VMAX 200K

Contains 1–4 engines

Contains a maximum of 128 front-end ports

Supports 10–2880 drives

VMAX 100K

Contains 1–2 engines

Contains a maximum of 128 front-end ports

Supports 10–1440 drives

Supports 2.5-inch drives (VMAX All Flash models only)

VMAX All Flash 950F and 950FX

Contains 1–8 engines

Contains a maximum of 192 front-end ports

Supports 17–1920 drives

VMAX All Flash 850F and 850FX

Contains 1–8 engines

Contains a maximum of 192 front-end ports

Supports 17–1920 drives

VMAX All Flash 450F and 450FX

Contains 1–4 engines

Contains a maximum of 96 front-end ports

Supports 17–960 drives

VMAX All Flash 250F and 250FX

Contains 1 or 2 engines

Contains a maximum of 64 front-end ports

Supports 17–100 drives

A single-cabinet configuration of the VMAX All Flash 250F and 250FX is available with a single engine. Expansion of the storage array requires the addition of a storage technology extension.


Storage policies

Policy levels are applied at the storage group level of array masking.

Array storage is organized by the following service level objectives (VMAX 400K, VMAX 200K, and VMAX 100K only):

Optimized (default) - system optimized

Bronze - 12 ms response time, emulating 7.2 K drive performance

Silver - 8 ms response time, emulating 10K RPM drives

Gold - 5 ms response time, emulating 15 K RPM drives

Platinum - 3 ms response time, emulating 15 K RPM drives and enterprise flash drive (EFD)

Diamond - < 1 ms response time, emulating EFD

Array storage is organized by the following service level objective (VMAX All Flash models only):

Diamond - <1 ms response time, emulating EFD
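These service level targets map naturally to a lookup table. The following Python sketch simply restates the targets listed above and checks a measured response time against them; it is illustrative only and not part of any VMAX tooling.

```python
# Service level objective targets restated from the list above, in milliseconds.
# Optimized has no fixed target; Diamond targets sub-millisecond (EFD-class) response.
SLO_MAX_RESPONSE_MS = {
    "Optimized": None,
    "Bronze": 12.0,
    "Silver": 8.0,
    "Gold": 5.0,
    "Platinum": 3.0,
    "Diamond": 1.0,
}

def within_slo(slo: str, measured_ms: float) -> bool:
    """Return True when a measured response time meets the SLO target."""
    target = SLO_MAX_RESPONSE_MS[slo]
    return True if target is None else measured_ms <= target

print(within_slo("Gold", 4.2))     # True
print(within_slo("Diamond", 1.8))  # False
```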

Supported disk drives

VMAX 400K, 200K, and 100K

Disk drive maximums:

VMAX 400K = 5760

VMAX 200K = 2880

VMAX 100K = 1440

Tier 1 drives:

Solid state: 200/400/800/1600 GB

Tier 2 drives:

15 K RPM: 300 GB

10K RPM: 300/600/1200 GB

Tier 3 drives:

7.2 K RPM: 2/4 TB

VMAX All Flash models

Disk drive maximums:

VMAX All Flash 950F and 950FX = 1920

VMAX All Flash 850F and 850FX = 1920

VMAX All Flash 450F and 450FX = 960

VMAX All Flash 250F and 250FX = 100

Tier 1 drives:

Solid state: 960/1920/3840 GB (all VMAX All Flash arrays)

Solid state: 7680/15360 GB (VMAX All Flash 250F/250FX and 950F/950FX models only)

Management hardware options

AMP-2 is available in multiple configurations that use their own resources to run workloads without consuming resources on the Converged System.

AMP-3S is available in a single configuration that uses its own resources to run workloads without consuming resources on the Converged System.

Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the compute and storage arrays in the Converged System. All components have N+N or N+1 redundancy.


Depending upon the configuration, the following maximums apply:

Component

Maximum configurations

Cisco UCS 62xxUP Fabric Interconnects

The maximum number of Cisco UCS 5108 Blade Server Chassis with 4 Cisco UCS domains is:

32 for Cisco UCS 6248UP Fabric Interconnects

64 for Cisco UCS 6296UP Fabric Interconnects

Maximum blades are as follows:

Half width = 512

Full width = 256

Double height = 128
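The maximum blade counts above follow directly from the slot and chassis arithmetic quoted in this section: 8 half-width, 4 full-width, or 2 double-height slots per Cisco UCS 5108 chassis, up to 16 chassis per domain (Cisco UCS 6296UP Fabric Interconnects), and up to 4 domains. The following Python sketch only restates that arithmetic; the function name is illustrative.

```python
# Blade slots per Cisco UCS 5108 chassis, by blade form factor.
BLADES_PER_CHASSIS = {"half_width": 8, "full_width": 4, "double_height": 2}

def max_blades(form_factor: str, chassis_per_domain: int = 16, domains: int = 4) -> int:
    """Maximum blades for a form factor across all Cisco UCS domains."""
    return BLADES_PER_CHASSIS[form_factor] * chassis_per_domain * domains

for form_factor in BLADES_PER_CHASSIS:
    print(form_factor, max_blades(form_factor))
# half_width 512, full_width 256, double_height 128 -- matching the maximums above
```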

Scaling up compute resources

To scale up compute resources, you can add blade packs and chassis activation kits when Converged Systems are built or after they are deployed.

Blade packs

Cisco UCS blades are sold in packs of two, and include two identical Cisco UCS blades.

The base configuration of Converged Systems includes two blade packs. The maximum number of blade packs depends on the selected scale point.

Each blade type must have a minimum of two blade packs as a base configuration and can be increased in single blade pack increments thereafter. Each blade pack is added along with license packs for the following software:

Cisco UCS Manager (UCSM)

VMware vSphere ESXi

Cisco Nexus 1000V Series Switch (Cisco Nexus 1000V Advanced Edition only)

PowerPath/VE


License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switch, and PowerPath/VE are not available for bare metal blades.

Additional chassis

The power supplies and fabric extenders for all chassis are pre-populated and cabled, and all required Twinax cables and transceivers are included. In base Converged System configurations, there is a minimum of two Cisco UCS 5108 Blade Server Chassis, and there are no unpopulated server chassis unless they are ordered that way. Licensing only the fabric interconnect ports that are needed reduces the entry cost for Converged Systems.

As more blades are added and additional chassis are required, additional chassis are added automatically to an order. The kit contains software licenses to enable additional fabric interconnect ports.

Only enough port licenses for the minimum number of chassis to contain the blades are ordered. Additional chassis can be added up-front to allow for flexibility in the field or to initially spread the blades across a larger number of chassis.
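As an illustration of the ordering rules above (blades sold in packs of two, a two-pack minimum per blade type, single-pack increments, and port licenses covering only the minimum number of chassis), the following Python sketch estimates the blade packs and half-width chassis required for a requested blade count. It assumes eight half-width slots per Cisco UCS 5108 chassis, as described in the Cisco UCS section; the function names are illustrative.

```python
import math

BLADES_PER_PACK = 2           # Cisco UCS blades are sold in packs of two
MIN_PACKS_PER_BLADE_TYPE = 2  # each blade type starts with two blade packs
HALF_WIDTH_SLOTS = 8          # half-width slots per Cisco UCS 5108 chassis

def blade_packs_needed(blades: int) -> int:
    """Blade packs to order for one blade type, in single-pack increments."""
    return max(math.ceil(blades / BLADES_PER_PACK), MIN_PACKS_PER_BLADE_TYPE)

def chassis_needed(half_width_blades: int) -> int:
    """Minimum chassis (and therefore FI port licenses) to hold the blades."""
    return math.ceil(half_width_blades / HALF_WIDTH_SLOTS)

print(blade_packs_needed(10))  # 5 blade packs
print(chassis_needed(10))      # 2 chassis
```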


Network topology

In the network topology for Converged Systems, LAN and SAN connectivity is segregated into separate Cisco Nexus switches.

LAN switching uses the Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switch and the Cisco Nexus 55xxUP Switch. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, the Cisco MDS 9396S 16G Multilayer Fabric Switch, or the Cisco MDS 9706 Multilayer Director.

The optional VMware NSX feature uses the Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX switches for LAN switching. For more information, refer to the Converged Systems for VMware NSX Architecture Overview.

The compute layer connects to both the Ethernet and FC components of the network layer. Cisco UCS fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through 10 GbE port channels and to the Cisco MDS switches through port channels made up of multiple 8 Gb links.

The front-end IO modules in the storage array connect to the Cisco MDS switches in the network layer over 16 Gb FC links.


The following illustration shows a network block storage configuration for Converged Systems:


SAN boot storage configuration

VMware vSphere ESXi hosts always boot over the FC SAN from a 10 GB boot LUN (VMware vSphere 5.5 and 6.0), which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage can be presented as virtual machine file system (VMFS) data stores or as raw device mappings (RDMs).

VMware vSphere ESXi hosts always boot over the FC SAN from a 15 GB boot LUN (VMware vSphere 6.5), which contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage can be presented as VMFS data stores or as raw device mappings.


Compute layer

Cisco UCS B-Series Blade Servers installed in the Cisco UCS server chassis provide computing power within Converged Systems.

Fabric extenders (FEX) within the Cisco UCS server chassis connect to fabric interconnects (FIs) over converged Ethernet. Up to eight 10 GbE ports on each FEX connect northbound to the FIs regardless of the number of blades in the server chassis. These connections carry IP and FC traffic.

Dell EMC has reserved some of the FI ports to connect to upstream access switches within the Converged System. These connections are formed into a port channel to the Cisco Nexus switches and carry IP traffic destined for the external network links. In a unified storage configuration, this port channel can also carry NAS traffic to the storage layer.

Each FI also has multiple ports reserved by Dell EMC for FC ports. These ports connect to Cisco SAN switches. These connections carry FC traffic between the compute layer and the storage layer. SAN port channels carrying FC traffic are configured between the FIs and upstream Cisco MDS switches.

Cisco UCS

The Cisco UCS data center platform unites compute, network, and storage access. Optimized for virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb/s Ethernet unified network fabric with enterprise-class, x86-based Cisco UCS B-Series Servers.

Converged Systems contain a number of Cisco UCS 5108 Server Chassis. Each chassis can contain up to eight half-width Cisco UCS B-Series M4 and M5 Blade Servers, four full-width, or two double-height blades. The full-width, double-height blades must be installed at the bottom of the chassis.

In a Converged System, each chassis also includes Cisco UCS fabric extenders and Cisco UCS B-Series Converged Network Adapters.

Converged Systems powered by Cisco UCS offer the following features:

Built-in redundancy for high availability

Hot-swappable components for serviceability, upgrade, or expansion

Fewer physical components than in a comparable system built piece by piece

Reduced cabling

Improved energy efficiency over traditional blade server chassis

Compute connectivity

Cisco UCS B-Series Blades installed in the Cisco UCS chassis, along with C-series Compute Technology Extension Servers, provide computing power in a Converged System.

Fabric extenders (FEX) in the Cisco UCS chassis connect to Cisco fabric interconnects (FIs) over converged Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to the fabric interconnects, regardless of the number of blades in the chassis. These connections carry IP and storage traffic.


Dell EMC uses multiple ports for each fabric interconnect for 8 Gbps FC. These ports connect to Cisco MDS storage switches and the connections carry FC traffic between the compute layer and the storage layer. These connections also enable SAN booting of the Cisco UCS blades.
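For a rough sense of the converged Ethernet bandwidth this wiring provides, the sketch below multiplies the FEX uplink count by the 10 GbE link speed. It assumes two fabric extenders per chassis (one per fabric), which is not stated explicitly in this section, so treat the result as illustrative.

```python
# Hedged arithmetic: aggregate converged Ethernet bandwidth from one chassis
# to the fabric interconnects, assuming two FEX per chassis (one per fabric)
# and up to eight 10 GbE uplinks per FEX as described above.

def chassis_uplink_bandwidth_gbps(fex_per_chassis: int = 2,
                                  uplinks_per_fex: int = 8,
                                  link_speed_gbps: int = 10) -> int:
    return fex_per_chassis * uplinks_per_fex * link_speed_gbps

print(chassis_uplink_bandwidth_gbps())  # 160 Gbps with fully populated FEX uplinks
```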

Cisco UCS fabric interconnects

Cisco UCS fabric interconnects provide network connectivity and management capability to the Cisco UCS blades and chassis.

Northbound, the FIs connect directly to Cisco network switches for Ethernet access into the external network. They also connect directly to Cisco MDS switches for FC access of the attached Converged System storage. These connections are currently 8 Gbps FC. VMAX storage arrays have 16 Gbps FC connections into the Cisco MDS switches.

VMware NSX

This VMware NSX feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the required port count for VMware NSX external connectivity (edges).

Cisco Trusted Platform Module

Cisco Trusted Platform Module (TPM) provides authentication and attestation services that provide safer computing in all environments.

Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that are used to authenticate remote and local server sessions. Cisco TPM is available by default as a component in Cisco UCS B-Series and C-Series servers, and is shipped disabled.

Only the Cisco TPM hardware is supported; Cisco TPM functionality is not supported. Because making effective use of the Cisco TPM involves the use of a software stack from a vendor with significant experience in trusted computing, defer to the software stack vendor for configuration and operational considerations relating to the Cisco TPM.


Disjoint Layer 2 configuration

Traffic is split between two or more networks at the FI in a Disjoint Layer 2 configuration to support two or more discrete Ethernet clouds.

Cisco UCS servers connect to two different clouds. Upstream Disjoint Layer 2 networks enable two or more Ethernet clouds that never connect to be accessed by VMs located in the same Cisco UCS domain.

The following illustration provides an example implementation of Disjoint Layer 2 networking into a Cisco UCS domain:


vPCs 101 and 102 are production uplinks that connect to the network layer of the Converged System. vPCs 105 and 106 are external uplinks that connect to other switches. If using Ethernet performance port channels (103 and 104, by default), port channels 101 through 104 are assigned to the same VLANs. Disjoint Layer 2 network connectivity can be configured with an individual uplink on each FI.
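The uplink-to-VLAN relationship described above can be modeled as a simple mapping. The Python sketch below is purely illustrative (it is not the Cisco UCS Manager API), and the VLAN names are hypothetical; it only shows that the production and Ethernet performance port channels share one VLAN set while the external uplinks carry a disjoint set that never mixes with it.

```python
# Illustrative model of Disjoint Layer 2 uplink pinning on one fabric interconnect.
# Port channels 101/103 are production and Ethernet performance uplinks sharing the
# same VLANs; port channel 105 is an external uplink carrying a disjoint VLAN set.
# VLAN names are hypothetical.
uplink_vlans = {
    "Po101": {"prod-esx-mgmt", "prod-vmotion", "prod-data"},
    "Po103": {"prod-esx-mgmt", "prod-vmotion", "prod-data"},
    "Po105": {"dmz-web", "dmz-app"},
}

def uplinks_for_vlan(vlan: str) -> list:
    """Return the port channels allowed to carry a given VLAN."""
    return [po for po, vlans in uplink_vlans.items() if vlan in vlans]

print(uplinks_for_vlan("prod-data"))  # ['Po101', 'Po103']
print(uplinks_for_vlan("dmz-web"))    # ['Po105']
```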

Bare metal support policy

Since many applications cannot be virtualized due to technical and commercial reasons, Converged Systems support bare metal deployments, such as non-virtualized operating systems and applications.


While Converged Systems can support these workloads, due to the nature of bare metal deployments, Dell EMC can provide only reasonable-effort support for systems that comply with the following requirements:

Converged Systems contain only Dell EMC published, tested, and validated hardware and software components. The Release Certification Matrix provides a list of the certified versions of components for Converged Systems.

The operating systems used on bare metal deployments for compute components must comply with the published hardware and software compatibility guides from Cisco and Dell EMC.

For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by Dell EMC. Dell EMC support is provided only for VMware hypervisors.

Dell EMC reasonable effort support includes Dell EMC acceptance of customer calls, a determination of whether a Converged System is operating correctly, and assistance in problem resolution to the extent possible.

Dell EMC is unable to reproduce problems or provide support on the operating systems and applications installed on bare metal deployments. In addition, Dell EMC does not provide updates to or test those operating systems or applications. The OEM support vendor should be contacted directly for issues and patches related to those operating systems and applications.


Storage layer

VMAX3 storage arrays are high-end, storage systems built for the virtual data center.

Architected for reliability, availability, and scalability, these storage arrays use specialized engines, each of which includes two redundant director modules providing parallel access and replicated copies of all critical data.

The following table shows the software components that are supported by the VMAX3 storage array:

Component

Description

Virtual Provisioning (Virtual Pools)

Virtual Provisioning, based on thin provisioning, is the ability to present an application with more capacity than is physically allocated in the storage array. The physical storage is allocated to the application on demand as it is needed from the shared pool of capacity. Each disk group in the VMAX3 (all-inclusive) is carved into a separate virtual pool.

Storage Resource Pool (SRP)

An SRP is a collection of virtual pools that make up a FAST domain. A virtual pool can be included in only one SRP. Each VMAX initially contains a single SRP that contains all virtual pools in the array.

Fully Automated Storage Tiering for Virtual Pools (FAST VP)

FAST is employed to migrate sub-LUN chunks of data between the various virtual pools in the SRP. Tiering is automatically optimized by dynamically allocating and relocating application workloads based on the defined service level objective (SLO).

SLO

An SLO defines the ideal performance operating range of an application. Each SLO contains an expected maximum response time range. An SLO uses multiple virtual pools in the SRP to achieve its response time objective. SLOs are predefined with the array and are not customizable.

Embedded Network Attached Storage (eNAS)

eNAS consists of virtual instances of the VNX NAS hardware incorporated into the HYPERMAX OS architecture. The software X- Blades and control stations run on VMs embedded in a VMAX engine.

Embedded Management (eMGMT)

eMGMT is the management model for VMAX3 which is a combination of Solutions Enabler and Unisphere for VMAX running locally on the VMAX using virtual servers.

HYPERMAX OS

The storage operating environment for VMAX3 delivering performance, array tiering, availability, and data integrity.

Unisphere for VMAX

Browser-based GUI for creating, managing, and monitoring devices on storage arrays.

VMAX All Flash Inline Compression

Inline Compression is only for the VMAX All Flash models, and enables customers to compress data for increased effective capacity.

Symmetrix Remote Data Facility (SRDF)

A VMAX native replication technology that enables a VMAX system to copy data to one or more VMAX systems.

D@RE

Data at Rest Encryption


VMAX3 storage arrays

VMAX3 storage arrays have characteristics that are common across all models.

VMAX3 hybrid storage arrays (400K, 200K, and 100K) include the following features:

Two 16 Gb multimode (MM), FC, four-port IO modules per director (four per engine) - two slots for additional front-end connectivity are available per director.

Minimum of five drives with a maximum of 360 3.5 inch drives or 720 2.5 inch drives per engine.

Option of 2.5 inch, 3.5 inch or a combination of 2.5 inch and 3.5 inch drives.

Racks may be dispersed, however, each rack must be within 25 meters of the first rack.

Number of supported Cisco UCS domains and servers depends on the number of array engines.

VMAX3 All Flash storage arrays include the following features:

Two 16 Gb multimode (MM), FC, four-port IO modules per director (four per engine). The VMAX All Flash 250F and 250FX have two additional slots per director for front-end connectivity. All other VMAX All Flash arrays have one slot for front-end connectivity.

Minimum of 17 drives per V-Brick. VMAX 250F and 250FX have a minimum of 9 drives per V-Brick.

Maximum of 240 drives per V-Brick. VMAX 250F and 250FX have a maximum of 50 drives per V-Brick.

Racks may be dispersed, however, each rack must be within 25 meters of the first rack.

Number of supported Cisco UCS domains and servers depends on the number of array engines.


Only 2.5 inch drives are supported for VMAX All Flash models.


The following illustration shows the interconnection of the VMAX3 in Converged Systems:


The following table shows the engines with the maximum blades:


Domain count depends on LAN switch connectivity.

Engines | Maximum blades (half-width)
1 | 128
2 | 256
3 | 384
4 | 512
5 | 512
6 | 512
7 | 512
8 | 512
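The scaling relationship in the table above is linear at 128 half-width blades per engine, capped at the 512-blade system maximum. The sketch below simply restates the table.

```python
# Restates the table above: half-width blade support scales at 128 blades per
# engine and is capped at 512 blades for the Converged System.
def max_half_width_blades(engines: int) -> int:
    if not 1 <= engines <= 8:
        raise ValueError("VMAX3 arrays contain 1 to 8 engines")
    return min(engines * 128, 512)

assert [max_half_width_blades(e) for e in range(1, 9)] == [
    128, 256, 384, 512, 512, 512, 512, 512]
```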

Supported drives

The following table shows the supported drives and RAID protection levels for VMAX3 hybrid models:

Drive | Flash | SAS | NL-SAS
3.5-inch drive capacity/speed | 200 GB, 400 GB, 800 GB | 300 GB (10K), 300 GB (15K), 600 GB (10K), 1200 GB (10K) | 2 TB (7.2K), 4 TB (7.2K)
2.5-inch drive capacity/speed | 200 GB, 400 GB, 800 GB | 300 GB (10K), 300 GB (15K), 600 GB (10K), 1200 GB (10K) | -
RAID protection | R5 (3+1) (default), R5 (7+1) | Mirrored (default), R5 (3+1), R5 (7+1) | R6 (6+2) (default), R6 (14+2)

The following table shows the supported drives and RAID protection levels for VMAX All Flash models:

Drive/RAID | 250F/FX | 450F/FX | 850F/FX | 950F/FX
Flash drives | 960 GB, 1.92 TB, 3.84 TB, 7.68 TB, 15.36 TB | 960 GB, 1.92 TB, 3.84 TB | 960 GB, 1.92 TB, 3.84 TB | 960 GB, 1.92 TB, 3.84 TB, 7.68 TB, 15.36 TB
RAID protection | R5 (3+1), R5 (7+1), R6 (6+2) | R5 (7+1), R6 (14+2) | R5 (7+1), R6 (14+2) | R5 (7+1), R6 (14+2)

The following table shows the supported drives on VMAX:

Drive types | VMAX3 (VMAX 100K, 200K, 400K) | VMAX AFA (VMAX 250F, 450F, 850F, 950F)
SSD 2.5 in. | 960 GB, 1.9 TB | 960 GB, 1.9 TB, 3.8 TB, 7.6 TB, and 15.3 TB
SSD 3.5 in. | Not offered | N/A
15K 2.5 in. | Not offered | N/A
15K 3.5 in. | Not offered | N/A
10K 2.5 in. | 600 GB, 1200 GB | N/A
10K 3.5 in. | Not offered | N/A
7.2K 3.5 in. (NL-SAS) | Not offered | N/A

VMAX3 storage array features

Each VMAX 100K, 200K, and 400K storage array has minimum and maximum configurations, engines with processors and cache options.

VMAX 400K storage arrays

VMAX 400K storage arrays include the following features:

Minimum configuration contains one engine and the maximum contains eight engines

VMAX 400K engines contain Intel 2.7 GHz Ivy Bridge processors with 48 cores

Available cache options are: 512 GB, 1024 GB, or 2048 GB (per engine)

VMAX 200K storage arrays

VMAX 200K storage arrays include the following features:

Minimum configuration contains one engine, and the maximum contains four engines

VMAX 200K engines contain Intel 2.6 GHz Ivy Bridge processors with 32 cores

Available cache options are: 512 GB, 1024 GB, or 2048 GB (per engine)

VMAX 100K storage arrays

VMAX 100K contains the following features:

Minimum configuration contains a single engine

Maximum configuration contains two engines

VMAX 100K engines contain Intel 2.1 GHz Ivy Bridge processors with 24 cores

Available cache options are: 512 GB and 1024 GB per engine

VMAX All Flash storage arrays

Availability of the VMAX All Flash option is based on drive capacity and available software of the VMAX All Flash arrays.


Overview

VMAX All Flash models include the following features:

Feature | 250F/FX | 450F/FX | 850F/FX | 950F/FX
Number of V-Bricks | 1-2 | 1-4 | 1-8 | 1-8
Cache per V-Brick | 512 GB, 1024 GB, or 2048 GB | 1024 GB or 2048 GB | 1024 GB or 2048 GB | 1024 GB or 2048 GB
Initial capacity per V-Brick | 11.3 TBu | 52.6 TBu | 52.6 TBu | 56.6 TBu
Incremental capacity | 11.3 TBu | 13.2 TBu | 13.2 TBu | 13.2 TBu
CPU | Intel Xeon E5-2650 v4, 2.2 GHz, 12 cores | Intel Xeon E5-2650 v2, 2.6 GHz, 8 cores | Intel Xeon E5-2697 v2, 2.7 GHz, 12 cores | Intel Xeon E5-2697 v4, 2.3 GHz, 18 cores

Note the following best practices and options:

All V-Bricks in the array must contain identical cache types.

A single cabinet Vblock configuration of the VMAX All Flash 250F and 250FX is available. It has a single engine and expansion of the storage array requires the addition of a Converged Technology Extension for Dell EMC Storage.

By default data devices have compression enabled at the Storage Group level. Disable compression if the application is not designed for compression.

The following table describes the VMAX All Flash versions:

Software package | VMAX All Flash 950F, 850F, 450F, 250F | VMAX All Flash 950FX, 850FX, 450FX, 250FX
Base software (included) | HYPERMAX operating system, Thin provisioning, QoS or host I/O limits, eNAS, Embedded Management, Inline Backend Compression | HYPERMAX operating system, Thin provisioning, QoS or host I/O limits, eNAS, Embedded Management, Inline Backend Compression, D@RE
Optional software | D@RE (disabled by default), SRDF (disabled by default) | None; by default, all software is included.

Embedded Network Attached Storage

VMAX3 arrays support Unified storage by consolidating Block and File data into a single array.

Embedded Network Attached Storage (eNAS) uses the hypervisor to create and run a set of VMs on VMAX3 controllers. The VMs host software X-Blades and control stations, which are the two major elements of eNAS. The virtual elements are distributed across the VMAX3 system to evenly consume VMAX3 resources for both performance and capacity.

The following table shows the eNAS elements for the VMAX models:

VMAX model | Maximum number of software X-Blades | Usable capacity | Maximum eNAS I/O modules per software X-Blade | Supported I/O modules
100K | 2 (1 active + 1 standby) | 256 TBu | 2 | 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)
200K | 4 (3 active + 1 standby) | 1536 TBu | 2* | 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)
400K | 8 (7 active + 1 standby) | 3584 TBu | 2* | 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)
250F/250FX (requires 2 V-Bricks) | 4 (3 active + 1 standby) | 1.1 PBu | 2* | 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)
450F/450FX (requires a minimum of 2 V-Bricks) | 4 (3 active + 1 standby) | 1.5 PBu | 1* | 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 1 GbE Cu, 4 x 8 Gb FC Opt (NDMP)
850F/850FX (requires a minimum of 4 V-Bricks) | 8 (7 active + 1 standby) | 3.5 PBu | 1* | 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 1 GbE Cu, 4 x 8 Gb FC Opt (NDMP)
950F/950FX (requires a minimum of 4 V-Bricks) | 8 (7 active + 1 standby) | 3.5 PBu | 1* | 2 x 10 GbE Cu, 2 x 10 GbE Opt, 4 x 8 Gb FC Opt (NDMP)

*Converged Systems require at least two of the available slots to have 16 Gb FC IOMs per director for host connectivity.

Symmetrix Remote Data Facility

VMAX3 arrays support remote replication using Symmetrix Remote Data Facility (SRDF).

SRDF requires:

CPU cores dedicated to the RDF protocol (CPU Slice running RDF emulation)


Dedicated front-end ports assigned to the RDF over FC (RF) or RDF over Ethernet (RE) emulation

SRDF ports must be on dedicated IO modules (SLICs). Front-end ports used for host connectivity should not share 16 Gb FC SLICs with SRDF.

SRDF is not configured in the factory. Dell EMC Professional Services configures SRDF after the VxBlock System is installed at the customer site.

Converged Systems support the following SRDF connectivity models. All participating VMAX arrays connect as follows:

To the customer SAN/LAN switches (default connectivity)

To the same VxBlock SAN/LAN switches

To the same fabric technology SAN/LAN switches

VxBlock System SAN switches should never connect directly to a customer SAN. It is permissible for a VxBlock System SAN fabric to connect to a different VxBlock System for Data Protection purposes. Refer to the Integrated Data Protection Design Guide for additional information.

The following table shows the I/O modules supported for SRDF on the VMAX storage arrays:

I/O module | VMAX 400K, 200K, 100K | VMAX 850F/FX, 450F/FX | VMAX 950F/FX, 250F/FX
4 x 16 Gb FC | Yes | Yes | Yes
4 x 8 Gb FC | Yes | Yes | Yes
4 x 10 Gb Ethernet | No | Yes | Yes
2 x 10 Gb Ethernet | Yes | Yes | No
2 x 1 Gb Ethernet Cu | Yes | Yes | Yes
2 x 1 Gb Ethernet Opt | Yes | Yes | Yes

Network layer

LAN and SAN make up the network layer.

LAN layer

The LAN layer includes a pair of Cisco Nexus switches.

The Converged System includes a pair of Cisco Nexus 55xxUP, Cisco Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches, and a pair of Cisco Nexus 3172TQ Switches for the management layer.

The Cisco Nexus switches provide 10-GbE connectivity:

Between internal components

To the site network

To the AMP-2 through redundant connections between AMP-2 and the Cisco Nexus 9000 Series Switches

To the AMP-3S through redundant connections between AMP-3S and the Cisco Nexus 9000 Series Switches. AMP-3S with VMware vSphere 6.5 is supported only with the Cisco Nexus 93180YC-EX.

The following table shows LAN layer components:

Component

Description

Cisco Nexus 93180YC-EX

1 U appliance

Supports 48 fixed 10/25-Gbps SFP+ ports and 6 fixed 40/100-Gbps QSFP+ ports

No expansion modules available

Cisco Nexus 9396PX Switch

2 U appliance

Supports 48 fixed, 10-Gbps SFP+ ports and 12 fixed, 40-Gbps QSFP + ports

No expansion modules available

Cisco Nexus 3172TQ Switch

1 U appliance

Supports 48 fixed, 100 Mbps/1000 Mbps/10 Gbps twisted pair connectivity ports, and 6 fixed, 40-Gbps QSFP+ ports for the management layer of the Converged System

Cisco Nexus 3064-T Switch - management networking

The base Cisco Nexus 3064-T Switch provides 48 fixed 100 Mbps/1 GbE/10 GbE Base-T ports and four QSFP+ ports that provide 40 GbE connections.


The following table shows core connectivity for the Cisco Nexus 3064-T Switch for management networking and reflects the AMP-2 HA base for two servers:

Feature | Used ports | Port speeds | Media
Management uplinks from fabric interconnect (FI) | 2 | 1 GbE | Cat6
Uplinks to customer core | 2 | Up to 10 GbE | Cat6
vPC peer links | 2 QSFP+ | 10 GbE/40 GbE | Cat6/MMF 50µ/125 LC/LC
Uplinks to management | 1 | 1 GbE | Cat6
Cisco Nexus management ports | 1 | 1 GbE | Cat6
Cisco MDS management ports | 2 | 1 GbE | Cat6
VMAX3 management ports | 4 | 1 GbE | Cat6
AMP-2 CIMC ports | 1 | 1 GbE | Cat6
AMP-2 ports | 2 | 1 GbE | Cat6
AMP-2 10G ports | 2 | 10 GbE | Cat6
VNXe management ports | 1 | 1 GbE | Cat6
VNXe NAS ports | 4 | 10 GbE | Cat6
Gateways | 14 | 100 Mb/1 GbE | Cat6

The remaining ports in the Cisco Nexus 3064-T Switch provide support for additional domains and their necessary management connections.

Cisco Nexus 3172TQ Switch - management networking

Each Cisco Nexus 3172TQ Switch provides 48 100 Mbps/1000 Mbps/10 Gbps twisted pair connectivity and six 40 GbE QSFP+ ports.

Cisco Nexus 3172TQ Switch on AMP

The following table shows core connectivity for the Cisco Nexus 3172TQ Switches for management networking. It reflects the base of two servers (AMP-2) or six servers (AMP-3S), and the port count is divided between the two switches.

Feature | Used ports | Port speeds | Media
Management uplinks from FI | 2 | 1 GbE | Cat6
Uplinks to customer core | 2 | Up to 10 GbE | Cat6
vPC peer links | 2 QSFP+ | 10 GbE/40 GbE | Cat6/MMF 50µ/125 LC/LC
Uplinks to management | 1 | 1 GbE | Cat6
Cisco Nexus management ports | 2 | 1 GbE | Cat6
Cisco MDS management ports | 2 | 1 GbE | Cat6
VMAX3 management ports | 4 | 1 GbE | Cat6
AMP-2 or AMP-3S CIMC ports | 1 or 6 | 1 GbE | Cat6
Dell EMC AMP-2S NAS/iSCSI ports | 8 | 10 GbE | Cat6
Gateways | 14 | 100 Mb/1 GbE | Cat6
Dell EMC AMP-3S Unity management ports | 2 | 1 GbE | Cat6
Dell EMC AMP-3S Unity iSCSI ports | 12 | 10 GbE | Cat6

The remaining ports in the Cisco Nexus 3172TQ Switch provide support for additional domains and their necessary management connections.

Cisco Nexus 5548UP Switch

The base Cisco Nexus 5548UP Switch provides 32 SFP+ ports used for 1 Gbps or 10 Gbps connectivity for all Converged System production traffic.

The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from fabric interconnect (FI) | 8 | 10 Gbps | Twinax
Uplinks to customer core | 8 | Up to 10 Gbps | SFP+
Uplinks to other Cisco Nexus 55xxUP Switches | 2 | 10 Gbps | Twinax
Uplinks to management | 3 | 10 Gbps | Twinax
Customer IP backup | 4 | 1 Gbps or 10 Gbps | SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, 28 additional ports are available to provide additional network connectivity.

Cisco Nexus 5596UP Switch

The base Cisco Nexus 5596UP Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity for LAN traffic.

The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module):

Feature | Used ports | Port speeds | Media
Uplinks from Cisco UCS fabric interconnect | 8 | 10 Gbps | Twinax
Uplinks to customer core | 8 | Up to 10 Gbps | SFP+
Uplinks to other Cisco Nexus 55xxUP Switches | 2 | 10 Gbps | Twinax
Uplinks to management | 2 | 10 Gbps | Twinax

The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the following additional connectivity option:

Feature | Used ports | Port speeds | Media
Customer IP backup | 4 | 1 Gbps or 10 Gbps | SFP+

If an optional 16 unified port module is added to the Cisco Nexus 5596UP Switch, additional ports are available to provide additional network connectivity.

Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch - segregated networking

The Cisco Nexus 93180YC-EX Switch provides 48 10/25 Gbps SFP+ ports and six 40/100 Gbps QSFP+ uplink ports. The Cisco Nexus 9396PX Switch provides 48 SFP+ ports used for 1 Gbps or 10 Gbps connectivity and 12 40 Gbps QSFP+ ports.

The following table shows core connectivity for the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch with segregated networking for six servers:

Feature | Used ports | Port speeds | Media
Uplinks from FI | 8 | 10 GbE | Twinax
Uplinks to customer core | 8 (10 GbE) or 2 (40 GbE) | Up to 40 GbE | SFP+/QSFP+
vPC peer links | 2 | 40 GbE | Twinax
AMP-3S ESXi management** | 6 or 12 | 10 GbE | SFP+

**Only supported on Cisco Nexus 93180YC-EX.

The remaining ports in the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch provide support for a combination of the following additional connectivity options:

Feature | Available ports | Port speeds | Media
RecoverPoint WAN links (one per appliance pair) | 4 | 1 GbE | GE T SFP+
Customer IP backup | 8 | 1 GbE or 10 GbE | SFP+
Uplinks from Cisco UCS FIs for Ethernet BW enhancement | 8 | 10 GbE | Twinax


SAN layer

Two Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9706 Multilayer Directors, or Cisco MDS 9396S 16G Multilayer Fabric Switches make up two separate fabrics to provide 16 Gbps of FC connectivity between the compute and storage layer components.

Connections from the storage components are over 16 Gbps connections.

With 10 Gbps connectivity, Cisco UCS FIs provide an FC port channel of four 8 Gbps connections (32 Gbps of bandwidth) to each fabric on the Cisco MDS 9148S Multilayer Fabric Switches. This can be increased to eight connections for 64 Gbps of bandwidth. The Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Directors also support 16 connections for 128 Gbps of bandwidth per fabric.
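The bandwidth figures above are simple multiples of the 8 Gbps FC link speed from the fabric interconnects, as the following sketch shows.

```python
# FC port channel bandwidth from a fabric interconnect to one SAN fabric:
# member links multiplied by the 8 Gbps FI link speed.
def fc_port_channel_gbps(links: int, link_speed_gbps: int = 8) -> int:
    return links * link_speed_gbps

print(fc_port_channel_gbps(4))   # 32 Gbps (base configuration)
print(fc_port_channel_gbps(8))   # 64 Gbps
print(fc_port_channel_gbps(16))  # 128 Gbps (Cisco MDS 9396S or 9706 fabrics)
```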

The Cisco MDS switches provide:

FC connectivity between compute and storage layer components

Connectivity for backup and business continuity requirements (if configured)


InterSwitch Links (ISLs) to the existing SAN or between switches are not permitted.

The following table shows SAN network layer components:

Component

Description

Cisco MDS 9148S Multilayer Fabric Switch

1 U appliance

Provides 12-48 line-rate ports for nonblocking 16 Gbps throughput

12 ports are licensed; more ports can be licensed

Cisco MDS 9396S 16G Multilayer Fabric Switch

2 U appliance

Provides 48-96 line-rate ports for nonblocking 16 Gbps throughput

48 ports are licensed; more ports can be licensed in 12-port increments

Cisco MDS 9706 Multilayer Director

9 U appliance

Provides up to 12 Tbps front-panel FC line-rate, nonblocking, system-level switching

Dell EMC uses the advanced 48-port line cards at a line rate of 16 Gbps for all ports

Consists of two 48-port line cards per director; up to two more 48-port line cards can be added

Dell EMC requires that four fabric modules are included with all Cisco MDS 9706 Multilayer Directors for an N+1 configuration

4 PDUs

2 supervisors

Cisco MDS 9148S Multilayer Fabric Switch

Converged Systems incorporate the Cisco MDS 9148S Multilayer Fabric Switch, which provides 12-48 line-rate ports for nonblocking 16 Gbps throughput. In the base configuration, 24 ports are licensed. Additional ports can be licensed as needed.


The following table provides core connectivity for the Cisco MDS 9148S Multilayer Fabric Switch:

Feature | Used ports | Port speeds | Media
FI uplinks | 4 or 8 | 8 Gb | SFP+

Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Director

Converged Systems incorporate the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS 9706 Multilayer Director to provide FC connectivity from storage to compute.

Cisco MDS 9706 Multilayer Directors provide 48 to 192 line-rate ports for nonblocking 16 Gbps throughput. Port licenses are not required for the Cisco MDS 9706 Multilayer Director. The Cisco MDS 9706 Multilayer Director is a director-class SAN switch with four IOM expansion slots for 48-port 16 Gb FC line cards. It deploys two supervisor modules for redundancy.

Cisco MDS 9396S 16G Multilayer Fabric Switches provide 48 to 96 line-rate ports for nonblocking, 16 Gbps throughput. The base license includes 48 ports. More ports can be licensed in 12-port increments.

The Cisco MDS 9396S 16G Multilayer Fabric Switch is a 96-port fixed switch with no IOM modules for port expansion.

The following table provides core connectivity for the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS 9706 Multilayer Director:

Feature | Used ports | Port speeds | Media
Cisco UCS 6248UP 48-Port FI | 4 or 8 | 8 Gb | SFP+
Cisco UCS 6296UP 96-Port FI | 8 or 16 | 8 Gb | SFP+


Virtualization layer

Virtualization components

VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core VMware vSphere components are the VMware vSphere ESXi and VMware vCenter Server for management.

VMware vSphere 5.5 includes a Single Sign-On (SSO) component as a standalone Windows server or as an embedded service on the vCenter server. Only VMware vCenter Server on Windows is supported.

VMware vSphere 6.0 includes a pair of Platform Service Controller Linux appliances to provide the SSO service. Either the VMware vCenter Service Appliance or the VMware vCenter Server for Windows can be deployed.

VMware vSphere 6.5 includes a pair of Platform Service Controller Linux appliances to provide the SSO service. For VMware vSphere 6.5 and later releases, VMware vCenter Server Appliance is the default deployment model for vCenter Server.

The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility with the use of VMware vMotion and Storage vMotion technology.

VMware vSphere Hypervisor ESXi

The VMware vSphere Hypervisor ESXi runs in the management servers and Converged Systems using VMware vSphere Server Enterprise Plus.

The lightweight hypervisor requires very little space to run (less than 6 GB of storage required to install) with minimal management overhead.

In some instances, the hypervisor may be installed on a 32 GB or larger Cisco FlexFlash SD Card (mirrored HV partition). Beginning with VMware vSphere 6.x, all Cisco FlexFlash (boot) capable hosts are configured with a minimum of two 32 GB or larger SD cards.

The compute hypervisor supports four to six 10 GbE physical NICs (pNICs) on the Converged System VICs.

VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor ESXi boots from the SAN through an independent FC LUN presented from the storage array to the compute blades. The FC LUN also contains the hypervisor's locker for persistent storage of logs and other diagnostic files to provide stateless computing in Converged Systems. The stateless hypervisor (PXE boot into memory) is not supported.

Cluster configuration

VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters contain the CPU, memory, network, and storage resources available for allocation to VMs. Clusters can scale up to a maximum of 32 hosts for VMware vSphere 5.5 and 64 hosts for VMware vSphere 6.0. Clusters can support thousands of VMs.


The clusters can also support a variety of Cisco UCS blades running inside the same cluster. Some advanced CPU functionality might be unavailable if more than one blade model is running in a given cluster.

Datastores

Converged Systems support a mixture of data store types: block level storage using VMFS or file level storage using NFS. The maximum size per VMFS volume is 64 TB (50 TB VMFS3 @ 1 MB). Beginning with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a maximum of 255 volumes.

Advanced settings are optimized for VMware vSphere ESXi hosts deployed in Converged Systems to maximize the throughput and scalability of NFS data stores. Converged Systems currently support a maximum of 256 NFS data stores per host.
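A small validation sketch makes the datastore limits above concrete. The limits are taken directly from this section; the function itself is illustrative only.

```python
# Illustrative checks against the datastore limits quoted above: 64 TB maximum
# VMFS volume, 255 volumes per host/cluster, and 256 NFS datastores per host.
MAX_VMFS_VOLUME_TB = 64
MAX_VMFS_VOLUMES = 255
MAX_NFS_DATASTORES = 256

def validate_layout(vmfs_volume_sizes_tb, nfs_datastore_count):
    """Return a list of limit violations for a proposed datastore layout."""
    issues = []
    if len(vmfs_volume_sizes_tb) > MAX_VMFS_VOLUMES:
        issues.append("too many VMFS volumes per host/cluster")
    issues += [f"VMFS volume of {size} TB exceeds {MAX_VMFS_VOLUME_TB} TB"
               for size in vmfs_volume_sizes_tb if size > MAX_VMFS_VOLUME_TB]
    if nfs_datastore_count > MAX_NFS_DATASTORES:
        issues.append("too many NFS datastores per host")
    return issues

print(validate_layout([16, 32, 64], nfs_datastore_count=10))  # []
```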

Virtual networks

Virtual networking in the AMP-2 uses the VMware Virtual Standard Switch. Virtual networking is managed by either the Cisco Nexus 1000V distributed virtual switch or VMware vSphere Distributed Switch (VDS). The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities to all servers in the data center by allowing policies to move with a VM during live migration. This provides persistent network, security, and storage compliance.

Alternatively, virtual networking in Converged Systems is managed by VMware VDS with comparable features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a VMware Virtual Standard Switch and a VMware VDS and uses a minimum of four uplinks presented to the hypervisor.

The implementation of Cisco Nexus 1000V for VMware vSphere 5.5 and VMware VDS for VMware vSphere 5.5 use intelligent network CoS marking and QoS policies to appropriately shape network traffic according to workload type and priority. With VMware vSphere 6.0, QoS is set to Default (Trust Host). The vNICs are equally distributed across all available physical adapter ports to ensure redundancy and maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco UCS blade models, regardless of the Cisco UCS VIC hardware. Thus, VMware vSphere ESXi has a predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies are assigned to the vNICs to ensure consistency in case the uplinks need to be migrated to the VMware VDS after manufacturing.

VMware vCenter Server (VMware vSphere 6.5)

VMware vCenter Server is a central management point for the hypervisors and VMs. VMware vCenter Server 6.5 resides on the VMware vCenter Server Appliance (vCSA).

By default, VMware vCenter Server is deployed using the VMware vCSA. VMware Update Manager (VUM) is fully integrated with the VMware vCSA and runs as a service to assist with host patch management.

AMP

AMP and the Converged System have a single VMware vCSA instance.

VMware vCenter Server provides the following functionality:

Cloning of VMs


Creating templates

VMware vMotion and VMware Storage vMotion

Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server provides monitoring and alerting capabilities for hosts and VMs. Converged System administrators can create and apply the following alarms to all managed objects in VMware vCenter Server:

Data center, cluster and host health, inventory, and performance

Data store health and capacity

VM usage, performance, and health

Virtual network usage and health
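The alarm and health information described above can also be read programmatically from VMware vCenter Server. The following hedged sketch uses the open-source pyVmomi library to list alarms currently triggered below the root folder; the hostname and credentials are placeholders, and this is not part of the Converged System management tooling.

```python
# Hedged example: list triggered vCenter alarms with pyVmomi.
# The hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=context)
try:
    root = si.RetrieveContent().rootFolder
    for state in root.triggeredAlarmState:
        # Each entry names the affected object, the alarm, and its severity.
        print(state.entity.name, state.alarm.info.name, state.overallStatus)
finally:
    Disconnect(si)
```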

Databases

The VMware vCSA uses the embedded PostgreSQL database. The VMware Update Manager and VMware vCSA share the same PostgreSQL database server, but use separate PostgreSQL database instances.

Authentication

Converged Systems support the VMware Single Sign-On (SSO) Service capable of the integration of multiple identity sources including AD, Open LDAP, and local accounts for authentication. VMware vSphere 6.5 includes a pair of VMware Platform Service Controller (PSC) Linux appliances to provide the VMware SSO service. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and Update Manager run as separate services. Each service can be configured to use a dedicated service account depending on the security and directory services requirements.


Supported features

Dell EMC supports the following VMware vCenter Server features:

VMware SSO Service

VMware vSphere Platform Service Controller

VMware vSphere Web Client (used with Vision Intelligent Operations or VxBlock Central)

VMware vSphere Distributed Switch (VDS)

VMware vSphere High Availability

VMware DRS

VMware Fault Tolerance

VMware vMotion (Layer 3 capability available for compute resources, version 6.0 and later)

VMware Storage vMotion

Raw Device Mappings

Resource Pools

Storage DRS (capacity only)

Storage driven profiles (user-defined only)

Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)

VMware Syslog Service

VMware Core Dump Collector

VMware vCenter Web Client

VMware vCenter Server (VMware vSphere 5.5 and 6.0)

VMware vCenter Server is the central management point for the hypervisors and VMs.

VMware vCenter is installed on a 64-bit Windows Server. VMware Update Manager is installed on a 64-bit Windows Server and runs as a service to assist with host patch management.

AMP-2

VMware vCenter Server provides the following functionality:

Cloning of VMs

Template creation

VMware vMotion and VMware Storage vMotion

Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere high-availability clusters

VMware vCenter Server also provides monitoring and alerting capabilities for hosts and VMs. System administrators can create and apply alarms to all managed objects in VMware vCenter Server, including:

Data center, cluster, and host health, inventory, and performance

Data store health and capacity

VM usage, performance, and health

Virtual network usage and health

Databases

The back-end database that supports VMware vCenter Server and VUM is remote Microsoft SQL Server 2008 for VMware vSphere 5.1 and Microsoft SQL Server 2012 for VMware vSphere 5.5 and 6.0. The SQL Server service can be configured to use a dedicated service account.

Authentication

VMware Single Sign-On (SSO) Service integrates multiple identity sources including Active Directory, Open LDAP, and local accounts for authentication. VMware SSO is available in VMware vSphere 5.x and later. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and VUM run as separate Windows services, which can be configured to use a dedicated service account depending on security and directory services requirements.

Supported features

Dell EMC supports the following VMware vCenter Server features:

VMware SSO Service (version 5.x and later)

VMware vSphere Web Client (used with Vision Intelligent Operations or VxBlock Central)

VMware vSphere Distributed Switch (VDS)

VMware vSphere High Availability

VMware DRS

VMware Fault Tolerance

VMware vMotion: Layer 3 capability available for compute resources (version 6.0 and later)

VMware Storage vMotion

Raw Device Mappings

Resource Pools

Storage DRS (capacity only)

Storage-driven profiles (user-defined only)

Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)

VMware Syslog Service

VMware Core Dump Collector

VMware vCenter Web Services

Management

Use VxBlock Central to manage and monitor VxBlock Systems in a data center.

VxBlock Central provides the ability to:

View the health and RCM compliance of multiple VxBlock Systems.

View charts of key performance indicators (KPI) for one or more components or elements.

Download software and firmware components to maintain compliance with the current RCM.

Track real-time information regarding critical faults, errors, and issues affecting VxBlock Systems.

Configure multisystem Active Directory (AD) integration and map AD Groups to VxBlock Central roles.

Set up compute, storage, networks, and PXE services, manage credentials, and upload ISO images for server installation.

Monitor VxBlock System analytics and manage capacity through integration with VMware vRealize Operations (vROps).

VxBlock Central options

VxBlock Central is available in Base, Advanced, and Advanced Plus options to manage your VxBlock System.

Base option

The Base option enables you to monitor the health and compliance of VxBlock Systems through a central dashboard.

VxBlock System health is a bottom-up calculation that monitors health or operational status of the following:

The VxBlock System as a whole system.

The physical components, such as a chassis, disk array enclosure, fan, storage processor, or X-Blade.

The compute, network, storage, and management components that logically group the physical components.

The landing page of VxBlock Central provides a view of the health and compliance of multiple VxBlock Systems. You can run a compliance scan on one or more VxBlock Systems. You can view key performance indicators (KPI) for one or more components.

VxBlock Central contains dashboards that allow you to:

View all components for selected VxBlock Systems, including detailed information such as serial numbers, IP address, firmware version, and location.

View compliance scores and security and technical scan risks.

View and compare RCMs on different systems.

View real-time alerts for your system including severity, time, the system where the alert occurred, the ID, message, and status.

Configure roles with AD integration.

The following table describes each dashboard:

Inventory: Provides high-level details of the components that are configured in a single view, including the name, IP address, type, element manager, RCM scan results, and alert count for components. An inventory item can be selected to suppress or enable alerts; when alerts are suppressed for a specific component, real-time alert notifications are suspended. You can search VxBlock Systems for specific components or subcomponents and export a spreadsheet of your search results.

RCM: Provides the compliance score and the security and technical risks associated with each VxBlock System. From the dashboard, you can:
  View noncompliant components and the security and technical risks associated with components.
  Download software and firmware for your VxBlock System components to upgrade to a new RCM or remediate drift from your current RCM.
  Run compliance scans and download and assess the results.
  Check the base profile to determine whether components have the correct firmware versions.
  Upload and install the latest compliance content.
  Customize the compliance profile.

Alerts: Provides real-time alerting to monitor and receive alerts for critical failures on compute, storage, and network components, so that administrators and Dell EMC Support can respond faster to incidents and minimize the impact of failures. Using the predefined alert notification templates to create alert notification profiles, you can specify how you want to be notified of a critical alert.

Roles: When VxBlock Central is integrated with Active Directory (AD), VxBlock Central authenticates AD users and supports mapping between AD groups and roles. Role mappings control the actions that a user is authorized to perform. By mapping an AD group to a role, you can control user permissions. When an AD user logs in to VxBlock Central, role mappings are checked for the AD groups to which the user is assigned. The set of available permissions depends on the roles mapped to the groups in which the user is a member.

Advanced: Provides access to the Advanced and Advanced Plus options.

Advanced

The Advanced option provides automation and orchestration for daily provisioning tasks through the following features:

VxBlock Central Orchestration Services

VxBlock Central Orchestration Workflows

VxBlock Central Orchestration provides automation and orchestration for daily provisioning tasks through integration with VMware vRealize Orchestrator (vRO).

VxBlock Central Orchestration Services

VxBlock Central Orchestration Services sets up compute, storage, network, and PXE services; it also manages credentials and uploads ISO images for server installation.

The VxBlock Central Orchestration vRO Adapter provides supported workflows for VxBlock System compute expansion.

VxBlock Central Orchestration Workflows

VxBlock Central Orchestration Workflows simplify complex compute, storage, and network provisioning tasks using automated workflows for VMware vRO.

Automated VxBlock Central Orchestration Workflows enable you to concurrently provision multiple VMware vSphere ESXi hosts and add these hosts to the VMware vCenter cluster. The workflows implement Dell EMC best practices for VxBlock Systems and provide the validation and resilience that is required for enterprise-grade operations. Once hosts are provisioned, workflows trigger an RCM compliance scan to ensure compliance with RCM standards. The VMware vRO workflows also support bare-metal server provisioning.
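The workflows themselves ship as packaged VMware vRO content. As a rough illustration of the final "add a host to a cluster" step that a fulfillment workflow drives, the following pyVmomi sketch connects two hosts to an existing cluster. Host names, the cluster name, and credentials are placeholders, and the shipped workflows add the validation, SSL thumbprint handling, and post-provisioning RCM compliance scan described above.

```python
# Illustrative sketch of the "add a host to a cluster" step that the fulfillment
# workflows automate. Host names, cluster name, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Compute-Cluster-01")

    for esxi_host in ("esxi-101.example.local", "esxi-102.example.local"):
        spec = vim.host.ConnectSpec(hostName=esxi_host, userName="root",
                                    password="changeme", force=True)
        # Production code would also supply the host's SSL thumbprint in the spec.
        WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))
finally:
    Disconnect(si)
```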

The following table provides an overview of available workflows and their tasks:

Configuration: Establishes the connection between VMware vRO (with its automation workflows) and VxBlock Central Orchestration Services to run workflow automation.
Available workflow tasks:
  Add VxBlock Central Orchestration Services API gateway
  Update VxBlock Central Orchestration Services API gateway
  Add a vCenter Server instance

Service: Provides a presentation layer for user input and data validation. Service workflows create multiple instances of fulfillment workflows to run concurrently.
Available workflow tasks:
  Provision a host (bare metal)
  Provision a host (bare metal) - VMAX3/PowerMax boot
  Provision a host (ESXi) and add to a cluster - VMAX3/PowerMax boot
  Add a host to a cluster

Fulfillment: Performs overall orchestration of resources and runs automation tasks. You can run multiple fulfillment workflows concurrently based on user input.
Available workflow tasks:
  Provision a bare metal server (UCS)
  Provision bare metal servers (optional - with VMAX boot LUN)
  Provision an ESXi host using VMAX boot LUN

Advanced Plus

The Advanced Plus option contains VxBlock Central Orchestration and VxBlock Central Operations. VxBlock Central Operations simplifies the operations that you must perform for VxBlock Systems by providing advanced monitoring, system analytics, and simplified capacity management through integration with VMware vRealize Operations (vROps) Manager. The vROps Manager integration presents the topology and relationships of VxBlock Systems across compute, storage, network, virtualization, and management components.

VxBlock Central Operations allows you to:

Monitor health, performance, and capacity through predictive analytics.

Troubleshoot and optimize your environment through alerts and recommended actions.

Manage inventory and create reports.

Define custom alerts for performance and capacity metrics. To support these capabilities, VxBlock Central Operations performs the following actions:

Collect data from VxBlock Systems every 15 minutes by default.

Collect real-time alerts from VxBlock Systems every three minutes, by default.

View VxBlock Central VM relationships to physical infrastructure. Core VM, MSM VM, and MSP VM resource monitoring enables you to identify and monitor a collection of resources associated with a VM.

The following illustration provides an overview of how VxBlock Central uses VMware vRealize:

VxBlock Central architecture

VxBlock Central uses VMs to provide services.

The following table provides an overview of VxBlock Central VMs:

Core: Discovers and gathers information about the inventory, location, and health of the VxBlock System.

MSM: Provides functions to manage multiple VxBlock Systems. In a data center environment, one MSM VM can be associated with up to 8 Core VMs.

MSP (optional): Provides functions for RCM content prepositioning.

VMware vRO: Provides workflow engine and workflow designer capabilities.

VxBlock Central Orchestration Services: Provides the firmware repository management, credentials management, log management, and PXE management that VxBlock System workflows require.

VxBlock Central includes the Core VM and the multisystem management (MSM) VM as a minimum configuration. The multisystem prepositioning (MSP) VM deployment is optional for prepositioning.

Discovery

The discovery model resides within a database and is exposed through REST and SNMP interfaces. Initial discovery is performed during manufacturing of the VxBlock System and relies on an XML file that contains build and configuration information. Core VM uses the XML file to populate basic information about the VxBlock System and establish communication with components.

After initial discovery, Core VM uses the following methods to discover the VxBlock System, including physical components and logical entities:

XML API

SNMP

SMI-S

Vendor CLIs, such as Unisphere CLI

Platform Management Interface

Core VM performs discovery every 15 minutes, by default. This setting can be changed as desired.
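As a conceptual stand-in for the SNMP portion of discovery (not the actual Core VM implementation), the following sketch polls the standard SNMP system group of a component using the synchronous pysnmp 4.x high-level API; the management address and community string are placeholders.

```python
# Illustrative SNMP poll of a component's standard system group (sysDescr/sysName).
# Uses the synchronous pysnmp 4.x hlapi; address and community string are
# placeholders. This is a conceptual stand-in, not the Core VM implementation.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),            # SNMPv2c read community
    UdpTransportTarget(("192.0.2.10", 161)),       # component management address
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysName", 0)),
))

if error_indication:
    print(error_indication)
elif error_status:
    print(f"{error_status.prettyPrint()} at index {error_index}")
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```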
