Revision history

Date            Document revision   Description of changes
December 2018   1.19                Added support for VMware DVS switch and VxBlock Central.
August 2017     1.13                Added support for VMAX 950F and 950FX storage arrays.
July 2017       1.12                Added support for Symmetrix Remote Data Facility (SRDF).
March 2017      1.11                Added support for the Cisco Nexus 93180YC-EX Switch.
February 2017   1.10                Added the 256 GB cache option for VMAX 250F and VMAX 250FX.
December 2016   1.9                 Added support for AMP-2 on Cisco UCS C2x0 M4 servers with VMware vSphere 5.5.
November 2016   1.8                 Added support for Dell EMC network attached storage (eNAS); added support for VMAX 250F and VMAX 250FX; removed IPI Appliance from elevations.
October 2015    1.3                 Added support for vSphere 6.0 with Cisco Nexus 1000V switches.
Contents
Introduction................................................................................................................................................. 6
System overview.........................................................................................................................................7
Base configurations ...............................................................................................................................8
Scaling up compute resources.......................................................................................................11
Network topology..................................................................................................................................12
Compute layer...........................................................................................................................................14
Cisco UCS............................................................................................................................................14
Compute connectivity........................................................................................................................... 14
Cisco UCS fabric interconnects............................................................................................................15
Cisco Trusted Platform Module............................................................................................................ 15
Disjoint layer 2 configuration................................................................................................................ 15
Bare metal support policy.....................................................................................................................16
Storage layer............................................................................................................................................. 18
VMAX3 storage arrays......................................................................................................................... 19
VMAX3 storage array features.......................................................................................................22
VMAX All Flash storage arrays...................................................................................................... 22
Embedded Network Attached Storage...........................................................................................23
Symmetrix Remote Data Facility ...................................................................................................24
Management..............................................................................................................................................38
VxBlock Central options....................................................................................................................... 38
VxBlock Central architecture................................................................................................................ 42
Datacenter architecture........................................................................................................................ 48
AMP overview...................................................................................................................................... 50
AMP hardware components...........................................................................................................51
Management software components (vSphere 5.5 and 6.0)........................................................... 52
Management software components (VMware vSphere 6.5)..........................................................53
AMP-2 management network connectivity.....................................................................................54
AMP-3S management network connectivity ................................................................................. 61
Sample configurations............................................................................................................................. 64
Sample VxBlock System 740 and Vblock System 740 with VMAX 400K............................................ 64
Sample VxBlock System 740 and Vblock System 740 with VMAX 200K............................................ 65
Sample VxBlock System 740 and Vblock System 740 with VMAX 100K............................................ 67
Additional references............................................................................................................................... 70
Virtualization components.................................................................................................................... 70
Compute components.......................................................................................................................... 70
Network components............................................................................................................................71
Storage components............................................................................................................................ 72
Introduction
This document describes the high-level design of the Converged System and the hardware and software
components.
In this document, the VxBlock System and Vblock System are referred to as Converged Systems.
System overview
Converged Systems are modular platforms with defined scale points that meet the higher performance
and availability requirements of business-critical applications.
SAN storage media are used for deployments involving large numbers of VMs and users, and provide
the following features:
• Multicontroller, scale-out architecture with consolidation and efficiency for the enterprise.
Local boot disks are optional and available only for bare metal blades.
Components
Converged Systems contain the following key hardware and software components:
Resource Components
Converged Systems Management: Vision Intelligent Operations for Converged Systems or VxBlock Central for VxBlock Systems.
The following options are available for VxBlock Central:
• The Base option provides the VxBlock Central user interface.
• The Advanced option adds VxBlock Central Orchestration, which provides:
— VxBlock Central Orchestration Services
— VxBlock Central Orchestration Workflows
• The Advanced Plus option adds VxBlock Central Operations and VxBlock Central Orchestration.
Network • Cisco Nexus 5548UP Switches, Cisco Nexus 5596UP Switches, Cisco
Nexus 93180YC-EX, or Cisco Nexus 9396PX Switches
• Cisco MDS 9148S, Cisco MDS 9396S 16G Multilayer Fabric Switches, or
Cisco MDS 9706 Multilayer Directors
• Cisco Nexus 3064-T Switches or Cisco Nexus 3172TQ Switches
• Optional Cisco Nexus 1000V Series Switches
• Optional VMware vSphere Distributed Switch (VDS) for VxBlock Systems
• Optional VMware NSX Virtual Networking for VxBlock Systems
Base configurations
Converged Systems have a base configuration that is a minimum set of compute and storage
components, and fixed network resources.
These components are integrated in one or more 28-inch 42 U cabinets. In the base configuration, you
can customize the following hardware aspects:
Edge servers (with optional VMware NSX): Refer to the Converged Systems for VMware NSX Architecture Overview.
Network:
• One pair of Cisco MDS 9148S Multilayer Fabric Switches, one pair of Cisco MDS 9396S Multilayer Fabric Switches, or one pair of Cisco MDS 9706 Multilayer Directors
• One pair of Cisco Nexus 93180YC-EX Switches or one pair of Cisco Nexus 9396PX Switches
• One pair of Cisco Nexus 3172TQ Switches
Storage: Supports 2.5-inch drives, 3.5-inch drives, or a combination of 2.5-inch and 3.5-inch drives
(VMAX 400K, VMAX 200K, and VMAX 100K only)
• VMAX 400K
— Contains 1–8 engines
— Contains a maximum of 256 front-end ports
— Supports 10–5760 drives
• VMAX 200K
— Contains 1–4 engines
— Contains a maximum of 128 front-end ports
— Supports 10–2880 drives
• VMAX 100K
— Contains 1–2 engines
— Contains a maximum of 128 front-end ports
— Supports 10–1440 drives
Storage policies: Policy levels are applied at the storage group level of array masking.
Array storage is organized by the following service level objectives (VMAX
400K, VMAX 200K, and VMAX 100K only):
• Optimized (default) - system optimized
• Bronze - 12 ms response time, emulating 7.2 K drive performance
• Silver - 8 ms response time, emulating 10K RPM drives
• Gold - 5 ms response time, emulating 15 K RPM drives
• Platinum - 3 ms response time, emulating 15 K RPM drives and enterprise
flash drive (EFD)
• Diamond - < 1 ms response time, emulating EFD
Array storage is organized by the following service level objective (VMAX All
Flash models only):
• Diamond - <1 ms response time, emulating EFD
Management hardware options:
• AMP-2 is available in multiple configurations that use their own resources to run workloads without consuming resources on the Converged System.
• AMP-3S is available in a single configuration that uses its own resources to
run workloads without consuming resources on the Converged System.
Together, the components offer balanced CPU, I/O bandwidth, and storage capacity relative to the
compute and storage arrays in the Converged System. All components have N+N or N+1 redundancy.
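The engine, front-end port, and drive ranges listed for the hybrid arrays above lend themselves to a simple configuration check. The following is a minimal Python sketch; the limit values come from the list above, while the lookup structure and helper function are illustrative and not part of any Dell EMC tooling:

```python
# Engine, front-end port, and drive limits for the VMAX3 hybrid models,
# taken from the base-configuration list above.
VMAX_LIMITS = {
    "VMAX 400K": {"engines": (1, 8), "max_fe_ports": 256, "drives": (10, 5760)},
    "VMAX 200K": {"engines": (1, 4), "max_fe_ports": 128, "drives": (10, 2880)},
    "VMAX 100K": {"engines": (1, 2), "max_fe_ports": 128, "drives": (10, 1440)},
}

def validate_array(model: str, engines: int, drives: int) -> list:
    """Return a list of limit violations for a proposed engine/drive count."""
    limits = VMAX_LIMITS[model]
    problems = []
    lo, hi = limits["engines"]
    if not lo <= engines <= hi:
        problems.append(f"{model} supports {lo}-{hi} engines, got {engines}")
    lo, hi = limits["drives"]
    if not lo <= drives <= hi:
        problems.append(f"{model} supports {lo}-{hi} drives, got {drives}")
    return problems

print(validate_array("VMAX 200K", engines=3, drives=3000))
# -> ['VMAX 200K supports 10-2880 drives, got 3000']
```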
Depending upon the configuration, the following maximums apply:
Cisco UCS 62xxUP Fabric Interconnects: The maximum number of Cisco UCS Server Chassis with 4 Cisco UCS domains is:
• 32 for Cisco UCS 6248UP Fabric Interconnects
• 64 for Cisco UCS 6296UP Fabric Interconnects
The maximum number of blades is as follows:
• Half width = 512
• Full width = 256
• Double height = 128
Blade packs
Cisco UCS blades are sold in packs of two, and include two identical Cisco UCS blades.
The base configuration of Converged Systems includes two blade packs. The maximum number of blade
packs depends on the selected scale point.
Each blade type must have a minimum of two blade packs as a base configuration and can be increased
in single blade pack increments thereafter. Each blade pack is added along with license packs for the
following software:
• Cisco Nexus 1000V Series Switch (Cisco Nexus 1000V Advanced Edition only)
• PowerPath/VE
License packs for VMware vSphere ESXi, Cisco Nexus 1000V Series Switch, and
PowerPath/VE are not available for bare metal blades.
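The blade pack and license pack rules above reduce to simple arithmetic. The following Python sketch is illustrative only, assuming the ordering rules stated in this section; the function names are not part of any Dell EMC tooling:

```python
import math

def blade_packs(blade_count: int) -> int:
    """Blades ship in packs of two; each blade type starts at two packs (four
    blades) and grows in single-pack increments thereafter."""
    return max(math.ceil(blade_count / 2), 2)

def license_packs(blade_count: int, bare_metal: bool = False) -> int:
    """Each blade pack is ordered with license packs for VMware vSphere ESXi,
    Cisco Nexus 1000V (Advanced Edition), and PowerPath/VE; bare metal blades
    take no license packs."""
    return 0 if bare_metal else blade_packs(blade_count)

print(blade_packs(7))                     # 4 packs (8 blades ordered)
print(license_packs(7, bare_metal=True))  # 0 license packs for bare metal
```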
Additional chassis
The power supplies and fabric extenders for all chassis are pre-populated and cabled, and all required
Twinax cables and transceivers are included. However, in base Converged Systems configurations,
there is a minimum of two Cisco UCS 5108 Blade Server Chassis. There are no unpopulated server
chassis unless they are ordered that way. This limited licensing reduces the entry cost for Converged
Systems.
As more blades are added and additional chassis are required, additional chassis are added
automatically to an order. The kit contains software licenses to enable additional fabric interconnect ports.
Only enough port licenses for the minimum number of chassis to contain the blades are ordered.
Additional chassis can be added up-front to allow for flexibility in the field or to initially spread the blades
across a larger number of chassis.
Network topology
In the network topology for Converged Systems, LAN and SAN connectivity is segregated into separate
Cisco Nexus switches.
LAN switching uses the Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX Switch and the Cisco Nexus
55xxUP Switch. SAN switching uses the Cisco MDS 9148S Multilayer Fabric Switch, the Cisco MDS
9396S 16G Multilayer Fabric Switch, or the Cisco MDS 9706 Multilayer Director.
The optional VMware NSX feature uses the Cisco Nexus 93180YC-EX or Cisco Nexus 9396PX switches
for LAN switching. For more information, refer to the Converged Systems for VMware NSX Architecture
Overview.
The compute layer connects to both the Ethernet and FC components of the network layer. Cisco UCS
fabric interconnects connect to the Cisco Nexus switches in the Ethernet network through 10 GbE port
channels and to the Cisco MDS switches through port channels made up of multiple 8 Gb links.
The front-end IO modules in the storage array connect to the Cisco MDS switches in the network layer
over 16 Gb FC links.
The following illustration shows a network block storage configuration for Converged Systems:
VMware vSphere ESXi hosts always boot over the FC SAN from a 10 GB boot LUN (VMware vSphere 5.5 and 6.0) or a 15 GB boot LUN (VMware vSphere 6.5). The boot LUN contains the hypervisor's locker for persistent storage of logs and other diagnostic files. The remainder of the storage can be presented as virtual machine file system (VMFS) data stores or as raw device mappings (RDMs).
Compute layer
Cisco UCS B-Series Blade Servers installed in the Cisco UCS server chassis provide computing power
within Converged Systems.
FEX within the Cisco UCS server chassis connect to FIs over converged Ethernet. Up to eight 10-GbE
ports on each FEX connect northbound to the FIs regardless of the number of blades in the server
chassis. These connections carry IP and FC traffic.
Dell EMC has reserved some of the FI ports to connect to upstream access switches within the
Converged System. These connections are formed into a port channel to the Cisco Nexus switches, and
carry IP traffic destined for the external network links. In a unified storage configuration, this port channel
can also carry NAS traffic to the storage layer.
Each FI also has multiple ports reserved by Dell EMC for FC ports. These ports connect to Cisco SAN
switches. These connections carry FC traffic between the compute layer and the storage layer. SAN port
channels carrying FC traffic are configured between the FIs and upstream Cisco MDS switches.
Cisco UCS
The Cisco UCS data center platform unites compute, network, and storage access. Optimized for
virtualization, the Cisco UCS integrates a low-latency, lossless 10 Gb/s Ethernet unified network fabric
with enterprise-class, x86-based Cisco UCS B-Series Servers.
Converged Systems contain a number of Cisco UCS 5108 Server Chassis. Each chassis can contain up
to eight half-width Cisco UCS B-Series M4 and M5 Blade Servers, four full-width, or two double-height
blades. The full-width, double-height blades must be installed at the bottom of the chassis.
In a Converged System, each chassis also includes Cisco UCS fabric extenders and Cisco UCS B-Series
Converged Network Adapters.
These integrated components provide benefits such as reduced cabling.
Compute connectivity
Cisco UCS B-Series Blades installed in the Cisco UCS chassis, along with C-series Compute Technology
Extension Servers, provide computing power in a Converged System.
Fabric extenders (FEX) in the Cisco UCS chassis connect to Cisco fabric interconnects (FIs) over
converged Ethernet. Up to eight 10 GbE ports on each Cisco UCS fabric extender connect northbound to
the fabric interconnects, regardless of the number of blades in the chassis. These connections carry IP
and storage traffic.
Dell EMC uses multiple ports for each fabric interconnect for 8 Gbps FC. These ports connect to Cisco
MDS storage switches and the connections carry FC traffic between the compute layer and the storage
layer. These connections also enable SAN booting of the Cisco UCS blades.
Northbound, the FIs connect directly to Cisco network switches for Ethernet access into the external
network. They also connect directly to Cisco MDS switches for FC access to the attached Converged
System storage. These connections are currently 8 Gbps FC. VMAX storage arrays have 16 Gbps FC
connections into the Cisco MDS switches.
VMware NSX
This VMware NSX feature uses Cisco UCS 6296UP Fabric Interconnects to accommodate the required
port count for VMware NSX external connectivity (edges).
Cisco Trusted Platform Module
Cisco TPM is a computer chip that securely stores artifacts such as passwords, certificates, or encryption keys that are used to authenticate remote and local server sessions. Cisco TPM is available by default as a component in the Cisco UCS B-Series and C-Series servers, and is shipped disabled.
Only the Cisco TPM hardware is supported; Cisco TPM functionality is not supported. Because making effective use of the Cisco TPM involves the use of a software stack from a vendor with significant experience in trusted computing, defer to the software stack vendor for configuration and operational considerations relating to the Cisco TPM.
Related information
www.cisco.com
Disjoint layer 2 configuration
Cisco UCS servers connect to two different clouds. Upstream Disjoint Layer 2 networks enable two or more Ethernet clouds that never connect to be accessed by VMs located in the same Cisco UCS domain.
The following illustration provides an example implementation of Disjoint Layer 2 networking into a Cisco
UCS domain:
vPCs 101 and 102 are production uplinks that connect to the network layer of the Converged System.
vPCs 105 and 106 are external uplinks that connect to other switches. If using Ethernet performance port
channels (103 and 104, by default), port channels 101 through 104 are assigned to the same VLANs.
Disjoint Layer 2 network connectivity can be configured with an individual uplink on each FI.
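The key constraint in a Disjoint Layer 2 design is that a given VLAN is associated with only one upstream Layer 2 cloud (set of uplink port channels) per fabric interconnect. The following Python sketch checks a hypothetical VLAN-to-uplink mapping for that constraint; the VLAN IDs and group names are assumptions for illustration only:

```python
# Hypothetical VLAN assignments for the Disjoint Layer 2 example above: vPCs
# 101/102 carry the production cloud, vPCs 105/106 carry the external cloud.
# A VLAN must not be associated with more than one disjoint Layer 2 cloud.
uplink_vlans = {
    "production (vPC 101/102)": {100, 110, 120},
    "external (vPC 105/106)": {200, 210},
}

def overlapping_vlans(assignments):
    """Return the set of VLANs mapped to more than one disjoint uplink group."""
    seen, overlaps = set(), set()
    for vlans in assignments.values():
        overlaps |= seen & vlans
        seen |= vlans
    return overlaps

print(overlapping_vlans(uplink_vlans))  # set() means the design is valid
```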
Bare metal support policy
While it is possible for Converged Systems to support bare metal workloads, due to the nature of bare metal deployments, Dell EMC can provide only reasonable effort support for systems that comply with the following requirements:
• Converged Systems contain only Dell EMC published, tested, and validated hardware and
software components. The Release Certification Matrix provides a list of the certified versions of
components for Converged Systems.
• The operating systems used on bare metal deployments for compute components must comply
with the published hardware and software compatibility guides from Cisco and Dell EMC.
• For bare metal configurations that include other hypervisor technologies (Hyper-V, KVM, and so on), those hypervisor technologies are not supported by Dell EMC. Dell EMC support is provided only on VMware hypervisors.
Dell EMC reasonable effort support includes Dell EMC acceptance of customer calls, a determination of
whether a Converged System is operating correctly, and assistance in problem resolution to the extent
possible.
Dell EMC is unable to reproduce problems or provide support on the operating systems and applications
installed on bare metal deployments. In addition, Dell EMC does not provide updates to or test those
operating systems or applications. The OEM support vendor should be contacted directly for issues and
patches related to those operating systems and applications.
Storage layer
VMAX3 storage arrays are high-end storage systems built for the virtual data center.
Architected for reliability, availability, and scalability, these storage arrays use specialized engines, each
of which includes two redundant director modules providing parallel access and replicated copies of all
critical data.
The following table shows the software components that are supported by the VMAX3 storage array:
Virtual Provisioning (Virtual Pools): Virtual Provisioning, based on thin provisioning, is the ability to present an application with more capacity than is physically allocated in the storage array. The physical storage is allocated to the application on demand as it is needed from the shared pool of capacity. Each disk group in the VMAX3 (all-inclusive) is carved into a separate virtual pool.
Storage Resource Pool (SRP): An SRP is a collection of virtual pools that make up a FAST domain. A virtual pool can only be included in one SRP. Each VMAX initially contains a single SRP that contains all virtual pools in the array.
Fully Automated Storage Tiering for Virtual Pools (FAST VP): FAST is employed to migrate sub-LUN chunks of data between the various virtual pools in the SRP. Tiering is automatically optimized by dynamically allocating and relocating application workloads based on the defined service level objective (SLO).
Embedded Network Attached Storage (eNAS): eNAS consists of virtual instances of the VNX NAS hardware incorporated into the HYPERMAX OS architecture. The software X-Blades and control stations run on VMs embedded in a VMAX engine.
Embedded Management (eMGMT): eMGMT is the management model for VMAX3, which is a combination of Solutions Enabler and Unisphere for VMAX running locally on the VMAX using virtual servers.
Unisphere for VMAX: Browser-based GUI for creating, managing, and monitoring devices on storage arrays.
VMAX All Flash Inline Compression: Inline Compression is available only for the VMAX All Flash models and enables customers to compress data for increased effective capacity.
Symmetrix Remote Data Facility (SRDF): A VMAX native replication technology that enables a VMAX system to copy data to one or more VMAX systems.
VMAX3 storage arrays
VMAX3 storage arrays have characteristics that are common across all models.
VMAX3 hybrid storage arrays (400K, 200K, and 100K) include the following features:
• Two 16 Gb multimode (MM), FC, four-port IO modules per director (four per engine) - two slots for
additional front-end connectivity are available per director.
• Minimum of five drives with a maximum of 360 3.5 inch drives or 720 2.5 inch drives per engine.
• Option of 2.5 inch, 3.5 inch or a combination of 2.5 inch and 3.5 inch drives.
• Racks may be dispersed, however, each rack must be within 25 meters of the first rack.
• Number of supported Cisco UCS domains and servers depends on the number of array engines.
VMAX All Flash storage arrays include the following features:
• Two 16 Gb multimode (MM), FC, four-port IO modules per director (four per engine). The VMAX All Flash 250F and 250FX have two additional slots per director for front-end connectivity. All other VMAX All Flash arrays have one slot for front-end connectivity.
• Minimum of 17 drives per V-Brick. VMAX 250F and 250FX have a minimum of 9 drives per V-
Brick.
• Maximum of 240 drives per V-Brick. VMAX 250F and 250FX have a maximum of 50 drives per V-
Brick.
• Racks may be dispersed, however, each rack must be within 25 meters of the first rack.
• Number of supported Cisco UCS domains and servers depends on the number of array engines.
Only 2.5 inch drives are supported for VMAX All Flash models.
The following illustration shows the interconnection of the VMAX3 in Converged Systems:
The following table shows the number of engines with the maximum number of half-width blades:

Engines   Maximum blades (half-width)
1         128
2         256
3         384
4         512
5         512
6         512
7         512
8         512
Supported drives
The following shows the supported drives and RAID protection levels for VMAX3 hybrid models; for example, 1200 GB 10K drives with RAID 5 (7+1) protection.
The following shows the supported SSD drives for VMAX3 and VMAX All Flash models:

Drive type    VMAX3 (VMAX 100K, 200K, 400K)   VMAX All Flash (VMAX 250F, 450F, 850F, 950F)
SSD 2.5 in.   960 GB, 1.9 TB                  960 GB, 1.9, 3.8, 7.6, and 15.3 TB
• VMAX 400K: The minimum configuration contains one engine and the maximum contains eight engines. Engines contain Intel 2.7 GHz Ivy Bridge processors with 48 cores. Available cache options are 512 GB, 1024 GB, or 2048 GB per engine.
• VMAX 200K: The minimum configuration contains one engine and the maximum contains four engines. Engines contain Intel 2.6 GHz Ivy Bridge processors with 32 cores. Available cache options are 512 GB, 1024 GB, or 2048 GB per engine.
• VMAX 100K: Engines contain Intel 2.1 GHz Ivy Bridge processors with 24 cores.
VMAX All Flash storage arrays

Overview of the VMAX All Flash models (250F, 450F, 850F, 950F):
• Initial capacity per V-Brick: 11.3 TBu (250F), 52.6 TBu (450F), 52.6 TBu (850F), 56.6 TBu (950F)
• CPU: Intel Xeon E5-2650-v4, 2.2 GHz 12 core (250F); Intel Xeon E5-2650-v2, 2.6 GHz 8 core (450F); Intel Xeon E5-2697-v2, 2.7 GHz 12 core (850F); Intel Xeon E5-2697-v4, 2.3 GHz 18 core (950F)
• A single cabinet Vblock configuration of the VMAX All Flash 250F and 250FX is available. It has a
single engine and expansion of the storage array requires the addition of a Converged
Technology Extension for Dell EMC Storage.
• By default data devices have compression enabled at the Storage Group level. Disable
compression if the application is not designed for compression.
The software package differs between the VMAX All Flash 950F, 850F, 450F, and 250F models and the VMAX All Flash 950FX, 850FX, 450FX, and 250FX models.
Embedded Network Attached Storage
Embedded Network Attached Storage (eNAS) uses the hypervisor to create and run a set of VMs on
VMAX3 controllers. The VMs host software X-Blades and control stations, which are two major elements
of eNAS. The virtual elements are distributed across the VMAX3 system to evenly consume VMAX3
resources for both performance and capacity.
The following table shows the eNAS elements for the VMAX3 models, including the maximum number of software X-Blades, usable capacity, maximum eNAS I/O modules per software X-Blade, and supported I/O modules:
*Converged Systems require at least two of the available slots to have 16 Gb FC IOMs per director for
host connectivity.
Symmetrix Remote Data Facility
SRDF requires:
• CPU cores dedicated to the RDF protocol (CPU Slice running RDF emulation)
• Dedicated front-end ports assigned to the RDF over FC (RF) or RDF over Ethernet (RE)
emulation
SRDF ports must be on dedicated IO Modules (SLICs). Front-end ports used for host
connectivity should not share 16 Gb FC SLICs with SRDF.
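The SLIC dedication rule above can be checked mechanically against a port inventory. The following Python sketch uses hypothetical director and slot names for illustration; it is not Dell EMC tooling:

```python
# Hypothetical SLIC (IO module) inventory keyed by director and slot; values are
# the emulations using ports on that module ("FA" = host front end,
# "RF"/"RE" = SRDF over FC/Ethernet).
slic_roles = {
    "director-1 slot-0": {"FA"},
    "director-1 slot-1": {"RF"},        # dedicated SRDF module - compliant
    "director-2 slot-0": {"FA", "RF"},  # shared module - violates the guideline
}

SRDF_EMULATIONS = {"RF", "RE"}

def shared_srdf_modules(roles):
    """Return modules where SRDF ports share a SLIC with host-connectivity ports."""
    return [name for name, emus in roles.items()
            if emus & SRDF_EMULATIONS and emus - SRDF_EMULATIONS]

print(shared_srdf_modules(slic_roles))  # -> ['director-2 slot-0']
```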
SRDF is not configured in the factory. Dell EMC Professional Services configures SRDF after the VxBlock
System is installed at the customer site.
Converged Systems support the following SRDF connectivity models. All participating VMAX arrays
connect as follows:
VxBlock System SAN switches should never connect directly to a customer SAN. It is
permissible for a VxBlock System SAN fabric to connect to a different VxBlock System for Data
Protection purposes. Refer to the Integrated Data Protection Design Guide for additional
information.
The following table shows the elements for the VMAX3 storage arrays:
Network layer
LAN and SAN make up the network layer.
LAN layer
The LAN layer includes a pair of Cisco Nexus switches.
The Converged System includes a pair of Cisco Nexus 55xxUP, Cisco Nexus 3172TQ, and Cisco Nexus
93180YC-EX or Cisco Nexus 9396PX Switches.
These switches provide the following connectivity:
• To the AMP-2 through redundant connections between AMP-2 and the Cisco Nexus 9000 Series Switches
• To the AMP-3S through redundant connections between AMP-3S and the Cisco Nexus 9000
Series Switches. AMP-3S with VMware vSphere 6.5 is supported only with the Cisco Nexus
93180YC-EX.
The following table shows core connectivity for the Cisco Nexus 3064-T Switch for management
networking and reflects the AMP-2 HA base for two servers:
The remaining ports in the Cisco Nexus 3064-T Switch provide support for additional domains and their
necessary management connections.
The following table shows core connectivity for the Cisco Nexus 3172TQ Switches for management
networking and reflects the base for two servers:
The following table shows core connectivity for the Cisco Nexus 3172TQ Switches for management
networking and reflects the base for six servers:
The remaining ports in the Cisco Nexus 3172TQ Switch provide support for additional domains and their
necessary management connections.
The following table shows the core connectivity for the Cisco Nexus 5548UP Switch (no module):
If an optional 16 unified port module is added to the Cisco Nexus 5548UP Switch, 28 additional ports are available to provide additional network connectivity.
The following table shows core connectivity for the Cisco Nexus 5596UP Switch (no module):
The remaining ports in the base Cisco Nexus 5596UP Switch (no module) provide support for the
following additional connectivity option:
If an optional 16 unified port module is added to the Cisco Nexus 5596UP Switch, additional ports are
available to provide additional network connectivity.
The following table shows core connectivity for the Cisco Nexus 93180YC-EX Switch or Cisco Nexus
9396PX Switch with segregated networking for six servers:
The remaining ports in the Cisco Nexus 93180YC-EX Switch or Cisco Nexus 9396PX Switch provide
support for a combination of the following additional connectivity options:
• Uplinks from Cisco UCS FIs for Ethernet bandwidth enhancement: 8 ports, 10 GbE, Twinax
SAN layer
Two Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9706 Multilayer Directors, or Cisco MDS
9396S 16G Multilayer Fabric Switches make up two separate fabrics to provide 16 Gbps of FC
connectivity between the compute and storage layer components.
With 10 Gbps connectivity, Cisco UCS FIs provide an FC port channel of four 8 Gbps connections (32 Gbps of bandwidth) to each fabric on the Cisco MDS 9148S Multilayer Fabric Switches. This can be increased to eight connections for 64 Gbps of bandwidth. The Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706 Multilayer Directors also support 16 connections for 128 Gbps of bandwidth per fabric.
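The bandwidth figures above follow directly from the number of port channel members multiplied by the per-link speed, as the following Python sketch shows:

```python
def port_channel_gbps(links: int, link_speed_gbps: int) -> int:
    """Aggregate bandwidth of a SAN port channel: link count x per-link speed."""
    return links * link_speed_gbps

print(port_channel_gbps(4, 8))    # 32 Gbps  - default FI uplink per fabric
print(port_channel_gbps(8, 8))    # 64 Gbps  - expanded FI uplink per fabric
print(port_channel_gbps(16, 8))   # 128 Gbps - MDS 9396S/9706 maximum per fabric
```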
InterSwitch Links (ISLs) to the existing SAN or between switches are not permitted.
The following table provides core connectivity for the Cisco MDS 9148S Multilayer Fabric Switch:
• FI uplinks: 4 or 8 ports, 8 Gb, SFP+
Cisco MDS 9396S 16G Multilayer Fabric Switch and Cisco MDS 9706
Multilayer Director
Converged Systems incorporate the Cisco MDS 9396S 16G Multilayer Fabric Switch and the Cisco MDS
9706 Multilayer Director to provide FC connectivity from storage to compute.
Cisco MDS 9706 Multilayer Directors provide 48 to 192 line-rate ports for nonblocking 16 Gbps
throughput. Port licenses are not required for the Cisco MDS 9706 Multilayer Director. The Cisco MDS
9706 Multilayer Director is a director-class SAN switch with four IOM expansion slots for 48-port 16 Gb
FC line cards. It deploys two supervisor modules for redundancy.
Cisco MDS 9396S 16G Multilayer Fabric Switches provide 48 to 96 line-rate ports for nonblocking, 16
Gbps throughput. The base license includes 48 ports. More ports can be licensed in 12-port increments.
The Cisco MDS 9396S 16G Multilayer Fabric Switch is a 96-port fixed switch with no IOM modules for
port expansion.
The following table provides core connectivity for the Cisco MDS 9396S 16G Multilayer Fabric Switch and
the Cisco MDS 9706 Multilayer Director:
Virtualization layer
Virtualization components
VMware vSphere is the virtualization platform that provides the foundation for the private cloud. The core
VMware vSphere components are the VMware vSphere ESXi and VMware vCenter Server for
management.
VMware vSphere 5.5 includes a Single Sign-on (SSO) component as a standalone Windows server or as
an embedded service on the vCenter server. Only VMware vSphere vCenter server on Windows is
supported.
VMware vSphere 6.0 includes a pair of Platform Service Controller Linux appliances to provide the SSO
service. Either the VMware vCenter Service Appliance or the VMware vCenter Server for Windows can
be deployed.
VMware vSphere 6.5 includes a pair of Platform Service Controller Linux appliances to provide the SSO
service. For VMware vSphere 6.5 and later releases, VMware vCenter Server Appliance is the default
deployment model for vCenter Server.
The hypervisors are deployed in a cluster configuration. The cluster allows dynamic allocation of
resources, such as CPU, memory, and storage. The cluster also provides workload mobility and flexibility
with the use of VMware vMotion and Storage vMotion technology.
The lightweight hypervisor requires very little space to run (less than 6 GB of storage required to install)
with minimal management overhead.
In some instances, the hypervisor may be installed on a 32 GB or larger Cisco FlexFlash SD Card
(mirrored HV partition). Beginning with VMware vSphere 6.x, all Cisco FlexFlash (boot) capable hosts are
configured with a minimum of two 32 GB or larger SD cards.
The compute hypervisor supports four to six 10 GbE physical NICs (pNICs) on the Converged Systems VICs.
VMware vSphere ESXi does not contain a console operating system. The VMware vSphere Hypervisor
ESXi boots from the SAN through an independent FC LUN presented from the storage array to the
compute blades. The FC LUN also contains the hypervisor's locker for persistent storage of logs and
other diagnostic files to provide stateless computing in Converged Systems. The stateless hypervisor
(PXE boot into memory) is not supported.
Cluster configuration
VMware vSphere ESXi hosts and their resources are pooled together into clusters. These clusters
contain the CPU, memory, network, and storage resources available for allocation to VMs. Clusters can
scale up to a maximum of 32 hosts for VMware vSphere 5.5 and 64 hosts for VMware vSphere 6.0.
Clusters can support thousands of VMs.
The clusters can also support a variety of Cisco UCS blades running inside the same cluster. Some
advanced CPU functionality might be unavailable if more than one blade model is running in a given
cluster.
Datastores
Converged Systems support a mixture of data store types: block level storage using VMFS or file level
storage using NFS. The maximum size per VMFS volume is 64 TB (50 TB VMFS3 @ 1 MB). Beginning
with VMware vSphere 5.5, the maximum VMDK file size is 62 TB. Each host/cluster can support a
maximum of 255 volumes.
Advanced settings are optimized for VMware vSphere ESXi hosts deployed in Converged Systems to
maximize the throughput and scalability of NFS data stores. Converged Systems currently support a
maximum of 256 NFS data stores per host.
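The datastore limits above can be validated before new storage is presented to a host. A minimal Python sketch, using only the limits quoted in this section; the helper function is illustrative:

```python
# Per-host datastore limits quoted in this section.
MAX_VMFS_VOLUME_TB = 64
MAX_VMDK_TB = 62
MAX_VMFS_VOLUMES_PER_HOST = 255
MAX_NFS_DATASTORES_PER_HOST = 256

def can_add_vmfs_volume(existing_volumes: int, size_tb: float) -> bool:
    """Check a proposed VMFS volume against the per-volume size limit and the
    per-host volume count limit."""
    return (size_tb <= MAX_VMFS_VOLUME_TB
            and existing_volumes < MAX_VMFS_VOLUMES_PER_HOST)

print(can_add_vmfs_volume(existing_volumes=250, size_tb=48))  # True
print(can_add_vmfs_volume(existing_volumes=255, size_tb=48))  # False
```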
Virtual networks
Virtual networking in the AMP-2 uses the VMware Virtual Standard Switch. Virtual networking is managed
by either the Cisco Nexus 1000V distributed virtual switch or VMware vSphere Distributed Switch (VDS).
The Cisco Nexus 1000V Series Switch ensures consistent, policy-based network capabilities to all
servers in the data center by allowing policies to move with a VM during live migration. This provides
persistent network, security, and storage compliance.
Alternatively, virtual networking in Converged Systems is managed by VMware VDS with comparable
features to the Cisco Nexus 1000V where applicable. The VMware VDS option consists of both a
VMware Virtual Standard Switch and a VMware VDS and uses a minimum of four uplinks presented to
the hypervisor.
The implementation of Cisco Nexus 1000V for VMware vSphere 5.5 and VMware VDS for VMware
vSphere 5.5 use intelligent network CoS marking and QoS policies to appropriately shape network traffic
according to workload type and priority. With VMware vSphere 6.0, QoS is set to Default (Trust Host). The
vNICs are equally distributed across all available physical adapter ports to ensure redundancy and
maximum bandwidth where appropriate. This provides general consistency and balance across all Cisco
UCS blade models, regardless of the Cisco UCS VIC hardware. Thus, VMware vSphere ESXi has a
predictable uplink interface count. All applicable VLANs, native VLANs, MTU settings, and QoS policies
are assigned to the vNICs to ensure consistency in case the uplinks need to be migrated to the VMware
VDS after manufacturing.
By default, VMware vCenter Server is deployed using the VMware vCSA. VMware Update Manager
(VUM) is fully integrated with the VMware vCSA and runs as a service to assist with host patch
management.
AMP
AMP and the Converged System have a single VMware vCSA instance. VMware vCenter Server provides functionality for the following:
• Cloning of VMs
• Creating templates
• Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere
high-availability clusters
VMware vCenter Server provides monitoring and alerting capabilities for hosts and VMs. Converged
System administrators can create and apply the following alarms to all managed objects in VMware
vCenter Server:
Databases
The VMware vCSA uses the embedded PostgreSQL database. The VMware Update Manager and
VMware vCSA share the same PostgreSQL database server, but use separate PostgreSQL database
instances.
Authentication
Converged Systems support the VMware Single Sign-On (SSO) Service capable of the integration of
multiple identity sources including AD, Open LDAP, and local accounts for authentication. VMware
vSphere 6.5 includes a pair of VMware Platform Service Controller (PSC) Linux appliances to provide the
VMware SSO service. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and
Update Manager run as separate services. Each service can be configured to use a dedicated service
account depending on the security and directory services requirements.
Supported features
• VMware vSphere Web Client (used with Vision Intelligent Operations or VxBlock Central)
• VMware DRS
• VMware vMotion
• VMware Storage vMotion - Layer 3 capability available for compute resources, version 6.0 and
higher
• Resource Pools
VMware vCenter is installed on a 64-bit Windows Server. VMware Update Manager is installed on a 64-bit
Windows Server and runs as a service to assist with host patch management.
AMP-2
VMware vCenter Server provides functionality for the following:
• Cloning of VMs
• Template creation
• Initial configuration of VMware Distributed Resource Scheduler (DRS) and VMware vSphere
high-availability clusters
VMware vCenter Server also provides monitoring and alerting capabilities for hosts and VMs. System
administrators can create and apply alarms to all managed objects in VMware vCenter Server, including:
Databases
The back-end database that supports VMware vCenter Server and VUM is the remote Microsoft SQL
Server 2008 (VMware vSphere 5.1) and Microsoft SQL 2012 (VMware vSphere 5.5/6.0). The SQL Server
service can be configured to use a dedicated service account.
Authentication
VMware Single Sign-On (SSO) Service integrates multiple identity sources including Active Directory,
Open LDAP, and local accounts for authentication. VMware SSO is available in VMware vSphere 5.x and
later. VMware vCenter Server, Inventory, Web Client, SSO, Core Dump Collector, and VUM run as
separate Windows services, which can be configured to use a dedicated service account depending on
security and directory services requirements.
Supported features
• VMware vSphere Web Client (used with Vision Intelligent Operations or VxBlock Central)
• VMware DRS
• VMware vMotion: Layer 3 capability available for compute resources (version 6.0 and later)
• Resource Pools
• Distributed power management (up to 50 percent of VMware vSphere ESXi hosts/blades)
Management
Use VxBlock Central to manage and monitor VxBlock Systems in a data center.
• View charts of key performance indicators (KPI) for one or more components or elements.
• Download software and firmware components to maintain compliance with the current RCM.
• Track real-time information regarding critical faults, errors, and issues affecting VxBlock Systems.
• Configure multisystem Active Directory (AD) integration and map AD Groups to VxBlock Central
roles.
• Set up compute, storage, networks, and PXE services, manage credentials, and upload ISO
images for server installation.
• Monitor VxBlock System analytics and manage capacity through integration with VMware
vRealize Operations (vROPs).
Base option
The Base option enables you to monitor the health and compliance of VxBlock Systems through a central
dashboard.
VxBlock System health is a bottom-up calculation that monitors health or operational status of the
following:
• The physical components such as a chassis, disk array enclosure, fan, storage processor, or X-
Blade.
• The compute, network, storage, and management components that logically group the physical
components.
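A minimal Python sketch of this bottom-up calculation follows; the component names and statuses are assumptions for illustration, not output from VxBlock Central:

```python
# Hypothetical statuses for physical components, grouped by the logical
# component that contains them.
SEVERITY = {"ok": 0, "degraded": 1, "critical": 2}

component_status = {
    "compute": {"chassis-1": "ok", "blade-3": "degraded"},
    "storage": {"engine-1": "ok", "dae-2": "ok"},
    "network": {"mds-a": "ok", "mds-b": "ok"},
    "management": {"amp-server-1": "ok"},
}

def rollup(statuses):
    """A logical group (or the system) is only as healthy as its worst member."""
    return max(statuses.values(), key=SEVERITY.__getitem__)

group_health = {group: rollup(members) for group, members in component_status.items()}
print(group_health)          # health per logical component
print(rollup(group_health))  # overall system health -> 'degraded'
```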
The landing page of VxBlock Central provides a view of the health and compliance of multiple VxBlock
Systems. You can run a compliance scan on one or more VxBlock Systems. You can view key
performance indicators (KPI) for one or more components.
• View all components for selected VxBlock Systems, including detailed information such as serial
numbers, IP address, firmware version, and location.
• View and compare RCMs on different systems.
• View real-time alerts for your system including severity, time, the system where the alert
occurred, the ID, message, and status.
Dashboard Description
Inventory Provides high-level details of the components that are configured in a single view. It provides the name, IP address, type, element manager, RCM scan results, and alert count for components.
An inventory item can be selected to suppress alerts and enable alerts. When alerts are
suppressed for a specific component, real-time alert notifications are suspended.
You can search VxBlock Systems for specific components or subcomponents and export a
spreadsheet of your search.
RCM Provides the compliance score, security, and technical risks associated with each VxBlock
System. From the dashboard, you can:
• View noncompliant components, security, and technical risks associated for components.
• Download software and firmware for your VxBlock System components to upgrade to a new
RCM or remediate drift from your current RCM.
• Run compliance scans and download and assess the results.
• Check the base profile to determine whether components have the correct firmware
versions.
• Upload and install the latest compliance content.
• Customize the compliance profile.
Alerts Provides real-time alerting to monitor and receive alerts for critical failures on compute, storage,
and network components. Administrators and Dell EMC Support can respond faster to incidents
to minimize any impact of failures. Using the predefined alert notification templates to create
alert notification profiles, you can specify how you want to be notified for a critical alert.
Roles When VxBlock Central is integrated with Active Directory (AD), VxBlock Central authenticates
AD users and supports mapping between AD groups and roles.
Role mappings control the actions that a user is authorized to perform. By mapping an AD group
to a role, you can control user permissions. When an AD user logs in to VxBlock Central, role
mappings are checked for AD groups to which the user is assigned. The set of available
permissions depends on the roles mapped to the groups in which the user is a member.
Advanced
The Advanced option provides automation and orchestration for daily provisioning tasks through the
following features:
VxBlock Central Orchestration provides automation and orchestration for daily provisioning tasks through
integration with VMware vRealize Orchestrator (vRO).
VxBlock Central Orchestration Services
VxBlock Central Orchestration Services sets up compute, storage, network, and PXE services. VxBlock
Central Orchestration Services manages credentials and uploads ISO images for server installation.
The VxBlock Central Orchestration vRO Adapter provides supported workflows for VxBlock System
compute expansion.
VxBlock Central Orchestration Workflows simplify complex compute, storage, and network provisioning
tasks using automated workflows for VMware vRO.
Automated VxBlock Central Orchestration Workflows enable you to concurrently provision multiple
VMware vSphere ESXi hosts and add these hosts to the VMware vCenter cluster. The workflows
implement Dell EMC best practices for VxBlock Systems and provide the validation and resilience that is
required for enterprise-grade operations. Once hosts are provisioned, workflows trigger an RCM
compliance scan to ensure compliance with RCM standards. The VMware vRO workflows also support
bare-metal server provisioning.
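As a rough illustration of how several fulfillment requests might be driven concurrently from a script, the following Python sketch submits provisioning requests in parallel. The endpoint URL, path, and payload are assumptions for illustration; they do not describe the documented VxBlock Central Orchestration API:

```python
import concurrent.futures
import requests

# Hypothetical gateway address; the real VxBlock Central Orchestration Services
# API gateway address, path, and payload are not documented here.
ORCHESTRATION_API = "https://vxblock-central.example.com/api/orchestration"

def provision_esxi_host(host_name: str, cluster: str) -> int:
    """Submit one 'Provision a host (ESXi) and add to a cluster' request."""
    resp = requests.post(
        f"{ORCHESTRATION_API}/workflows/provision-esxi-host",
        json={"hostName": host_name, "cluster": cluster},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.status_code

# Several fulfillment requests can run concurrently, mirroring the
# service/fulfillment workflow split described above.
hosts = ["esxi-101", "esxi-102", "esxi-103"]
with concurrent.futures.ThreadPoolExecutor() as pool:
    print(list(pool.map(lambda h: provision_esxi_host(h, "Cluster-01"), hosts)))
```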
The following table provides an overview of available workflows and their tasks:

Configuration: Establishes the connection between VMware vRO with automation workflows and VxBlock Central Orchestration Services to run workflow automation.
• Add VxBlock Central Orchestration Services API gateway
• Update VxBlock Central Orchestration Services API gateway

Service: Provides a presentation layer for user input and data validation. Service workflows create multiple instances of fulfillment workflows to run concurrently.
• Provision a host (bare metal)
• Provision a host (bare metal) - VMAX3/PowerMax boot
• Provision a host (ESXi) and add to a cluster - VMAX3/PowerMax boot
Advanced Plus
The Advanced Plus option contains VxBlock Central Orchestration and VxBlock Central Operations.
VxBlock Central Operations provides features that simplify operations for VxBlock Systems through advanced monitoring, system analytics, and simplified capacity management.
VMware vRealize Operations (vROps) Manager integration with VxBlock Central presents the topology
and relationship of VxBlock Systems with compute, storage, network, virtualization, and management
components. VxBlock Central Operations provides advanced monitoring, system analytics, and simplified
capacity management through integration with VMware vROps Manager.
VxBlock Central Operations allows you to:
• Troubleshoot and optimize your environment though alerts and recommended actions.
• Define custom alerts for performance and capacity metrics in the following actions:
— Collect real-time alerts from VxBlock Systems every three minutes, by default.
— View VxBlock Central VM relationships to physical infrastructure. Core VM, MSM VM, and
MSP VM resource monitoring enables you to identify and monitor a collection of resources
associated with a VM.
The following illustration provides an overview of how VxBlock Central uses VMware vRealize:
VxBlock Central architecture
VxBlock Central uses VMs to provide services.
Core VM: Discovers and gathers information about the inventory, location, and health of the VxBlock System.
VxBlock Central includes the Core VM and the multisystem management (MSM) VM as a minimum
configuration. The multisystem prepositioning (MSP) VM deployment is optional for prepositioning.
Discovery
The discovery model resides within a database and is exposed through REST and SNMP interfaces.
Initial discovery is performed during manufacturing of the VxBlock System and relies on an .XML file that
contains build and configuration information. Core VM uses the .XML file to populate basic information
about the VxBlock System and establish communication with components.
After initial discovery, Core VM uses the following methods to discover the VxBlock System, including
physical components and logical entities:
• XML API
• SNMP
• SMI-S
Core VM performs discovery every 15 minutes, by default. This setting can be changed as desired.
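A minimal Python sketch of a discovery cadence like the one described above; the discovery pass itself is a placeholder, not the Core VM implementation:

```python
import time

DISCOVERY_INTERVAL = 15 * 60  # Core VM default of 15 minutes; configurable

def run_discovery_pass():
    """Placeholder for the XML API / SNMP / SMI-S discovery described above."""
    print("discovering VxBlock System components")

def discovery_loop(passes: int = 3, interval: int = DISCOVERY_INTERVAL):
    """Re-run discovery on a fixed cadence."""
    for _ in range(passes):
        run_discovery_pass()
        time.sleep(interval)

discovery_loop(passes=1, interval=0)  # single immediate pass for illustration
```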
The following illustration is a high-level overview of integration between Core VM and various products
and protocols:
Data collection
VxBlock Central uses data collectors to gather required data from various web services.
The following table describes the data collectors:
VxBlock Central collector Uses the VxBlock Central REST API to collect the VxBlock System
configuration data and key performance indicators (KPI) already
discovered in Core VM. The configuration is stored with KPI data from
Core VM into the Cassandra and Elasticsearch databases.
SMI-S collector Works with the CIM Object Manager (ECOM) service that runs on SMI
components to discover metrics for VMAX:
• Storage array
• Storage processor
• Storage volume
• Storage pool
• Storage tier
• Disk
SNMP collector Collects information from SNMP enabled devices such as Cisco Nexus
and MDS switches to discover metrics. Information can be collected
from the following network components:
• Switches
• Network chassis
• Container
• Fan
• Expansion module
• Power supply bay
• PSU
• Network temperature sensor
• SFP
• IPI appliance
vSphere API collector Works with VMware vCenter Server using the VMware vSphere API to
discover metrics, for datastores, disk partitions, and clusters.
Dell EMC Unity REST collector Collects configuration data from a Dell EMC Unity storage array and its
components.
XIO REST collector Collects metrics for storage array, storage volume, disk, and port.
VxBlock Central collects all other configuration information with the VxBlock Central collector.
XML API collector Collects information from the Cisco UCS using the XML API to discover
metrics.
VMware NSX collector Collects information about VMware NSX components, such as Virtual
Appliance Management and the NSX controllers. The NSX collector
interfaces with the NSX Manager APIs.
The VxBlock Central Shell removes the complexity of working with individual component interfaces and
provides a plug-in structure that can be extended to include more functionality. VxBlock Central Shell
creates an abstraction layer that removes the burden of having to use different login credentials, IP
addresses, and syntax to make configuration changes across multiple components. VxBlock Central Shell
can help manage multiple VxBlock Systems.
For example, to update the NTP server IP addresses for all switches on a VxBlock System, you can issue
a single command without having to log on to each component.
The shell is a framework layer built on top of Python and VxBlock Central API bindings. In addition to the commands provided, any valid Python command can be run in the shell.
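For example, the NTP update mentioned above might be written as a small shell extension along the following lines. This is a hypothetical Python sketch: the shell object, its components() call, and the configure() method are assumptions, not the documented VxBlock Central Shell API:

```python
NEW_NTP_SERVERS = ["10.1.1.10", "10.1.1.11"]

def set_ntp_on_all_switches(shell, ntp_servers):
    """Apply the same NTP servers to every LAN and SAN switch in one pass,
    instead of logging in to each component individually."""
    for switch in shell.components(types=["nexus", "mds"]):  # hypothetical API
        switch.configure(ntp_servers=ntp_servers)            # hypothetical API
        print(f"updated {switch.name}")

# Inside the shell this might be invoked as:
# set_ntp_on_all_switches(shell, NEW_NTP_SERVERS)
```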
Developers writing extensions for the VxBlock Central Shell can provide a single interface for all
components and enable users to:
• Perform operations on each VxBlock System as a single logical entity rather than a collection of
components.
• Configure and manage settings at the individual VxBlock System component level.
VxBlock Central can connect to Secure Remote Services to automatically send system inventory, real-
time alerts, and RCM fitness information through the Secure Remote Services connection to collect and
analyze data.
Use the VxBlock Central Shell Secure Remote Services Extension Pack to perform the following
functions:
• Send the following information to Secure Remote Services:
— Release Certification Matrix (RCM) compliance scan results (a ZIP file containing CSV, XLS, PDF, and XML files), if you have installed RCM content and selected a default profile
— VxBlock System real-time alerts, which are sent automatically if SRS notification is configured
• Modify the schedule VxBlock Central uses to regularly send RCM and inventory information to
Secure Remote Services
Access key performance indicator (KPI) information using VxBlock Central or MSM VM. VxBlock Central
displays charts and graphs of KPI information for the selected element type.
Examples of KPI information include bandwidth and temperature.
The MSM VM API for multisystem services retrieves the following KPI data:
• Existing KPI definitions for a particular element type and/or component category.
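A hedged Python sketch of retrieving KPI definitions from the MSM VM API follows; the address, path, and query parameter are assumptions for illustration, not documented endpoints:

```python
import requests

MSM_VM = "https://msm-vm.example.com"  # placeholder address

def get_kpi_definitions(session: requests.Session, element_type: str):
    """Fetch existing KPI definitions for one element type; the path and query
    parameter are illustrative, not a documented endpoint."""
    resp = session.get(f"{MSM_VM}/api/kpi/definitions",
                       params={"elementType": element_type}, timeout=30)
    resp.raise_for_status()
    return resp.json()

with requests.Session() as session:
    session.auth = ("admin", "changeme")  # placeholder credentials
    kpis = get_kpi_definitions(session, "switch")
    print(f"{len(kpis)} KPI definitions returned")
```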
The following illustration shows VxBlock Central Orchestration with VMware vRealize Orchestrator (vRO):
The following illustration shows components and services for VxBlock Central:
In a data center environment, one MSM VM can be associated with up to eight Core VMs:
The following illustration shows a single-site environment consisting of three MSM VMs, each associated
with a single Core VM:
MSM VMs are configured to form a cluster. Capabilities and functionality are exposed after deployment
and configuration.
In a single-site configuration with one datacenter, VxBlock Central supports up to three MSM VMs running
within the data center. Up to eight Core VMs are supported.
VxBlock Central supports a multisite clustering configuration that includes a maximum of three data
centers. Up to two MSM VMs are supported.
AMP overview
The AMP provides a single management point for a Converged System.
The core management workload is the minimum required management software to install, operate, and support the Converged System. This includes all hypervisor management, element managers, virtual networking components, and Vision Intelligent Operations or VxBlock Central.
The Dell EMC optional management workload is non-core management workloads supported and
installed by Dell EMC to manage components in the Converged System. The list includes, but is not
limited to:
• Data protection
— Avamar Administrator
The following operational relationships apply between Cisco UCS Servers and VMware vSphere versions
for Converged Systems:
• Cisco UCS C240 M3 servers are configured with VMware vSphere 6.0.
• Cisco UCS C2x0 M4 servers are configured with VMware vSphere 6.x.
• Cisco UCS C2x0 M4 servers are configured with VMware vSphere 6.5 only with AMP-3S.
The following summarizes the AMP options by the number of Cisco UCS servers, storage, and description:
*AMP-2S is supported on Cisco UCS C220 M4 servers with VMware vSphere 6.x.
**AMP-3S is only supported on Cisco UCS C220 M4 servers with VMware vSphere 6.5. The AMP-3S
management platform can be configured with two to six servers. A minimum number of three servers is
strongly recommended to build a viable AMP-3S VMware vSphere HA and DRS cluster based on the
core and optional workload. This is due to the fact that some management applications require memory
or CPU reservations, which adversely affect the AMP-3S vSphere HA Cluster memory and CPU Slot size.
Memory and CPU reservations impact the available number of slots on a VMware vSphere ESXi host and limit the number of VMs that can be supported on each host. If the AMP-3S is configured with two servers, depending on the workload, vSphere HA admission control may disallow VM migration due to a lack of available slots on a VMware vSphere ESXi host. If an AMP-2S is built with two servers, it is imperative that the Dell EMC vArchitect determine whether the proposed configuration can support vSphere HA and DRS.
The AMP-3S can be configured with two to six servers. A minimum number of three servers is strongly
recommended to build a viable AMP-3S VMware vSphere HA and DRS cluster based on the core and
optional workload. With the deployment of the standard core workload for VxBlock System management,
VMware vSphere HA is not supported with two servers.
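The slot-size behavior described above can be approximated with a short calculation. The following Python sketch is a simplified model with hypothetical host and reservation values; it is not the exact vSphere HA admission control algorithm:

```python
import math

def ha_slots_per_host(host_cpu_mhz, host_mem_gb, cpu_reservations_mhz, mem_reservations_gb):
    """Simplified vSphere HA slot-size estimate: the slot is sized by the largest
    CPU and memory reservation, and the host supports the smaller of the two counts."""
    slot_cpu = max(cpu_reservations_mhz)
    slot_mem = max(mem_reservations_gb)
    return min(host_cpu_mhz // slot_cpu, math.floor(host_mem_gb / slot_mem))

# Hypothetical AMP-3S host (2 x 14-core 2.2 GHz, 256 GB) and management-VM reservations:
print(ha_slots_per_host(host_cpu_mhz=2 * 14 * 2200,
                        host_mem_gb=256,
                        cpu_reservations_mhz=[2000, 4000, 1000],
                        mem_reservations_gb=[8, 16, 4]))   # -> 15 slots
```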
• VMware vSphere Platform Services Controller
For VMware vSphere 6.0, the preferred instance is created using VMware vSphere
vCenter Server Appliance. An alternate instance may be created using the Windows
version. Only one of these options can be implemented. For VMware vSphere 5.5,
only VMware vSphere vCenter with Windows is supported.
• VMware vCenter Database using Microsoft SQL Server 2012 Standard Edition
• VMware vCenter Update Manager (VUM) - Integrated with VMware vCenter Server Appliance
For VMware vSphere 6.0, the preferred configuration (with VMware vSphere vCenter
Server Appliance) embeds the SQL server on the same VM as the VUM. The alternate
configuration leverages the remote SQL server with VMware vCenter Server on
Windows. Only one of these options can be implemented.
• Array management modules, including but not limited to Unisphere for VMAX
• (Optional) RecoverPoint management software that includes the management application and
deployment manager
The following components are installed dependent on the selected RCM:
For VMware vSphere 6.5, only the VMware vSphere vCenter Server Appliance
deployment model is offered.
• VMware vCenter Update Manager (VUM – Integrated with VMware vCenter Server Appliance)
• Cisco Prime Data Center Network Manager and Device Manager (DCNM)
• (Optional) RecoverPoint management software that includes the management application and
deployment manager
AMP-2S network connectivity with Cisco UCS C220 M4 servers (VMware vSphere 6.0)
The following illustration shows the network connectivity for the AMP-2S on the Cisco UCS C220 M4
servers:
AMP-2S server assignments with Cisco UCS C220 M4 servers (VMware vSphere 6.0)
The following illustration shows the VM server assignments for AMP-2S on the Cisco UCS C220 M4
servers. This configuration implements the default VMware vCenter Server configuration using the
VMware 6.0 vCenter Server Appliance and VMware Update Manager with embedded MS SQL Server
database:
The following illustration shows the VM server assignments for AMP-2S on the Cisco UCS C220 M4 servers, which implements the alternate VMware vCenter Server configuration, using the VMware 6.0 vCenter Server, Database Servers, and VMware Update Manager:
Converged Systems that use VMware vSphere Distributed Switch (VDS) do not include the Cisco Nexus
1000V VSM VMs.
The AMP-2S option leverages the DRS functionality of the VMware vCenter to optimize resource usage
(CPU and memory) so that VM assignment to a VMware vSphere ESXi host is managed automatically.
AMP-2S on Cisco UCS C220 M4 servers (VMware vSphere 6.5)
The following illustration provides an overview of the network connectivity for the AMP-2S on the Cisco
C220 M4 servers:
* No default gateway
The default VMware vCenter Server configuration contains the VMware vCenter Server 6.5 Appliance with integrated VMware Update Manager.
Beginning with vSphere 6.5, Microsoft SQL is no longer used because VMware vCenter and VUM use the PostgreSQL database embedded within the vCSA.
The following illustration provides an overview of the VM server assignment for AMP-2S on C220 M4 with
the default configuration:
AMP-2HA network connectivity on Cisco UCS C240 M3 servers
The following illustration shows the network connectivity for AMP-2HA on the Cisco UCS C240 M3
servers:
AMP-2HA server assignments with Cisco UCS C240 M3 servers
The following illustration shows the VM server assignment for AMP-2HA with Cisco UCS C240 M3
servers:
AMP-3S on Cisco UCS C220 M4 servers (VMware vSphere 6.5)
The following illustration provides an overview of network connectivity on Cisco C220 M4 servers:
AMP-3S uses VMware vSphere Distributed Switches with Network I/O Control (NIOC) in place of VMware standard switches.
The following illustration shows VM placement with two AMP-3S servers:
Sample configurations
Cabinet elevations vary based on the specific configuration requirements.
Elevations are provided for sample purposes only. For specifications for a specific design, consult your
vArchitect.
Cabinet 1
Cabinet 2
Cabinet 1
Cabinet 2
Cabinet 1
Cabinet 2
Additional references
References to related documentation for virtualization, compute, network and storage components are
provided.
Virtualization components
Virtualization component information and links to documentation are provided.
VMware vCenter Server: Provides a scalable and extensible platform that forms the foundation for virtualization management. http://www.vmware.com/products/vcenter-server/
Compute components
Compute component information and links to documentation are provided.
Cisco UCS 2200 Series Fabric Extenders: Bring unified fabric into the blade-server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect. www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-series-home.html
Cisco UCS 5108 Series Blade Server Chassis: Chassis that supports up to eight blade servers and up to two fabric extenders in a six U enclosure. www.cisco.com/en/US/products/ps10279/index.html
Cisco UCS 6200 Series Fabric Interconnects: Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. www.cisco.com/en/US/products/ps11544/index.html
Network components
Network component information and links to documentation are provided.
Cisco Nexus 1000V Series Switches: A software switch on a server that delivers Cisco VN-Link services to VMs hosted on that server. www.cisco.com/en/US/products/ps9902/index.html
Cisco Nexus 5000 Series Switches: Simplifies data center transformation by enabling a standards-based, high-performance unified fabric. http://www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html
Cisco MDS 9706 Multilayer Director: Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. http://www.cisco.com/c/en/us/products/storage-networking/mds-9706-multilayer-director/index.html
Cisco MDS 9148S Multilayer Fabric Switch: Provides 48 line-rate 16 Gbps ports and offers cost-effective scalability through on-demand activation of ports. http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9148s-16g-multilayer-fabric-switch/datasheet-c78-731523.html
Storage components
Storage component information and links to documentation are provided.
The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness
for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2014-2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are
trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published
in the USA in December 2018.
Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to
change without notice.