In both regions, two separate NSX Manager instances are deployed, one for the Management pod and one for the Compute and Edge pods, along with an
associated NSX Universal Controller Cluster. In Region B, the secondary NSX Manager instances automatically import the configurations of
the NSX Universal Controller Clusters from Region A.
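The primary/secondary relationship described above can be sketched as a toy model. The class and method names below are invented for illustration; this is not the NSX API, only a model of the behavior: universal objects are created on the primary and imported, read-only, by each secondary.

```python
# Illustrative model (not the NSX API): secondary NSX Managers import the
# universal configuration that exists only on the primary.

class NsxManager:
    def __init__(self, region, role):
        self.region = region
        self.role = role              # "primary" or "secondary"
        self.universal_objects = {}   # e.g. universal controller cluster config

    def create_universal_object(self, name, spec):
        # Universal objects may only be created on the primary manager.
        if self.role != "primary":
            raise PermissionError("universal objects are created on the primary only")
        self.universal_objects[name] = spec

    def sync_from(self, primary):
        # A secondary imports the universal configuration; it never edits it.
        if self.role != "secondary":
            raise ValueError("only a secondary manager syncs from the primary")
        self.universal_objects = dict(primary.universal_objects)

primary = NsxManager("Region A", "primary")
secondary = NsxManager("Region B", "secondary")
primary.create_universal_object("universal-controller-cluster",
                                {"nodes": 3, "region": "Region A"})
secondary.sync_from(primary)
print(secondary.universal_objects["universal-controller-cluster"]["nodes"])  # 3
```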
[Diagram: Region A and Region B management and compute stacks. Each region runs vCenter Server and Platform Services Controller appliances (the Platform Services Controllers joined in a ring topology), primary and secondary NSX Manager instances for the management and compute stacks, the NSX Universal Controller Clusters, NSX Edge services gateways for routing, and vSphere Data Protection, with Virtual SAN and NFS providing primary storage and log archives, and production, development and edge reservations in the compute pods.]
This VMware Validated Design instantiates an extensible Cloud Management Platform layer to deliver multi-platform and multi-vendor cloud services through vRealize Automation, vRealize Orchestrator and vRealize Business for Cloud. It provides comprehensive, purpose-built
capabilities that deliver standardized resources in a short time span, service delivery methods that integrate with existing enterprise management systems, and user-centric, business-aware governance for all private and public cloud services.
[Diagram: a three-node vRealize Log Insight cluster (master, replica and data nodes) in each region, with inter-region event forwarding via the Ingestion API. Log sources include the management and compute vCenter Servers, NSX for vSphere, vRealize Automation, vRealize Operations, the NSX Edge services gateways and the shared storage systems.]
In a multi-region SDDC, a three-node vRealize Log Insight cluster is deployed in each region, enabling
continued availability and increased log ingestion rates. vRealize Log Insight collects data from ESXi hosts using the syslog
protocol, connects to vCenter Server instances, and integrates with vRealize Operations Manager to send notification events and
enable the last mile of root cause analysis. Content packs for Virtual SAN, NSX and vRealize Automation are also configured.
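As a rough illustration of the syslog path described above, the snippet below builds an RFC 5424-style message of the kind an ESXi host forwards, and shows where it would be sent to the Log Insight cluster over UDP 514. The hostname and VIP address are placeholders, not part of the validated design.

```python
# Sketch: forwarding an event to a vRealize Log Insight cluster over syslog.
import socket
from datetime import datetime, timezone

def syslog_message(hostname, app, text, facility=16, severity=6):
    """Build an RFC 5424 syslog message (facility 16 = local0, severity 6 = info)."""
    pri = facility * 8 + severity
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"<{pri}>1 {ts} {hostname} {app} - - - {text}"

msg = syslog_message("esxi-01.corp.local", "vmkernel", "example log line")

# Send to the Log Insight load balancer VIP (placeholder address, not sent here):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg.encode(), ("loginsight.corp.local", 514))

print(msg.split()[0])  # <134>1
```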
vRealize Operations
VMware Validated Designs use several VMware solutions for network, storage and cloud management. You can monitor and
perform diagnostics on all of them by using vRealize Operations and solution management packs. In these designs, vRealize
Operations is configured with management packs for NSX, vRealize Log Insight, vRealize Automation and Storage Devices.
[Diagram: vRealize Operations analytics cluster (master, replica, data and remote collector nodes) spanning both regions; production, development and edge reservations per region; a leaf-and-spine fabric with 40 GigE leaf-to-spine links, 10 GigE host uplinks, Layer 3 at the leaf, and Layer 2 with IGMP within the rack for VTEP/VXLAN traffic; Management Pod, Edge Pod (4 VSAN Ready Nodes) and Compute Pods of ESXi hosts.]
Within the SDDC, all business and end-user workloads run inside
the compute pods. By design, business and end-user workloads running in
the SDDC are isolated on their own network and do not have direct access to
external networks. To access external networks, traffic must be routed
through the edge pod using a shared NSX transport zone.
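The forced path through the edge pod can be illustrated with a toy routing table. The network names below are invented for this sketch; the point is only that workload networks have no direct external route, so every external flow must traverse the edge gateway.

```python
# Toy model of the isolation described above: workload networks reach
# external networks only via the edge pod. Names are illustrative.
ROUTES = {
    "compute-workload-net": ["distributed-logical-router"],
    "distributed-logical-router": ["edge-services-gateway"],
    "edge-services-gateway": ["external-wan"],
}

def reachable(src, dst, routes=ROUTES):
    """Follow next hops from src; return the path to dst, or None."""
    hop, path = src, [src]
    while hop != dst:
        nxt = routes.get(hop)
        if not nxt:
            return None          # dead end: no route to dst
        hop = nxt[0]
        path.append(hop)
    return path

path = reachable("compute-workload-net", "external-wan")
print(path)
```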
[Diagram: ESXi host networking and application virtual networks for the management and compute stacks. The Region A management pod uses the following VLANs and subnets, all with MTU 9000 for jumbo frames (the VLAN-to-subnet pairing is inferred from the matching third octets); Region B uses the corresponding 172.17.x.0/24 subnets:

VLAN 1611  Management  172.16.11.0/24  DGW 172.16.11.253
VLAN 1612  vMotion     172.16.12.0/24  DGW 172.16.12.253
VLAN 1613  VXLAN       172.16.13.0/24  DGW 172.16.13.253
VLAN 1614  VSAN        172.16.14.0/24  DGW 172.16.14.253

Additional VLANs carry NFS, per-customer and uplink traffic. The management, operations and automation components sit on 192.168.x.0/24 application virtual networks behind ECMP NSX Edge services gateways with BGP peering to the spine and leaf switches.]
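The Region A pod subnet plan above can be sanity-checked with Python's ipaddress module. The VLAN-to-role pairing here follows the matching third octets and is an inference from the extracted labels, not a statement from the design itself.

```python
# Quick validation of the pod VLAN/subnet plan: each default gateway (.253)
# must fall inside its subnet, and no two traffic types may share addresses.
import ipaddress

vlan_plan = {
    1611: ("Management", "172.16.11.0/24", "172.16.11.253"),
    1612: ("vMotion",    "172.16.12.0/24", "172.16.12.253"),
    1613: ("VXLAN",      "172.16.13.0/24", "172.16.13.253"),
    1614: ("VSAN",       "172.16.14.0/24", "172.16.14.253"),
}

nets = []
for vlan, (role, subnet, gateway) in vlan_plan.items():
    net = ipaddress.ip_network(subnet)
    assert ipaddress.ip_address(gateway) in net, f"{role}: gateway outside subnet"
    nets.append(net)

# No overlap between any pair of subnets.
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])
print("VLAN plan OK")
```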
VMware Validated Designs use NFS storage as a secondary storage tier for management and compute pods. NFS is used as the target for vSphere Data Protection backups
and vRealize Log Insight log archives in management pods. NFS is also used to host the virtual machine templates in the IT Automation Cloud validated design.
[Diagram: vRealize Automation, vRealize Orchestrator and vRealize Business components deployed on region-dependent application virtual networks (universal logical switch / VXLAN 5xxx), load balanced behind VIPs: VRA 192.168.11.53, IWS 192.168.11.56, IMS 192.168.11.59, VRO 192.168.11.65. Virtual SAN nodes combine a flash caching tier (SSD, PCIe, Ultra DIMM) with a data persistence (capacity) tier; NFS volumes provide exports for vRealize Automation, vRealize Log Insight log archives and vSphere Data Protection in each region.]
Notable Acronyms
VRA – vRealize Automation Appliance
IWS – IaaS Web Server
IMS – IaaS Manager Service
IAS – IaaS Proxy Agent
DEM – Distributed Execution Manager
VRO – vRealize Orchestrator
BUS – vRealize Business Server
BUC – vRealize Business Data Collector
SQL – Microsoft SQL Server
VMware Validated Designs use rack-mount Virtual SAN Ready Nodes to ensure seamless compatibility and support. The
configuration and assembly of each node is standardized, with all components installed in the same manner to eliminate system
variability. Virtual SAN enables both tiered-hybrid and all-flash architectures.
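As a rough sizing illustration for the hybrid architecture mentioned above, VMware's general guidance sizes the flash caching tier at about 10% of anticipated consumed capacity; the figures below are examples, not values from this design.

```python
# Rule-of-thumb sketch: hybrid Virtual SAN caching tier ~= 10% of the
# anticipated consumed capacity on the node. Inputs are illustrative.
def cache_tier_gb(consumed_capacity_gb: float, ratio: float = 0.10) -> float:
    """Flash cache size suggested for a given consumed capacity."""
    return consumed_capacity_gb * ratio

# Example: a 4-node pod with 8 TB consumed -> ~2 TB per node:
print(cache_tier_gb(2000))  # 200.0
```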
Host Connectivity
The two 10 GigE NICs on each host are connected across the top-of-rack leaf switches and teamed on the vSphere Distributed Switch in an active-active configuration. All port groups except those that carry VXLAN traffic use the 'Route based on physical NIC load' teaming algorithm; VTEP kernel ports and VXLAN traffic use the 'Route based on
SRC-ID' algorithm. The vSphere Distributed Switch is configured with an MTU of 9000 for jumbo frames, along with the necessary VMkernel ports.
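The SRC-ID teaming behavior can be sketched as a stable pinning of each virtual port to one of the two active uplinks. The modulo hash below is purely illustrative, not ESXi's actual algorithm; the point is that the mapping is deterministic, so a VTEP's VXLAN traffic never flaps between NICs.

```python
# Minimal sketch of 'Route based on originating virtual port' (SRC-ID) teaming.
UPLINKS = ["vmnic0", "vmnic1"]   # the two 10 GigE NICs, both active

def uplink_for(port_id: int) -> str:
    """Pin a virtual port to one uplink by a stable hash (illustrative)."""
    return UPLINKS[port_id % len(UPLINKS)]

# Every port maps to exactly one uplink, and the mapping never changes.
assignments = {pid: uplink_for(pid) for pid in range(8)}
print(assignments[0], assignments[1])  # vmnic0 vmnic1
```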
Distributed Logical Routing and Application Virtual Networks for Management, Operations and Automation Solutions
[Diagram: in each region, ECMP NSX Edge services gateways peer over BGP with the top-of-rack leaf switches and connect the application virtual networks (vRealize Operations, vRealize Log Insight, vRealize Automation & vRealize Business) across the universal transit network (universal logical switch / VXLAN 5xxx) to the Internet or enterprise WAN/MPLS.]
[Diagram: Management Pod (4 VSAN Ready Nodes) and Compute Pods (up to 19 2RU hosts or 19 VSAN Ready Nodes); each host has two 10 GigE uplinks, plus NFS where used, with Layer 3 at the leaf, Layer 2 within the rack, and 40 GigE uplinks from leaf to spine.]
The leaf switches in each rack act as the Layer 3 interface for the corresponding subnets. The Management and Edge pods are provided with externally accessible VLANs for access to the Internet and/or MPLS-based corporate networks.
Pods
VMware Validated Designs use a small set of common, standardized building blocks called pods.
Leaf-and-Spine Network
The physical network architecture in the VMware Validated Designs is tightly coupled with the pod-and-core architecture and uses a Layer 3 leaf-and-spine network model for an efficient and distributed core.
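The equal-cost property of a leaf-and-spine fabric can be illustrated with a toy topology: every leaf has one routed link to every spine, so any leaf-to-leaf flow has as many equal-cost paths as there are spines. The switch counts below are examples, not values mandated by the design.

```python
# Toy leaf-and-spine fabric: a full mesh of routed links between the tiers.
spines = ["spine1", "spine2", "spine3", "spine4"]
leaves = [f"leaf{i}" for i in range(1, 9)]

# |leaves| x |spines| routed 40 GigE links between the tiers.
links = [(leaf, spine) for leaf in leaves for spine in spines]

def ecmp_paths(src_leaf: str, dst_leaf: str) -> int:
    """Equal-cost leaf-to-leaf paths: one via each spine both leaves reach."""
    src_spines = {s for l, s in links if l == src_leaf}
    dst_spines = {s for l, s in links if l == dst_leaf}
    return len(src_spines & dst_spines)

print(len(links), ecmp_paths("leaf1", "leaf8"))  # 32 4
```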
vmware.com/go/vvd