
#CLMEL

Data Centre Design


for the Midsize
Enterprise
David Jansen CCIE 5952
Distinguished Systems Engineer
Global Enterprise
BRKDCN-2218

#CLMEL
Abstract: Data Centre Design for the Midsize
Enterprise
Network designs for the data centre have common requirements to support
a high density of physical and virtual servers, storage systems, and other
required services. Many of the technical design challenges are the same
regardless of the size of the organisation. Midsize organisations have added
emphasis on controlling costs, allowing for rapid growth, and striving for
simplicity of deployment and operational maintenance. This session will
discuss architectures for building data centres for the midsize organisation. A
flexible network architecture will be discussed that is focused on simplified
configuration and scalability. Reference topologies shown will begin from an
entry-level data centre environment, and illustrate transition points that
protect the investment in existing equipment while providing increasingly
advanced features. Best practices for optimisation of common data centre
features and protocols will also be discussed.

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 3
David Jansen, CCIE #5952
Distinguished System Engineer (DSE)
Global Enterprise Segment Platforms & Solutions

dajansen@cisco.com
@ccie5952

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 4
Home is…
Season in Michigan is? Winter. Where it has been −25 °F, which is about −31 °C.

Michigan
Known for?
But.. Most importantly: 

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 5
• Introduction and Fabric Journey
Agenda
• Data Centre Overlay Fabric Overview
• Application Centric Infrastructure (ACI)
• Programmable Fabric VXLAN EVPN w/DCNM
• Attaching Compute
• Multiple Data Centres
• Data Centre Visibility, Programmability +
Operations
• Network Insights Advisor (NIA) / Network
Insights Resources (NIR)
• Summary
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 6
Cisco Webex Teams

Questions?
Use Cisco Webex Teams (formerly Cisco Spark)
to chat with the speaker after the session

How
1 Open the Cisco Events Mobile App
2 Find your desired session in the “Session Scheduler”
3 Click “Join the Discussion”
4 Install Webex Teams or go directly to the team space
5 Enter messages/questions in the team space
cs.co/ciscolivebot#BRKSEC-2218

© 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 7
Introduction and Fabric
Journey
Business Drivers & Solutions for Data Center Requirements
Business drivers: multi-tenancy, mergers & acquisitions, shared services, compliance, and automation / overlay / scale / operations.

• Multi-tenancy
• Security and Separation
• Traffic Engineering
• Scalable
• Flexible topology
• Minimise oversubscription
• Scale out and scale up
• Scalable L4-7 Service Layer
• No spanning tree (IRB)
• Incremental scale
• Virtual FW/LB per tenant
• Flexible placement
• Incremental capacity
• Multi-Site Connectivity

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 9
Data Centre “Fabric” Journey
The journey: STP + vPC → FabricPath → FabricPath/BGP → VXLAN (Flood & Learn) → VXLAN/EVPN → Application Centric Infrastructure (ACI Fabric + Application Policy Infrastructure Controller, APIC), each stage interconnected over the MAN/WAN.
BRKDCN-2218 10
Traditional Networking
Topology: Layer 2 at the access, Layer 3 above it, interconnected over an MPLS/IP/SR core; hosts or switches attach at Layer 2.

Management options:
• CLI
• Cut/Paste
• Limited automation
• Disparate management platforms

Limitations:
• Box by box approach
• Lack of consistent configuration (no network wide policies)
• Leftover/unknown configuration
• Open “any to any” connectivity*
• Lack of traffic visibility
• Separate virtual and physical networks
• Separate L4-7 device management

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 11
The DC network moving to a Fabric

Modular Switching (scale-up, single chassis, e.g. Nexus 7700):
• Supervisors (1 or 2)
• Fabric Modules (3-5)
• Linecards (Copper, Fiber, 1/10G)
• Up to 18 RUs

CLOS Fabric (scale as you need):
• Controller
• Spine and Leaf layers (Layer 3 fabric, Layer 2 at the edge)
• Zero-touch provisioning
• L2 over VXLAN, no STP

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 12
Data centre Overlay
Fabric Overview
Overlay Based Data Centre Fabrics

RR RR • Desirable Attributes:
• Mobility
• Segmentation
• Scale
• Automated & Programmable
• Abstracted consumption models
• Full Cross Sectional Bandwidth
• Layer-2 + Layer-3 Connectivity
• Physical + Virtual

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 14
Data Centre Fabric Properties
• Any subnet, anywhere, rapidly
RR RR
• Reduced Failure Domains

• Extensible Scale & Resiliency

• Profile Controlled Configuration

 Full Bi-Sectional Bandwidth (N Spines)

 Any/All Leaf Distributed Default Gateways

 Any/All Subnets on Any Leaf

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 15
Spine/Leaf Topologies
• High Bi-Sectional Bandwidth

• Wide ECMP: Unicast or Multicast

• Uniform Reachability, Deterministic Latency

• High Redundancy: Node/Link Failure

• Line rate, low latency, for all traffic

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 16
Variety of Fabric Sizes
More Spine, More Bandwidth, More Resiliency
• Fabric size: Hundreds to 10s of Thousands of
10G ports
• Variety of Building Blocks:
• Varying Size
• Varying Capacity
• Desired oversubscription
• Modular and Fixed
• Scale Out Architecture
• Add compute, service, external connectivity as the
demand grows

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 17
Application Centric
Infrastructure (ACI)
ACI: The elements

APICs:
• 3 recommended for production; at least 1 physical APIC required
• L-size recommended for 1000+ physical leaf ports; M-size recommended for <1000 physical leaf ports

Spines:
• Modular: Nexus 9500 (with 9700 line cards, NX-OS capable)
• Fixed: Nexus 9300 (9332C, 9364C)
• Virtual: vPod (vSpine)

Leaves:
• Fixed: Nexus 9300 (100M/1/10/25/40/50/100/400G, NX-OS capable)
• Virtual: vPod (vLeaf)
• Virtual/Container networking integration included (except vPod mode)

VMM + Containers: physical and virtual workloads

Licensing:
• Only applies to physical leaves (no licensing on APICs nor spines)
• Tiers: Essentials, Advantage
• Add-ons: Multisite, Remote-Leaf, vPod mgmt cluster, FC + FCoE, Storage, Encryption, per-AVE licence

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 19
ACI: Consistent, automated and simpler networking
Single point of management for all your
Physical, Virtual, Container-based and Cloud
Networking

Spine Layer
Nexus 9000

Leaf Layer
Nexus 9000

WAN Legacy
Networks
(N5K/N7K)
VM
VM VM VM
L4-L7 Services

ACI
The network made simple
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 20
ACI Policy Model
Tenant – CiscoLive Melbourne
• A Tenant contains Contexts (VRF A, VRF B).
• Each Context contains Bridge Domains (BDs) with their subnets (e.g. Subnet A, Subnet B).
• End Point Groups (EPG A, EPG B, EPG C) sit inside Bridge Domains; an EPG is a group of endpoints hosting the applications.
BRKDCN-2218 21
Network Centric Mode
VLAN = EPG

EPG-A, EPG-B, … EPG-n, each containing its endpoint(s)

- Connect non-ACI networks to ACI leaf nodes
- Connect at L2 with VLAN trunks (802.1Q)
- Objective: Map VLANs to EPGs, extend policy model to non-ACI networks

BRKDCN-2218 22
ACI Policy Model: EPG To EPG Communication

Allow HTTP
EPG-A EPG-n
Allow ICMP

Provides Consumes
policies policies

Zero Trust Security Model

- A Contract (policy) must be defined; a contract specifies the interaction between two EPGs as a provider/consumer pair.
- The goal is to provide a global policy view that focuses on improving automation and scalability.

- The default white-list model can be changed by setting the VRF to Unenforced (effectively permit IP any any).

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 23
ACI Policy Model: uEPG Communication
uEPG

Allow HTTP
VM BM
Allow ICMP

Provides Consumes
policies policies

BM VM

C
VM BM

Zero Trust Security Model

- A Contract (policy) must be defined; a contract specifies the interaction between uEPGs as a provider/consumer pair.
- The goal is to provide a global policy view that focuses on improving automation and scalability.

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 24
Supported Attributes for Micro EPG Classification
Supported attributes as of 3.1
• Attribute support depends on Domain type.
• For VMM domains, some attributes are vendor specific (i.e. vSphere Tags).
• Refer to the Release Notes and Virtualization Configuration Guide for the latest information.

Attribute                   | Type    | Example                  | Domains
MAC Address                 | Network | 5c:01:23:ab:cd:ef        | Phys, VMW, MSFT
IP Address                  | Network | 10.10.1.0/24, 10.20.21.1 | Phys, VMW, MSFT
VNic Dn (vNIC domain name)  | VM      | A1:23:45:67:89:0b        | VMW, MSFT
VM Identifier               | VM      | vm-598                   | VMW, MSFT
VM Name                     | VM      | HR_UI_WEB                | VMW, MSFT
Hypervisor Identifier       | VM      | esxi-host-01             | VMW, MSFT
VMM Domain                  | VM      | AVS-VMM-DC1              | VMW, MSFT
Datacentre                  | VM      | BRU-DC                   | VMW, MSFT
Guest Operating System      | VM      | Windows 2008             | VMW, MSFT
Custom Attribute            | VM      | AppTier=Web              | VMW, MSFT
vSphere TAGs                | VM      | PROD:ENV                 | VMW
DNS                         | Network | acme.app.com             | (experimental)

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 25
ACI Deployment
Options
ACI MultiPod
Single APIC Cluster Extends Network Virtualization, Policy, Services to Multiple PODs

Inter-Pod IP Network between Site A and Site B

• Active-Active Data Centres
• Virtual Metro Clusters
• Stretch VRF, EPG, BD across PoDs with VXLAN
• Up to 50ms Latency

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 27
ACI Multi-Pod
Supported Topologies

• Intra-DC: multiple PODs (POD 1 … POD n) connected at 10G*/40G/100G, each attached to the shared APIC cluster
• Two DC sites directly connected: POD 1 and POD 2 over dark fiber/DWDM at 10G*/40G/100G (up to 50 msec RTT**)
• 3 (or more) DC sites directly connected over dark fiber/DWDM at 10G*/40G/100G (up to 50 msec RTT**)
• Multiple sites interconnected by a generic L3 network at 10G*/40G/100G (up to 50 msec RTT**)

* 10G only with QSA adapters on EX/FX spines
** 50 msec support added in SW release 2.3(1)
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 28
ACI Multi-Pod
Inter-Pod Network (IPN) Requirements

Pod ‘A’ Pod ‘B’

MP-BGP - EVPN

DB Web/App APIC Cluster Web/App

 Not managed by APIC, must be separately configured (day-0 configuration)


 IPN topology can be arbitrary, not mandatory to connect to all spine nodes
 Main requirements:
 Multicast BiDir PIM  needed to handle Layer 2 BUM* traffic
 OSPF to peer with the spine nodes and learn VTEP reachability
 Increase MTU support to handle VXLAN encapsulated traffic
 DHCP-Relay
* Broadcast, Unknown unicast, Multicast
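For orientation only, a minimal NX-OS-style sketch of what one IPN-facing configuration could look like, assuming a Nexus IPN node, a hypothetical bidir RP address (192.0.2.1), a hypothetical APIC address (10.0.0.1) for DHCP relay, hypothetical point-to-point addressing, and the dot1q-4 sub-interface the spines expect; verify the exact values against the Multi-Pod white paper for your release.

feature ospf
feature pim
feature dhcp

ip dhcp relay
router ospf IPN

# Bidir RP for the BUM multicast range (RP address and group range are assumptions)
ip pim rp-address 192.0.2.1 group-list 225.0.0.0/15 bidir

# Spine-facing link: ACI spines expect a dot1q 4 sub-interface
interface Ethernet1/1
  no switchport
  mtu 9216

interface Ethernet1/1.4
  encapsulation dot1q 4
  mtu 9216                           # jumbo MTU for VXLAN-encapsulated traffic
  ip address 172.16.1.1/30           # hypothetical point-to-point addressing
  ip ospf network point-to-point
  ip router ospf IPN area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1     # hypothetical APIC address, for spine/pod auto-discovery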

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 29
ACI Multisite
Extends Network Virtualization, Policy, Services to Multiple Fabrics
• Consistent Policy across sites
• Single Point of Orchestration (Multi-Site appliance)
• Fault Isolation
• Scale

Site A and Site B connected over an Inter-Site IP Network

• Geographically dispersed Data Centres
• Health dashboard / spine-to-spine peering and connectivity
• Create inter-site tenants, policy profiles, RBAC rules
• Up to 500ms to 1 sec latency
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 30
ACI Multi-Site
Spines in Separate Sites Connected Back-to-Back

• Multiple DC sites (Site 1, Site 2, Site 3) directly connected over dark fiber/DWDM at 10G*/40G/100G
• 10G connection supported with QSA adapter on spine nodes
• ‘Hybrid’ topology also supported: some sites directly connected and others reachable via the ISN

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 31
ACI Multi-Site
Inter-Site Network (ISN) Requirements

Inter-Site Network

MP-BGP - EVPN

• Not managed by APIC, must be separately configured (day-0 configuration)


• IP topology can be arbitrary, not mandatory to connect to all spine nodes, can extend long distance
(across the globe)
• Main requirements:
 OSPF on the first hop routers to peer with the spine nodes and exchange site specific E-TEP reachability
 Increased MTU support (at least +100B) to allow site-to-site VXLAN traffic
 Ingress Replication
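As a contrast with the Multi-Pod IPN, a first-hop ISN router only needs an OSPF adjacency to the spines and jumbo MTU; below is a minimal NX-OS-style sketch under that assumption (process name and addressing are hypothetical).

feature ospf
router ospf ISN

interface Ethernet1/1
  no switchport
  mtu 9216                           # at least +100B above the tenant edge MTU for site-to-site VXLAN
  ip address 172.16.11.1/30          # hypothetical point-to-point addressing
  ip ospf network point-to-point
  ip router ospf ISN area 0.0.0.0
# No PIM BiDir and no DHCP relay required: Multi-Site uses ingress replication for BUM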
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 32
ACI Multi-Site Networking Options
Per Bridge Domain Behavior
1. Layer 3 only across sites
 Bridge Domains and subnets not extended across Sites
 Layer 3 Intra-VRF or Inter-VRF communication (shared services across VRFs/Tenants)
 No Layer 2 BUM flooding across sites

2. IP Mobility without BUM flooding
 Same IP subnet defined in separate Sites
 Support for IP Mobility (‘cold’ and ‘live’* VM migration) and intra-subnet communication across sites

3. Layer 2 adjacency across Sites
 Interconnecting separate sites for fault containment and scalability reasons
 Layer 2 domains stretched across Sites, support for ‘live’* VM migration and application clustering
 Layer 2 BUM flooding across sites

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 33
BRKDCN-2218
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 34
Multi-Site and Network Services
Integration Models
ISN

• Active and Standby pair deployed across Sites


• Currently supported only if the FW is in L2 mode or in
L3 mode but acting as default gateway for the
endpoints
Active Standby

ISN

• Active/Active FW cluster deployed across Sites

• Not currently supported (scoped for a future ACI release)


Active/Active Cluster

ISN • Most common deployment model for ACI Multi-Site


• Option 1: supported from 3.0 if the FW is connected in
L3 mode to the fabric  mandates the deployment of
traffic ingress optimisation
• Option 2: supported from 3.2 release with the use of
Active/Standby Active/Standby Service Graph with Policy Based Redirection (PBR)
35
BRKDCN-2218
ACI Remote Leaf
Physical Remote Leaf
Extend ACI to Satellite Data Centres

• Extend the ACI fabric outside the main datacentre to remote sites distributed
over IP Backbone
• Extend ACI fabric to small DR site without investing in full-blown ACI Fabric
• Centralised Policy Management and Control Plane for remote locations
• Small form factor solution at locations with space constraints
• Satellite DC
• RoBo Facilities
• Brownfield / Migration
• Co-location

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 37
ACI: Physical Remote Leaf
Extend ACI to Satellite Data Centres
IP Network

Logical Connection To Spine


(BGP-EVPN/ VXLAN)

Site A Remote
Location VM VM VM
VM VM VM VM
VM VM VM VM VM VM VM

• Zero Touch Auto Discovery of Remote Leaf
• Two Remote Leafs, Up To 20 Remote Locations
• Stretch EPG, BD, VRF, Tenant, Contract
• Health Scores, EPG Stats
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 38
Physical Remote Leaf
Requirements

• Bandwidth in the WAN must be a minimum of 100 Mbps, and the maximum supported latency is 300 msecs RTT

• Increased MTU to support the VXLAN header; the recommendation is 100 bytes above the edge MTU requirements

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 39
ACI Remote Leaf: Considerations
• Local traffic forwarding
• Inter-VRF
• PBR
• ERSPAN
• Asymmetry with local L3outs: Host route advertisement

• Feature parity with local leaf

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 40
ACI Virtual Edge + vPOD
vPOD Use Cases:
• Extend ACI Network and Policy Model to vSphere Clusters not directly
attached to the fabric
• Key examples:
• Brownfield (Nexus 7000 / Nexus 5000 / Nexus 2000 environments)
• Colo-facilities
• Remote Data Centres
• Bare Metal Clouds (i.e. IBM Bluemix/SoftLayer, Oracle Cloud, OVH, Rack Space, AWS Elastic
Bare Metal, etc.)

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 42
ACI Virtual Pod (vPod)
Management Cluster (vSpine + vLeaf)
Virtual Pod
• vSpine and vLeaf: Run ACI control plane
function
vSpine vSpine
• vLeaf: Distribute APIC policies to ACI Virtual
Edge

vLeaf vLeaf
ACI Virtual Edge (vPod Mode)
• Implements ACI data plane function and
ACI Virtual Edge policy enforcement
• iVXLAN for communication within vPod and
across Pods

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 43
vCentre VMM
Integration with
ACI/APIC
ACI and VMM vCentre Integration
- Cisco APIC integrates with the VMware vCentre.

- Ability to transparently extend the Cisco ACI policy framework to VMware vSphere workloads.

- APIC uses Application Network Profiles (ANPs) to represent the Cisco ACI policy.

- APIC creates a virtual distributed switch (VDS) in VMware vCentre for virtual networking.

- APIC manages all application infrastructure components. The network administrator creates EPGs and pushes them to VMware vCentre as port groups on the DVS.

- Server administrators can then associate the virtual machines and provision them accordingly.

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 45
ACI and VMM vCentre Integration

- Show configured VMware VMM


vCentre

- Focusing on vCentre 6 instances


vCentre 6 instance integrated into APIC

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 46
ACI: EPG/ANP

- Create EPG

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 47
ACI and VMM vCentre Integration

- Add VMM Domain to EPG

- This creates the corresponding port-group in vCentre

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 48
ACI: EPG(s) pushed to vCentre Port-groups

- port-groups on the vDS

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 49
ACI and VMM vCentre Integration

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 50
Mini ACI Fabric & Virtual
APIC (vAPIC)
ACI: Mini ACI Fabric
ACI Fabric For Small Scale Deployments

Topology: two spines and two 48-port leaves, one physical APIC plus two virtual APICs.

Scale:
APIC: 1 physical + 2 virtual
No. of Spines: 2
No. of Leafs: 2-4
No. of Tenants: 25
No. of VRFs: 25
No. of BDs: 1000
No. of EPGs: 1000
No. of EPs: 20,000

Optimized Physical Footprint – 5 RU System


#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 52
ACI Multi-Cloud
ACI Extensions To Multi-Cloud

ACI Multi-Site Appliance

Site A
Site C

Site D
Site B

VM VM VM

VM VM VM

• Consistent Network and Policy across clouds
• Seamless Workload Migration
• Single Point of Orchestration
• Secure Automated Connectivity
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 54
4.0 (Alpha)
ACI Extensions to AWS
Multi-
Site
Site A (on-premises) and Site B (public cloud, AWS Region) connected over an IP network with Multi-Site:
• On-premises: EPG Web – Contract – EPG APP – Contract – EPG DB
• AWS: SG Web – SG Rule – SG APP – SG Rule – SG DB

• Common Governance
• Discovery & Visibility
• Policy Translation
• Monitoring & Troubleshooting
• Single Point of Orchestration
• Operational Consistency
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 55
Policy Mapping - AWS

AWS construct            → ACI construct
User Account             → Tenant
Virtual Private Cloud    → VRF
VPC subnet               → BD Subnet
Security Group           → EPG
Security Group Rule      → EPG Contracts
Outbound rule            → Consumed contracts
  (Source/Destination: Subnet or IP or Any or ‘Internet’; Protocol; Port)
Inbound rule             → Provided contracts
EC2 Instance
Network Adapter
Future
ACI Anywhere: On-Prem Connectivity To AWS
VPC With Direct Connect + VPN

Site A (on-premises) connects to Site B (public cloud) through a colocation facility, tied together with ACI Multi-Site:
• Customer premise router → AWS Direct Connect routers → Amazon VGW into the AWS Region
• A CSR1000V in the colocation provides the L3Out + GOLF hand-off towards the fabric
• CSR1000Vs in the infra VPC terminate the BGP EVPN control plane (overlay) and the VXLAN tunnel (data plane) towards the user VPCs (User VPC-1, User VPC-2) and their AWS instances


#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 57
Programmable Fabric
VXLAN EVPN
Deployment Considerations: Underlay
• MTU and Overlays

• Unicast Routing Protocol and IP Addressing

• Multicast for BUM Traffic Replication

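To make these three considerations concrete, here is a minimal NX-OS-style sketch of a leaf underlay uplink and loopback; the OSPF process name, RP address, multicast range and all IP addresses are assumptions, not values from the session.

feature ospf
feature pim

router ospf UNDERLAY
  router-id 10.0.0.11

# Anycast-RP assumed on the spines; group range used for L2VNI BUM replication
ip pim rp-address 10.254.254.254 group-list 239.1.1.0/25

interface loopback0
  description VTEP / BGP source
  ip address 10.0.0.11/32
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode

interface Ethernet1/49
  description uplink to spine
  no switchport
  mtu 9216                           # jumbo MTU so VXLAN-encapsulated frames are never fragmented
  medium p2p
  ip unnumbered loopback0
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode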
BRKDCN-2218 59
Overlay with Optimized Routing
EVPN Control Plane – Host and Subnet Route Distribution

• Multiprotocol BGP updates carry the EVPN address-family: Host-MAC, Host MAC+IP, internal IP subnets and external prefixes
• Scalable Multi-Tenancy
• BGP enhanced for Fast Convergence at Large Scale
• Extensions for Fast and Seamless Host Mobility
• Distributed Gateway with Traffic Flow Symmetry
• ARP Suppression
• BGP adjacencies: Route-Reflectors deployed on the spines for scaling purposes (iBGP); border and leaf VTEPs peer with the RRs

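A minimal NX-OS-style sketch of the iBGP EVPN peering this slide describes, with a leaf peering to two spine route reflectors; the AS number 65500 and the loopback addresses are assumptions.

# Leaf
router bgp 65500
  router-id 10.0.0.11
  neighbor 10.0.0.201
    remote-as 65500
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
  neighbor 10.0.0.202
    remote-as 65500
    update-source loopback0
    address-family l2vpn evpn
      send-community extended

# Spine route reflector
router bgp 65500
  router-id 10.0.0.201
  neighbor 10.0.0.11
    remote-as 65500
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client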
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 60
Distributed IP Anycast Gateway

• Any Subnet Routed Anywhere – any VTEP can serve any subnet (e.g. SVI 100, gateway IP 192.168.1.1 and SVI 200, gateway IP 10.10.10.1 on every leaf)
• Integrated Route & Bridge (IRB) – route whenever you can, bridge when needed
• No Hairpinning – optimized East/West and North/South routing
• Seamless Mobility – all leaves share the same gateway MAC
• Reduced Failure Domain – Layer-2/Layer-3 boundary at the leaf
• Optimal Scalability – route distribution and gateway closest to the host

Example endpoints: Host1 (AA:AA:AA:AA:AA:AA, 192.168.1.11, VLAN 100 / VNI 30001), Host2 (BB:BB:BB:BB:BB:BB, 10.10.10.22, VLAN 200 / VNI 30002), Host3 (CC:CC:CC:CC:CC:CC, 192.168.1.33, VLAN 100 / VNI 30001).
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 61
MP-BGP EVPN Route Type 2
MP-BGP EVPN Route Type 2 - MAC/IP Advertisement Route

• Route Type 2 provides End-Host reachability information
• The following fields are part of the EVPN prefix in the NLRI:
  • Ethernet Tag ID (zeroed out)
  • MAC Address Length (/48), MAC Address
  • IP Address Length (/32, /128), IP Address [Optional]
• Additional Route Attributes:
  • Ethernet Segment Identifier (ESI) (zeroed out)
  • MPLS Label1 (L2VNI)
  • MPLS Label2 (L3VNI)

NLRI format: RD (8 octets), ESI (10 octets), Ethernet Tag ID (4 octets), MAC Address Length (1 octet), MAC Address (6 octets), IP Address Length (1 octet), IP Address (0, 4, or 16 octets), MPLS Label1 (3 octets), MPLS Label2 (0 or 3 octets)
© 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 62
BRKDCN-2218
MP-BGP EVPN Route Type 5
MP-BGP EVPN Route Type 5 - IP Prefix Route

• Route Type 5 provides IP Prefix advertisement in EVPN
• RT-5 decouples the IP prefix from the MAC (RT-2) and provides flexible advertisement of IPv4 and IPv6 prefixes with variable length
• The following fields are part of the EVPN prefix in the NLRI:
  • IP Prefix Length (0-32 bits for IPv4 or 0-128 bits for IPv6)
  • IP Prefix (IPv4 or IPv6)
  • GW IP Address
  • MPLS Label (L3VNI)

NLRI format: RD (8 octets), ESI (10 octets), Ethernet Tag ID (4 octets), IP Prefix Length (1 octet), IP Prefix (4 or 16 octets), GW IP Address (4 or 16 octets), MPLS Label (3 octets)

BRKDCN-2218 63
Route Type 2 NLRI in the output below: [2]:[Ethernet Segment Identifier]:[Ethernet Tag Identifier]:[MAC Address Length]:[MAC Address]:[IP Address Length]:[IP Address]

V2# show bgp l2vpn evpn 192.168.1.73

BGP routing table information for VRF default, address family L2VPN EVPN
Route Distinguisher: 10.0.0.1:32868
BGP routing table entry for [2]:[0]:[0]:[48]:[0050.56a3.c2bb]:[32]:[192.168.1.73]/272, version 4
Paths: (1 available, best #1)
Flags: (0x000202) on xmit-list, is not in l2rib/evpn, is locked

  Advertised path-id 1
  Path type: internal, path is valid, is best path, no labeled nexthop
  AS-Path: NONE, path sourced internal to AS
    10.0.0.1 (metric 3) from 10.0.0.111 (10.0.0.111)
      Origin IGP, MED not set, localpref 100, weight 0
      Received label 30001 50001
      Extcommunity: RT:65501:30001 RT:65501:50001 ENCAP:8 Router MAC:5087.89d4.5495
      Originator: 10.0.0.1 Cluster list: 10.0.0.111

Key fields: next-hop 10.0.0.1 is the remote VTEP IP address; received labels 30001 (L2VNI) and 50001 (L3VNI); Route Targets for the L2VNI (VLAN) and L3VNI (VRF); ENCAP:8 = VXLAN; Router MAC of the remote VTEP.
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 64
Private VLAN with VXLAN: Microsegmentation

Private VLAN with VXLAN
• Extending Private VLAN over VXLAN
• Sub-VLAN Segmentation
• Availability of secondary VLAN modes:
  • Community VLAN across VXLAN
  • Promiscuous VLAN across VXLAN
  • Isolated VLAN localized but extended across VXLAN
• Example: isolated VLAN 201 mapped to L2VNI 30201 and community VLAN 200 mapped to L2VNI 30200, stretched across the VTEPs

Platform(s): Nexus 9000-EX/FX/FX2

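A minimal sketch of the private-VLAN side of this, using classic NX-OS PVLAN commands with the VLAN-to-VNI mapping shown on the slide. The primary VLAN (199) is hypothetical (the slide only shows the secondaries), and mapping each secondary VLAN to its own vn-segment is an assumption; check the PVLAN-over-VXLAN section of the platform configuration guide before using it.

feature private-vlan

vlan 199
  private-vlan primary                  # hypothetical primary VLAN, not on the slide
  private-vlan association 200-201
vlan 200
  private-vlan community
  vn-segment 30200                      # mapping per the slide; treat as an assumption
vlan 201
  private-vlan isolated
  vn-segment 30201                      # mapping per the slide; treat as an assumption

interface Ethernet1/10
  switchport mode private-vlan host
  switchport private-vlan host-association 199 201    # isolated host port
interface Ethernet1/11
  switchport mode private-vlan host
  switchport private-vlan host-association 199 200    # community host port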
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 65
Multi-Site Border Gateway – Anycast vs. vPC
• Both Anycast and vPC Border Gateways need to be configured with a VIP-R address (VIP-R: virtual IP re-origination, VIP: virtual IP, PIP: primary IP)
• vPC Border Gateways additionally share a secondary IP address used as the vPC virtual IP (VIP)

Anycast BGW: BGW1 (PIP 10.1.1.1) and BGW2 (PIP 10.1.2.1) share VIP-R1 100.100.100.100
vPC BGW: BGW1 (PIP 10.1.1.1) and BGW2 (PIP 10.1.2.1) share VIP-R1 100.100.100.100 plus the vPC VIP1 11.11.11.11

In both models the BGWs sit between the site-internal fabric (spines and leaf VTEPs) and the external DCI network.
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 66
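A minimal NX-OS-style sketch of an Anycast Border Gateway using the VIP-R from this slide; the site-id, loopback numbering, VNI and interface choices are assumptions.

evpn multisite border-gateway 100        # site-id, shared by all BGWs of the site
  delay-restore time 300

interface loopback100
  description Multi-Site VIP-R
  ip address 100.100.100.100/32

interface nve1
  host-reachability protocol bgp
  source-interface loopback1             # PIP, unique per BGW
  multisite border-gateway interface loopback100
  member vni 30000
    multisite ingress-replication

interface Ethernet1/1
  description towards fabric spine
  evpn multisite fabric-tracking

interface Ethernet1/5
  description towards DCI / inter-site network
  evpn multisite dci-tracking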
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 67
Y2018
Roadmap

VXLAN X-Connect

• MPLS Pseudowire-like tunneling with VXLAN
• Tunnels all control & data packets between VTEPs
• The attachment point is part of a unique provider VNI
• Point-to-point (P2P)

Configuration (identical on both attachment VTEPs):

vlan 10
  vn-segment 10000
  xconnect

interface Ethernet1/1
  switchport mode dot1q-tunnel
  switchport access vlan 10
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 68
Layer-3 Hand-Off +
InterNetworking VRF-Lite

#CLUS
Border-Leaf with VRF-Lite (Inter-AS Option “A”)

• VTEP(s) configured on the Border-Leaf (fabric BGP AS# 65500)
• VRF for external routing defined on the Border Leaf
• Interface-type options:
  • Physical Routed Ports
  • Sub-Interfaces
  • VLAN SVIs over Trunk Ports
• The peering interface can be in the Global or a Tenant VRF
• External peer in BGP AS# 65599


#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 70
Border-Leaf with eBGP VRF-Lite Configuration
Border Leaf (BGP AS# 65500):

# Sub-Interface Configuration
interface Ethernet1/1
  no switchport
interface Ethernet1/1.10
  mtu 9216
  encapsulation dot1q 10
  vrf member VRF-A
  ip address 10.254.254.1/30

# eBGP Configuration
router bgp 65500
  vrf VRF-A
    address-family ipv4 unicast
      advertise l2vpn evpn
      aggregate-address 10.0.0.0/8 summary-only
    neighbor 10.254.254.2 remote-as 65599
      update-source Ethernet1/1.10
      peer-type fabric-external
      address-family ipv4 unicast
        send-community both

External router (BGP AS# 65599):

# Interface Configuration
interface Ethernet1/1.10
  mtu 9216
  encapsulation dot1q 10
  vrf member VRF-A
  ip address 10.254.254.2/30

# eBGP Configuration
router bgp 65599
  …
  vrf VRF-A
    address-family ipv4 unicast
    neighbor 10.254.254.1 remote-as 65500
      update-source Ethernet1/1.10
      address-family ipv4 unicast
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 71
Layer-3 Hand-Off +
InterNetworking MPLS

#CLUS
Border with MPLS (BorderPE)
• MPLS at the Border with L3VPN
• Similar to Inter-AS Option “B“
• Provides L3VPN connectivity via MPLS integration
• Interconnect using L3VPN for a Multi-Tenant capable hand-off
• Uses a different BGP AS# than within the EVPN fabric
• Re-originates EVPN into the L3VPN address-family (MPLS Label Switched core)
• Can be combined with vPC for Layer-2 connectivity
BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 73
Seamless MPLS Integration – BorderPE
Platform support: Nexus 7x00 with F3 (7.3), ASR9k Typhoon/Tomahawk (5.3.2), ASR1000

router bgp 65500
  neighbor to-MPLS remote-as 65599
    update-source loopback0
    address-family vpnv4 unicast
      import l2vpn evpn reoriginate
    address-family vpnv6 unicast
      import l2vpn evpn reoriginate
  neighbor to-VXLAN remote-as 65500
    address-family l2vpn evpn
      import vpn unicast reoriginate

• Towards MPLS (VPNv4/VPNv6): re-originate the EVPN prefix into L3VPN (e.g. VPNv4: 192.168.1.0/24)
• Towards VXLAN (EVPN): re-originate the L3VPN prefix into EVPN (e.g. Type-5: 192.168.1.0/24)
BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 74
Attaching Compute
Attaching Cisco UCS Compute

 UCS connects to the fabric via a port-channel from both Fabric Interconnects
 UCS supports bare-metal, storage, container and virtualised workloads

Physical design – fabric to UCS: the C-Series servers attach to UCS-FI-A and UCS-FI-B, which uplink via port-channels to Leaf-1 and Leaf-2; the leaves connect to Spine-1 and Spine-2, and the APICs attach at the leaf layer.

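A minimal NX-OS-style sketch of the leaf-side vPC that the Fabric Interconnect port-channel would land on; the domain ID, keepalive addressing and port numbers are assumptions, and in an ACI fabric the equivalent is a vPC interface policy group pushed from the APIC rather than this CLI.

feature vpc
feature lacp

vpc domain 10
  peer-switch
  peer-gateway
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management

interface port-channel1
  description vPC peer-link
  switchport mode trunk
  vpc peer-link

interface port-channel11
  description to UCS-FI-A
  switchport mode trunk
  mtu 9216
  vpc 11

interface Ethernet1/11
  description UCS-FI-A uplink
  switchport mode trunk
  channel-group 11 mode active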
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 77
Cisco ACI and UCSM Integration (Future)

• The APIC admin creates EPGs; the corresponding VLANs are automatically provisioned on the leaf switches and in Cisco UCS Manager (embedded)
• The VLANs are added to the uplink and vNIC templates on the Fabric Interconnect

Ease of deployment – End-to-End Orchestration – Single pane of Management – Agility

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 78
Multiple Data Centres
Multi-Site VXLAN EVPN: The Overlay
Multiple Overlay Domains – Interconnected & Controlled
Scaling and Segregating VXLAN EVPN Networks

• POD #1: VXLAN EVPN overlay, BGP AS#100, leaf VTEP 10.1.1.1, with its own route reflectors and border leaves
• POD #2/n: VXLAN EVPN overlay, BGP AS#200, leaf VTEP 10.2.2.7, with its own route reflectors and border leaves
• The border leaves of each POD connect via eBGP to edge routers in the Inter-DC core (Layer-3 IP/MPLS, BGP AS#65500), which carries the Multi-Site unicast overlay
• Bare-metal endpoints attach to the leaves in each POD
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 80
Multi-Site VXLAN EVPN: Introducing Border Gateway

• Each site terminates its VXLAN EVPN overlay on a Border Gateway (BG) Anycast Cluster: POD #1 (BGP AS#100, leaf VTEP 10.1.1.1) with Border VIP 10.1.1.111, POD #2/n (BGP AS#200, leaf VTEP 10.2.2.7) with Border VIP 10.2.2.222
• The Border Gateways build the Multi-Site unicast overlay across the inter-DC core towards the edge routers, per VRF (VRF-A, VRF-B, VRF-C)

• Multiple Overlay Domains – Interconnected & Controlled; Scaling and Segregating VXLAN EVPN Networks
• Co-Existence of Multi-Site and External Connectivity
• VRF-Lite for External Layer-3 Connectivity
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 81
Multi-Site VXLAN EVPN: Isolated Underlay Domains

• Each site runs its own underlay; underlay routes are not exchanged between sites
• Site 1 underlay routing table – Border: 10.1.1.101, 10.1.1.102, Border VIP 10.1.1.111; Leaf: 10.1.1.1 – 10.1.1.7
• Site 2/n underlay routing table – Border: 10.2.2.101, 10.2.2.102, Border VIP 10.2.2.222; Leaf: 10.2.2.1 – 10.2.2.7
• A Border Gateway (BG) Anycast Cluster per site; POD #1 is BGP AS#100, POD #2/n is BGP AS#200

Multiple Underlay Domains – Isolated
Isolated Underlay Domains – No need for Extension
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 82
Multi-Site VXLAN EVPN: Tunnel Adjacencies

BG102# show nve peers
Interface Peer-IP      VNI    Up Time
---------- ----------- ------ ----------
nve1       10.1.1.111  30000  00:12:16
nve1       10.2.2.222  30000  00:12:23
nve1       10.1.1.1    30000  00:12:23

Leaf1-1# show nve peers
Interface Peer-IP      VNI    Up Time
---------- ----------- ------ ----------
nve1       10.1.1.1    30000  03:18:06
nve1       10.1.1.111  30000  00:12:23

Leaf2-7# show nve peers
Interface Peer-IP      VNI    Up Time
---------- ----------- ------ ----------
nve1       10.2.2.7    30000  01:12:06
nve1       10.2.2.222  30000  00:12:25

• A leaf only builds NVE peerings with VTEPs of its own site plus the local Border Gateway VIP; the Border Gateways peer with each other's VIPs across the Multi-Site overlay (POD #1: AS#100, VIP 10.1.1.111; POD #2/n: AS#200, VIP 10.2.2.222)
• Multiple Overlay Control-Plane Domains – Interconnected & Controlled
• Contained Overlay Control-Plane Update Propagation


#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 83
Multi-Site VXLAN EVPN: Inter-Site Network

• POD #1 (BGP AS#100, BGW VIP 10.1.1.111) and POD #2/n (BGP AS#200, BGW VIP 10.2.2.222), each with a Border Gateway Anycast Cluster
• Inside the Inter-Site Network, the unicast routing table only needs the Border Gateway addresses:
  Border Site1: 10.1.1.101, 10.1.1.102, 10.1.1.111
  Border Site2: 10.2.2.101, 10.2.2.102, 10.2.2.222
• Multiple Underlay Domains – Isolated
• Isolated Underlay Domains – No need for Extension
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 84
Multi-Site: Multi-Destination BUM Traffic

• Each site (POD #1, BGP AS#100; POD #2/n, BGP AS#200) keeps its own BUM replication domain; the Border Gateways forward BUM across the Multi-Site overlay towards the edge routers via eBGP
• Multiple Replication Domains for BUM – Interconnected & Controlled
• Individual BUM flooding domain with Traffic control
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 85
Multi-Site: Multi-Destination BUM Enforcement

• Storm Control on the Border Gateway rate-limits BUM entering the Multi-Site overlay:
  • Broadcast 0-100%
  • Unknown Unicast 0-100%
  • Multicast 0-100%
  • Layer-2 Multicast 0-100%
• Multiple Replication Domains for BUM – Interconnected & Controlled
• Individual BUM flooding domain with Traffic control
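A minimal NX-OS-style sketch of the Border Gateway storm-control knobs this slide refers to; the percentage levels are arbitrary examples, not recommendations.

# Applied globally on the Border Gateway; limits BUM forwarded into the Multi-Site overlay
evpn storm-control broadcast level 5
evpn storm-control unicast level 5       # unknown unicast
evpn storm-control multicast level 10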
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 86
Multi-Site Design and Deployment White Paper
VXLAN EVPN:

https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-739942.html

ACI:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html

https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739971.html
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 87
Path optimisation
(Ingress and Egress)

#CLUS
Ingress Routing Localization
Possible Solutions

Challenge Options
• Subnets are spread across locations • DNS Based
• Subnet information in the routing tables • Route Injection
is not specific enough • LISP – Locator/ID Separation Protocol
• Routing doesn’t know if a server has • Home location + EVPN Type-2 /32
moved between locations
• Traffic may be sent to the location
where the application is not available

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 89
Multi-Site
Ingress Traffic optimisation
WAN routing state (host route advertisement in the WAN):
192.168.10.0/24   → WAN Edge 1-4
192.168.10.101/32 → WAN Edge 1-2
192.168.10.102/32 → WAN Edge 3-4

• Site1 BGWs (VIP1 10.1.1.111) advertise 192.168.10.0/24 and 192.168.10.101/32; Site2 BGWs (VIP2 10.2.2.222) advertise 192.168.10.0/24 and 192.168.10.102/32, exchanged over eBGP-EVPN across the DC core (Layer-3 unicast)
• Host routes are advertised across sites but NOT re-advertised toward the local WAN edges; host routes received from remote sites are filtered, so each site only announces its local host route information
• Inside each site: Site1 learns 192.168.10.101/32 → Leaf1 (Host1, 0000.3010.1101); Site2 learns 192.168.10.102/32 → Leaf3 (Host3, 0000.3010.1102)

#CLMEL
BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 90
Multi-Site
Ingress Traffic optimisation (continued)

With the per-site host routes in the WAN (192.168.10.0/24 → WAN Edge 1-4, 192.168.10.101/32 → WAN Edge 1-2, 192.168.10.102/32 → WAN Edge 3-4), ingress traffic for Host1 (192.168.10.101, Site1, VIP1 10.1.1.111) and Host3 (192.168.10.102, Site2, VIP2 10.2.2.222) follows the optimized ingress path directly to the site where the endpoint actually lives.

#CLMEL
BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 91
ACI Multi-Site Networking Options
Per Bridge Domain Behavior
1. Layer 3 only across sites
 Bridge Domains and subnets not extended across Sites
 Layer 3 Intra-VRF or Inter-VRF communication (shared services across VRFs/Tenants)
 No Layer 2 BUM flooding across sites

2. IP Mobility without BUM flooding
 Same IP subnet defined in separate Sites
 Support for IP Mobility (‘cold’ and ‘live’* VM migration) and intra-subnet communication across sites

3. Layer 2 adjacency across Sites
 Interconnecting separate sites for fault containment and scalability reasons
 Layer 2 domains stretched across Sites, support for ‘live’* VM migration and application clustering
 Layer 2 BUM flooding across sites

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 92
Host route advertisement from BL & RL
Ingress path optimization

• An IP network (WAN core – IPv4, MPLS, SR, etc.) connects the Main DC and the Remote Location
• Main DC: EP1 (100.1.1.1) sits behind the local-Pod L3Out; the Border Leaf advertises the host route 100.1.1.1/32
• Remote Location: EP3 (100.1.1.2) sits behind the Remote Leaf L3Out; the Remote Leaf advertises the host route 100.1.1.2/32
• The external client (20.1.1.1, WAN 20.20.20.0/24) therefore reaches each endpoint via the optimal entry point

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 93
Data Centre Visibility,
Programmability +
Operations
DCNM: Programmable Fabric VXLAN EVPN

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 95
DCNM VXLAN-EVPN Fabric Management

Day 0: POAP underlay provisioning; template-based provisioning
Day 1: Overlay provisioning; API-based provisioning; brownfield overlay migration
Day 2: Monitoring with OAM; endpoint visibility

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 96
DCNM 11.0 Highlights

• Enhanced Installer Options
• Application Framework
• VMM Integration
• Telemetry
• Easy Fabric VXLAN-EVPN Provisioning
• Configuration Compliance
• Brownfield Migration
• RMA
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 97
DCNM 11.1 Highlights

• Topology Enhancements
• Backup and Restore
• Revamped Template Editor
• Enhanced Telemetry
• VXLAN-EVPN Brownfield Deployment
• Compliance Side-by-Side View
• BGP EVPN Multi-Site Automation
• Bulk Network & VRF Deployment
#CLMEL © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 98
Endpoint Reachability – VXLAN OAM

• Endpoint Reachability, using ICMP:
  • VTEP to Endpoint reachability
  • VTEP to VTEP reachability
• Validates the ECMP path:
  • Single random path
  • Multiple, random/specified paths
• Provides the VXLAN outer UDP source port (SPORT) as output

Example endpoints across the fabric: Host A (0000.3001.1101, 192.168.10.101), Host B (0000.3001.1102, 192.168.10.102), Host C (0000.3002.2101, 192.168.20.101). "Is Host A alive?"

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 99
Endpoint Reachability – VXLAN OAM
Is Host A alive? (Host A: 0000.3001.1101, 192.168.10.101; fabric AS#65500)

L15# ping nve ip 192.168.10.101 vrf BLUE source 10.50.1.15 sport 35977 verbose
Codes: '!' - success, 'Q' - request not sent, '.' - timeout,
'D' - Destination Unreachable, 'X' - unknown return code,
'm' - malformed request(parameter problem),
'c' - Corrupted Data/Test, '#' - Duplicate response

Sender handle: 32
! sport 35977 size 56,Reply from 192.168.10.101,time = 1 ms
! sport 35977 size 56,Reply from 192.168.10.101,time = 2 ms
! sport 35977 size 56,Reply from 192.168.10.101,time = 1 ms
! sport 35977 size 56,Reply from 192.168.10.101,time = 1 ms
! sport 35977 size 56,Reply from 192.168.10.101,time = 1 ms

Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms
Total time elapsed 89 ms

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 100
VTEP Reachability – VXLAN OAM
Is VTEP18 Loopback10 (10.50.1.18) alive? (fabric AS#65500)

L15# ping nve ip 10.50.1.18 vrf BLUE source 10.50.1.15 sport 41803 verbose
Codes: '!' - success, 'Q' - request not sent, '.' - timeout,
'D' - Destination Unreachable, 'X' - unknown return code,
'm' - malformed request(parameter problem),
'c' - Corrupted Data/Test, '#' - Duplicate response

Sender handle: 62
! sport 41803 size 56,Reply from 10.50.1.18,time = 1 ms
! sport 41803 size 56,Reply from 10.50.1.18,time = 1 ms
! sport 41803 size 56,Reply from 10.50.1.18,time = 1 ms
! sport 41803 size 56,Reply from 10.50.1.18,time = 1 ms
! sport 41803 size 56,Reply from 10.50.1.18,time = 1 ms

Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Total time elapsed 87 ms

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 101
CLI Options – ICMP-based NVE Ping
ping nve ip <Destination Host/Loopback> vrf <VRF-Name> source <Source Loopback> verbose
ping nve ip <Destination Host/Loopback> vrf <VRF-Name> source <Source Loopback> sport <Outer Source Port> verbose
ping nve ip <Destination Host/Loopback> vrf <VRF-Name> source <Source Loopback> egress <Uplink Interface> verbose

• Issues a Ping to a Host or Loopback IP address
• vrf specifies the VRF where the Source and Destination Endpoints exist
• source chooses the local Loopback IP as the Source IP address for the NVE Ping
• sport uses a specific VXLAN outer source port; otherwise randomly generated VXLAN source ports are used
• egress uses a specific egress interface (i.e. uplink towards a Spine); otherwise ECMP hashing is used with a random or defined VXLAN source port

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 102
Endpoint Traceroute – VXLAN OAM

• Endpoint Traceroute, using ICMP:
  • VTEP to Endpoint
  • VTEP to VTEP
• Validates the overlay path:
  • Single specified path
  • Multiple, specified paths
• Provides Overlay to Underlay correlation

Example endpoints: Host A (0000.3001.1101, 192.168.10.101), Host B (0000.3001.1102, 192.168.10.102), Host C (0000.3002.2101, 192.168.20.101). "What is the path to Host A?"

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 103
What would a normal Traceroute look like?
(Topology: destination VTEP NVE1 10.200.200.18; spine uplinks Eth1/5 10.1.1.17 and Eth1/5 10.1.2.17; Host A 0000.3001.1101, 192.168.10.101; fabric AS#65500)

L15# traceroute 192.168.10.101 source 10.50.1.15 vrf BLUE
traceroute to 192.168.10.101 (192.168.10.101) from 10.50.1.15 (10.50.1.15), 30 hops max, 40 byte packets
1 10.50.1.18 (10.50.1.18) 0.96 ms 0.817 ms 0.746 ms
2 192.168.10.101 (192.168.10.101) 4.751 ms 0.69 ms 0.697 ms

Which path did my Traceroute take?

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 104
Endpoint Traceroute – VXLAN OAM
What is the path to Host A? (Host A: 0000.3001.1101, 192.168.10.101; destination VTEP NVE1 10.200.200.18; spine interfaces Eth1/5 10.1.1.17 and 10.1.2.17; fabric AS#65500)

L15# traceroute nve ip 192.168.10.101 vrf BLUE source 10.50.1.15 sport 35977 verbose
Codes: '!' - success, 'Q' - request not sent, '.' - timeout,
'D' - Destination Unreachable, 'X' - unknown return code,
'm' - malformed request(parameter problem),
'c' - Corrupted Data/Test, '#' - Duplicate response

Traceroute Request to peer ip 10.200.200.18 source ip 10.200.200.15
Sender handle: 94
1 !Reply from 10.1.1.17,time = 1 ms
2 !Reply from 10.200.200.18,time = 1 ms
3 !Reply from 192.168.10.101,time = 4 ms

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 105
Endpoint Traceroute – VXLAN OAM – Details
L15# traceroute nve ip 192.168.10.101 vrf BLUE source 10.50.1.15 sport 35977 verbose

Codes: '!' - success, 'Q' - request not sent, '.' - timeout,
'D' - Destination Unreachable, 'X' - unknown return code,
'm' - malformed request(parameter problem),
'c' - Corrupted Data/Test, '#' - Duplicate response

Traceroute Request to peer ip 10.200.200.18 source ip 10.200.200.15
Sender handle: 94
1 !Reply from 10.1.1.17,time = 1 ms        <- Spine ingress interface IP
2 !Reply from 10.200.200.18,time = 1 ms    <- Destination VTEP IP
3 !Reply from 192.168.10.101,time = 4 ms   <- Host A IP

The spine ingress interface and destination VTEP IP address are underlay information – additions compared with a standard Traceroute.

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 106
CLI Options – ICMP-based NVE Traceroute
traceroute nve ip <Destination Host/Loopback> vrf <VRF-Name> source <Source Loopback> verbose
traceroute nve ip <Destination Host/Loopback> vrf <VRF-Name> source <Source Loopback> sport <Outer Source Port> verbose
traceroute nve ip <Destination Host/Loopback> vrf <VRF-Name> source <Source Loopback> egress <Uplink Interface> verbose

• Issues a Traceroute to a Host or Loopback IP address
• vrf specifies the VRF where the Source and Destination Endpoints exist
• source chooses the local Loopback IP as the Source IP address for the NVE Traceroute
• sport uses a specific VXLAN outer source port; otherwise randomly generated VXLAN source ports are used
• egress uses a specific egress interface (i.e. uplink towards a Spine); otherwise ECMP hashing is used with a random or defined VXLAN source port

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 107
Endpoint Traceroute – VXLAN OAM – GUI (1)

Source Switch

Destination Switch

VRF

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 108
Pathtrace for Enhanced Network Visibility

• Application-specific Pathtrace: "What is the path from Host C to Host A for HTTP?"
• Uses “draft-tissa-nvo3-oam-fm”
• Endpoint-to-Endpoint Pathtrace
• Adds interface load and error statistics of the path
• Uses protocol information
• Validates specific or all paths
• Provides Overlay to Underlay correlation
• Superset of NVE Ping/Traceroute

Example endpoints: Host A (0000.3001.1101, 192.168.10.101), Host B (0000.3001.1102, 192.168.10.102), Host C (0000.3002.2101, 192.168.20.101).

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 109
Endpoint Locator (EPL) – Architecture
• Endpoint Locator (EPL) is an application in DCNM
• Peers with the Overlay Control-Plane (i.e. BGP EVPN), typically with the spine route reflectors
• BGP receiver only (passive)
• Searchable and scalable database for real-time and historic data
• Stores every endpoint control-plane event
• Correlates with inventory data

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 110
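DCNM provisions this peering automatically when EPL is enabled; purely as an illustration, a spine route reflector would end up with an extra receive-only neighbor along these lines (the EPL address 10.0.0.200 and AS 65500 are hypothetical).

router bgp 65500
  neighbor 10.0.0.200
    remote-as 65500
    description DCNM Endpoint Locator (passive receiver)
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client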
Endpoint Locator

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 111
APIC Application Centric Infrastructure (ACI)

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 112
Enhanced Endpoint Tracker : Endpoint History

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 113
Enhanced Endpoint Tracker : 10.0.144.2

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 114
ACI: StateChangeChecker
Cisco ACI application that allows operators to snapshot a collection of managed objects (MOs) in the fabric and perform snapshot comparisons, so that operators can answer questions before and after maintenance windows:

1) What changed in my fabric?
2) Are my critical objects the same after maintenance?
3) Did any route change on any node?
4) Are all the local endpoint learns still present?

State means real state, not just config. So, for example:
- An interface is down that was up
- End-Points moved
- Prefixes learnt from an L3Out changed
- An L3 peering is not up, but was up earlier

e.g. Let’s say an L3 peering was down before the maintenance window and is still down after maintenance. This will not show up, because the state has not changed.

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 115
APIC Multi-POD via Postman

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 116
APIC Multi-POD via Postman

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 117
APIC Multi-POD via Postman

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 118
Network Insights
Advisor (NIA) / Network
Insights Resources
(NIR):

For both Programmable


Fabric VXLAN EVPN and
ACI
Data Center Visibility Use Cases

Network Health:
• CPU and memory utilization
• Forwarding table utilization
• Protocol state and events
• Environmental data

Path and Latency Measurement:
• End-to-end visibility
• Path tracing over time
• Flow latency monitoring

Network Performance:
• Interface utilization
• Buffer monitoring
• Microburst detection
• Drop event correlation

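NIR/DCNM push the required telemetry configuration to the switches automatically; for orientation only, a hand-built NX-OS streaming-telemetry sketch that exports the kind of resource and interface data listed above might look like this (the collector address, port and DME sensor paths are assumptions).

feature telemetry

telemetry
  destination-group 1
    ip address 10.0.0.50 port 57500 protocol gRPC encoding GPB
  sensor-group 1
    data-source DME
    path sys/procsys depth unbounded        # system resources (CPU/memory), assumed path
  sensor-group 2
    data-source DME
    path sys/intf depth unbounded           # interface counters, assumed path
  subscription 1
    dst-grp 1
    snsr-grp 1 sample-interval 30000        # 30-second software streaming interval
    snsr-grp 2 sample-interval 30000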
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 120
Network Insights Applications

Apps hosted on the platform (DCNM or APIC) via the App Hosting Framework / App Store:
• Data collection and ingestion
• Data correlation and analysis
• Data visualization and action

Visibility – learn from your network and recognize anomalies
Insights – see problems before your end users do
Proactive Troubleshooting – find root cause faster with granular details

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 121
Network Insights Resources
Benefits

• Resource Utilization – fabric-wide capacity planning, trend monitoring
• Environmental Monitoring (CPU, power, memory, fan, storage) – avoid environmental-related failures
• Operations Statistics – identify/predict failing devices
• Flow Analytics – troubleshoot application latency, identify traffic/protocol behaviour
• Event Analytics – identify subtle path-related issues

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public
Network Insights Advisor
Benefits

• Software/Hardware Recommendations and Workarounds – avoid multiple TAC calls
• EOL/EOS and Field Notices – keep the network up to date, adhere to Cisco policies
• SMU Recommendations – remove complexity
• Known Bugs/PSIRTs, unknown runtime and config anomalies – avoid outages, faster deployment times
• Version Scale Limits / Hardening Check – significant CAPEX and OPEX savings
• Configuration and Forwarding State Checks – prevent traffic black holing
• Network Anomaly Detection – avoid downtimes

#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 123
Network Insights Solution Dependencies
Targeted Scale, ACI / DCNM / NX-OS Versions

Compute Footprint* (NIA / NIR)
• Two options – Node Server Appliance or OVA Cluster
• 3 compute nodes, each with 192GB memory, 4x 2.4 TB hard drive, 400GB SSD, 32 vCPUs

Fabric Scale & Platforms*
• Fabric scale up to 300 switches targeted at FCS
• Multi-fabric support, up to 300 switches
• Directed Flow Monitoring up to 10k flows
• Streaming intervals: 30 sec S/W, 1 sec H/W

APIC / DCNM & NX-OS Target Versions*
• APIC / ACI minimum release 4.1(2)
• NX-OS 7.0(3)I7(6)
• DCNM 11.2

* under evaluation / pre engineering commit
#CLMEL BRKDCN-2218 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public
#CLMEL TECSEC-2723 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 125
Complete Your Online Session Evaluation
• Give us your feedback and receive a
complimentary Cisco Live 2019 Power
Bank after completing the overall event
evaluation and 5 session evaluations.
• All evaluations can be completed via
the Cisco Live Melbourne Mobile App.
• Don’t forget: Cisco Live sessions will
be available for viewing on demand
after the event at:
https://ciscolive.cisco.com/on-demand-library/

#CLMEL © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public
Thank you

#CLMEL
#CLMEL
