Cisco Live
Agenda
VXLAN Recap
Migration Strategy
[Figure: evolution of the data center fabric from FabricPath to VXLAN with BGP EVPN, interconnected over the MAN/WAN]
Modern DC Fabric
Seek well-integrated, best-in-class overlays and underlays
Robust underlay/fabric
Flexibility/programmability
VXLAN Benefits
[Figure: VXLAN benefits and platform support. Benefits: secure multi-tenancy, scale, workload anywhere / workload mobility, L2 and L3 gateways, BGP EVPN control plane, anycast gateway, head-end replication. Platforms: Nexus 1000, Nexus 2000, Nexus 3100, Nexus 5600, Nexus 7000, Nexus 9000, ASR1000, CSR1000, ASR9000]
VXLAN encapsulation (underlay = outer headers; overlay = VXLAN header plus original frame):

Outer MAC header, 14 bytes (+4 bytes optional):
  Dest. MAC (48) | Source MAC (48) | VLAN Type 0x8100 (16) | VLAN ID Tag (16) | Ether Type 0x0800 (16)
Outer IP header, 20 bytes:
  Misc. Data (72) | Header Checksum (16) | Source IP (32) | Dest. IP (32)
UDP header, 8 bytes:
  Source Port (16) | VXLAN Port 4789 (16) | UDP Length (16) | Checksum 0x0000 (16)
VXLAN header, 8 bytes:
  Flags RRRRIRRR (8) | Reserved (24) | VNI (24) | Reserved (8)

The 24-bit VNI allows for 16M possible segments.
[Figure: edge devices terminate local LAN segments (physical hosts, or virtual switches with virtual hosts) and connect them over an IP interface]
[Figure: the edge device function is the VTEP (VXLAN Tunnel Endpoint), which performs the encapsulation between the local LAN segment (physical host or virtual switch) and the IP network]
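As a recap, a minimal flood-and-learn VTEP configuration on NX-OS might look like the following sketch (the VLAN, VNI, and multicast group values are illustrative assumptions):

  feature nv overlay
  feature vn-segment-vlan-based
  ! Map a local VLAN to a VXLAN segment
  vlan 100
    vn-segment 30000
  ! The NVE interface is the VTEP
  interface nve1
    no shutdown
    source-interface loopback1
    member vni 30000
      mcast-group 239.1.1.1   ! BUM traffic via underlay multicast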
[Figure: inter-VXLAN routing at the core/aggregation layer. Host A (VNI 30000) reaches Host Y (VNI 30001) via a central L3 gateway above the VTEPs V1, V2, V3]
*Multicast independence requires the use of the overlay control plane or static configuration
Key Takeaway
In real-world deployments, traditional VXLAN flood-and-learn, with or without head-end replication, has limited applicability: it simply does not scale to hundreds of switches
If only we had...
Multiple VRFs
Granular filtering
Reduce Flooding
Flood Prevention
Standards Based
Protocol Learning
[Figure: VTEPs V1, V2, V3 with iBGP adjacencies to a spine acting as BGP route-reflector (RR)]
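A minimal sketch of the iBGP EVPN peering, assuming AS 65000 and example loopback IPs:

  nv overlay evpn
  ! Spine (route reflector)
  router bgp 65000
    neighbor 10.0.0.1 remote-as 65000
      update-source loopback0
      address-family l2vpn evpn
        send-community extended
        route-reflector-client
  ! Leaf (VTEP), peering to the spine RR
  router bgp 65000
    neighbor 10.0.0.201 remote-as 65000
      update-source loopback0
      address-family l2vpn evpn
        send-community extended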
Data plane: VXLAN. Control plane: EVPN MP-BGP, based on:
draft-ietf-l2vpn-evpn
draft-ietf-l2vpn-pbb-evpn
draft-sd-l2vpn-evpn-overlay
Distributed Gateway
Route or bridge at the leaf
Distributed (anycast) gateway for routing
Disaggregate state by scaling out
Optimal scalability
Requires VXLAN/EVPN!
[Figure: with distributed anycast gateways on leafs V1, V2, V3, Host A (VNI 30000) is routed to Host Y (VNI 30001) directly at the leaf]
Symmetric IRB
Both ingress and egress VTEP can forward at L2 or L3
Only the VNIs of locally attached end hosts need to be configured on the leaf
Simplifies configuration (see the configuration sketch below)
Better scale in terms of total VNIs
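A minimal symmetric IRB sketch on NX-OS (the tenant name, L3 VNI, and VLAN values are illustrative assumptions):

  ! L3 VNI per tenant VRF
  vrf context TENANT-A
    vni 50000
    rd auto
    address-family ipv4 unicast
      route-target both auto
      route-target both auto evpn
  ! VLAN and SVI backing the L3 VNI
  vlan 2500
    vn-segment 50000
  interface Vlan2500
    no shutdown
    vrf member TENANT-A
    ip forward
  interface nve1
    member vni 50000 associate-vrf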
Key Takeaway
MP-BGP EVPN is what you should use:
Control-plane learning of end-host Layer 2 and Layer 3 reachability information
Ability to build a more robust and scalable VXLAN overlay network
Peer discovery and authentication to improve security; optimal east-west and north-south traffic forwarding
When buying a next-gen switching platform, ensure that it can perform VXLAN routing in hardware (Note: merchant-silicon ToR switches based only on Trident2 or Tomahawk cannot natively route VXLAN; ONLY Trident2+ can)
When buying a next-gen switching platform, ensure that it supports MP-BGP with EVPN
Agenda
Migration Strategy
Host Overlays
Layer 3 ToR
Flat Underlay
[Figure: host overlays. Virtual switches on the hosts terminate the local LAN segments and the VXLAN tunnels, running over Layer 3 ToRs and a flat underlay]
Host overlays run between software VTEPs and empower server teams to deliver virtual network services without needing to involve the network team
Results in CPU impact, as most off-the-shelf NICs and hypervisors do not provide hardware offload for VXLAN
Extra effort due to the lack of correlation between IP underlay and overlay; the network team has no idea where workloads live
Solutions (network-based overlay):
VLAN provisioning
VNI allocation
L3 anycast gateway provisioning
VRF provisioning
You get hardware-based forwarding, full visibility, and network automation without any of the drawbacks listed above
There may be scenarios where host-based overlays solve specific challenges
Cisco VTS (Virtual Topology System) is one possible solution. The VTC (Virtual Topology Controller) is a single point of management for configuration/management and operation of VXLAN EVPN
Control plane using IOS-XR with MP-BGP EVPN, and a software VTEP leveraging the VTF (Virtual Topology Forwarder)
Be careful when scaling host overlays. Consider not just network automation/virtualization, but performance SLAs, cost of licensing, and edge gateway hardware (more servers)
Cisco VM-Tracker and/or Cisco VTS provide a comprehensive solution for host-based overlay designs
Specific host overlay use cases, such as distributed firewalling at the hypervisor level and micro-segmentation, can be delivered with much lower TCO and orders-of-magnitude improvement in ROI via Cisco ACI
Implementation details differ across platforms (Nexus 9000 vs. Nexus 7000 vs. Nexus 5600) and need careful consideration and planning during the design phase to ensure you don't run into issues
Similar to MPLS VPN in provider networks, the underlay is your IGP, and on top of it runs the BGP process with the VRFs
When using BGP in the underlay and BGP in the overlay (EVPN), do we still have underlay/overlay separation?
Most operations teams are not ready for this, as the line between overlay and underlay is blurred
IGPs also simplify multi-pathing and offer separation of the underlay and overlay control planes
Most deployments rely on an IGP for the underlay and MP-BGP EVPN for the overlay (see the sketch below)
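A minimal OSPF underlay sketch for a point-to-point fabric link (the process name and addressing are illustrative assumptions):

  feature ospf
  router ospf UNDERLAY
    router-id 10.1.1.11
  interface Ethernet1/49
    mtu 9216
    ip address 10.0.1.0/31
    ip ospf network point-to-point
    ip router ospf UNDERLAY area 0.0.0.0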
The assumption is that the server leaf will have mainly host routes in its FIB
The recommendation is to keep the default max-mode host
Routes from outside of the fabric (LPM routes) will probably be minimal
The spine will learn all routes in the control plane, but will not add them to the FIB/RIB
The spine will install in its FIB/RIB only the underlay routes; the expectation is that this stays relatively small, similar to a P node in MPLS
The recommendation here is also to keep the default max-mode host
There might be corner cases, e.g., the underlay is exposed to the outside with an existing large RIB
MTU
Ensure jumbo MTU is enabled on all links (system QoS enabled globally)
Remember: VXLAN adds the outer IP, UDP, and VXLAN headers (underlay) in front of the original frame (overlay)
MTU
A change of MTU in the underlay can be disruptive and can also impact all overlay VRFs/tenants. Get it right the first time (sketch below)
Nexus 9000 and Nexus 7000 can use MTU 9216 bytes for the underlay
Nexus 5600 can use a maximum MTU of 9192 bytes for the underlay (due to an internal header)
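A sketch of the jumbo MTU configuration, assuming a platform where Layer 2 MTU is set via the global network-qos policy:

  ! Layer 3 fabric links: per-interface MTU
  interface Ethernet1/49
    mtu 9216
  ! Layer 2 ports: jumbo MTU via network-qos policy applied under system qos
  policy-map type network-qos JUMBO
    class type network-qos class-default
      mtu 9216
  system qos
    service-policy type network-qos JUMBO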
Loopback Interface
If using inband management, this range should not overlap with the existing network. It is hard to re-IP this!
Loopback Interface
Besides your fabric uplinks and underlay loopback, you will probably need one SVI locally and between vPC peers
Loopback Interface
The recommendation is to use one loopback for the router ID and a different loopback for the VTEP, especially on vPC leafs (sketch below)
The idea is that a separate router-ID loopback enables learning of the prefixes in the fabric even when the VTEP loopback is not up
VXLAN holds down the VTEP loopback at boot-up for a certain time
In certain scenarios with vPC, the VTEP loopback will be brought down
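A sketch of the split router-ID / VTEP loopback design (the IPs are illustrative; the secondary IP is the shared VTEP address of the vPC pair):

  interface loopback0
    description Router-ID and BGP peering
    ip address 10.1.1.11/32
  interface loopback1
    description VTEP source
    ip address 10.1.2.11/32
    ip address 10.1.2.100/32 secondary   ! shared anycast VTEP IP of the vPC pair
  router bgp 65000
    router-id 10.1.1.11
  interface nve1
    source-interface loopback1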
The default mode is cut-through; you might expect it to need changing, since the header must be rewritten and the ingress frame encapsulated
Changing the mode requires a reboot on both platforms
According to feedback from engineering, the default cut-through mode should be used for VXLAN
Multicast
Do NOT tune PIM timers; rely on BFD for fast failover, assuming the platform supports PIM as a BFD client
PIM ASM or PIM SSM can be used. ASM provides higher validated scale in most cases
BUM traffic needs to be forwarded inside the fabric via this path
You can run the fabric underlay without multicast routing; this is called ingress replication and performs head-end replication
When using multicast replication for BUM, the multicast routing in the underlay stays local to the fabric
The natural choice for the multicast rendezvous point (MC RP), for ASM as well as BiDir, is the spines
The MC RP should be redundant; it doesn't hurt to have more, e.g., you have 4 spines and with them 4 MC RPs
Interoperability of Nexus 9300, Nexus 7000, and Nexus 5600 as leafs for multicast routing in the underlay:
Nexus 7000 to Nexus 9300: PIM ASM
Nexus 7000 to Nexus 5600: PIM BiDir
Nexus 9300 to Nexus 5600: not today
More details on the next slides
[Table: hardware capability for underlay multicast routing (PIM ASM / PIM BiDir) per platform: Nexus 9000, Nexus 7x00, Nexus 5600]
For PIM ASM there is a clear direction of traffic: from the sender to the MC RP, and from the MC RP towards the receiver
The senders and receivers for the MC streams (the BUM traffic of the end devices) are the server leafs, not the end devices themselves
Default multicast load balancing for ASM is per group via the spines
For each bridge domain with a unique MC group address there will be one (*,G) entry in the mroute table
For each sender for this bridge domain (potentially all leafs) there will be one (S,G) entry in the mroute table
If you use 50 groups for your bridge domains / VLANs and have 50 server leafs, this could result in 50 (*,G) and 2500 (S,G) entries, so 2550 mroute entries in total
Please be aware of the maximum mroute entries per platform (VXLAN table)
For the MC RP with PIM ASM you should sync the (S,G) entries, usually done via anycast RP (no need for MSDP; see the sketch below)
The S in (S,G) is the source of the traffic (here the leafs); the G is the group bound to the bridge domain or VLAN
Because of the potentially high number of (S,G) entries, the multicast group IP address design is important:
Best load balancing over the spines for BUM traffic: one MC group per bridge domain / VLAN, but maybe too many (S,G) entries
Best scalability: all bridge domains / VLANs share the same MC group, but no load balancing, and all leafs will receive all BUM traffic
The compromise is one MC group per VRF across all its bridge domains: this limits the number of (S,G) entries while still allowing BUM load balancing, and keeps BUM traffic away from leafs that do not have this VRF configured
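A PIM anycast-RP sketch for the spines (RFC 4610 style, no MSDP; the addresses are illustrative: 10.254.254.254 is the shared RP address, 10.1.1.201/.202 the spines' unique loopbacks), configured on each spine:

  interface loopback254
    ip address 10.254.254.254/32
    ip pim sparse-mode
  ip pim rp-address 10.254.254.254 group-list 239.1.1.0/25
  ip pim anycast-rp 10.254.254.254 10.1.1.201
  ip pim anycast-rp 10.254.254.254 10.1.1.202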
For PIM BiDir there is no clear direction of traffic; it flows to and from the RP for a given group
The streams always go from source to RP and from RP to receiver
Only the shared tree is ever used, never the source tree
In the fabric the spine is always in the forwarding path, so there is no shorter path than via the spine, i.e., no benefit from a source tree
Because only the shared tree is used, there are only (*,G) entries in the mroute table and no (S,G) entries, which scales much higher
The senders and receivers for the MC streams (the BUM traffic of the end devices) are the server leafs, not the end devices themselves
Only one spine is the active RP for the MC groups assigned to it; the standby RP does not receive any BUM traffic as long as the active RP is available
Because of this, no sync protocol between active and standby RPs is required
The IP used as the PIM BiDir RP should not be configured anywhere; it is usually one higher than the loopback IP (sketch below)
The primary RP uses a /30 subnet mask on the RP loopback, so that the IP +1 falls inside the subnet
The phantom RP uses a /29 mask with the same IP as the primary RP (the primary's longer /30 prefix wins in unicast routing)
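A phantom RP sketch (addresses illustrative; the RP address 10.254.254.2 is configured on no interface):

  ! Spine-1 (primary): /30, the longer prefix wins in unicast routing
  interface loopback254
    ip address 10.254.254.1/30
    ip pim sparse-mode
  ! Spine-2 (phantom/standby): same IP with /29
  interface loopback254
    ip address 10.254.254.1/29
    ip pim sparse-mode
  ! On all nodes: point the BiDir groups at the unconfigured IP +1
  ip pim rp-address 10.254.254.2 group-list 239.1.1.0/25 bidir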
With only one PIM BiDir RP there is no load balancing of BUM traffic over the spines
To achieve load balancing for BUM, multiple PIM BiDir RPs are required, with different MC groups per RP
For a single MC group, all traffic goes via one RP / a single spine
Load balancing over both RPs is achieved with different groups
With 2 spines, each is the active RP for one MC group and the phantom RP for the other MC group
Conclusion
Consider the platform capabilities when choosing the multicast routing protocol, if both multicast routing options are available to you
Balance the bridge-domain-to-MC-group mapping between the scalability of the leafs and the granularity of BUM traffic forwarding
All server leafs use the same IP address for a given SVI: the distributed gateway (anycast) IP
With this, the traditional approach of putting the server SVI IP into GIADDR will not work, because the return path is not distinct
In VXLAN EVPN, a unique IP of the requesting server leaf is put into GIADDR, which in the examples below is either a loopback or mgmt0
According to the standard, the DHCP server needs to use GIADDR to send back the DHCP offer. This path is deterministic when a loopback or mgmt0 is used
Due to the challenge above, DHCP relay for VXLAN EVPN needs to use DHCP option 82 and its sub-options on the DHCP server (sketch below)
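A DHCP relay sketch for a tenant SVI (the VRF name, addresses, and DHCP server placement are illustrative assumptions):

  feature dhcp
  ip dhcp relay
  ip dhcp relay information option        ! insert option 82
  ip dhcp relay information option vpn    ! add the VPN/link-selection sub-options
  interface Vlan100
    vrf member TENANT-A
    fabric forwarding mode anycast-gateway
    ip address 10.10.10.1/24
    ip dhcp relay address 172.16.1.100 use-vrf management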
Some DHCP servers currently have problems with these sub-options of DHCP option 82; current investigation shows this is mainly Windows 2012 servers, documented in CSCux88796
The DHCP relay commands enable the different option 82 sub-options of the DHCP protocol required for a multi-tenancy environment
The server-ID-override sub-option makes the client send DHCP renew and release messages to the server leaf
With this modification, the client sends renew and release to the server leaf, which then forwards the messages to the DHCP server
The override value is the server SVI IP address, which is the anycast IP on the server leaf
The server leaf adds a unique IP as GIADDR, either a loopback or its mgmt0 address, to which the DHCP server responds
In other environments GIADDR is enough to define a distinct path back to the DHCP relay agent, but not with a distributed gateway
The server leaf also adds the server SVI subnet as the link-selection sub-option
From this range the DHCP server provides the IP address to the client and binds it to the client's MAC
DHCP is not used that often in the data center; one of the exceptions is PXE boot for staging servers, which adds new challenges
The following examples show how to address this without the use of the auto-config LLDP trigger
With the LLDP trigger, we could instead set the correct VLAN / L2 VNI based on the MAC database in DCNM, so the DHCP request is sent in the desired VLAN without extra per-port configuration
Usually different operating systems require different PXE boot processes and therefore different PXE boot servers
One PXE boot server for all VRFs, with a unique PXE boot domain per VRF?
One PXE boot server per operating system, and different ones per VRF?
Same path as the client traffic, via border leaf and core?
Using the management VRF / mgmt0 of the switches?
Using a completely different VRF to forward this traffic?
For automation profiles as well as manual configuration we can use different DHCP server settings per SVI; the network itself is flexible
Planning and getting in sync with the server teams might be a bigger challenge than configuring DHCP relay
There are a few cases where DHCP relay does not work:
Run PXE boot VLANs as pure L2 bridge domains and add the DHCP interface directly into the bridge domain (no SVI in the fabric required for this)
Use an external DHCP relay device that has an SVI in each PXE boot VLAN and can use GIADDR as in a traditional environment
Use separate infrastructure for staging of servers, then re-cable the server after the installation process
[Figure: external connectivity. Inside the fabric: VXLAN overlay with EVPN MP-BGP and a spine RR over leaf VTEPs. The border leaf VTEPs connect to the outside via IP routing with the routing protocol of your choice]
[Figure: VRF-lite hand-off. The EVPN overlay VRFs GREEN, RED, and BLUE are extended to an external router over per-VRF interfaces]
Interface-type options:
Physical routed ports
Subinterfaces
VLAN SVIs over trunk ports (static routing possible via vPC)
[Figure: VRF-lite hand-off example. A border leaf VTEP with per-VRF subinterfaces (VRFs GREEN, RED, BLUE) connects to a core router that carries everything in VRF default; configuration below]
! Border Leaf: per-VRF subinterface towards the Core
interface Ethernet2/9.10
  mtu 9216
  encapsulation dot1q 10
  vrf member BLUE
  ip address 30.10.1.1/30
! Core: matching subinterface in VRF default
interface Ethernet1/50.10
  mtu 9216
  encapsulation dot1q 10
  ip address 30.10.1.2/30
! Core: BGP session towards the Border Leaf
router bgp 100
  address-family ipv4 unicast
    network 100.0.0.0/24
    network 100.0.1.0/24
  neighbor 30.10.1.1 remote-as 100
    address-family ipv4 unicast
You could expect that we receive routes from VRF Blue in VRF Green as well, since the core has a single VRF. Why is this not the case?
Even though the core has all prefixes from all tenants/VRFs, it will NOT send them back into the same AS (standard BGP AS-path loop prevention)
Without a default route from the core, no communication is possible between tenants/VRFs inside the same fabric
How to implement (sketch below):
Point the static default either at a virtual firewall IP, at an externally learned network such as the MC RP, or at Null0
Either redistribute static or use a network statement in the VRF's BGP IPv4 address family
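A sketch for VRF Blue (the next hop is illustrative):

  vrf context BLUE
    ip route 0.0.0.0/0 30.10.1.2      ! or a firewall VIP, or Null0
  router bgp 100
    vrf BLUE
      address-family ipv4 unicast
        network 0.0.0.0/0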
Pro:
It is static (simple and deterministic)
Con:
Requires static routing external to the fabric as well
Need to add an FHRP on both sides for failover
Higher operational effort if adding/removing networks is dynamic
How to implement:
Use this when the current core runs an IGP like OSPF or EIGRP and BGP should be avoided
Dynamic routing is desired, but network services only support, e.g., OSPF
Pro:
Con:
You lose the capabilities and options of BGP, like marking prefixes via standard BGP communities (as suggested, to signal the tenant/VRF)
Not possible via an SVI towards a vPC-member-connected device
How to implement:
Run BGP from the fabric border leafs to the core or network service
Transport the fabric prefixes 1:1, or use summarization on the border leaf
The suggestion is to add standard BGP communities on the border leaf to be able to identify the VRF/tenant of the fabric prefixes (sketch below)
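A sketch of tagging VRF Blue prefixes with a community at the hand-off (the community value and peer AS are illustrative assumptions):

  route-map TENANT-BLUE-OUT permit 10
    set community 100:2000 additive
  router bgp 100
    vrf BLUE
      neighbor 30.10.1.2 remote-as 65099
        address-family ipv4 unicast
          send-community
          route-map TENANT-BLUE-OUT out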
Pro:
Con:
The following output shows an example of 0.0.0.0/0 received via eBGP on the border leaf and forwarded to the server leafs
Conditionally:
The default route is only advertised into the fabric if there is a path available on the border leaf towards the core (sketch below)
Unconditionally:
No matter whether there is a path to the core, the default route is always advertised into the fabric
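A sketch of both variants on the border leaf (the VRF and AS are illustrative assumptions):

  ! Conditional: the network statement only fires while a default
  ! route (e.g., learned via eBGP from the Core) is in the VRF RIB
  router bgp 100
    vrf BLUE
      address-family ipv4 unicast
        network 0.0.0.0/0
  ! Unconditional: anchor a static default to Null0 so the
  ! network statement always has a matching RIB entry
  vrf context BLUE
    ip route 0.0.0.0/0 Null0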
[Figure: both border leafs learn the default route via eBGP from the core AND via iBGP from the neighbor border leaf; the fabric (VXLAN overlay, EVPN MP-BGP, spine RR) distributes it to the leaf VTEPs]
CONDITIONALLY:
[Figure: Border Leaf 1 does not learn the default route via eBGP from the core. Result: the leafs receive the default route only from Border Leaf 2, as this is the only direct way to the core]
Example output on a server leaf:

     Network            Next Hop            Metric     LocPrf     Weight Path
  *>i0.0.0.0/0          9.0.0.22                       100        0      64999 i
2-Box Solution (VRF-lite)
Routing session for each VRF
Dedicated interface (physical or logical) per VRF
Configuration-intensive, but can be automated
1-Box Solution (MPLS)
Single routing session
Single interface for all VRFs
Lean configuration, can be automated
Use case: external connectivity, Layer-3 DCI
[Figure: the border device re-originates EVPN Type-5 routes (e.g., 192.168.1.0/24) into MPLS VPNv4/VPNv6, and vice versa]
Provision the overlay:
Industry's first Puppet plugin for MP-BGP EVPN
Bring your own controller (BYOC)
OR .....
APIC Cluster
The ACI fabric supports discovery, boot, inventory, and systems maintenance processes through the APIC
Image management
Fully automated BGP RR and VXLAN VTEP discovery, configuration, and removal
Agenda
Migration Strategy
Where we can help is defining how the interconnect from the new to the old world should look, as well as the technical migration strategy; this we want to get right
From past experience, it is not a good idea to migrate running switches with attached servers to VXLAN EVPN on the fly
It would cause longer downtime, which clients don't have, and higher risk, which clients don't want
With auto-config, the assumption is that the same configuration is used in the whole fabric
When you mix auto-config with manual changes, you break this rule, with unpredictable effects
It is not supported to use auto-config and afterwards make manual changes to that part
This is one reason why a general server leaf should not be used to connect the existing environment to the fabric
This means an STP root change in the existing environment for the VLANs that start migration; the STP root of the other VLANs will not be affected
The STP root change is probably the biggest impact; its duration depends on whether RPVST is used and on how many VLANs are moved per step
It is possible with VXLAN to keep the STP root outside the fabric, but the risk is higher because the fabric itself can't break a Layer 2 loop
Avoid BPDU filter, because it wouldn't allow STP to break a Layer 2 loop
Use a separate border leaf that does only routing to the core, without vPC
Don't even think about using the fabric spines
In the existing environment, consider the distribution layer as the connection point, because most of the time it is the central point and vPC/VSS is already in use there
The recommendation is to use vPC for the connection to the existing environment
The main reasons are higher resiliency and faster convergence in case of failure, as well as better use of the bandwidth to the new fabric
The assumption is that the STP root is in the fabric, and STP runs normally with root guard on the fabric ports towards the existing environment (sketch below)
Because of root guard, the STP blocked port will be a port on the fabric switches
There can be only one STP root (the leaf with the lower system MAC will win)
Benefit: traffic inside the existing environment stays inside the existing environment and does not cross the fabric
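A sketch of the leaf-side STP settings towards the existing environment (the VLAN range and port-channel are illustrative assumptions):

  ! Make the fabric leaf the STP root for the migrated VLANs
  spanning-tree vlan 100-200 priority 8192
  ! Root guard on the link towards the existing STP domain
  interface port-channel10
    switchport mode trunk
    spanning-tree guard root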
Are there multiple environments with overlapping VLAN IDs that need to be consolidated?
This will tell: whether we need to think about VLAN translation options on the migration leaf, and which option might fit
Does one switch, or do two switches, host the STP roots for all VLANs?
This will tell: how many steps we need to migrate the STP root into the new fabric
HSRP works only on SVIs without a VNI mapping, e.g., for network services
Because EVPN configuration is required to add hosts to the control plane, HSRP on VNI-mapped SVIs will not work
During migration, using summaries might lead to traffic going to the wrong side
E.g., the existing data center still announces a /24 with a higher metric while the new fabric starts to announce a /20: traffic for migrated L3 networks will not be sent to the new fabric, because the more specific /24 wins
Consider not using summaries during migration, until a full block is migrated to the new fabric
Filtering the host prefixes towards the core is recommended; this also helps in case no summaries are built during migration
Keep in mind the loopbacks in the VRFs on the border leaf (and maybe also on the server leafs): allow these host masks towards the core
During migration, filtering host prefixes also minimizes the impact of backing out the change / moving the Layer 3 back to the existing environment
Many options exist in the field, from static routing over EIGRP/OSPF to BGP
This will tell: how smooth the migration will be, as well as the redistribution options you can use on the border leaf
Multiple connection points between the new fabric and the existing STP environments are required
When existing environments need to merge into one fabric, overlapping VLAN IDs are often an issue, for historical reasons
There are different solutions for this; which fits best depends on the environment:
In this example, the overlapping VLANs 3000 and 3001 arrive from two different environments on Ethernet1/7 and Ethernet1/8 and are translated to four distinct VLAN/VNI pairs:

vlan 3100
  vn-segment 31000
vlan 3101
  vn-segment 31001
vlan 3102
  vn-segment 31002
vlan 3103
  vn-segment 31003
!
! Environment A: VLANs 3000/3001 mapped to VLANs 3100/3101 (VNIs 31000/31001)
interface Ethernet1/7
  switchport mode trunk
  switchport vlan mapping enable
  switchport vlan mapping 3000 3100
  switchport vlan mapping 3001 3101
  switchport trunk allowed vlan 3100,3101
!
! Environment B: the same VLAN IDs 3000/3001 mapped to VLANs 3102/3103 (VNIs 31002/31003)
interface Ethernet1/8
  switchport mode trunk
  switchport vlan mapping enable
  switchport vlan mapping 3000 3102
  switchport vlan mapping 3001 3103
  switchport trunk allowed vlan 3102-3103
The current limit is 100 port-VLAN mappings per interface, and 1K L2 VNIs per leaf in total
This allows mapping the same 802.1Q VLAN tag to different VNIs on different interfaces of the same leaf
Without per-port mapping, the VLAN-ID-to-segment-ID mapping happens per switch, not per port, and the overlapping VLAN IDs will NOT get a segment ID!
If VLAN IDs 1-10 overlap and come from 4 different environments, we need a block of 40 free VLAN IDs for translation
In historically grown environments, perhaps only IDs above 1000 are free
Migration Example
[Figure: a legacy access switch (VLANs 10 and 20) has a Layer 2 legacy connection into the VXLAN fabric, where the VLANs map to VNIs blue_1 and blue_2 in VRF Blue. The border router provides Layer 3 routing (static, OSPF, or BGP) over the links 1.1.1.0/30 and 1.1.1.12/30. Migration VLANs 10 and 11; VLAN 11 (10.10.11.0/24) with hosts .101 and .102]
Related sessions