Requirements
• Traffic Engineering
• Scalable
• Flexible topology

SOLUTIONS
• Minimise oversubscription
• Scale out and scale up
• Scalable L4-7 Service Layer
• No spanning tree
• Incremental scale
• Virtual FW/LB per tenant
• Flexible placement
• Incremental capacity

(Business drivers from the slide: Mergers & Acquisitions, Shared services, Compliance. Network constructs: VRF, L3VPN, Multicast VPN.)
Data Centre “Fabric” Journey
• STP
• vPC
• FabricPath
• FabricPath/BGP and VXLAN/EVPN
• ACI: Application Policy Infrastructure Controller (APIC) + ACI Fabric
(Each stage of the journey connects out to the MAN/WAN.)
Fabric Roles and Definitions
Leaf and Spine Topology – Device Roles
• Spine
  • Interconnects Leafs and Border Leafs
  • IP forwarder (East/West)
  • Route-Reflector (RR) for EVPN
  • Rendezvous-Point (RP) for the Underlay
  • Does not require a VTEP
• Leaf (VTEP)
  • VXLAN edge device
  • Routes/bridges classic Ethernet frames and encapsulates them into VXLAN
  • Requires a VTEP
  • Attaches: Virtual Machines, Physical Machines, FEX, 3rd-party Switches, UCS FI, Blade Switches
• Border Leaf (VTEP)
  • External connectivity towards the MAN/WAN
Border-Leaf Topology – Device Roles
• Border Leaf (VTEP)
  • VXLAN edge device
  • Routes and bridges classic Ethernet frames from an outside network and encapsulates them into VXLAN (North/South)
  • Internetworks LISP/MPLS traffic from an outside network and re-encapsulates it into VXLAN (North/South)
  • Speaks IGP/EGP routing protocols with the outside network (North/South)
  • Requires a VTEP
  • IPv4/IPv6 routes are exchanged with the external neighbour through the IPv4/IPv6 unicast address family within the VRF
  • Interface options: physical routed ports, sub-interfaces, VLAN SVIs over trunk ports
(Diagram: per-tenant EVPN overlays for VRF A, VRF B and VRF C, each with a VRF OSPF process towards an External Router.)
Services Leaf – Device Role
• Services Leaf (VTEP)
  • Firewalls
  • Load balancers
  • Proxy services
  • IPS services
Agenda
• FabricPath/DFA Customer Deployment
• VXLAN Customer Deployment
• ACI Customer Deployment
• Conclusion
FabricPath/DFA Customer Deployment

DC Fabric w/FabricPath
• Externally, the Fabric looks like a single switch
• Internally, IS-IS adds Fabric-wide intelligence and ties the elements together
• Provides, in a plug-and-play fashion:
  • Optimal, low-latency, any-to-any connectivity
  • High bandwidth, high resiliency
  • Open management and troubleshooting
• IS-IS for multipathing and reachability
FabricPath: Design
• Default gateway at the Nexus 7000 FabricPath Spine (F3), using Anycast-HSRP
• Nexus 5000 as FabricPath leaf, with UCS-FI attached below
• F3 MAC scale (ARP) is the sizing consideration

Routing at FabricPath Spine
• L3 routing at the spine with Anycast-HSRP between the aggregation switches: all Anycast-HSRP forwarders share the same VIP and VMAC
FabricPath: Traffic Flows
• Host attachment to the leaf via FabricPath (or) vPC
• Default gateway at the Nexus 7000 FP Spine (F3), so both Intra-VRF and Inter-VRF flows route there
• Nexus 5000 as FP leaf
FabricPath: External / WAN Connectivity
• Spine/leaf architecture connecting to MPLS, WAN, Internet and Campus
• Note:
  • F3 simplifies the deployment, with both MPLS and FabricPath support
  • Previously, F2 was leveraged for FabricPath (VDC) and M2 for MPLS connectivity (VDC)
Stand Alone Fabric (FabricPath/DFA)
RR RR MP-iBGP Adjacencies
Fabric Host/Subnet
MP-iBGP Control Plane External Subnet
Route Injection FabricPath DataPlane Route Injection
N1KV/OVS
Route-Reflectors
MAN/WAN
deployed for scaling
purposes
• DC Fabric with a FabricPath based data plane and MP-iBGP control plane.
• Use MP-iBGP on the leaf nodes to distribute internal host/subnet routes and external reachability
information.
• Introduced Segment ID to increase name space to 16M identifier in the fabric.
Optimised Networking
Distributed Gateway Mode
24
IP Forwarding Between Fabrics Across L3-Based DCI
• Two FabricPath fabrics, BGP AS#100 and BGP AS#200, interconnected via eBGP between their Border-leafs
DCNM – Infrastructure Provisioning Platform
• Scale: >1000 switches, with higher potential via clustering (DCNM cluster)
• POAP support, with templates for VXLAN-EVPN
• Topology views for physical, L2, L3, VXLAN and vPC overlays
• NXAPI for southbound APIs, for reduced reliance on SNMP and Netconf
• Modular device packs/drivers for more rapid platform [HW/SW] updates
• Supported Nexus platforms: N5000, N7000, N9000
• New GUI using HTML5 for a completely new user experience

*Cisco Nexus 5600/6000 switches only support 9192-byte MTU for Layer-3 traffic
Building Your IP Network – Interface Principles
• Know your IP addressing and IP scale requirements
• Best to use a single aggregate for all underlay links and loopbacks
• IPv4 only
• For each Point-2-Point (P2P) connection, a /31 minimum is required
• A loopback requires a /32
• Routed ports/interfaces:
  • Layer-3 interfaces between Spine and Leaf (no switchport)
  • The VTEP uses a loopback as its source interface
Building Your IP Network – Interface Configuration

Interface Configuration Example for (L1):

# Loopback Interface Configuration (VTEP)
interface loopback 0
  ip address 10.10.10.L1/32
  mtu 9192

# Point-2-Point (P2P) Interface Configuration
interface Ethernet 2/1
  no switchport
  ip address 192.168.1.1/31
  mtu 9192

interface Ethernet 2/2
  no switchport
  ip address 192.168.1.3/31
  mtu 9192
IP Unnumbered – Simplifying The Principles
• IP Unnumbered: a single IP address shared across multiple interfaces
• Note: IP Unnumbered has cross-platform support; Nexus 9000 added it in 7.0(3)I3(1)
IP Unnumbered – Interface Configuration
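A hedged NX-OS sketch of IP unnumbered on the underlay interfaces (loopback address illustrative, following the earlier example; on Nexus 9000, routed Ethernet interfaces need `medium p2p` for unnumbered operation):

interface loopback 0
  ip address 10.10.10.1/32

# Both fabric-facing ports borrow the loopback 0 address,
# so only one /32 per switch needs planning
interface Ethernet 2/1
  no switchport
  mtu 9192
  medium p2p
  ip unnumbered loopback 0

interface Ethernet 2/2
  no switchport
  mtu 9192
  medium p2p
  ip unnumbered loopback 0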
Underlay Deployment with Multicast Routing

Multicast-enabled Underlay – supported multicast mode per platform:
• Nexus 1000v: IGMP v2/v3
• Nexus 3000: PIM ASM
• Nexus 5600: PIM BiDir
• Nexus 7000/F3: PIM ASM / PIM BiDir
• Nexus 9000: PIM ASM
• ASR 9000: PIM BiDir
• ASR 1000 / CSR 1000: PIM ASM / PIM BiDir
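For the PIM ASM case, a minimal NX-OS sketch of the multicast-enabled underlay (RP address and group range are illustrative; the RP would sit on the spines, matching the spine role described earlier):

feature pim

# Static RP for the underlay multicast groups used by VXLAN
ip pim rp-address 10.0.0.100 group-list 239.1.1.0/25

# PIM sparse-mode on the VTEP loopback and every fabric-facing link
interface loopback 0
  ip pim sparse-mode

interface Ethernet 2/1
  no switchport
  ip pim sparse-mode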
Any Subnet Routed Anywhere – Any VTEP Can Serve Any Subnet
• Integrated Route & Bridge (IRB): route whenever you can, bridge when needed
• No hairpinning: optimised East/West and North/South routing
• Seamless mobility: all Leafs share the same gateway MAC
• Reduced failure domain: Layer-2/Layer-3 boundary at the Leaf
• Optimal scalability: route distribution closest to the host
(Diagram: SVI 100, gateway IP 192.168.1.1, VLAN 100 / VXLAN VNI 30001; SVI 200, gateway IP 10.10.10.1, VLAN 200 / VXLAN VNI 30002. Host1 192.168.1.11 and Host3 192.168.1.33 in VLAN 100; Host2 10.10.10.22 in VLAN 200.)
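On Nexus 9000 leafs, the shared gateway MAC and per-SVI distributed gateway can be sketched from the diagram values (the gateway MAC is illustrative; the tenant VRF membership is covered in the multitenancy section):

# Every leaf is configured with the same anycast gateway MAC
fabric forwarding anycast-gateway-mac 2020.0000.00aa

interface Vlan100
  no shutdown
  ip address 192.168.1.1/24
  fabric forwarding mode anycast-gateway

interface Vlan200
  no shutdown
  ip address 10.10.10.1/24
  fabric forwarding mode anycast-gateway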
IP Fabric Overlay Taxonomy (1)
• Edge Device: terminates the overlay at the fabric edge
• Local LAN Segment: classic Ethernet segment attaching hosts to the edge device
• IP Interface: the underlay-facing interface of the edge device
• Hosts can be physical, or virtual behind a virtual switch
IP Fabric Overlay Taxonomy (2)
VTEP
VTEP
V Encapsulation V
Local LAN
Local LAN Segment
Segment
Physical
Host VTEP
V Physical
Local LAN Host
Segment
Virtual Hosts
MP-BGP EVPN Route Type 2

MP-BGP EVPN Route Type 2 – MAC/IP Advertisement Route
• The following fields are part of the EVPN prefix in the NLRI:
  • Ethernet Tag ID (4 octets, zeroed out)
  • MAC Address Length (1 octet; /48), MAC Address (6 octets)
  • IP Address Length (/32, /128), IP Address [Optional]
• The related EVPN IP Prefix route carries in its NLRI:
  • IP Prefix Length (1 octet; 0-32 bits for IPv4 or 0-128 bits for IPv6), IP Prefix (4 or 16 octets)
  • GW IP Address (4 or 16 octets)
  • MPLS Label (L3VNI) (3 octets)
Route Type 2 – MAC/IP layout: Ethernet Segment Identifier | Ethernet Tag Identifier | MAC Address Length | MAC Address | IP Address Length | IP Address
BGP routing table information for VRF default, address family L2VPN EVPN
Route Distinguisher: 10.0.0.1:32868
BGP routing table entry for [2]:[0]:[0]:[48]:[0050.56a3.c2bb]:[32]:[192.168.1.73]/272, version 4
Paths: (1 available, best #1)
Flags: (0x000202) on xmit-list, is not in l2rib/evpn, is locked
  Advertised path-id 1
  Path type: internal, path is valid, is best path, no labeled nexthop
  AS-Path: NONE, path sourced internal to AS
    10.0.0.1 (metric 3) from 10.0.0.111 (10.0.0.111)
      Origin IGP, MED not set, localpref 100, weight 0
      Received label 30001 50001
      Extcommunity: RT:65501:30001 RT:65501:50001 ENCAP:8 Router MAC:5087.89d4.5495
      Originator: 10.0.0.1 Cluster list: 10.0.0.111

(Annotations: received labels 30001 = L2VNI and 50001 = L3VNI; 10.0.0.1 = remote VTEP IP; RT:65501:30001 = Route Target for the L2VNI (VLAN); RT:65501:50001 = Route Target for the L3VNI (VRF); ENCAP:8 = VXLAN overlay encapsulation; Router MAC = router MAC of the remote VTEP.)
Protocol Learning & Distribution
1) Local learning: each VTEP learns its directly attached hosts; V1 learns MAC_A/IP_A and V2 learns MAC_B/IP_B (L2VNI 30000, L3VNI 50000, next-hop "local")
2) Each VTEP advertises its local MAC/IP entries into MP-BGP EVPN towards the Route-Reflectors (RR)
3) The RRs reflect the routes to every other VTEP, which installs them with the originating VTEP's IP as next-hop; V1 ends up with:

MAC, IP        L2VNI   L3VNI   NH
MAC_A, IP_A    30000   50000   local
MAC_B, IP_B    30000   50000   IP_V2
MAC_C, IP_C    30000   50000   IP_V3
MAC_Y, IP_Y    30001   50000   IP_V3

(Host C, MAC_C/IP_C, and Host Y, MAC_Y/IP_Y, are attached to VTEP V3.)
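The control plane behind these three steps is an iBGP session carrying the L2VPN EVPN address-family. A hedged NX-OS sketch, reusing the AS number and RR address from the earlier route-type-2 output:

# On the leaf (VTEP)
nv overlay evpn
router bgp 65501
  router-id 10.0.0.1
  neighbor 10.0.0.111
    remote-as 65501
    update-source loopback0
    address-family l2vpn evpn
      send-community extended

# On the spine (Route-Reflector)
router bgp 65501
  neighbor 10.0.0.1
    remote-as 65501
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client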
Multitenancy

What is Multi-Tenancy
• Layer-2 view: hosts in the same VLAN/VNI are bridged across the fabric within their tenant
(Diagram: Host1, MAC AA:AA:AA:AA:AA:AA, IP 192.168.1.11 and Host3, MAC CC:CC:CC:CC:CC:CC, IP 192.168.1.33, both in VLAN 100 / VXLAN VNI 30001, bridged between leafs via the spine RRs.)
Layer-2 Multi-Tenancy – Bridge Domains
(Diagram: Host1 192.168.1.11 and Host3 192.168.1.33, each in local VLAN 100 on its leaf, joined by a Bridge Domain carried over the VXLAN overlay, VNI 30001.)
Layer-2 Multi-Tenancy – Bridge Domains
The Bridge Domain is the Layer-2 segment from host to host; the overlay (VNI 30001) stitches it across the fabric.
In VXLAN, the Bridge Domain consists of:
1) The Ethernet segment (VLAN) between Host and Switch
2) The hardware resources (Bridge Domain) within the Switch
(Diagram: Host1 192.168.1.11 and Host3 192.168.1.33, VLAN 100 / VXLAN VNI 30001.)
VLAN-to-VNI Mapping
• The VNI identifies the Layer-2 segment fabric-wide; the VLAN is locally significant per leaf
• Different leafs may map different local VLANs to the same overlay (Diagram: VLAN 100, VLAN 200 and VLAN 300 on three leafs, all mapped to VXLAN VNI 30001)

Leaf#1
vlan 2500
  vn-segment 30001
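Completing that snippet, a hedged NX-OS sketch of the remaining pieces that carry the mapped VLAN into the overlay (multicast group illustrative):

feature nv overlay
feature vn-segment-vlan-based

# Local VLAN to fabric-wide VNI mapping (as on the slide)
vlan 2500
  vn-segment 30001

# VTEP interface: BGP EVPN host reachability, multicast for BUM
interface nve1
  no shutdown
  source-interface loopback0
  host-reachability protocol bgp
  member vni 30001
    mcast-group 239.1.1.1

# EVPN instance for the L2VNI
evpn
  vni 30001 l2
    rd auto
    route-target import auto
    route-target export auto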
(Diagram: routing across tenants at the leaf — Host1 192.168.1.11 in VRF-A behind SVI 100/VLAN 100; Host2 10.10.10.22 in VRF-B behind SVI 200/VLAN 200; Host3 172.16.1.33 in VRF-B behind SVI 300/VLAN 300.)
Layer-3 Multi-Tenancy – VRF-VNI or L3VNI
(Diagram: VRF-A (VNI 50001) holds the Routing Domain for SVI 100; VRF-B (VNI 50002) holds the Routing Domain for SVI 200 and SVI 300. Host1 192.168.1.11 in VRF-A; Host2 10.10.10.22 and Host3 172.16.1.33 in VRF-B.)
Layer-3 Multi-Tenancy – VRF-VNI or L3VNI
The Routing Domain is the VRF owning multiple subnets across multiple switches (VRF-A = VNI 50001, VRF-B = VNI 50002).
In VXLAN EVPN, the Routing Domain consists of three components:
1) The Routing Domains (VRF), local to the switch
2) The Routing Domain (L3VNI) between the switches
3) Multi-Protocol BGP with the EVPN address-family
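These three components map to configuration as follows; a hedged NX-OS sketch for VRF-A (the VLAN backing the L3VNI SVI is illustrative):

# 1) The VRF, local to the switch, tied to the L3VNI
vrf context VRF-A
  vni 50001
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn

# 2) The L3VNI between the switches: a dedicated VLAN/SVI plus the NVE member
vlan 2501
  vn-segment 50001

interface Vlan2501
  no shutdown
  vrf member VRF-A
  ip forward

interface nve1
  member vni 50001 associate-vrf

# 3) MP-BGP EVPN advertises the VRF's routes into the fabric
router bgp 65501
  vrf VRF-A
    address-family ipv4 unicast
      advertise l2vpn evpn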
(Diagram: distributed anycast gateways in VRF-A — SVI 100, gateway IP 192.168.1.1 and SVI 200, gateway IP 10.10.10.1, present on every leaf. Host1 192.168.1.11 and Host3 192.168.1.33 in VLAN 100 / VNI 30001; Host2 10.10.10.22 in VLAN 200 / VNI 30002; all in VRF-A.)
Layer 4-7 Services Integration
Service Chain: Firewall + Load Balancer
• The firewall is the first device in the service chain, protecting the load balancer
(Diagram flow labels, all in VRF-A: Client→VIP; VIP→VLAN101; VIP→VLAN21; VLAN40→VLAN41; VIP→VLAN40. Legend: fabric; distributed anycast gateway.)
Service Chain: Load Balancer (SNAT) + Firewall Services
• The firewall is the first device in the service chain
VXLAN and Interaction with Spanning-Tree
• Virtual Port-Channel (vPC) allows safe integration with Spanning-Tree
• No loop protection is required, as the logical topology is loop-free
• Note: follow best practices to protect the network border, as in classic Ethernet networks:
  • BPDU Guard
  • Root Guard
  • Storm Control
  • etc.
Virtual Port-Channel (vPC) Concept
• The VXLAN vPC domain follows a configuration similar to Classic Ethernet vPC
(Diagram: Host A, 192.168.1.101, dual-homed to a vPC leaf pair.)
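A hedged NX-OS sketch of that vPC domain on one peer (domain ID, keepalive addresses and port-channel numbers are illustrative); the VXLAN-specific twist is the shared secondary loopback IP, which presents both peers as one anycast VTEP to the rest of the fabric:

feature vpc

vpc domain 1
  peer-keepalive destination 192.168.100.2 source 192.168.100.1

interface port-channel 10
  switchport mode trunk
  vpc peer-link

# Dual-homed host attachment (Host A)
interface port-channel 20
  switchport mode trunk
  vpc 20

# Anycast VTEP: both vPC peers share the secondary /32
interface loopback 0
  ip address 10.10.10.1/32
  ip address 10.10.10.100/32 secondary

interface nve1
  source-interface loopback0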
Border-leaf with VRF-lite
• Layer 3 at the border with VRF-lite (aka Inter-AS Option “A”)
• Provides connectivity for external routing (fabric shown as BGP AS# 65500)
• Interconnect using Layer-3 sub-interfaces for a multitenant-capable handoff
• Per-VRF routing adjacency based on IEEE 802.1Q tagging
• Various routing protocols available (eBGP, OSPF, EIGRP, etc.)
Border-leaf with VRF-lite (Inter-AS Option “A”)
• VTEP(s) configured on the Border-leaf
• Interface-type options:
  • Physical routed ports
  • Sub-interfaces
  • VLAN SVIs over trunk ports
• The peering interface can be in the global or a tenant VRF
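For the sub-interface option, a hedged NX-OS sketch of a per-VRF eBGP handoff (interface numbers, addresses and the external AS are illustrative; the fabric AS 65500 follows the slide):

interface Ethernet 1/48
  no switchport

# One dot1q sub-interface per tenant VRF towards the external router
interface Ethernet 1/48.100
  encapsulation dot1q 100
  vrf member VRF-A
  ip address 10.1.1.1/30
  no shutdown

# Per-VRF eBGP adjacency with the external neighbour
router bgp 65500
  vrf VRF-A
    address-family ipv4 unicast
    neighbor 10.1.1.2
      remote-as 65001
      address-family ipv4 unicast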
Cisco Nexus Fabric Manager (NFM)
Intelligent fabric lifecycle management
• Fabric-wide focus: auto-configuration and management of the fabric (connection, creation, expansion)
• Initial support for the Cisco Nexus 9000 family running stand-alone NX-OS mode
• Automation based on knowledge of the fabric; driven through a GUI and REST API, with vCenter integration
• Automated Provisioning: Group-Based Policy model, overlay provisioning, service chaining
• Open, Standards-Based: REST-based northbound APIs, multi-protocol support (EVPN, VXLAN), multi-hypervisor
Nexus Portfolio
Nexus 2k – 9k
ACI Customer Deployment
Application Centric Infrastructure (ACI)
• App-based automation
(Diagram: APIC controllers attached to a spine/leaf ACI fabric.)
ACI Fabric Overview – Attaching the ACI APIC(s)
• Tenant: logical separator for customer, BU, group, etc.; separates traffic, admin, visibility, etc.
• Context: equivalent to a VRF; separates routing instances; can be used as an admin separation
• Bridge Domain: not a VLAN, simply a container for subnets; it can be used to define an L2 boundary
• End-Point Group (EPG): container for objects requiring the same policy treatment, i.e. app tiers or services
Logical Model Overview
(Diagram: policy tree rooted at “root”, branching into Tenant A and Tenant B.)
• Fabric Hardware:
  • (3) APIC Controllers
  • (3) Nexus 9508 Spines
  • (many) Nexus 9300 Leaf switches (mix of 9396/9372/9332)
• Layer-3 Routing:
  • OSPF to the ASR9K WAN routers (vPC)
  • OSPF to Infoblox/Mainframe (treated like OSPF stub areas)
  • Static routes to FW/LB (except the extranet FWs, which use OSPF)
• L3 Multicast design:
  • ASR1K as the external mrouter interfaces
  • Multicast source information exchanged with the ASR9K via MP-BGP
ACI Policy Model – High-Level Overview
(Diagram: Tenant “X” and Tenant “Y”, each containing Contexts (VRFs) for Risk Domains “A”, “B” and “C”, holding EPGs with their endpoints (EP). L3-Outs connect to the ASR9000 (WAN), the Mainframe, the FWs, Infoblox and the Citrix-LB; static-path bindings attach the ASR1000.)
ACI Fabric – Attaching the Compute Resources to the Fabric
(Diagram: compute attached to the leafs; out-of-band (OOB) management on each component.)
ACI Fabric – Attaching the Services to the Fabric
• Checkpoint Firewall(s) (HA pair on LAN1)
• Infoblox
• Citrix Load-balancer(s)
• Extranet and Local-Internet
ACI Fabric – Attaching the VMM/Orchestration to the Fabric
• vCentre 5.5 and vCentre 6
• UCS Director
• Out-of-band Management (OOB)
ACI Fabric – Attaching the External WAN/Enterprise to the Fabric
• Dual ASR9000s connect the fabric to the Intranet/Internet
ACI Fabric – Attaching the External IP Multicast Routers to the Fabric
• ASR1000s (mrouters) attach to the leafs; the ASR9000s face the Intranet/Internet
VLAN = EPG
• Connect non-ACI networks to ACI leaf nodes
• Connect at L2 with VLAN trunks (802.1Q)
• Objective: map VLANs to EPGs, extending the policy model to non-ACI networks
ACI Policy Model: EPG-to-EPG Communication
• One EPG provides policies and another consumes them (e.g. Allow HTTP, Allow ICMP between EPG-A and EPG-n)
(Diagram: EPG1 (VLAN 311…+) paired with EPG3 (VLAN 411…+); EPG2 (VLAN 511…+) paired with EPG4 (VLAN 611…+).)
ACI Multicast Configuration (For Your Reference)
1) Create EPG
2) EPG Configuration
3) Enable flooding
4) Deploy static-path binding for the EPGs created for the external PIM interfaces
5) Attach the ASR1000 to the fabric like any other server (EPG configuration)
(Diagram: ASR1K-1 and ASR1K-2 attached with a VLAN encap of 311.)
ASR 1000 IP Multicast Configuration (VLAN-311 + others)

interface Port-channel1.311
 encapsulation dot1Q 311
 vrf forwarding “A”
 ip address 172.18.54.253 255.255.255.0
 ip pim dr-priority 10
 ip pim sparse-mode
 ip igmp version 3

interface Port-channel1.305
 encapsulation dot1Q 305
 vrf forwarding “B”
 ip address 172.18.133.254 255.255.255.0
 ip pim query-interval 15
 ip pim sparse-mode
 ip igmp version 3

interface Port-channel1.304
 encapsulation dot1Q 304
 vrf forwarding “C”
 ip address 172.18.131.254 255.255.255.0
 ip pim query-interval 15
 ip pim sparse-mode
 ip igmp version 3

Note: LLDP and CDP must be turned off on the ASR1000, since the ASR1000 shares the same MAC for all sub-interfaces, even with different dot1q encapsulations.
Showing the two ASR1000(s) sub-interfaces (VLAN-311)
ASR1K-1#show int port-channel 1.311
ASR1K-2#show int port-channel 1.311

ASR 1000 IP Multicast Configuration (VLAN-312)
(Diagram: ASR1K-1 and ASR1K-2 with a VLAN encap of 312, in a different bridge-domain.)
(Diagram: Infoblox HA pair on LAN1, subnet 172.16.0.x/28 — Infoblox-2 at .10/.12, leaf-2 at .4, router-id 0.0.0.2.)
• The floating IP .1 (SVI) does not have OSPF enabled; it is the default gateway for the Infoblox Grid management.
L3-Outside Configuration: OSPF (For Your Reference)
(Diagram: OSPF area towards the Infoblox HA pair on LAN1.)

ACI Configuration: Logical Interface (For Your Reference)
• Infoblox/Mainframe are configured as a full OSPF area; the ACI Leaf(s) are OSPF ASBRs, due to iBGP redistribution with the Spines acting as Route-Reflectors. Since the area is a full OSPF area, the Infoblox/Mainframe devices will see a default route advertised from the fabric as a Type-5 LSA.
(Diagram: anycast gateways on the fabric with VIP 20.20.20.20/32 in VRF “Intra”; VLAN 900 = 172.18.0.64/29 and VLAN 901 = 172.18.0.72/29, both in OSPF Area 0 towards the ASR9000.)

ASR9000 OSPF configuration (IOS-XR):

interface Bundle-Ether1
!
interface Bundle-Ether1.900
 vrf Intra
 ipv4 address 172.18.0.69 255.255.255.248
 encapsulation dot1q 900
!
router ospf 1
 nsr
 log adjacency changes detail
 router-id 9.1.1.1
 area 0
 !
 vrf Intra
  router-id 33.33.33.1
  default-information originate always
  redistribute bgp 3000 metric 100 metric-type 1
  address-family ipv4 unicast
  area 0
   interface Bundle-Ether1.900
    dead-interval 20
    retransmit-interval 3
    hello-interval 5
    transmit-delay 1
Verify OSPF Output: ACI border-leaf (VRF-Intra) (For Your Reference)
(Output elided. Diagram: L3Out-A and L3Out-B towards the Extranet, with OSPF Area 0.0.0.0 on each L3Out.)
Local-Internet: Logical View
1) Intra-VRF: default routes from the ASR9k into the fabric point to the Internet only
2) Other VRF(s) have their default route pointing to the Firewall; the Firewall routes to the Intranet based on FW policy
(Diagram: L3Out-A and L3Out-B, each with static routes to the FW per VRF; Context (VRF) “A” with Bridge-Domain “A” and Context (VRF) “B” with Bridge-Domain “B”, each holding EPGs Green, Orange and Black; the Intra-VRF flow arrives from the ASR9000 via OSPF Area 0.)
1. Configure OSPF and MP-eBGP between the ASR1000(s) and ASR9000(s) per VRF
8. Send multicast traffic, and verify that the remote receiver can receive the multicast traffic without loss
End-to-End Multicast
(Diagram: ASR1000 PIM multicast on the fabric side and ASR9000 PIM multicast on the WAN side; the ACI fabric (OSPF area) holds local multicast sources and receivers, with remote sources and receivers across the WAN/MAN.)
• The ASR9000 interfaces connected to the ACI border-leaf(s)/fabric do NOT have multicast (PIM) enabled.
• So the ASR9000 WAN routers will not inject multicast from remote sources into the fabric directly; it flows via the ASR1000(s).
• Likewise, the multicast sources will not send multicast traffic directly to the ASR9000(s); it also flows through the ASR1000(s).
End-to-End Multicast
• Multicast Domain #1 (ASR1000 PIM) and Multicast Domain #2 (ASR9000 PIM) are joined by an MP-eBGP session
(Diagram: the ACI fabric OSPF area with local multicast sources/receivers, and the WAN/MAN with remote multicast sources/receivers.)
Multicast Source/MSDP
Multicast traffic flows were verified and monitored under different failure scenarios:
1) Intra-VLAN: L2 multicast with sources and receivers attached to different leafs within the fabric
2) Inter-VLAN: L3 multicast with routing via the ASR1K; sources and receivers attached to different leafs within the fabric
ACI and VMM vCentre Integration
• Create the EPG in ACI
• The EPG is pushed as a port-group on the vDS
Failure Scenarios

Failure Scenarios and Outages
1) OSPF Failover: SVI – ASR9K Failure
   • ASR9K-1 power supply down
   • OSPF dead timers: Intra 20s, LRD 40s, HRD 80s
   • Traffic outage time: Intra 18s, LRD 36s
3) Unicast Traffic: Transit – Border Leaf Failure
   • Border Leaf-232 failure with unicast traffic flow
   • (Intranet-VRF) outage times: Inbound 1.4s, Outbound 1.7s