
Real World Fabric Based Network Design and Deployment
David Jansen, Distinguished Systems Engineer
dajansen@cisco.com
Agenda
• Requirements
• Fabric Roles and Definitions
• FabricPath/DFA Customer Deployment
• VXLAN Customer Deployment
• ACI Customer Deployment
• Conclusion
Requirements
Business Drivers & Solutions for Network Segmentation
Requirements:
• Multi-tenancy
• Security and separation
• Traffic engineering
• Scalable, flexible topology
• Minimise oversubscription
• Scale out and scale up
• Scalable L4-7 service layer
• No spanning tree
• Incremental scale
• Virtual FW/LB per tenant
• Flexible placement
• Incremental capacity
Business drivers: multi-tenancy, mergers and acquisitions, compliance
Solutions: VRF, shared services, L3VPN, Multicast VPN
Data Centre “Fabric” Journey
• Evolution: STP → vPC → FabricPath → VXLAN → Application Centric Infrastructure (ACI)
• FabricPath/BGP and VXLAN/EVPN fabrics both attach to the MAN/WAN
• The ACI Fabric is managed by the Application Policy Infrastructure Controller (APIC)
Fabric Roles and
Definitions
Leaf and Spine Topology – Device Roles
• Spine
  • Interconnects Leafs and Border Leafs
  • IP forwarder (East/West)
  • Route-Reflector (RR) for EVPN
  • Rendezvous-Point (RP) for the Underlay
  • Does not require VTEP
• Leaf (VTEP)
  • VXLAN Edge-Device
  • Routes/bridges Classic Ethernet frames and encapsulates them into VXLAN
  • Requires VTEP
  • Attaches Virtual Machines, Physical Machines, FEX, 3rd-party Switches, UCS FI, and Blade Switches
• Border Leaf (VTEP)
  • External connectivity to the MAN/WAN
Border-Leaf Topology – Device Roles
• Border Leaf (VTEP)
  • VXLAN Edge-Device
  • Routes and bridges Classic Ethernet frames from an outside network and encapsulates them into VXLAN (North/South)
  • Internetworks LISP/MPLS traffic from an outside network and re-encapsulates it into VXLAN (North/South)
  • Speaks IGP/EGP routing protocols with the outside network (North/South)
  • Requires VTEP
  • IPv4/IPv6 routes are exchanged with the external neighbour through the IPv4/IPv6 unicast address family within the VRF
  • Interface options: Physical Routed Ports, Sub-Interfaces, VLAN SVIs over Trunk Ports
• Per-tenant VRFs (VRF A/B/C) run a VRF OSPF process toward the External Router and map into the EVPN Overlay tenants
Services Leaf – Device Role
• Services Leaf (VTEP) attaches the L4-7 services:
  • Firewalls
  • Load balancers
  • Proxy services
  • IPS services
Note: the different leaf roles are logical and not physical. The same leaf switch could perform all three functions (regular, services, and border leaf).
Border Spine Topology – Device Roles
• Border Spine (VTEP)
  • Interconnects Leafs and Border Leafs
  • External connectivity to the MAN/WAN
  • VXLAN Edge-Device
  • Routes and bridges Classic Ethernet frames from an outside network and encapsulates them into VXLAN (North/South)
  • Decapsulates MPLS/LISP traffic from an outside network and re-encapsulates it into VXLAN (North/South)
  • Speaks IGP/EGP routing protocols with the outside network (North/South)
  • Requires VTEP
  • IP transport forwarder between Leafs (East/West)
  • Potentially hosts the Rendezvous-Point (RP) for the Underlay
  • Potentially hosts the Route-Reflector (RR) for EVPN
Minimum Maximum Transmission Unit (MTU) Guidance
• OTV: 1542 Bytes
• OTV w/UDP: 1550 Bytes (7.2 with F3 modules)
• LISP
  • IPv4: 1536 Bytes
  • IPv6: 1556 Bytes
• FabricPath: 1516 Bytes
• VXLAN: 1550 Bytes
Agenda
• FabricPath/DFA Customer Deployment
• VXLAN Customer Deployment
• ACI Customer Deployment
• Conclusion
FabricPath/DFA Customer
Deployment
DC Fabric w/FabricPath
• Externally, the Fabric looks like a single switch
• Internally, IS-IS adds Fabric-wide intelligence and ties the elements together
• Provides in a plug-and-play fashion:
  • Optimal, low-latency any-to-any connectivity
  • High bandwidth, high resiliency
  • Open management and troubleshooting
  • IS-IS for multipathing and reachability
FabricPath: Design
• Nexus 7000 FabricPath Spine (F3): Default-Gateway, Anycast-HSRP, F3 MAC scale (ARP)
• Nexus 5000 FabricPath Leaf
• UCS-FI attached at the leaf layer
Routing at FabricPath Spine
• Anycast HSRP between the aggregation (spine) switches; the L2/L3 boundary sits at the spine
• All Anycast HSRP forwarders share the same VIP and VMAC: each spine SVI uses Gateway IP X and Gateway MAC A
• Within FabricPath, GWY MAC A is reachable via all spines (L1, L2, L3, L4)
• Hosts resolve the shared VIP to the shared VMAC
• Routed traffic is spread over the spines based on ECMP
(Diagram legend: Layer 3 Link, Layer 2 CE, Layer 2 FabricPath)
FabricPath: Services

• Nexus 7000 FabricPath Spine (F3): Default-Gateway, Anycast-HSRP
• Nexus 5000 FabricPath Leaf; services attach at the leaf layer
FabricPath: Traffic flows
• Hosts attach via FabricPath (or) vPC
• Nexus 7000 FabricPath Spine (F3) provides the Default-Gateway
• Intra-VRF and Inter-VRF flows are routed at the spine
• Nexus 5000 FabricPath Leaf
FabricPath: External / WAN Connectivity
• Spine/leaf architecture
• FabricPath for L2 multi-pathing
• MPLS integration to the WAN (MPLS, WAN, Internet, Campus)
• No spanning-tree
• Default gateway at the spine layer
• ASA for the firewall layer
• Nexus 5600 DC Access
• ASR 9000 edge routers (MPLS/LISP) as the MPLS PE layer; Nexus 7000 FP Spine (F3)
Note:
• F3 simplifies the deployment, with MPLS and FabricPath support on the same module
• Previously we leveraged F2 for FabricPath (VDC) and M2 for MPLS connectivity (VDC)
Stand Alone Fabric (FabricPath/DFA)

• Fabric Management
• Workload Automation
• Optimised Networking
• Virtual Fabrics
Bundled functions are modular and flexible, and follow your choice of integration and speed of adoption.
Standalone Fabric (FabricPath/DFA)
Host and Subnet Route Distribution
• DC Fabric with a FabricPath-based data plane and an MP-iBGP control plane
• MP-iBGP on the leaf nodes distributes internal host/subnet routes and external reachability information
• Route-Reflectors (RR) are deployed for scaling purposes; N1KV/OVS at the virtual access
• Fabric host/subnet routes are injected at the leafs; external subnet routes are injected at the border toward the MAN/WAN
• Segment ID was introduced to increase the name space to 16M identifiers in the fabric
Optimised Networking
Distributed Gateway Mode
• A Distributed Gateway exists on all Leafs where the VLAN/Segment-ID is active
• No HSRP
• There are different forwarding modes for the Distributed Gateway:
  • Proxy-Gateway (Enhanced Forwarding)
    • Leverages local proxy-ARP
    • Intra- and Inter-Subnet forwarding based on Routing
    • Contains floods and failure domains to the Leaf
  • Anycast-Gateway (Traditional Forwarding)
    • Intra-Subnet forwarding based on Bridging
    • Data-plane based conversational learning for endpoint MAC addresses
    • ARP is flooded across the fabric

Configuration example:
vlan 123
  vn-segment 30000
!
interface vlan 123
  vrf member OrgA:PartA
  fabric forwarding mode proxy-gateway
  ip address 10.10.10.1/24
  no ip redirects
  no shutdown
!
vlan 145
  vn-segment 31000
!
interface vlan 145
  vrf member OrgA:PartA
  fabric forwarding mode anycast-gateway
  ip address 20.20.20.1/24
  no shutdown
IP Forwarding Between Fabrics Across L3 Based DCI

• Two FabricPath fabrics, BGP AS#100 and BGP AS#200, each with a Border-leaf
• Each Border-leaf peers eBGP with its local Edge router
• Inter-DC Core (Layer-3 IP/MPLS) in BGP AS#65500
• Control-Plane peering (eBGP) with the local Edge-Router only; no multi-hop peering
DFA Border-Leaf – Control-Plane Connectivity
Routed connection to Core-Network (e.g. WAN)
• External-BGP session to the Edge-Router, similar to the MPLS CE-PE routing concept (VRF-lite)
• One dedicated eBGP session per DFA Virtual-Fabric (VRF), including the Backbone-Network (default VRF)

Configuration example (Border-leaf in BGP AS#100; Edge router in BGP AS#65500):
router bgp 100
  fabric-soo 100:1
  [snip]
  neighbor 10.254.254.2 remote-as 65500
    description BACKBONE (DEFAULT VRF)
    peer-type fabric-external
    address-family ipv4 unicast
    address-family ipv6 unicast
  vrf Ciscolive
    address-family ipv4 unicast
    neighbor 10.254.254.2 remote-as 65500
      description VF:Ciscolive
      peer-type fabric-external
      address-family ipv4 unicast
        send-community extended
DCNM
Infrastructure Provisioning Platform
• Enterprise HA database support using an internal DB
• Updated northbound REST APIs [Northbound]
• Scale >1000 switches; higher potential with DCNM clustering
• POAP support with templates for VXLAN-EVPN
• Topology views for Physical, L2, L3, VXLAN & vPC overlays
• NXAPI for Southbound APIs; reduced reliance on SNMP/Netconf [Southbound]
• Modular device packs/drivers for more rapid platform [HW/SW] updates
• Manages the Nexus platforms: N5000, N7000, N9000
DCNM
Infrastructure Provisioning Platform
• New GUI using HTML5 for a completely new user experience
• No Java LAN Client – simplifies client operation
• Config and delta-config management
• Multi-site support – single-pane management view and template sync across multiple sites/clusters
VXLAN Customer
Deployment
The Underlay
Deployment Considerations: Underlay

• MTU and Overlays

• Unicast Routing Protocol and IP Addressing

• Multicast for BUM Traffic Replication


MTU and VXLAN: Underlay
• VXLAN adds 50 Bytes to the original Ethernet frame
• Avoid fragmentation by adjusting the IP network's MTU
• Data Centres often require jumbo MTU; most server NICs support up to 9000 Bytes
• Using an MTU of 9216* Bytes accommodates the VXLAN overhead plus the server maximum MTU

*Cisco Nexus 5600/6000 switches only support 9192 Bytes for Layer-3 traffic
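The 50-Byte figure is simply the sum of the added headers (a worked example; add 4 more Bytes if the outer header carries an 802.1Q tag):

  Outer Ethernet header : 14 Bytes
  Outer IPv4 header     : 20 Bytes
  Outer UDP header      :  8 Bytes
  VXLAN header          :  8 Bytes
  Total overhead        : 50 Bytes

So a 1500-Byte host frame needs at least a 1550-Byte underlay MTU, and a 9000-Byte jumbo frame needs at least 9050; 9216 (or 9192) leaves comfortable headroom.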
Building Your IP Network – Interface Principles
• Know your IP addressing and IP scale requirements
• Best to use a single aggregate for all Underlay links and Loopbacks
• IPv4 only
• For each Point-2-Point (P2P) connection, a /31 minimum is required
• Loopbacks require a /32
• Routed Ports/Interfaces:
  • Layer 3 interfaces between Spine (S1-S4) and Leaf (L1-L3), no switchport
  • The VTEP uses a Loopback as its Source-Interface
Building Your IP Network – Interface Configuration
Interface Configuration Example for (L1):

# Loopback Interface Configuration (VTEP)
interface loopback 0
  ip address 10.10.10.L1/32
  mtu 9192

# Point-2-Point (P2P) Interface Configuration
interface Ethernet 2/1
  no switchport
  ip address 192.168.1.1/31
  mtu 9192

interface Ethernet 2/2
  no switchport
  ip address 192.168.1.3/31
  mtu 9192
IP Unnumbered – Simplifying The Principles
• IP Unnumbered – a single IP address for multiple interfaces
• Remember way-back when… on serial interfaces
• Used for Layer 3 interfaces between Spine and Leaf (no switchport)
• For each switch in the fabric, a single IP address is sufficient
• Loopback for VTEP
• IP Unnumbered from the Loopback for routed interfaces

Note: IP Unnumbered has cross-platform support; Nexus 9000 added it in 7.0(3)I3(1)
IP Unnumbered – Interface Configuration

Interface Configuration Example for (L1):

# Loopback Interface Configuration (VTEP & IP Unnumbered)
interface loopback 0
  ip address 10.10.10.L1/32
  mtu 9192

# Point-2-Point (P2P) Interface Configuration
interface Ethernet 2/1
  no switchport
  ip unnumbered loopback 0
  mtu 9192

interface Ethernet 2/2
  no switchport
  ip unnumbered loopback 0
  mtu 9192

Check platform & release support for Ethernet IP Unnumbered


IP Unnumbered – Simplifying The Math

Example from the topology:
• 4 Spine + 3 Leaf = 7 individual devices
• = 7 IP addresses for Loopback interfaces (used for VTEP & routed interfaces; IP Unnumbered)
• 7 IP addresses required == /29 prefix

A more realistic scenario:
• 4 Spine + 40 Leaf = 44 individual devices
• = 44 IP addresses for Loopback interfaces (used for VTEP & routed interfaces; IP Unnumbered)
• 44 IP addresses required == /26 prefix

Check platform & release support for Ethernet IP Unnumbered
Building Your IP Network – Routing Protocols: OSPF

• OSPF – watch your network type
• Network Type Point-2-Point (P2P)
  • Preferred (only LSA Type-1)
  • No DR/BDR election
  • Suits routed interfaces/ports well (optimal from an LSA database perspective)
  • Full SPF calculation on link change
• Network Type Broadcast
  • Suboptimal from an LSA database perspective (LSA Type-1 & 2)
  • DR/BDR election
  • Additional election and database overhead
Building Your IP Network – Routing Protocols: OSPF
Configuration Example for (L1):

# Loopback Interface Configuration (VTEP)
interface loopback 0
  ip address 10.10.10.L1/32
  mtu 9192
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

# Point-2-Point (P2P) Interface Configuration
interface Ethernet 2/1
  no switchport
  ip address 192.168.1.1/31
  mtu 9192
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
Underlay Deployment with
Multicast Routing
Multicast-enabled Underlay
Multicast Mode by platform:
• Nexus 1000v: IGMP v2/v3
• Nexus 3000: PIM ASM
• Nexus 5600: PIM BiDir
• Nexus 7000/F3: PIM ASM / PIM BiDir
• Nexus 9000: PIM ASM
• ASR 9000: PIM BiDir
• ASR 1000 / CSR 1000: PIM ASM / PIM BiDir

• PIM-ASM or PIM-BiDir (different hardware has different capabilities)
• Spine and aggregation switches make good Rendezvous-Points (RP), much like RRs
• PIM-ASM (sparse-mode)
  • Source trees: builds unidirectional trees from the RP; (S,G)
  • Every VTEP is both source and destination
  • PIM-Anycast RP vs MSDP, for example
• PIM-BiDir
  • No source trees; uses a bi-directional shared tree
  • No (S,G); we have (*,G)
  • Phantom RP (leverages unicast routing for convergence), as sketched below
• Each VNI does not need the same multicast group; they can be different
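A minimal Phantom-RP sketch for the PIM-BiDir case (addresses and group range are hypothetical). The RP address 192.0.2.1 is deliberately not configured on any device; the two spines advertise the subnet containing it with different prefix lengths, so the longest match steers traffic to the primary and unicast reconvergence handles RP failover:

# Spine S1 (primary; /30 wins the longest-match lookup)
ip pim rp-address 192.0.2.1 group-list 239.0.0.0/8 bidir
interface loopback 254
  ip address 192.0.2.2/30
  ip ospf network point-to-point    # advertise the subnet, not a /32 host route
  ip router ospf 1 area 0.0.0.0
  ip pim sparse-mode

# Spine S2 (secondary; /29 is used only if the /30 disappears)
ip pim rp-address 192.0.2.1 group-list 239.0.0.0/8 bidir
interface loopback 254
  ip address 192.0.2.3/29
  ip ospf network point-to-point
  ip router ospf 1 area 0.0.0.0
  ip pim sparse-mode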
Multicast-enabled Underlay – PIM ASM (For Your Reference)

Configuration Example for (Spine S1):
# Anycast-RP Configuration
ip pim rp-address 10.10.10.anycast
ip pim anycast-rp 10.10.10.anycast 10.10.10.S1
ip pim anycast-rp 10.10.10.anycast 10.10.10.S2

# Loopback Interface Configuration (RP)
interface loopback 0
  ip address 10.10.10.S1/32
  mtu 9192
  ip pim sparse-mode

# Loopback Interface Configuration (Anycast RP)
interface loopback 1
  ip address 10.10.10.anycast/32
  mtu 9192
  ip pim sparse-mode

Configuration Example for (Leaf L1):
# Using the Anycast Rendezvous-Point
ip pim rp-address 10.10.10.anycast

# Loopback Interface Configuration (VTEP)
interface loopback 0
  ip address 10.10.10.L1/32
  mtu 9192
  ip pim sparse-mode

# Point-2-Point (P2P) Interface Configuration
interface Ethernet 2/1
  no switchport
  ip address 192.168.1.1/31
  mtu 9192
  ip pim sparse-mode

(RP = Rendezvous-Point)
Multicast Replication for VXLAN EVPN
Handling of VXLAN Overlay BUM Traffic
• Broadcast/Unknown-unicast/Multicast (BUM) traffic in a VXLAN overlay network can be transported through the underlay network.
• Each VNI is mapped to a multicast group. BUM traffic in the VNI is encapsulated into multicast packets using this multicast group as the outer destination IP address, then sent to the remote VTEPs using underlay multicast replication and forwarding.

Flood-&-Learn mode VXLAN:
vlan 2
  vn-segment 4098
interface nve 1
  member vni 10000
    mcast-group 225.1.1.1

VXLAN EVPN:
vlan 200
  vn-segment 20000
interface nve 1
  host-reachability protocol bgp
  member vni 20000
    mcast-group 225.1.1.1
Introducing VXLAN/EVPN Overlay
Overlay with Optimised Routing
EVPN Control Plane – Host and Subnet Route Distribution
• Multiprotocol BGP with the EVPN Address-Family carries:
  • Host MAC
  • Host IP
  • Internal IP subnets
  • External prefixes
• Scalable multi-tenancy
• BGP enhanced for fast convergence at large scale
• Extensions for fast and seamless host mobility
• Distributed gateway with traffic-flow symmetry
• ARP suppression
• Route-Reflectors (RR) on the spine, deployed for scaling purposes (iBGP)
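A minimal NX-OS sketch of the spine-side route-reflector piece of this picture (AS number and addresses are hypothetical; each leaf VTEP is configured as a client with the same neighbor template):

router bgp 65501
  router-id 10.0.0.111
  address-family l2vpn evpn
  neighbor 10.0.0.1
    remote-as 65501                 # iBGP to a leaf VTEP loopback
    update-source loopback0
    address-family l2vpn evpn
      send-community extended       # carries RTs, VNIs, Router-MAC
      route-reflector-client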
Distributed IP Anycast Gateway
Any subnet routed anywhere – any VTEP can serve any subnet
• Integrated Route & Bridge (IRB): route whenever you can, bridge when needed
• No hairpinning – optimised East/West and North/South routing
• Seamless mobility – all Leafs share the same Gateway MAC
• Reduced failure domain – Layer-2/Layer-3 boundary at the Leaf
• Optimal scalability – route distribution closest to the host
Example from the figure: SVI 100 (Gateway IP 192.168.1.1) and SVI 200 (Gateway IP 10.10.10.1) exist on multiple leafs. Host1 (192.168.1.11) and Host3 (192.168.1.33) are bridged within VLAN 100 / VXLAN VNI 30001, while Host2 (10.10.10.22, VLAN 200 / VNI 30002) is routed at its local leaf.
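A minimal NX-OS sketch of the anycast gateway on a leaf (the SVI addressing follows the figure; the virtual MAC value and the VRF name are hypothetical). The identical configuration is repeated on every leaf serving the subnet, which is what makes the gateway "distributed":

fabric forwarding anycast-gateway-mac 2020.0000.00aa   # same VMAC on all leafs
!
interface Vlan100
  vrf member Tenant-A
  ip address 192.168.1.1/24
  fabric forwarding mode anycast-gateway
  no shutdown
!
interface Vlan200
  vrf member Tenant-A
  ip address 10.10.10.1/24
  fabric forwarding mode anycast-gateway
  no shutdown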
IP Fabric Overlay Taxonomy (1)
• Edge Device: connects local LAN segments to the fabric via an IP interface
• Hosts may be physical (directly on a local LAN segment of an edge device) or virtual (behind a virtual switch)

IP Fabric Overlay Taxonomy (2)
• VTEP – VXLAN Tunnel End-Point: the encapsulation point between the local LAN segment and the VXLAN overlay
• A VTEP (V) can reside in a physical edge device or in a virtual switch
• VNI/VNID – VXLAN Network Identifier
MP-BGP EVPN Route Type 2
MP-BGP EVPN Route Type 2 – MAC/IP Advertisement Route
• Route Type 2 provides end-host reachability information
• The following fields are part of the EVPN prefix in the NLRI:
  • Ethernet Tag ID (zeroed out)
  • MAC Address Length (/48), MAC Address
  • IP Address Length (/32, /128), IP Address [Optional]
• Additional route attributes:
  • Ethernet Segment Identifier (ESI) (zeroed out)
  • MPLS Label1 (L2VNI)
  • MPLS Label2 (L3VNI)

NLRI format:
  RD (8 octets)
  ESI (10 octets)
  Ethernet Tag ID (4 octets)
  MAC Address Length (1 octet)
  MAC Address (6 octets)
  IP Address Length (1 octet)
  IP Address (0, 4, or 16 octets)
  MPLS Label1 (3 octets)
  MPLS Label2 (0 or 3 octets)
MP-BGP EVPN Route Type 5
MP-BGP EVPN Route Type 5 – IP Prefix Route
• Route Type 5 provides IP prefix advertisement in EVPN
• RT-5 decouples the IP prefix from the MAC (RT-2) and provides flexible advertisement of IPv4 and IPv6 prefixes with variable length
• The following fields are part of the EVPN prefix in the NLRI:
  • IP Prefix Length (0-32 bits for IPv4 or 0-128 bits for IPv6)
  • IP Prefix (IPv4 or IPv6)
  • GW IP Address
  • MPLS Label (L3VNI)

NLRI format:
  RD (8 octets)
  ESI (10 octets)
  Ethernet Tag ID (4 octets)
  IP Prefix Length (1 octet)
  IP Prefix (4 or 16 octets)
  GW IP Address (4 or 16 octets)
  MPLS Label (3 octets)
Decoding a Route Type 2 entry:

V2# show bgp l2vpn evpn 192.168.1.73
BGP routing table information for VRF default, address family L2VPN EVPN
Route Distinguisher: 10.0.0.1:32868
BGP routing table entry for [2]:[0]:[0]:[48]:[0050.56a3.c2bb]:[32]:[192.168.1.73]/272, version 4
Paths: (1 available, best #1)
Flags: (0x000202) on xmit-list, is not in l2rib/evpn, is locked

  Advertised path-id 1
  Path type: internal, path is valid, is best path, no labeled nexthop
  AS-Path: NONE, path sourced internal to AS
    10.0.0.1 (metric 3) from 10.0.0.111 (10.0.0.111)
      Origin IGP, MED not set, localpref 100, weight 0
      Received label 30001 50001
      Extcommunity: RT:65501:30001 RT:65501:50001 ENCAP:8 Router MAC:5087.89d4.5495
      Originator: 10.0.0.1 Cluster list: 10.0.0.111

Annotations:
• NLRI fields, in order: Route Type (2 – MAC/IP), Ethernet Segment Identifier, Ethernet Tag Identifier, MAC Address Length (48), MAC Address, IP Address Length (32), IP Address
• Received labels: L2VNI 30001 and L3VNI 50001
• Route Targets: RT for the L2VNI (VLAN) and RT for the L3VNI (VRF)
• ENCAP:8 = VXLAN encapsulation; Router MAC of the remote VTEP; the next-hop 10.0.0.1 is the remote VTEP IP address
Protocol Learning & Distribution (Step 1)
• VTEPs advertise end-host reachability information (MAC, IP) within MP-BGP
• V1 learns locally: MAC_A, IP_A → L2VNI 30000, L3VNI 50000, NH local
• V2 learns locally: MAC_B, IP_B → L2VNI 30000, L3VNI 50000, NH local
• V3 (virtual switch) learns locally: MAC_C, IP_C → L2VNI 30000 and MAC_Y, IP_Y → L2VNI 30001, both L3VNI 50000, NH local

Protocol Learning & Distribution (Step 2)
• The BGP Route-Reflector "reflects" the overlay reachability information to the other VTEPs

Protocol Learning & Distribution (Step 3)
• VTEPs receive the respective reachability information and install it, subject to route-policy, into the RIB/FIB
• V1 now also holds: MAC_B, IP_B → NH IP_V2; MAC_C, IP_C → NH IP_V3; MAC_Y, IP_Y → NH IP_V3
• V2 now also holds: MAC_A, IP_A → NH IP_V1; MAC_C, IP_C → NH IP_V3; MAC_Y, IP_Y → NH IP_V3
• V3 now also holds: MAC_A, IP_A → NH IP_V1; MAC_B, IP_B → NH IP_V2
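On an NX-OS VTEP these tables can be inspected directly; a quick verification sketch (output omitted, commands shown for orientation):

show l2route evpn mac all        # MACs learned locally and via BGP EVPN
show l2route evpn mac-ip all     # MAC/IP bindings (feeds ARP suppression)
show bgp l2vpn evpn summary      # state of the VTEP/RR BGP sessions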
Multitenancy
What is Multi-Tenancy?
• A mode of operation where multiple independent instances (tenants) operate in a shared environment.
• Each instance (i.e. VRF/VLAN) is logically isolated, but physically integrated.
Where can we apply Multi-Tenancy?
Multi-Tenancy at Layer-2:
• Per-Switch VLAN-to-VNI mapping
• Per-Port VLAN significance
Multi-Tenancy at Layer-3:
• VRF-to-VNI mapping
• MP-BGP for scaling with VPNs
Layer-2 Multi-Tenancy
• Host1 (MAC AA:AA:AA:AA:AA:AA, IP 192.168.1.11) and Host3 (MAC CC:CC:CC:CC:CC:CC, IP 192.168.1.33) sit in VLAN 100 on different leafs
• Both VLANs map to VXLAN VNI 30001; traffic is bridged across the fabric
Layer-2 Multi-Tenancy – Bridge Domains
The Bridge Domain is the Layer-2 segment from host to host; in the example, Host1 and Host3 share the VXLAN Overlay (VNI 30001).
In VXLAN, the Bridge Domain consists of three components:
1) The Ethernet segment (VLAN) between host and switch
2) The hardware resources (Bridge Domain) within the switch
3) The VXLAN segment (VNI) between switch and switch
VLAN-to-VNI Mapping
• Host1, Host2 and Host3 all use VLAN 100 on their respective leafs
• VLAN 100 maps to VXLAN VNI 30001 on every switch
CLI Modes – VLAN based (per-Switch) (For Your Reference)

Leaf#1:
vlan 100
  vn-segment 30001

Leaf#2:
vlan 100
  vn-segment 30001

• VLAN-to-VNI configuration on a per-switch basis
• The VLAN becomes the "Switch Local Identifier"
• The VNI becomes the "Network Global Identifier"
Per-Switch VLAN-to-VNI Mapping
• Host1 and Host2 use VLAN 100; Host3's leaf uses VLAN 200
• Both VLANs map to the same VXLAN VNI 30001
CLI Modes – VLAN based (per-Switch) (For Your Reference)

Leaf#1:
vlan 100
  vn-segment 30001

Leaf#2:
vlan 200
  vn-segment 30001

• VLAN-to-VNI configuration on a per-switch basis
• The VLAN becomes the "Switch Local Identifier"
• The VNI becomes the "Network Global Identifier"
• The 4k VLAN limitation has been removed
Per-Port VLAN-to-VNI Mapping
• Host1 uses VLAN 100, Host2 VLAN 200, and Host3 VLAN 300, each on its own port
• All three map to the same VXLAN VNI 30001
CLI Modes – VLAN based (per-Port) (For Your Reference)

Leaf#1:
vlan 2500
  vn-segment 30001

interface Ethernet 1/8
  switchport mode trunk
  switchport vlan mapping enable
  switchport vlan mapping 100 2500

interface Ethernet 1/9
  switchport mode trunk
  switchport vlan mapping enable
  switchport vlan mapping 200 2500
Layer-3 Multi-Tenancy
• VRF-A (VNI 50001) and VRF-B (VNI 50002)
• SVI 100, Gateway IP 192.168.1.1 (VRF-A); SVI 200, Gateway IP 10.10.10.1 (VRF-B); SVI 300, Gateway IP 172.16.1.1 (VRF-B)
• Host1 192.168.1.11 (VRF-A, VLAN 100); Host2 10.10.10.22 (VRF-B, VLAN 200); Host3 172.16.1.33 (VRF-B, VLAN 300)
• Routing between subnets stays within the tenant VRF
Layer-3 Multi-Tenancy – VRF-VNI or L3VNI
• Each leaf hosts the SVIs for its local VRFs: SVI 100 (VRF-A), SVI 200 and SVI 300 (VRF-B)
• VRF-A (VNI 50001) and VRF-B (VNI 50002) each form a routing domain across the leafs
Layer-3 Multi-Tenancy – VRF-VNI or L3VNI
The Routing Domain is the VRF owning multiple subnets across multiple switches (VRF-A = VNI 50001, VRF-B = VNI 50002).
In VXLAN EVPN, the Routing Domain consists of three components:
1) The routing domain (VRF), local to the switch
2) The routing domain (L3VNI) between the switches
3) Multi-Protocol BGP with the EVPN Address-Family
Layer-3 Multi-Tenancy – VXLAN EVPN (For Your Reference)

VRF configuration (identical on both leafs; VXLAN L3VNI 50001 carries VRF-A and L3VNI 50002 carries VRF-B between them):
vrf context VRF-A
  vni 50001
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
vrf context VRF-B
  vni 50002
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn

Leaf#1:
router bgp 65500
  address-family ipv4 unicast
  neighbor 1.1.1.2 remote-as 65500
    address-family l2vpn evpn
      send-community extended
  vrf VRF-A
    address-family ipv4 unicast
      advertise l2vpn evpn
  vrf VRF-B
    address-family ipv4 unicast
      advertise l2vpn evpn

Leaf#2:
router bgp 65500
  address-family ipv4 unicast
  neighbor 1.1.1.1 remote-as 65500
    address-family l2vpn evpn
      send-community extended
  vrf VRF-A
    address-family ipv4 unicast
      advertise l2vpn evpn
  vrf VRF-B
    address-family ipv4 unicast
      advertise l2vpn evpn

SVIs 100/200/300/400 serve the hosts: Host1 192.168.1.11 (VRF-A, VLAN 100, VNI 30001), Host2 10.10.10.22 (VRF-B, VLAN 200, VNI 30002), Host3 172.16.1.33 (VRF-B, VLAN 300, VNI 30003), Host4 10.44.44.44 (VRF-A, VLAN 400, VNI 30004).
Integrated Route & Bridge + Multi-Tenancy
VRF-A (VNI 50001)

RR RR
Spine SVI 100, Gateway IP: 192.168.1.1 (VRF-A)
SVI 200, Gateway IP: 10.10.10.1 (VRF-A)

V
SVI 100
Host3
V MAC: CC:CC:CC:CC:CC:CC
V SVI 200 IP: 192.168.1.33 (VRF-A)
VLAN 100
V VXLAN VNI 30001

V Host2
MAC: BB:BB:BB:BB:BB:BB
V IP: 10.10.10.22 (VRF-A)
SVI 100 VLAN 200
Host1 VXLAN VNI 30002
MAC: AA:AA:AA:AA:AA:AA
IP: 192.168.1.11 (VRF-A)
VLAN 100
VXLAN VNI 30001
Layer 4-7 Services
Integration
Service Chain: Firewall + Load Balancer
1) The load balancer is deployed in one-arm mode with source NAT (SNAT).
2) Load balancer and firewall form a service chain.
3) The firewall is the first device in the service chain, protecting the load balancer.
4) Servers leverage the anycast gateway in both examples.

Service Chain Load Balancer (SNAT) + Firewall Services, logical flow:
• Client → Router (VLAN 100, VRF-C) → Firewall (VLAN 101 transit, VRF-C → VLAN 21 anycast gateway, VRF-A) → Load-balancer w/SNAT (VIP, one-arm on VLAN 20, VRF-A) → transit VLANs 41/40 (VRF-A) → Servers (VLAN 40; default-gateway = ToR anycast gateway); default route 0.0.0.0/0 on both firewall legs
• Per-hop view: Client → VIP; VIP → VLAN101 (VRF-A); VIP → VLAN21 (VRF-A); VLAN40 → VLAN41 (VRF-A); VIP → VLAN40 (VRF-A)
• Legend: fabric with distributed anycast gateways
Service Chain Load Balancer (SNAT) + Firewall Services
• Firewall is the first device in the service chain
• Load balancer is the second device in the service chain
• Source NAT is implemented on the load balancer
• The fabric provides the anycast gateway
• Traffic is symmetric in both directions for the LB + FW
• Additional VIP(s) can be implemented in this model
Topology: Client → VLAN 100 (VRF-C) → Firewall → VLAN 21 (VRF-A) → Load balancer (VIP1 192.168.40.110/32; real servers 10.10.10.100 and 10.10.10.101 on VLAN 40, VRF-A) → VLAN 41; anycast gateways 192.168.40.1/24 and 10.10.10.1/24
External Connectivity
VXLAN and Interaction with Spanning-Tree
• Spanning-Tree and VXLAN
  • VXLAN has no integration with Spanning-Tree for loop protection
  • VXLAN does not forward BPDUs
  • Loop-free topologies are required southbound of VXLAN Edge-Devices
  • Use vPC to provide Ethernet-based loop-free topologies
VXLAN and Interaction with Spanning-Tree
• Spanning-Tree and VXLAN
  • Virtual Port-Channel (vPC) allows safe integration with Spanning-Tree
  • No loop protection is required, given the logical loop-free topology
• Note: follow best practices to protect the network border, as in Classic Ethernet networks (see the sketch below):
  • BPDU Guard
  • Root Guard
  • Storm Control
  • etc.
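A minimal NX-OS sketch of such border protection on a VTEP's host-facing trunk (interface and threshold are hypothetical):

interface Ethernet1/10
  switchport mode trunk
  spanning-tree port type edge trunk    # host port; no topology change on link flap
  spanning-tree bpduguard enable        # err-disable the port if a BPDU arrives
  storm-control broadcast level 1.00    # cap broadcast at 1% of link bandwidth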
Virtual Port-Channel (vPC) Concept
• The VXLAN vPC Domain follows a configuration similar to Classic Ethernet
• There are some VXLAN specifics for the vPC peer-link configuration
• With vPC, an additional common secondary IP address is attached to the VTEP – an Anycast IP for the VTEP. In the figure, V1 = 10.10.10.1/32 and V2 = 10.10.10.2/32 primary, both sharing 10.10.10.254/32 secondary; Host A (192.168.1.101) is dual-attached.
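A minimal sketch of the vPC VTEP specifics on V1 (V2 mirrors it with its own primary address; the shared secondary is the anycast VTEP IP that remote VTEPs see as the next-hop):

interface loopback0
  ip address 10.10.10.1/32                # unique primary per vPC peer
  ip address 10.10.10.254/32 secondary    # shared anycast VTEP address
!
interface nve1
  source-interface loopback0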
Border-leaf with VRF-lite
• Layer 3 @ Border with VRF-lite, aka Inter-AS Option "A"
• Provides external routing connectivity
• Interconnect using sub-interfaces for a multitenant-capable handoff
• Per-VRF routing adjacency based on IEEE 802.1Q tagging
• Various routing protocols available (eBGP, OSPF, EIGRP, etc.)
• Border-leafs BL1/BL2 connect via Layer-3 sub-interfaces; the fabric is BGP AS# 65500
Border-leaf with VRF-lite (Inter-AS Option "A")
• VTEP(s) are configured on the Border-leaf (fabric BGP AS# 65500; external BGP AS# 65599)
• A VRF for external routing needs to exist on the Border Leaf
• Interface-type options:
  • Physical Routed Ports
  • Sub-Interfaces
  • VLAN SVIs over Trunk Ports
• The peering interface can be in the Global or a Tenant VRF
Border-leaf with eBGP VRF-lite Configuration

Border-leaf (BGP AS# 65500):
# Sub-Interface Configuration
interface Ethernet1/1
  no switchport
interface Ethernet1/1.10
  mtu 9216
  encapsulation dot1q 10
  vrf member VRF-A
  ip address 10.254.254.1/30

# eBGP Configuration
router bgp 65500
  vrf VRF-A
    address-family ipv4 unicast
      advertise l2vpn evpn
      aggregate-address 10.0.0.0/8 summary-only
    neighbor 10.254.254.2 remote-as 65599
      update-source Ethernet1/1.10
      peer-type fabric-external
      address-family ipv4 unicast
        send-community both

External router (BGP AS# 65599):
# Interface Configuration
interface Ethernet1/1.10
  mtu 9216
  encapsulation dot1q 10
  vrf member VRF-A
  ip address 10.254.254.2/30

# eBGP Configuration
router bgp 65599
  …
  vrf VRF-A
    address-family ipv4 unicast
    neighbor 10.254.254.1 remote-as 65500
      update-source Ethernet1/1.10
      address-family ipv4 unicast
DCNM
Infrastructure Provisioning Platform
• Enterprise HA database support using an internal DB
• Updated northbound REST APIs [Northbound]
• Scale >1000 switches; higher potential with DCNM clustering
• POAP support with templates for VXLAN-EVPN
• Topology views for Physical, L2, L3, VXLAN & vPC overlays
• NXAPI for Southbound APIs; reduced reliance on SNMP/Netconf [Southbound]
• Modular device packs/drivers for more rapid platform [HW/SW] updates
• Manages the Nexus platforms: N5000, N7000, N9000
Cisco Nexus Fabric Manager (NFM)
Intelligent fabric lifecycle management
• Fabric-wide focus – auto-configuration and management of the fabric
• Initial support for the Cisco Nexus 9000 Family running stand-alone NX-OS mode
• Automation based on knowledge of the underlying fabric architecture
• Designed to simplify fabric management through its various lifecycle phases (Connection, Creation, Expansion, Reporting, Fault Management)
• Delivered via a VXLAN-based architecture
Cisco VTS: Virtual Topology System
Overlay Controller (vCentre GUI, REST API)

Automated Provisioning:
• Group Based Policy model
• Overlay provisioning
• Service chaining

Open, Standards Based:
• REST-based Northbound APIs
• Multi-protocol support (EVPN, VXLAN)
• Multi-Hypervisor

Scalable Multi-Tenancy:
• MP-BGP EVPN control plane
• Physical and virtual overlay support
• High-performance virtual forwarding

VTS Overlay Management:
• Automatic topology discovery
• Resources management
• Overlay monitoring and troubleshooting
Programmable Fabric (VXLAN)

Nexus Portfolio
Nexus 2k – 9k
ACI Customer
Deployment
Application Centric Infrastructure (ACI)
• App-based automation via the APIC
• Automated L4-7 stitching
• Turnkey network automation
ACI Fabric Overview
Spine and Leaf Architecture / Design
• Two tiers: Spine and Leaf

ACI Fabric Overview
Attaching the ACI APIC(s)
• APIC controllers attach to leaf switches
• Out-of-band Management (OOB)
Defining Terms

• Tenant: logical separator for a customer, BU, group, etc.; separates traffic, admin, visibility, etc.
• Context: equivalent to a VRF; separates routing instances; can be used as an admin separation
• Bridge Domain: not a VLAN, simply a container for subnets; it can be used to define an L2 boundary
• End-Point Group (EPG): container for objects requiring the same policy treatment, i.e. app tiers or services
Logical Model Overview
• root → Tenant A, Tenant B
• Tenant A → Context A, Context B; Tenant B → Context A
• Each Context contains Bridge Domains; Bridge Domains contain Subnets (A, B, C, …)
• EPGs (A through E) attach to the Bridge Domains
• Contexts and subnets are independent between tenants
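The same hierarchy can be created programmatically; a hedged sketch of the equivalent APIC REST payload (class names are from the ACI management information tree, object names and the subnet are illustrative; POSTed to /api/mo/uni.xml):

<fvTenant name="TenantA">
  <fvCtx name="ContextA"/>                      <!-- Context = VRF -->
  <fvBD name="BD-A">
    <fvRsCtx tnFvCtxName="ContextA"/>           <!-- bind BD to the Context -->
    <fvSubnet ip="192.168.1.1/24"/>             <!-- subnet container in the BD -->
  </fvBD>
  <fvAp name="App1">
    <fvAEPg name="EPG-A">
      <fvRsBd tnFvBDName="BD-A"/>               <!-- EPG lives in BD-A -->
    </fvAEPg>
  </fvAp>
</fvTenant>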
Design / Deployment Requirements
• Greenfield Deployment

• Fabric Hardware:
• (3) APIC Controllers
• (3) Nexus 9508 Spines
• (many) Nexus 9300 Leaf switches (mix of 9396/9372/9332)

• Enterprise compute block:


• (3) vCentres / (4) vDS
• Services blocks: FW, LB, Infoblox, mainframe
• 9332 connecting to ASR9K belong to these blocks
• Compute UCS-B blades and UCS-FI

• The design is taking a network-centric approach:


• VLAN is mapped to EPG/BD
• Contract is permit-any for all the EPGs

• Each risk domain is mapped to context (VRF) in ACI:


• Communication within the same risk domain between different sites goes through the WAN router within the corresponding VRF.
• Inter-context communication goes through the firewall policy.
Design / Deployment Requirements
• Default gateway is on ACI for BDs, with one exception: the load balancer deployed in 2-arm mode.

• Layer-3 Routing:
• OSPF to ASR9K WAN router (vPC)
• OSPF to Infoblox/Mainframe (treat like OSPF Stub Areas)
• Static routes to FW/LB (except extranet FW, which use OSPF)

• Fabric provides the network connection (L2/L3) for FW/LB
• No L4-7 device-package level integration

• L3 multicast design:
• ASR1K as external mrouter interfaces
• Exchange multicast source information with ASR9K via MP-BGP.
ACI Policy Model
High-Level Overview
• L3-Outs: ASR9000 (WAN), Mainframe, FW, Infoblox, Citrix-LB
• Tenants "X" and "Y" contain Contexts (VRFs) mapped to Risk Domains "A", "B", "C"
• Static-path bindings attach the ASR1000 (multicast) to the fabric
• Each Context contains Bridge Domains; each Bridge Domain contains EPGs; EPGs contain End-Points (EPs)
ACI Fabric
Attaching the Compute Resource to the Fabric
• Spine/Leaf fabric; compute attaches to leafs and is managed Out-of-band (OOB)
ACI Fabric
Attaching the Services to the Fabric
• Checkpoint Firewall(s) in HA (LAN1/HA links)
• Citrix Load-balancer(s)
• Infoblox
• Extranet and Local-Internet blocks
ACI Fabric
Attaching the VMM/Orchestration to the Fabric
• vCentre 5.5, vCentre 6, and UCS Director via Out-of-band Management (OOB)
ACI Fabric
Attaching the External WAN/Enterprise to the Fabric
• Dual ASR9000s connect the fabric to the Intranet/Internet
ACI Fabric
Attaching the External IP Multicast Routers to the Fabric
• Dual ASR1000s (mrouters) attach to the fabric; the ASR9000s connect to the Intranet/Internet
VLAN = EPG
• Connect non-ACI networks to ACI leaf nodes
• Connect at L2 with VLAN trunks (802.1Q)
• Objective: map VLANs to EPGs (EPG-A, EPG-B, … EPG-n, each containing its end-points) and extend the policy model to non-ACI networks
ACI Policy Model: EPG To EPG Communication
• EPG-A provides policies; EPG-n consumes them (e.g. Allow HTTP, Allow ICMP)
• Zero Trust Security Model
• A Contract (policy) must be defined; a contract specifies the interaction between two EPG(s) as a provider/consumer pair
• The goal is a global policy view that focuses on improving automation and scalability
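A hedged sketch of how such a contract looks as an APIC REST payload (class names are from the ACI object model; contract, subject, and filter names are illustrative, and the filter itself would hold the HTTP/ICMP entries):

<fvTenant name="TenantA">
  <vzBrCP name="web-contract">
    <vzSubj name="web-subj">
      <vzRsSubjFiltAtt tnVzFilterName="allow-http-icmp"/>
    </vzSubj>
  </vzBrCP>
  <fvAp name="App1">
    <fvAEPg name="EPG-A">
      <fvRsProv tnVzBrCPName="web-contract"/>   <!-- EPG-A provides the policy -->
    </fvAEPg>
    <fvAEPg name="EPG-n">
      <fvRsCons tnVzBrCPName="web-contract"/>   <!-- EPG-n consumes the policy -->
    </fvAEPg>
  </fvAp>
</fvTenant>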
ACI and IP Multicast
IP Multicast EPG Deployment: Static-path Binding
• The ASR1000 PIM interfaces (mrouter) attach as an EPG
• No L3 routing between the ASR1000 and the ACI fabric
• PIM routers are attached to an L2 network
• IGMPv2 and IGMPv3 in the fabric
• The VLAN encap provides L2/L3 (VRF) separation
• Bridge-Domain "A": EPG1 (VLAN 311…+) and EPG2 (VLAN 511…+); Bridge-Domain "B": EPG3 (VLAN 411…+) and EPG4 (VLAN 611…+)
ACI Multicast Configuration (For Your Reference)
1) Create the Layer-2 Bridge-domains
2) Create EPGs for the BDs where multicast traffic flows
3) Deploy static-path bindings for the EPGs created for the external PIM interfaces
4) 1:1 static-path binding for each BD (which requires multicast traffic)
5) The ASR1000 attaches to the fabric like any other server (EPG configuration)

Note: LLDP and CDP must be turned off on the ASR1000, since the ASR1000 shares the same MAC for all sub-interfaces, even with different dot1q encapsulations.
1) Bridge-domain Configuration
  1) Create the Bridge-domain
  2) Associate it with the proper Context/VRF
  3) Enable flooding
2) EPG Configuration
  1) Create the EPG
  2) Associate it with the Bridge-domains where multicast traffic is required
3) Static-path Bindings Configuration
  1) Configure static-path bindings for the EPGs; these are the ASR1000 PIM interfaces connected to the fabric (VLAN encap of 311)
  2) 1:1 static-path binding for each BD (which requires multicast)
  3) The ASR1000 attaches to the fabric like any other server (EPG configuration)
4) Verifying the two ASR1000(s) connected to the EPG
  • ASR1K-1 and ASR1K-2, bound with a VLAN encap of 311
ASR 1000 IP Multicast Configuration (VLAN-311 + others)

interface Port-channel1.311
  encapsulation dot1Q 311
  vrf forwarding "A"
  ip address 172.18.54.253 255.255.255.0
  ip pim dr-priority 10
  ip pim sparse-mode
  ip igmp version 3

interface Port-channel1.305
  encapsulation dot1Q 305
  vrf forwarding "B"
  ip address 172.18.133.254 255.255.255.0
  ip pim query-interval 15
  ip pim sparse-mode
  ip igmp version 3

interface Port-channel1.304
  encapsulation dot1Q 304
  vrf forwarding "C"
  ip address 172.18.131.254 255.255.255.0
  ip pim query-interval 15
  ip pim sparse-mode
  ip igmp version 3

Note: LLDP and CDP must be turned off on the ASR1000, since the ASR1000 shares the same MAC for all sub-interfaces, even with different dot1q encapsulations.
Showing the two ASR1000(s) sub-interfaces (VLAN-311)

ASR1K-1#show int port-channel 1.311
Port-channel1.311 is up, line protocol is up
  Hardware is 10GEChannel, address is 0023.5e49.20c0 (bia 0023.5e49.20c0)
  Description: BD ENT_INTRA_LOGISTICS1 L2ext
  Internet address is 172.18.54.254/24
  MTU 1500 bytes, BW 20000000 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation 802.1Q Virtual LAN, Vlan ID 311
  ARP type: ARPA, ARP Timeout 04:00:00
  Keepalive set (10 sec)
  Last clearing of "show interface" counters never

ASR1K-2#show int port-channel 1.311
Port-channel1.311 is up, line protocol is up
  Hardware is 10GEChannel, address is 0021.a00c.86c0 (bia 0021.a00c.86c0)
  Description: BD ENT_INTRA_LOGISTICS1 L2ext
  Internet address is 172.18.54.253/24
  MTU 1500 bytes, BW 20000000 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation 802.1Q Virtual LAN, Vlan ID 311
  ARP type: ARPA, ARP Timeout 04:00:00
  Keepalive set (10 sec)
  Last clearing of "show interface" counters never
Verifying the two ASR1000(s) connected to EPG (VLAN-312)
• ASR1K-1 and ASR1K-2, VLAN encap of 312 – a different bridge-domain
ASR 1000 IP Multicast Configuration (VLAN-312)

ASR1K-1#show runn int port-channel 1.312
interface Port-channel1.312
  description BD ENT_INTRA_LOGISTICS2 L2ext
  encapsulation dot1Q 312
  vrf forwarding Intra
  ip address 172.18.53.254 255.255.255.0
  ip pim sparse-mode
  ip igmp version 3

ASR1K-2#show runn interface Port-channel1.312
interface Port-channel1.312
  description BD ENT_INTRA_LOGISTICS2 L2ext
  encapsulation dot1Q 312
  vrf forwarding Intra
  ip address 172.18.53.253 255.255.255.0
  ip pim dr-priority 10
  ip pim sparse-mode
  ip igmp version 3
Showing the two ASR1000(s) sub-interfaces (VLAN-312)

ASR1K-1#show int port-channel 1.312
Port-channel1.312 is up, line protocol is up
  Hardware is 10GEChannel, address is 0023.5e49.20c0 (bia 0023.5e49.20c0)
  Description: BD ENT_INTRA_LOGISTICS2 L2ext
  Internet address is 172.18.53.254/24
  MTU 1500 bytes, BW 20000000 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation 802.1Q Virtual LAN, Vlan ID 312
  ARP type: ARPA, ARP Timeout 04:00:00
  Keepalive set (10 sec)
  Last clearing of "show interface" counters never

ASR1K-2#show int port-channel 1.312
Port-channel1.312 is up, line protocol is up
  Hardware is 10GEChannel, address is 0021.a00c.86c0 (bia 0021.a00c.86c0)
  Description: BD ENT_INTRA_LOGISTICS2 L2ext
  Internet address is 172.18.53.253/24
  MTU 1500 bytes, BW 20000000 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation 802.1Q Virtual LAN, Vlan ID 312
  ARP type: ARPA, ARP Timeout 04:00:00
  Keepalive set (10 sec)
  Last clearing of "show interface" counters never
Infoblox DNS/DHCP
Integration
Infoblox Anycast (DNS/DHCP) L3-Out ACI Deployment
• Grid Management 172.16.0.8/32; Anycast DNS address 172.16.0.25/32
• Access interface (untagged); OSPF Area 0.0.0.2, network type Broadcast
• The leaf advertises a default-route to the Infoblox: the "External Network Instance Profile" advertises 0.0.0.0/0 to the Infoblox, like an OSPF stub no-summary area
• Infoblox OSPF priority = 0
• HA Active/Standby (VRRP); Anycast Management VIP in 172.16.0.0/28
• The Anycast DNS address (OSPF) and Grid Management address (OSPF) float between active/standby
• Physical: Infoblox1 LAN1/HA connects to Leaf1, Infoblox2 LAN1/HA connects to Leaf2 (2 OSPF peers)
• The LAN1 and HA interfaces all have to be in the same EPG/BD/Subnet (Context (VRF) "A", Bridge-Domain "A", L3Out-InfoBlox)
• Passive nodes listen to VRRP advertisements on the HA port, while active nodes listen on the LAN port
• Peering is on the leaf interface, the SVI for the default gateway
• A default-route leak policy is used as an alternative to a pre-existing default-route; in VRF-Intra it is injected via the ASR9000 (OSPF), or a static-route is configured via the FW (security policy on the L3-Out)
• The anycast GW serves EPG Green / Orange / Black (APP Green / Orange / Black)
Infoblox Grid Geographical Redundancy
• Anycast DNS address 172.16.0.25/32; Grid Management 172.16.0.8/32; OSPF Area 0.0.0.2, 172.16.0.x/28
• Infoblox-1: LAN1 .9, HA .11, attached to leaf-1 (router-id .3); Infoblox-2: LAN1 .10, HA .12, attached to leaf-2 (router-id .4)
• The Infoblox Grid Manager lives in a different network
• The floating IP .1 (SVI) does not have OSPF enabled; this is the default gateway for the Infoblox Grid management
L3-Outside Configuration: OSPF (For Your Reference)
1) Configure the L3Out for OSPF
2) Select the Context / VRF
3) Define the OSPF Area, in this case OSPF Area 0.0.0.2
4) Define the OSPF Area type, in this case a regular OSPF Area
5) The external routed domain: the policy for managing the physical infrastructure, such as ports/VLANs, that can be used by an L3 routed outside network
Logical Node Profile: Leaf OSPF Router-id (Node) (For Your Reference)
Logical Interface Profile: LAN1 and HA (For Your Reference)
ACI Configuration: Logical Interface – Node-204, interfaces eth1/15 and eth1/16 (For Your Reference)
Infoblox OSPF Area: Default-route
• Today, Infoblox is deployed as a TSSA OSPF area
• TSSA areas do not have Type-4 or Type-5 LSAs
• Infoblox/Mainframe are configured as a full OSPF area; the ACI leaf(s) are OSPF ASBRs due to iBGP redistribution, with the spines as route reflectors. Since the area is a full OSPF area, the Infoblox/mainframe devices see a default-route advertised from the fabric as a Type-5 LSA.
• Verify the OSPF database LSAs; the routes appear as E2:

0.0.0.0/0 appears as a Type-5 LSA
AS External Link States
Link ID   ADV Router   Age   Seq#         CkSum   Route
0.0.0.0   203.0.0.1    16    0x80000002   0xba49  E2 0.0.0.0/0 [0xffffffff]
0.0.0.0   204.1.1.1    16    0x80000002   0xa25e  E2 0.0.0.0/0 [0xffffffff]
Mainframe OSPF
Integration
Mainframe: L3-Out ACI Deployment
• The Mainframe L3-out is a regular OSPF Area (0.0.0.1, 172.16.0.0/28)
• An external network instance is defined with an Export Route Control Subnet for 0.0.0.0/0 (make sure to un-check "Aggregate Export")
• Trying to "treat" it as an OSPF Stub Area
• Type-5 LSA(s); the leaf(s) are OSPF ASBR(s)
• Two L3Out-Mainframe (SVI) attachments: Context (VRF) "Intra" / Bridge-Domain "A" / Mainframe Green (ENCAP VLAN 751), and Context (VRF) "risk-domain" / Bridge-Domain "B" / Mainframe Blue (ENCAP VLAN 753); an anycast GW in each BD
ACI Configuration: External Networks
ACI Configuration: default-route
• The default-route already exists in each VRF.
• Export control subnet: in this case the IP Address is 0.0.0.0; the subnets configured for IP Address 0.0.0.0 are what gets advertised.
• Aggregate Export: do not enable. We do not want all of the fabric routes advertised.
Verify Mainframe Routing information
• VRF "Intranet" has a default-route advertised by the WAN router ASR9K via OSPF:
mainframe# sh ip route ospf-1 vrf Intranet
IP Route Table for VRF "Intranet"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]

0.0.0.0/0, ubest/mbest: 2/0


*via 172.18.3.67, Vlan751, [110/1], 1d13h, ospf-1, type-2, tag 4294967295
*via 172.18.3.68, Vlan751, [110/1], 1d13h, ospf-1, type-2, tag 4294967295

• VRF "risk-domain" has a static default-route pointing to the FW cluster, advertised via OSPF:


mainframe# sh ip route ospf-1 vrf risk-domain
IP Route Table for VRF "risk-domain"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]

0.0.0.0/0, ubest/mbest: 2/0


*via 172.18.15.66, Vlan753, [110/1], 00:00:13, ospf-1, type-2, tag 4294967295
*via 172.18.15.67, Vlan753, [110/1], 00:00:20, ospf-1, type-2, tag 4294967295
Citrix Load-balancers
Integration
Citrix 2-arm Load-balancer: Static-Bindings
• External-arm (VLAN 10) for the VIP / client side; SVI on the L3Out; VIP 20.20.20.20/32
• Static route for the LB servers pointing to the VIP
1) External-arm: VIP / client
2) Internal-arm: the server default-gateway is on the load-balancer
• VLAN 400 (the bridge-domain, same for the servers); LB internal address 192.168.50.100
• The server(s) default-gateway is the internal-arm (VLAN) on the load-balancer; the server subnet is an L2-only Bridge-Domain
• L3Out static route to the servers
ACI: Configuring the Server-side bridge-domain
• Enable flooding (ARP), as this is L2 only
• No unicast routing enabled, as we want the external LB, not the BD, to be the gateway
External Connectivity
ACI Interaction with STP
• No STP running within the ACI fabric
• BPDU frames are flooded within the EPG; no configuration required
• External switches break any potential loop upon receiving the flooded BPDU frame (same EPG; STP root switch outside the fabric)
• BPDU filter and BPDU guard can be enabled with an interface policy
ASR9000 External L3out OSPF via SVI and vPC
• The ACI fabric border-leafs peer with ASR9000:A and ASR9000:B (Intranet/Internet) over SVIs on vPC, all in OSPF Area 0:
  • VRF risk-domain: VLAN 902 (172.18.159.64/29) and VLAN 903 (172.18.159.72/29)
  • VRF Intra: VLAN 900 (172.18.0.64/29) and VLAN 901 (172.18.0.72/29)
  • VRF risk-domain (second pair): VLAN 904 (172.18.181.0/29) and VLAN 905 (172.18.181.8/29)
L3-out to ASR9000 VRF:Intra
1) Configure the L3Out for OSPF
2) Select the Context / VRF
3) Define the OSPF Area, in this case OSPF Area 0.0.0.0
4) Define the OSPF Area type, in this case a regular OSPF Area
5) The external routed domain: the policy for managing the physical infrastructure, such as ports/VLANs, that can be used by an L3 routed outside network
ACI Configuration: Logical Interface Profile vPC to ASR9000 VRF:Intra
1) Leaf231 and Leaf232 are a logical vPC pair
2) Configure SVI(s) on "leaf231" and "leaf232"
3) Repeat the configuration for the other 9332 border-leaf
4) Define the SVI(s) for OSPF Area 0 to the ASR9000
ACI Configuration: SVI interface vPC to ASR9000 VRF:Intra (Leaf 231 and 232 to ASR9k-1; Leaf 231 and 232 to ASR9k-2)
ASR9000 OSPF Configuration: VRF-Intra (For Your Reference)

vrf Intra
 address-family ipv4 unicast
 address-family ipv4 multicast
!
interface Loopback0
 vrf Intra
 ipv4 address 9.9.9.1 255.255.255.255
!
interface Bundle-Ether1
!
interface Bundle-Ether1.900
 vrf Intra
 ipv4 address 172.18.0.69 255.255.255.248
 encapsulation dot1q 900
!
router ospf 1
 nsr
 log adjacency changes detail
 router-id 9.1.1.1
 area 0
 vrf Intra
  router-id 33.33.33.1
  default-information originate always
  redistribute bgp 3000 metric 100 metric-type 1
  address-family ipv4 unicast
  area 0
   dead-interval 20
   retransmit-interval 3
   hello-interval 5
   transmit-delay 1
   interface Bundle-Ether1.900
Verify OSPF Output: ACI border-leaf (VRF-Intra) For Your
Reference

• Verify OSPF neighbors:


leaf231# show ip ospf neighbors vrf Active:DA_Intra
OSPF Process ID default VRF Active:DA_Intra
Total number of neighbors: 4
Neighbor ID Pri State Up Time Address Interface
32.1.9.1 1 FULL/DR 1w0d 172.18.0.68 Vlan7
33.33.33.1 1 FULL/DROTHER 1w0d 172.18.0.69 Vlan7
32.1.9.1 1 FULL/BDR 1w0d 172.18.0.76 Vlan8
33.33.33.2 1 FULL/DR 1w0d 172.18.0.77 Vlan8
• Verify the default-route from the ASR9000 to the 9332 vrf-intra:
leaf231# show ip route 0.0.0.0 vrf Active:DA_Intra
IP Route Table for VRF "Active:DA_Intra"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

0.0.0.0/0, ubest/mbest: 2/0


*via 172.18.0.77, vlan8, [110/1], 01w07d, ospf-default, type-2, tag 2
*via 172.18.0.69, vlan7, [110/1], 01w07d, ospf-default, type-2, tag 2
leaf231#
Checkpoint / ASA Firewall
Integration
Extranet: Routing Between Contexts
• Context (VRF) "A" with Bridge-Domain "A" and L3Out-A; Context (VRF) "B" with Bridge-Domain "B" and L3Out-B
• The Extranet firewall stitches the L3Outs together; OSPF Area 0.0.0.0 on each L3Out
Local-Internet: Logical view
1) Intra-VRF default routes come from the ASR9k into the fabric
2) The other VRF(s) have a default-route pointing to the firewall, and the firewall routes to Local-Internet-only or to the Intranet, based on FW policy
• L3Out-A and L3Out-B carry static routes to the FW, per VRF
• Context (VRF) "A" / Bridge-Domain "A" and Context (VRF) "B" / Bridge-Domain "B", each with anycast GWs serving EPG Green / Orange / Black (APP Green / Orange / Black)
Static Routes: Logical SVI Interface / VRF
Static Routes: static routes for inter-context communication via the firewall (VRF)
• LRD firewall interface; HRD VRF firewall interface; other Context/VRF; external network
Intra-VRF and Inter-VRF Traffic Flows
• Inter-VRF flows leave their Context via the L3Out and are stitched through the Extranet / Local-Internet firewall (OSPF Area 0.0.0.0 on each L3Out)
• Intra-VRF flows are routed within the Context by the distributed anycast gateways
• Context (VRF) "A" / Bridge-Domain "A" and Context (VRF) "B" / Bridge-Domain "B" each serve EPG Green / Orange / Black (APP Green / Orange / Black)
Logical Extranet
• 0.0.0.0/0 is sent to the fabric VRF Intra from the ASR9000 via OSPF (Intranet, OSPF Area 0)
• Local-Internet attaches via its own OSPF Area 0
• Risk-Domain-A and Risk-Domain-B use static routes to Intra; the firewall stitches Risk-Domain-A, Risk-Domain-B, Intra, and the other VRFs together with a mix of OSPF and static routing
End to End IP Multicast
For Your
End to End Multicast: Configuration steps Reference

1. Configure OSPF and MP-eBGP between the ASR1000(s) and ASR9000(s) per VRF

2. Enable the Multicast address-family only for MP-eBGP

3. ASR9000 originates default routes to ASR1000 via multicast address family

4. Configure Anycast RP and MSDP between ASR1000(s)

5. Configure Anycast RP and MSDP between ASR9000(s)

6. Configure inter-domain MSDP between ASR1000(s) and ASR9000(s)

7. Configure PIM on the path between sources and receivers

8. Send mcast traffic, and verify that the remote receiver can receive the mcast traffic without loss.
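A minimal IOS-XE sketch of steps 4 and 6 on one ASR1000 (all addresses are hypothetical, and the global table is shown for brevity; the deployment applies this per VRF). The anycast RP address is shared by both ASR1000s; the MSDP peers are the other ASR1000 and the ASR9000s:

interface Loopback0
 ip address 10.0.0.11 255.255.255.255    ! unique per router; MSDP/BGP source
interface Loopback1
 ip address 10.0.0.1 255.255.255.255     ! shared anycast RP address
!
ip pim rp-address 10.0.0.1
ip msdp peer 10.0.0.12 connect-source Loopback0   ! the other ASR1000 (anycast-RP pair)
ip msdp peer 10.0.1.11 connect-source Loopback0   ! inter-domain peer toward an ASR9000
ip msdp originator-id Loopback0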
End to End Multicast
• ASR1000 PIM multicast inside; ASR9000 PIM multicast toward the WAN/MAN; the ACI fabric carries OSPF Areas 0.0.0.5 toward the multicast sources and receivers (BGP AS# on each side; VLANs 3000/3001)
• The ASR9000 interfaces connected to the ACI border-leaf(s) / fabric do NOT have multicast (PIM) enabled.
• So the ASR9000 WAN routers will not inject multicast from remote sources into the fabric directly; it flows via the ASR1000(s).
• Likewise, the multicast sources will not send multicast traffic directly to the ASR9000(s); it also flows through the ASR1000(s).
End to End Multicast
• Multicast Domain #1 (ASR1000 PIM) and Multicast Domain #2 (ASR9000 PIM), joined by MP-eBGP sessions
• The ASR1000 and ASR9000 are directly connected via L3 sub-interfaces
• OSPF is enabled between the ASR1000 and ASR9000
• PIM is enabled on these interfaces for the multicast RPF check
• Support for both PIM-ASM & SSM; IGMPv2 and v3 receivers
End to End Multicast
• Multicast source information is exchanged with the ASR9000 via MP-BGP
• MP-eBGP carries the IP multicast address-family
• The ASR9000 learns the inside multicast sources via the ASR1000(s) and originates a default route to the ASR1000(s) in the multicast address-family
• Inter-domain MSDP for exchanging the sa-cache
• Anycast-RP and MSDP between the two ASR1000(s) & between the ASR9000(s)
For Your
End to End Multicast: Traffic flows Reference

Multicast traffic flows were verified and monitored under different failure scenarios:

1) Intra VLAN:
L2 multicast with sources and receivers attached to different leafs within the fabric

2) Inter VLAN:
L3 multicast with routing via the ASR1K. Sources and receivers are attached to different leafs within
the fabric

3) External Multicast Source:


The ASR9K routes multicast traffic via the ASR1K towards receivers attached to the ACI fabric.

4) External Multicast Receiver:


The ASR1K routes multicast traffic from sources within the ACI fabric via the ASR9K towards
receivers in the corporate Intranet.
vCentre VMM Integration
with ACI/APIC
ACI and VMM vCentre Integration
- Cisco APIC integrates with VMware vCentre.
- Ability to transparently extend the Cisco ACI policy framework to VMware vSphere workloads.
- APIC uses Application Network Profiles (ANPs) to represent the Cisco ACI policy.
- APIC creates a virtual distributed switch (VDS) in VMware vCentre for virtual networking.
- APIC manages all application infrastructure components. The network administrator creates EPGs and pushes them to VMware vCentre as port groups on the DVS.
- Server administrators can then associate the virtual machines and provision them accordingly.
ACI and VMM vCentre Integration
- Show the configured VMware VMM vCentre domains; here focusing on the vCentre 6 instance integrated into APIC
ACI: EPG/ANP
- Create the EPG
ACI and VMM vCentre Integration
- Add the VMM Domain to the EPG; this pushes a port-group to vCentre
ACI: EPG(s) pushed to vCentre Port-groups
- The port-groups appear on the vDS
Failure Scenarios
Failure Scenarios and Outages
1) OSPF Failover: SVI – ASR9K failure (ASR9K-1 power supply down)
   • OSPF dead timers: Intra 20s, LRD 40s, HRD 80s
   • Traffic outage time: Intra 18s, LRD 36s
2) OSPF Failover: Point-to-Point – ASR9K failure
   • Traffic outage time: LRD 2.5s, Intra 2.7s
3) Unicast Traffic: Transit – Border Leaf failure (Border Leaf-232 failure with a unicast traffic flow, Intranet-VRF)
   • Outage times: Inbound 1.4s, Outbound 1.7s
UCS Director work-flows
UCS Director workflows
- Provision new server
- Decommission server
- ACI - Create Context
- ACI - Create Bridge Domain
- ACI - Create EPG
- ACI - Create Application Profile
- ACI - Create Contract
- ACI - Assign EPG to PortChannel/Alias
- ACI - Unassign EPG from PortChannel/Alias
- ACI Combined Provisioning Workflow
- ACI Combined De-provisioning Workflow
- Create a data LUN (array based on 'class') for presentation via VPLEX
- Expand LUN and volume
- Remove LUN and volume
- Present virtual volume to a host
- Present virtual volume to a RP cluster
Q&A
Complete Your Online Session Evaluation
Give us your feedback and receive a Cisco 2016 T-Shirt by completing the Overall Event Survey and 5 Session Evaluations:
– Directly from your mobile device on the Cisco Live Mobile App
– By visiting the Cisco Live Mobile Site http://showcase.genie-connect.com/ciscolivemelbourne2016/
– Visit any Cisco Live Internet Station located throughout the venue
T-Shirts can be collected Friday 11 March at Registration.

Learn online with Cisco Live!
Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
Thank you
