SD-Access
Deep Dive

 SD-Access = Campus Fabric + DNA Center (APIC-EM 2.0 + Assurance)
DNA Center provides a GUI approach with automation & assurance of Fabric configuration and group-based policy. DNA Center integrates multiple management systems to orchestrate your LAN, Wireless LAN and WAN access networks.

 Campus Fabric (APIC-EM 1.X)
A CLI or API approach to build a LISP/VXLAN overlay Fabric for your enterprise Campus access networks. CLI provides backwards compatibility and customization, box-by-box; API provides Fabric device automation via NETCONF/YANG. Management systems (e.g. ISE, PI) remain separated.
Campus Fabric
Deep Dive

What exactly is a Fabric?
A Fabric is an Overlay
• An Overlay network is a logical topology used to virtually connect devices, built on top of some arbitrary physical Underlay topology.
• An Overlay network often uses alternate forwarding attributes to provide additional services not provided by the Underlay.
SD-Access Underlay / Overlay
[Diagram: an Overlay Network with its own Overlay Control Plane runs between Edge Devices via Encapsulation; Hosts (End-Points) attach to the Edge Devices; beneath it sits the Underlay Network with its own Underlay Control Plane.]
SD-Access Underlay
Manual vs. Automated (Roadmap)

Manual Underlay
You can reuse your existing IP network as the Fabric Underlay!
• Key Requirements
  • IP reach from Edge to Edge/Border/CP
  • Can be L2 or L3 – We recommend L3
  • Can be any IGP – We recommend ISIS
• Key Considerations
  • MTU (Fabric Header adds 50B)
  • Latency (RTT of =/< 100ms)

Automated Underlay
Prescriptive, fully automated Global IP Underlay Provisioning!
• Key Requirements
  • Leverages standard PNP for Bootstrap
  • Assumes New / Erased Configuration
  • Uses a Global “Underlay” Address Pool
• Key Considerations
  • PNP pre-setup is required
  • 100% Prescriptive (No Customization)
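The 50-byte MTU consideration above comes straight from the outer headers the Fabric encapsulation adds. A minimal sketch of the arithmetic (header sizes match the VXLAN header layout described later in this deck):

```python
# VXLAN encapsulation overhead: the outer headers added to every frame.
OUTER_MAC = 14   # outer Ethernet header (no optional 802.1Q tag)
OUTER_IP = 20    # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header (flags, segment ID, VNI, reserved)

overhead = OUTER_MAC + OUTER_IP + OUTER_UDP + VXLAN_HDR
print(overhead)  # 50 bytes of Fabric header

# To carry standard 1500-byte packets without fragmentation,
# the underlay links need an MTU of at least:
required_underlay_mtu = 1500 + overhead
print(required_underlay_mtu)  # 1550
```

This is why underlay MTU is listed as a key consideration: every link between fabric nodes must accommodate the extra 50 bytes.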
Key Components of SD-Access
1. Control Plane based on LISP
2. Data-Plane based on VXLAN

3. Policy-Plane based on TrustSec


Key Differences
• L2 + L3 Overlay -vs- L2 or L3 Only
• Host Mobility with Anycast Gateway
• Adds VRF + SGT into Data-Plane
• Virtual Tunnel Endpoints (No Static)
• No Topology Limitations (Basic IP)

LISP Control Plane
Locator / ID Separation Protocol

SD-Access Key Component – LISP
1. Control Plane based on LISP

BEFORE – IP Address = Location + Identity:
Routing Protocols = Big Tables & More CPU, with a Local L3 Gateway. Every node carries the full set of Topology + Endpoint Routes (Prefix → Next-hop).

AFTER – Separate Identity from Location:
LISP DB + Cache = Small Tables & Less CPU, with an Anycast L3 Gateway and Host Mobility. Endpoint Routes are consolidated into the LISP Mapping Database; each node keeps Only Local Routes plus the Topology Routes of the Underlay.

Example Endpoint Database (Prefix → RLOC):
189.16.17.89 → 171.68.226.120
22.78.190.64 → 171.68.226.121
172.16.19.90 → 171.68.226.120
192.58.28.128 → 171.68.228.121
Locator / ID Separation Protocol
LISP Mapping System

The LISP “Mapping System” is analogous to a DNS lookup:

‒ DNS resolves IP Addresses for a queried Name – answers the “WHO IS” question.
  Host → DNS Server: [ Who is lisp.cisco.com? ]
  DNS Server → Host: [ Address is 153.16.5.29, 2610:D0:110C:1::3 ]
  (DNS Name-to-IP URL Resolution)

‒ LISP resolves Locators for queried Identities – answers the “WHERE IS” question.
  LISP Router → LISP Map System: [ Where is 2610:D0:110C:1::3? ]
  LISP Map System → LISP Router: [ Locator is 128.107.81.169, 128.107.81.170 ]
  (LISP ID-to-Locator Map Resolution)
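The map-resolution step above can be sketched as a longest-prefix lookup against a mapping table. A toy model only (a Python dict standing in for the Map Server database; not a real LISP implementation):

```python
import ipaddress

# Toy LISP mapping system: EID prefix -> list of RLOCs.
# The entry mirrors the lisp.cisco.com example above.
MAPPING_SYSTEM = {
    "2610:D0:110C:1::/64": ["128.107.81.169", "128.107.81.170"],
}

def map_resolve(eid: str):
    """Answer the 'WHERE IS' question: return the RLOCs for an EID,
    using a longest-prefix match over the mapping database."""
    addr = ipaddress.ip_address(eid)
    best = None
    for prefix, rlocs in MAPPING_SYSTEM.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, rlocs)
    return best[1] if best else None

print(map_resolve("2610:D0:110C:1::3"))
# ['128.107.81.169', '128.107.81.170']
```

Just like DNS returns addresses for a name, the Map Resolver returns locators for an identity; an unknown EID returns nothing (a map-cache miss).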
LISP Roles & Responsibilities

Map Server / Resolver
• Stores EID-to-RLOC Mappings
• Can be distributed across multiple LISP devices

Example mapping table (EID → RLOC):
a.a.a.0/24 → w.x.y.1
b.b.b.0/24 → x.y.w.2
c.c.c.0/24 → z.q.r.5
d.d.0.0/16 → z.q.r.5

Tunnel Router – XTR
• Edge Devices that Encap / Decap
• Ingress / Egress (ITR / ETR)

Proxy Tunnel Router – PXTR
• Connects between LISP and non-LISP domains
• Ingress / Egress (PITR / PETR)

Terminology:
• EID = End-point Identifier – Host Address or Subnet (EID Space)
• RLOC = Routing Locator – Local Router Address (RLOC Space)
SD-Access Border and Default Border

Border (Known Networks)
• Connects the Campus Fabric to Known networks – part of your company network.
• Known networks are generally WAN, DC, Shared Services, etc.
• Responsible for advertising prefixes to (import) and from (export) the local fabric and external domain.

Default Border (Unknown Networks)
• Connects the Campus Fabric to Unknown networks – not part of the company network.
• Unknown networks are generally the Internet and/or Public Cloud.
• Responsible for advertising prefixes only from (export) the local fabric to the external domain.
SD-Access Default Border
Forwarding to External Domain

1. The source 10.2.0.1 (Campus Bldg 1, 10.2.0.0/24) sends traffic 10.2.0.1 → 193.3.0.1 toward an external destination (193.3.0.0/24, Internet).
2. The Edge (1.1.2.1) queries the Control Plane nodes (5.1.1.1 / 5.2.2.2); the EID-Prefix is not found (map-cache miss), so the Mapping Entry returns Locator-Set: (use-petr) 3.1.1.1, priority: 1, weight: 100 (D1).
3. The Edge encapsulates (1.1.2.1 → 3.1.1.1) and forwards 10.2.0.1 → 193.3.0.1 across the SDA Fabric.
4. The Default Border (3.1.1.1) decapsulates and forwards 10.2.0.1 → 193.3.0.1 to the Internet.
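The map-cache-miss behavior in the steps above can be modeled as a lookup that falls back to the Default Border locator when no registered EID prefix matches. A minimal sketch using the addresses from this example:

```python
import ipaddress

# EID prefixes registered with the Control Plane nodes.
FABRIC_EIDS = {
    "10.2.0.0/24": "1.1.2.1",   # Campus Bldg 1 Edge
    "10.3.0.0/24": "1.1.3.1",   # Campus Bldg 2 Edge
}
USE_PETR = "3.1.1.1"            # Default Border RLOC (priority 1, weight 100)

def lookup_rloc(dst_ip: str) -> str:
    """Return the RLOC to encapsulate toward. A miss on every
    registered EID prefix means an external destination, so the
    mapping entry resolves to the Default Border (use-petr)."""
    addr = ipaddress.ip_address(dst_ip)
    for prefix, rloc in FABRIC_EIDS.items():
        if addr in ipaddress.ip_network(prefix):
            return rloc
    return USE_PETR

print(lookup_rloc("10.3.0.5"))   # 1.1.3.1 -> another fabric Edge
print(lookup_rloc("193.3.0.1"))  # 3.1.1.1 -> Default Border, out to the Internet
```

The key point: no default route is injected into the fabric; unknown destinations simply resolve to the PETR locator.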
SD-Access Border (Known Border)
Forwarding from Fabric Edge to External Domain

1. DNS Entry: branch.abc.com A 192.1.1.1 – a host in Campus Bldg 1 (10.1.1.0/24) resolves the Branch destination (192.1.1.0/24).
2. The source 10.1.1.1 sends traffic 10.1.1.1 → 192.1.1.1 to its Edge (1.1.1.1).
3. The Control Plane nodes (5.1.1.1 / 5.2.2.2) return the Mapping Entry – EID-prefix: 192.1.1.0/24, Locator-set: 2.1.1.1, priority: 1, weight: 100 (D1). Note: Path Preference is controlled by the Destination Site.
4. The Edge encapsulates (1.1.1.1 → 2.1.1.1) and forwards 10.1.1.1 → 192.1.1.1 across the SDA Fabric.
5. The Border (2.1.1.1) decapsulates and forwards 10.1.1.1 → 192.1.1.1 toward the Branch.
VXLAN Data Plane
SD-Access Key Components – VXLAN
1. Control Plane based on LISP
2. Data-Plane based on VXLAN

ORIGINAL PACKET:  ETHERNET | IP | PAYLOAD
PACKET IN LISP:   ETHERNET | IP | UDP | LISP | IP | PAYLOAD  (supports L3 Overlay)
PACKET IN VXLAN:  ETHERNET | IP | UDP | VXLAN | ETHERNET | IP | PAYLOAD  (supports L2 & L3 Overlay)
VXLAN Header
MAC-in-IP Encapsulation

Underlay (outer) headers:

Outer MAC Header – 14 Bytes (+4 Bytes optional 802.1Q):
• Dest. MAC (48 bits) – Next-Hop MAC Address
• Source MAC (48 bits) – Src VTEP MAC Address
• VLAN Type 0x8100 (16 bits)
• VLAN ID (16 bits)
• Ether Type 0x0800 (16 bits)

Outer IP Header – 20 Bytes:
• Misc. Data (72 bits)
• Protocol 0x11 (UDP) (8 bits)
• Header Checksum (16 bits)
• Source IP (32 bits) – Src RLOC IP Address
• Dest. IP (32 bits) – Dst RLOC IP Address

UDP Header – 8 Bytes:
• Source Port (16 bits) – Hash of the inner L2/L3/L4 headers of the original frame; enables entropy for ECMP load balancing
• Dest Port (16 bits) – UDP 4789 (VXLAN)
• UDP Length (16 bits)
• Checksum 0x0000 (16 bits)

Overlay headers:

VXLAN Header – 8 Bytes:
• VXLAN Flags RRRRIRRR (8 bits)
• Segment ID (16 bits) – allows 64K possible SGTs
• VN ID (24 bits) – allows 16M possible VRFs
• Reserved (8 bits)

Followed by the Inner (Original) MAC Header, Inner (Original) IP Header, and the Original Payload.
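The 8-byte overlay header above can be packed with Python's struct module. This sketch follows the group-policy VXLAN layout the deck describes (flags byte, 16-bit Segment ID carrying the SGT, 24-bit VN ID, reserved byte); the flag values follow the VXLAN-GPO draft and are stated here as an assumption:

```python
import struct

def build_vxlan_gpo_header(vni: int, sgt: int) -> bytes:
    """Pack the 8-byte VXLAN header carrying a VN ID and an SGT.
    Flags 0x88 = G bit (group policy present) + I bit (VNI valid)."""
    assert vni < 2**24 and sgt < 2**16
    flags = 0x88
    reserved = 0
    # !BBH -> flags, reserved byte, 16-bit Segment ID (SGT);
    # then the 24-bit VN ID and one trailing reserved byte.
    return (struct.pack("!BBH", flags, reserved, sgt)
            + vni.to_bytes(3, "big") + bytes([reserved]))

hdr = build_vxlan_gpo_header(vni=4097, sgt=100)
print(len(hdr), hdr.hex())  # 8 8800006400100100
```

The field widths explain the slide's scale claims: 16 bits of Segment ID give 64K SGTs, 24 bits of VN ID give 16M VRFs.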
SD-Access Virtual Network
A Virtual Network maintains a separate Routing & Switching instance for the devices within it.
• The Control Plane uses an Instance ID to maintain separate VRF topologies (the “Default” VRF is Instance ID “4097”)
• Nodes add the VNID to the Fabric encapsulation
• Endpoint ID prefixes (Host Pools) are advertised within one (or more) Virtual Networks
• Uses standard “vrf definition” configuration, along with RD & RT for remote advertisement (Border Node)
SD-Access Scalable Group
A Scalable Group is a logical ID object to “group” Users and/or Devices.
• CTS uses “Scalable Groups” to identify and assign a unique Scalable Group Tag (SGT) to Host Pools
• Nodes add the SGT to the Fabric encapsulation
• CTS SGTs are used to manage address-independent “Group-Based Policies”
• Edge or Border Nodes use the SGT to enforce local Scalable Group ACLs (SGACLs)
VXLAN helps to preserve SGT/EPG and Context information while connecting with ACI/VXLAN networks.

SDA Policy Domain – SD-Access Fabric managed by ISE and APIC-EM, using VXLAN GBP and LISP; the Border carries the SGT via the EVPN address family (EVPN-AF) across the Enterprise Backbone.
VXLAN Fabric – VXLAN with BGP EVPN, managed by a Fabric Manager (VTS/NFM/DCNM/CLI).
IP/subnet information is exchanged between TrustSec/SDA and DCNM using ISE/APIC-EM.
TrustSec Policy Plane
SD-Access Key Components – TrustSec
1. Control Plane based on LISP
2. Data-Plane based on VXLAN
3. Policy-Plane based on TrustSec

The Fabric encapsulation carries both VRF (Virtual Routing & Forwarding) and SGT (Scalable Group Tagging) context:
ETHERNET | IP | UDP | VXLAN (VRF + SGT) | ETHERNET | IP | PAYLOAD
Cisco TrustSec
Simplified access control with Group Based Policy

• Classification – Static or Dynamic SGT assignments at the Campus Switch: e.g. Employee, Supplier and Non-Compliant tags, independent of VLAN A / VLAN B.
• Propagation – Carry the “Group” context through the network (Enterprise Backbone) using only the SGT.
• Enforcement – Group Based Policies (ACLs, Firewall Rules) are enforced at the DC Switch or Firewall in front of Shared Services and Application Servers; the DC switch receives policy only for what is connected. ISE distributes the policy.

BRKCRS-3800
Cisco TrustSec
Identity Services Engine (ISE) enables CTS

• NDAC (Network Device Admission Control) authenticates Network Devices for a trusted CTS domain.
• Scalable Group Tags – SGTs & SGT Names are centrally defined on Cisco ISE (e.g. 3: Employee, 4: Contractors, 8: PCI_Servers, 9: App_Servers), along with Endpoint ID Groups.
• SGACL Name Table – a Sources × Destinations policy matrix of permit/deny cells, pushed down to the network devices.
• ISE dynamically authenticates endpoint users and devices (802.1X) and assigns SGTs (Dynamic SGT Assignment; Static SGT Assignment is also supported); Rogue Devices are kept out of the trusted domain.
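The SGACL matrix above amounts to a (source group, destination group) → permit/deny table. A toy lookup using the example group names (the permit/deny cells here are illustrative, not the ones on the slide):

```python
# SGTs centrally defined on ISE, from the example above.
SGT_NAMES = {3: "Employee", 4: "Contractors", 8: "PCI_Servers", 9: "App_Servers"}

# Toy SGACL matrix: (source SGT, destination SGT) -> permit?
# Cell values are illustrative; ISE pushes the real matrix down
# to the enforcement devices.
POLICY = {
    (3, 8): True,    # Employee -> PCI_Servers: permit
    (3, 9): True,    # Employee -> App_Servers: permit
    (4, 8): False,   # Contractors -> PCI_Servers: deny
    (4, 9): True,    # Contractors -> App_Servers: permit
}

def enforce(src_sgt: int, dst_sgt: int) -> bool:
    """Default-deny lookup, as an enforcement node would apply it."""
    return POLICY.get((src_sgt, dst_sgt), False)

print(enforce(3, 8))  # True  - Employee reaches PCI_Servers
print(enforce(4, 8))  # False - Contractors blocked from PCI_Servers
```

Because the lookup keys are group tags rather than IP addresses, the same small matrix covers every endpoint in each group, wherever it attaches.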
Anycast
What is Anycast?
• Just a configuration methodology.
• The basic idea is extremely simple.
• Multiple instances of a service share the same IP address.
• The routing infrastructure directs any packet to the topologically nearest instance of the service.

Example:
Client → Router 1 → Router 2 → Server Instance A
Client → Router 1 → Router 3 → Router 4 → Server Instance B

SD-Access Anycast Gateway
Anycast GW provides a single L3 Default Gateway for IP-capable endpoints.
• Similar principles and behavior as HSRP / VRRP, with a shared Virtual IP and MAC address
• The same Switch Virtual Interface (SVI) is present on EVERY Edge, with the same Virtual IP and MAC
• The Control Plane with Fabric Dynamic EID mapping creates a Host (Endpoint) to Edge relationship
• If (when) a Host moves from Edge 1 to Edge 2, it does not need to change its IP Default Gateway!
Fabric Wireless
Centralized Unified Wireless Network Strengths (ISE / AD; WLC with CAPWAP Control and Data):
• Simplified operations – yes, with the WLC
• Network Overlay – CAPWAP
• L3 roaming across Campus – WLC as Mobility Anchor
• Simplified IP addressing – WLC as Mobility Anchor
• Guest traffic segmentation – Foreign-Anchor

Wired Network Strengths (ISE / AD):
• Segmentation – VRF-Lite, MPLS
• Complex ACL capabilities – Scalable TCAMs
• Distributed Data Plane – Scalable and Reliable
• Distributed Feature Plane – AVC, NetFlow
• Comprehensive QoS – 12-class, Queuing capable


Wireless Integration in SDA Fabric

CUWN Wireless Over The Top (OTT) – ISE / AD, APIC-EM:
 Non-Fabric WLC and APs; CAPWAP for both Control Plane and Data Plane
 The SDA Fabric is just a transport
 Supported on any WLC/AP software and hardware
 A migration step to full SDA

SD-Access Wireless – ISE / AD, APIC-EM:
 Fabric-enabled WLC and APs; CAPWAP Control Plane, VXLAN Data Plane
 WLC/APs integrated in the Fabric, with the SD-Access advantages
 Requires a software upgrade (8.5+)
 Optimized for 802.11ac Wave 2 APs
SD-Access Architecture
Roles and Terminology
 DNA Controller – Enterprise SDN Controller: DNA provides GUI management abstraction via multiple Service Apps, which share information
 Group Repository – External ID Services (e.g. ISE) leveraged for dynamic User or Device to Group mapping and policy definition
 Analytics Engine – Network Data Platform (NDP) leveraged to analyze User or Device to App flows and monitor fabric status
 Control-Plane Nodes – Map System that manages Endpoint ID to Device relationships
 Border Nodes – A Fabric device (e.g. Core) that connects External L3 network(s) to the SDA Fabric
 Edge Nodes – A Fabric device (e.g. Access or Distribution) that connects Wired Endpoints to the SDA Fabric
 Intermediate Nodes (Underlay)
 Fabric Wireless Controller – A Wireless Controller (WLC) that is fabric-enabled
 Fabric Mode APs – Access Points that are fabric-enabled
SD-Access Wireless Architecture
Simplifying the Control Plane

Automation (DNAC, ISE / AD)
• DNAC simplifies the Fabric deployment, including the wireless integration component (Policy Abstraction and Configuration Automation)

Centralized Wireless Control Plane (CAPWAP)
• The WLC still provides client session management: AP Mgmt, Mobility, RRM, etc.
• Same operational advantages as CUWN

Fabric-enabled WLC – the WLC is part of the LISP control plane:
• The WLC integrates with the LISP control plane
• The WLC updates the CP for wireless clients
• Mobility is integrated in the Fabric thanks to the LISP CP

Caveat: the WLC cannot currently connect directly to a Border Node; use a fusion router in between.
SD-Access Wireless Architecture
Optimizing the Data Plane

The Fabric Mode AP integrates with the VXLAN Data Plane; the Wireless Data Plane is distributed across APs.
• A Fabric mode AP is a local mode AP and needs to be directly connected to a Fabric Edge (FE)
• The CAPWAP control plane goes to the WLC using the Fabric
• Fabric is enabled per SSID:
  • For a Fabric-enabled SSID, the AP converts 802.11 traffic to 802.3 and encapsulates it into VXLAN, encoding the VNI and SGT information of the client
  • The AP forwards client traffic based on the forwarding table as programmed by the WLC; usually the VXLAN destination is the first-hop switch
• The AP applies all wireless-specific features like SSID policies, AVC, QoS, etc.
SD-Access Wireless Architecture
Simplifying Policy and Segmentation

Packet walk from a wireless client behind FE A to a destination behind FE B across the SD Fabric (B = Border, C = Control Plane), carried in the VXLAN data plane:

1. The AP removes the 802.11 header from the client frame (IP payload | IP | 802.11).
2. The AP adds the 802.3 / VXLAN / underlay IP headers (IP payload | EID IP | 802.3 | VXLAN | UDP | underlay IP), embedding the Policy information – the Client SGT and Client VRF – in the VXLAN header, and forwards it.
3. FE A removes the outer IP header, looks at the L2 VNID and maps it to the VLAN and L2 LISP instance, so the client is placed in the right VRF; it then encapsulates toward the destination FE.
4. FE B removes the outer IP header, looks at the L2 VNID and maps it to the VLAN; it also looks at the SGT and applies the policy before forwarding the packet.

Hierarchical Segmentation:
• Virtual Network (VN) == VRF – isolated Control Plane + Data Plane
• Scalable Group Tag (SGT) – User Group identifier

Client Policy is carried end to end in the overlay.
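The Fabric Edge steps above (decapsulate, map the L2 VNID to a VLAN/LISP instance, apply SGT policy) can be sketched as a small pipeline. All table contents here are illustrative, not real platform state:

```python
# Illustrative FE state: L2 VNID -> (VLAN, L2 LISP instance),
# programmed per Virtual Network by the control plane.
VNID_TABLE = {8190: ("VLAN 1021", "L2 LISP instance 8190")}

def fe_receive(vnid: int, sgt: int, payload: str, permit) -> str:
    """Destination-FE processing: the outer headers are already
    stripped; map the VNID into the right VRF context, then
    enforce the SGT policy before forwarding."""
    if vnid not in VNID_TABLE:
        return "drop: unknown VNID"
    vlan, instance = VNID_TABLE[vnid]
    if not permit(sgt):
        return f"drop: SGT {sgt} denied by SGACL"
    return f"forward {payload!r} on {vlan} ({instance})"

# Usage: a policy that permits only SGT 100 (e.g. Employee).
print(fe_receive(8190, 100, "client frame", permit=lambda s: s == 100))
print(fe_receive(8190, 300, "client frame", permit=lambda s: s == 100))
```

Note the ordering: segmentation (VNID → VRF) is resolved first, then the group policy (SGT) is applied, matching steps 3 and 4 above.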
SD-Access Wireless Benefits
User Group policy rollout (DNA Center)

1. Define Groups in AD.
2. Design and Deploy in DNA-C (One Touch Point):
   • Create a Virtual Network for Corporate (additional Virtual Networks, e.g. Guest or IoT/HVAC, are created the same way)
   • Define Policies – Role/Group based
   • Apply Policies – SGT based
3. Upon user authentication, Policy is automatically applied and carried end to end.

One SSID carries Employee (SGT 100), BYOD (SGT 200) and Contractor (SGT 300) clients in the Corporate VN; the SGT travels in the Fabric header (ID | HDR | SRC | DST | SGT) from the WLC and Fabric, across the LAN core, to Production Servers (SGT 10) and Developer Servers (SGT 20) behind an L3 Switch trunk, alongside the AAA, DHCP and AD production services.
SD-Access
Hardware Requirements

SD-Access – Edge Node: Platform Support
SD-Access – Control Plane: Platform Support
SD-Access – Border Node: Platform Support
[Platform support matrices were presented as slide graphics and are not captured in this export.]

SD-Access – Fabric Wireless: Platform Support
• 3504 WLC (NEW) – AIR-CT3504; 1G/mGig; AireOS 8.5.1+
• 5520 WLC – AIR-CT5520 (no 5508); 1G/10G SFP+; AireOS 8.5.1+
• 8540 WLC – AIR-CT8540 (8510 supported); 1G/10G SFP+; AireOS 8.5.1+
• Wave 2 APs – 1800/2800/3800; 11ac Wave 2; 1G/mGig RJ45; AireOS 8.5.1+
• Wave 1 APs – 1700/2700/3700; 11ac Wave 1 (*with caveats); 1G RJ45; AireOS 8.5.1+
End Of Module

Questions?
LAB Time 
Design, Policy and Provision with SD-Access