
Intro to OTV

Let's say we have three switches (A, B, and C). Switch A is connected to Switch B, and Switch B is connected to Switch C. Switch A has two VLANs created on it, VLAN 10 and VLAN 20. What if we want VLAN 10 and 20 to be extended to Switch C over Switch B? We simply create VLAN 10 and 20 on both Switch B and Switch C and allow both VLANs on the trunks connecting the switches, right? It's that simple!
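For reference, here is a minimal sketch of that classic approach (the interface number is an assumption for illustration; the same lines would be repeated on Switch B and Switch C):

! Create the VLANs locally on the switch
vlan 10
vlan 20
! Trunk toward the neighbouring switch, carrying both VLANs
interface Ethernet1/1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  no shutdown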
If you look at this picture, we have two data centers, DC1 and DC2, which are geographically far away from each other, let's say one in New York and another one in Los Angeles, and there are some servers present in both data centers. However, they sync their heartbeat over Layer 2 only and it doesn't work over Layer 3. So we have a requirement to extend VLAN 10 and 20 from DC1 to the other data center, DC2. You may call it Data Center Interconnect (DCI).

Can we do the same thing we did to extend the VLANs from Switch A to Switch C in the example above? Of course not! So what are the solutions to achieve this?
Until OTV came into the picture, we had a few options to achieve this:
-VPLS
-Dark Fiber (CWDM or DWDM)
-AToM
-L2TPv3
These are services provided by Service Providers. They work on different mechanisms, but what they basically do is give you a Layer 2 path between DC1 and DC2, similar to the trunk link between Switch A and Switch B. So what does that mean? If a broadcast or an ARP request is sent, will it travel across the service provider to the other data center in that VLAN? Of course it will, and your STP domain also gets extended over the DCI. So even if a device in VLAN 10 in DC1 is trying to communicate with another device that is also in DC1, the ARP request will go all the way to the DC2 switches on which that particular VLAN is configured.

So, to avoid such problems, Cisco introduced OTV (Overlay Transport Virtualization), which is basically a DCI (Data Center Interconnect) technology configured on Nexus switches. Using OTV, we can extend Layer 2 between two or more data centers over the traditional L3 infrastructure provided by the Service Provider. We don't need a separate L2 link for the Layer 2 extension, and we are still able to limit the STP domain and unnecessary broadcasts over the WAN links. It can overlay multiple VLANs with a simple design.
Basically, the data centers advertise their MAC addresses to each other (this is called "MAC-in-IP" routing), and a decision can be made on the basis of the MAC address whether that MAC address is local or in another data center; based on that, the frame can be forwarded or kept within a particular data center. OTV uses a control protocol to map MAC address destinations to IP next hops that are reachable through the normal L3 network core.
So, in Cisco's language "OTV can be thought of as MAC routing in which the
destination is a MAC address, the next hop is an IP address, and traffic is
encapsulated in IP so it can simply be carried to its MAC routing next hop
over the core IP network. Thus a flow between source and destination host
MAC addresses is translated in the overlay into an IP flow between the source
and destination IP addresses of the relevant edge devices. This process is
called encapsulation rather than tunneling as the encapsulation is imposed
dynamically and tunnels are not maintained"
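As a mental model, each OTV edge device ends up with a MAC routing table that you can inspect with the NX-OS command show otv route. The output below is purely illustrative (the hostname, VLAN, MAC addresses and timers are made up, not taken from a device), but it shows the idea: a local MAC points at a physical interface, while a remote MAC points at the overlay next hop in the other data center.

OTV-EDGE-DC1# show otv route
OTV Unicast MAC Routing Table For Overlay1

VLAN MAC-Address     Metric  Uptime    Owner      Next-hop(s)
---- --------------  ------  --------  ---------  -----------
 10  0000.aaaa.0001       1  01:20:15  site       Ethernet1/1   <<< local MAC, switched locally
 10  0000.bbbb.0002      42  01:19:40  overlay    OTV-EDGE-DC2  <<< remote MAC, encapsulated in IP toward DC2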

How this is implemented, I will show in another simplified post. Thank you!

OTV Silent Host Connectivity Problem


There are several scenarios where a silent host can cause connectivity
issues. The purpose of this document is to show why traffic loss occurs
during an AED failover.
(Power Point Presentation appended to this document).

1) Before failover, traffic between hosts is working successfully.

2) AED failure occurs at site-1

3) A new AED role is established for the VLAN at site-1, but traffic continues to fail. The duration of the connectivity issue can range from a few seconds to several minutes, depending on how long it takes for Host-2 to generate a packet.

4) Connectivity is restored once the new AED learns Host-2's MAC.

Failover scenario between non-silent hosts. Rarely in real networks will there be completely silent devices. Thus, failovers between AEDs converge quickly as the local MACs are relearned. Generally, the "silent host" scenario is seen only in testing environments.
1) Before failover, traffic between hosts is working successfully.

2) AED failure occurs at site-1

3) The new AED immediately learns Host-2's MAC and is able to install the entry into its CAM table and OTV route table. It then advertises the route to S2-OTV-1 and connectivity is quickly restored. Note that the original AED failure generates a TCN on the VLAN at site-1. Thus, packets from Host-2 to Host-1 will be flooded throughout the network, ensuring that they reach the new AED.
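To confirm which of the two cases you are hitting, a quick check on the new AED is whether it has actually learned the remote host's MAC. A small sketch (the VLAN number is an assumption for illustration):

! Has Host-2's MAC been learned in the CAM table on the new AED?
show mac address-table vlan 10
! Is the MAC present in the OTV MAC routing table (and therefore advertised over the overlay)?
show otv route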

- See more at: https://supportforums.cisco.com/document/65531/otv-silenthost-connectivity-problem#sthash.Q5INzcE3.dpuf


Troubleshooting OTV Adjacency
OTV adjacency is formed across the Layer 3 multicast cloud by exchanging Level 1 (L1) IS-IS hello packets.
Sample Topology

1. Verify IGMP V3 Join


After the OTV configuration is complete, enabling the join interface sends an any-source IGMPv3 join for the control group out that interface.
interface Ethernet1/31
description [OTV-JOIN-INTERFACE]
vrf member OTV
ip address 10.10.15.6/30
ip router eigrp 10
ip igmp version 3
SITE1-OED1(config-if)# int eth 1/31
SITE1-OED1(config-if)# no shut
SITE1-AGG1%OTV# show ip mroute
IP Multicast Routing Table for VRF "OTV"
(*, 239.1.1.1/32), uptime: 0.283775, igmp pim
Incoming interface: Ethernet1/5, RPF nbr: 10.10.15.1 <<Interface facing
Multicast RP
Outgoing interface list: (count: 1)
Ethernet1/9, uptime: 0.283711, igmp <<< Interface connecting to OTV
VDC SITE1-OED1
2. Verify ISIS Hello Packets
When you enable the OTV overlay interface, the switch will generate a L1
ISIS hello packet with the source IP address as the OTV join interface and the
destination IP address as the OTV control-group.
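For context, the overlay interface on SITE1-OED1 that triggers these hellos would look roughly like the sketch below. The values are taken from the "show otv" output later in this document; treat it as an illustrative reconstruction rather than the exact lab configuration.

feature otv
otv site-vlan 95
!
interface Overlay1
  otv join-interface Ethernet1/31
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/26
  otv extend-vlan 100-200
  no shutdown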

SITE1-AGG1%OTV# show ip mroute


IP Multicast Routing Table for VRF "OTV"
(*, 239.1.1.1/32), uptime: 00:16:00, igmp pim ip
Incoming interface: Ethernet1/5, RPF nbr: 10.10.15.1
Outgoing interface list: (count: 1)
Ethernet1/9, uptime: 00:16:00, igmp
(10.10.15.6/32, 239.1.1.1/32), uptime: 00:00:19, ip mrib pim
<<< S,G entry represents Multicast packet received from 10.10.15.6
Incoming interface: Ethernet1/9, RPF nbr: 10.10.15.6
Outgoing interface list: (count: 1)
Ethernet1/9, uptime: 00:00:19, mrib, (RPF)

3. Verify Multicast Transport


After all four OTV edge devices are configured and enabled, the multicast transport network should show all the sources:
SITE1-AGG1%OTV# show ip mroute
IP Multicast Routing Table for VRF "OTV"
(*, 239.1.1.1/32), uptime: 00:23:13, igmp pim ip
Incoming interface: Ethernet1/5, RPF nbr: 10.10.15.1
Outgoing interface list: (count: 1)
Ethernet1/9, uptime: 00:23:13, igmp
(10.10.15.6/32, 239.1.1.1/32), uptime: 00:07:33, ip mrib pim <<< SITE1-OED1
Incoming interface: Ethernet1/9, RPF nbr: 10.10.15.6
Outgoing interface list: (count: 2)
Ethernet1/5, uptime: 00:05:44, pim
Ethernet1/9, uptime: 00:07:33, mrib, (RPF)
(10.10.16.6/32, 239.1.1.1/32), uptime: 00:04:48, ip mrib pim <<< SITE1-OED2
Incoming interface: Ethernet1/5, RPF nbr: 10.10.15.1
Outgoing interface list: (count: 1)
Ethernet1/9, uptime: 00:04:48, mrib
(10.10.17.6/32, 239.1.1.1/32), uptime: 00:04:21, ip mrib pim <<< SITE2-OED1
Incoming interface: Ethernet1/5, RPF nbr: 10.10.15.1
Outgoing interface list: (count: 1)
Ethernet1/9, uptime: 00:04:21, mrib

(10.10.18.6/32, 239.1.1.1/32), uptime: 00:03:46, ip mrib pim <<< SITE2-OED2


Incoming interface: Ethernet1/5, RPF nbr: 10.10.15.1
Outgoing interface list: (count: 1)
Ethernet1/9, uptime: 00:03:46, mrib
You can also monitor the OTV control packet statistics as shown below:
SITE1-AGG1%OTV# show ip mroute summary
IP Multicast Routing Table for VRF "OTV"
Total number of routes: 6
Total number of (*,G) routes: 1
Total number of (S,G) routes: 4
Total number of (*,G-prefix) routes: 1
Group count: 1, rough average sources per group: 4.0
Group: 239.1.1.1/32, Source count: 4
Source          packets   bytes    aps   pps   bit-rate       oifs
(*,G)                  7    10106  1443     0      0.000 bps      1
10.10.15.6            73    86674  1187     0   1349.600 bps      2
10.10.16.6            96    76987   801     0   1429.600 bps      1
10.10.17.6           103    81610   792     0   1416.267 bps      1
10.10.18.6           155   144435   931     0   4092.800 bps      1
4. Verify OTV ISIS Adjacency
You can verify the Adjacency using the below command:
SITE1-OED1# show otv adjacency detail
Overlay Adjacency database
Overlay-Interface Overlay1 :
Hostname         System-ID       Dest Addr       Up Time   State
SITE1-OED2       0024.986f.bac4  10.10.16.6      00:05:31  UP
    HW-St: Up Peer-ID: 3
SITE2-OED1       0026.51ce.0f43  10.10.17.6      00:46:29  UP
    HW-St: Up Peer-ID: 2
SITE2-OED2       0026.51ce.0f44  10.10.18.6      00:46:29  UP
    HW-St: Up Peer-ID: 1
5. Troubleshoot OTV ISIS Adjacency

If there is any issue in establishing OTV ISIS adjacency, you can look at the
ISIS adjacency log as below:
SITE1-OED1# show otv isis internal event-history adjacency
ISIS default process
adjacency Events for ISIS process
2011 May 21 08:54:53.773678 isis_otv default [10371]: (Overlay1) : LAN adj L1 SITE2-OED1 over Overlay1 - UP
2011 May 21 08:54:53.773662 isis_otv default [10371]: [10376]: Sent OTV add adjacency for overlay:Overlay1, addr: 10.10.17.6
2011 May 21 08:54:53.653906 isis_otv default [10371]: (Overlay1) : Set adjacency SITE2-OED1 over Overlay1 IPv4 address to 10.10.17.6
2011 May 21 08:54:53.653827 isis_otv default [10371]: (Overlay1) : LAN adj L1 SITE2-OED1 over Overlay1 - INIT (New)
2011 May 21 08:54:53.653799 isis_otv default [10371]: [10376]: Initialize adj for L1 MT-0 for iib Overlay1
You can clear the log as below:
SITE1-OED1# clear otv isis event-history
6. Verify OTV overlay interface
Verify Overlay interface is up.
SITE1-OED1# show otv
OTV Overlay Information

Overlay interface Overlay1

 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 100-200 (Total:101)
 Control group       : 239.1.1.1
 Data group range(s) : 232.1.1.0/26
 Join interface(s)   : Eth1/31 (10.10.15.6)
 Site vlan           : 95 (up)               <<< Site VLAN should be Up
The site VLAN should be in spanning-tree forwarding state on at least one port in the OTV VDC. If not, data will not be forwarded across OTV even though the OTV overlay is up and adjacency is formed with the other peers.
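A quick way to check this on the OTV VDC (assuming site VLAN 95 as above):

! Is the site VLAN forwarding on at least one port in the OTV VDC?
show spanning-tree vlan 95
! Site VLAN state and site adjacency as OTV sees them
show otv site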
7. Verify Tunnel Creation
After the OTV adjacency is set up, the OTV process creates implicit tunnels for each discovered peer. These tunnels are used to forward data across OTV to the respective site.
show otv internal event-history debug
2011 May 21 10:10:58.428388 otv [10358]: [10361]: otv_tunnel_gre_add Tunnel (10.10.15.6, 10.10.17.6) does not exist, creating NEW
2011 May 21 10:10:58.426206 otv [10358]: [10361]: Received ISIS add adjacency message for (Overlay1, 10.10.17.6), sid 0026.51ce.0f43
The tunnel process in NX-OS creates these tunnels based on the request from the OTV process. You can verify the implicit tunnels as shown below:
SITE1-OED1# show tunnel internal implicit otv brief
--------------------------------------------------------------------------------
Interface            Status     IP Address      Encap type     MTU
--------------------------------------------------------------------------------
Tunnel16424          up         --              GRE/IP         9178
Tunnel16425          up         --              GRE/IP         9178
SITE1-OED1# show tunnel internal implicit otv detail
Tunnel16424 is up
MTU 9178 bytes, BW 9 Kbit
Transport protocol is in VRF "OTV"
Tunnel protocol/transport GRE/IP
Tunnel source 10.10.15.6, destination 10.10.18.6

Last clearing of "show interface" counters never


Tx
0 packets output, 1 minute output rate 0 packets/sec
Rx
0 packets input, 1 minute input rate 0 packets/sec
Tunnel16425 is up
MTU 9178 bytes, BW 9 Kbit
Transport protocol is in VRF "OTV"
Tunnel protocol/transport GRE/IP
Tunnel source 10.10.15.6, destination 10.10.17.6
Last clearing of "show interface" counters never
Tx
0 packets output, 1 minute output rate 0 packets/sec
Rx
0 packets input, 1 minute input rate 0 packets/sec
The command below shows the mapping between the OTV adjacency and the implicit tunnel:
SITE1-OED1# show otv internal adjacency
Overlay Adjacency database
Overlay-Interface Overlay1 :
System-ID       Dest Addr       Adj-State  TM_State  Up Time   Adj-State inAS
0024.986f.bac4  10.10.16.6      default    default   00:05:59  UP        UP
    HW-St: Up Curr-Peer-St: Up, New-Peer_St: Default Peer-ID: 3 N Tunnel16426
0026.51ce.0f43  10.10.17.6      default    default   00:46:57  UP        UP
    HW-St: Up Curr-Peer-St: Up, New-Peer_St: Default Peer-ID: 2 N Tunnel16425
0026.51ce.0f44  10.10.18.6      default    default   00:46:58  UP        UP
    HW-St: Up Curr-Peer-St: Up, New-Peer_St: Default Peer-ID: 1 N Tunnel16424
SITE1-OED1#
- See more at:
https://supportforums.cisco.com/document/64481/troubleshooting-otvadjacency#sthash.g7HdFvKU.dpuf
Cisco OTV (or Overlay Transport Virtualisation)

Cisco OTV (or Overlay Transport Virtualisation) is a technology in Cisco Nexus switches (7K) for extending VLANs across a routed network. This post consists of an example configuration for a lab where you have a single Nexus 7K and you want to get OTV over multicast running between VDCs; it comes from my CCIE study notes from when I was practicing with OTV. I've heard some people have issues with getting the multicast configuration working, so I figured I would share mine here.
Overlay Transport Virtualization (OTV) is a solution used to extend Layer 2
networks across an IP network. BRKDCT-2049 discusses the technology in
detail, along with deployment scenarios and an architectural view of the
solution. This session will serve as a guide for configuration, verification, and
troubleshooting of OTV through both a unicast and multicast enabled IP core.
This includes an in-depth discussion of how OTV adjacencies are built and
how routes are advertised. Finally, it will discuss common deployment issues
and how they can be resolved. This session is useful for network engineers
who are responsible for designing, configuring and troubleshooting Service
Provider, Enterprise and Data Center Networks. Prior experience with NX-OS
is useful but not required. BRKDCT-2049: Overlay Transport Virtualization is
advised as a prerequisite.
First off, to create the illusion of a routed network, this will be an "OTV on a stick" type configuration and not a directly connected one (that would be too easy). Let's take a look at the topology we want to create:

We want the two servers in 10.0.10.0/24 to reach each other through the routed fabric in between. The supervisor we're using has a 4-VDC limit and in this example I will be using all 4. The smaller router icons are VRFs on the SW1 and SW2 VDCs, which you will see return in the configuration.
I'm going to assume you have a working knowledge of basic NX-OS, OSPF and OTV, have the VDCs set up, have the proper licenses, made the proper interface allocations (mind your M and F linecards, and make sure absolutely no other connections are active and trunking; you'll have a field day looking for why it doesn't work otherwise), and can start from there. First, some basic information:
OTV Extended VLAN: 10
OTV Site-VLAN: 99
OTV Site-IDs: 0x1 & 0x2
OTV Control: 239.1.1.1
OTV Data Range: 232.1.1.0/24
Let's begin with numbering all the interfaces and creating the OSPF network:
OTV1
feature ospf
! Define VLANs
vlan 10
name SERVERS
vlan 99
name OTV-SITE
! Enable OSPF
router ospf 1
log-adjacency-changes
! Join interface
interface Ethernet1/27
ip address 172.16.0.2/30
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
no shutdown
! Internal interface
interface Ethernet8/1
switchport mode trunk
switchport trunk allowed vlan 10,99
no shutdown
SW1
feature ospf
! Define VLANs
vlan 10
name SERVERS
vlan 99

name OTV-SITE
! Define vrf context for separating 'intra-dc' traffic from local traffic
vrf context OTV
! Used for OSPF router-id
interface loopback0
vrf member OTV
ip address 1.1.1.1/32
! OSPF redistribution for following ranges
ip prefix-list OSPF_REDIST permit 172.16.0.0/30
ip prefix-list OSPF_REDIST permit 1.1.1.1/32
route-map OSPF_REDIST permit 10
match ip address prefix-list OSPF_REDIST
! Have OSPF send default gateway to following peers:
ip prefix-list OSPF_DEF_ROUTE permit 172.16.0.0/30
route-map OSPF_DEF_ROUTE permit 10
match ip address prefix-list OSPF_DEF_ROUTE
! Enable OSPF and redistribute proper prefixes and default gateway to OTV
VDC
router ospf 1
default-information originate always route-map OSPF_DEF_ROUTE
redistribute direct route-map OSPF_REDIST
log-adjacency-changes
vrf OTV
! Link to OTV1 (Join interface)
interface Ethernet1/28
vrf member OTV
ip address 172.16.0.1/30
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
no shutdown
! Link to SW2
interface Ethernet1/37
vrf member OTV
ip address 192.168.0.1/30
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
no shutdown

! Link to OTV1 (Internal interface)


interface Ethernet8/9
switchport mode trunk
switchport trunk allowed vlan 10,99
no shutdown
! Link to access switch
interface Ethernet8/10
switchport mode trunk
switchport trunk allowed vlan 10
no shutdown
SW2
feature ospf
! Define VLANs
vlan 10
name SERVERS
vlan 99
name OTV-SITE
! Define vrf context for separating 'intra-dc' traffic from local traffic
vrf context OTV
! Used for OSPF router-id
interface loopback0
vrf member OTV
ip address 2.2.2.2/32
! OSPF redistribution for following ranges
ip prefix-list OSPF_REDIST permit 172.16.1.0/30
ip prefix-list OSPF_REDIST permit 2.2.2.2/32
route-map OSPF_REDIST permit 10
match ip address prefix-list OSPF_REDIST
! Have OSPF send default gateway to following peers:
ip prefix-list OSPF_DEF_ROUTE permit 172.16.1.0/30
route-map OSPF_DEF_ROUTE permit 10
match ip address prefix-list OSPF_DEF_ROUTE
! Enable OSPF and redistribute proper prefixes and default gateway to OTV
VDC
router ospf 1
router-id 2.2.2.2

default-information originate always route-map OSPF_DEF_ROUTE


redistribute direct route-map OSPF_REDIST
log-adjacency-changes
vrf OTV
! Link to OTV2 (Join interface)
interface Ethernet1/36
vrf member OTV
ip address 172.16.1.1/30
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
no shutdown
! Link to SW1
interface Ethernet1/38
vrf member OTV
ip address 192.168.0.2/30
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
no shutdown
! Link to OTV2 (Internal interface)
interface Ethernet8/17
switchport mode trunk
switchport trunk allowed vlan 10,99
no shutdown
! Link to access switch
interface Ethernet8/18
switchport mode trunk
switchport trunk allowed vlan 10
no shutdown
OTV2
feature ospf
! Define VLANs
vlan 10
name SERVERS
vlan 99
name OTV-SITE
! Enable OSPF
router ospf 1
log-adjacency-changes

! Join interface
interface Ethernet1/48
ip address 172.16.1.2/30
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
no shutdown
! Internal interface
interface Ethernet8/15
switchport mode trunk
switchport trunk allowed vlan 10,99
no shutdown
With that out of the way you should have a routed network and OTV1 should
be able to reach OTV2:
OTV1# ping 172.16.1.2
PING 172.16.1.2 (172.16.1.2): 56 data bytes
64 bytes from 172.16.1.2: icmp_seq=0 ttl=254 time=48.052 ms
64 bytes from 172.16.1.2: icmp_seq=1 ttl=254 time=4.923 ms
64 bytes from 172.16.1.2: icmp_seq=2 ttl=254 time=20.752 ms
64 bytes from 172.16.1.2: icmp_seq=3 ttl=254 time=2.935 ms
64 bytes from 172.16.1.2: icmp_seq=4 ttl=254 time=4.965 ms
--- 172.16.1.2 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 2.935/16.325/48.052 ms
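If the ping does not work, checking the OSPF adjacencies in the OTV VRF is a sensible first step. A small sketch (run on SW1; the VRF and interface names match the configuration above):

! Expect neighbors toward OTV1 (Eth1/28) and SW2 (Eth1/37)
show ip ospf neighbors vrf OTV
! The route toward OTV2's join subnet should be present in the OTV VRF
show ip route 172.16.1.2 vrf OTV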
Now it's time for the multicast and OTV configuration. Disclaimer: I am not a multicast expert, but I know enough to get it to work. It may very well be that there is a more efficient way of setting this up; this is just what I did to get it working.
OTV1
feature otv
! Define local OTV identifiers
otv site-vlan 99
otv site-identifier 0x1
! Join interface
interface Ethernet1/27
ip igmp version 3
! Define OTV overlay with join interface, the extended VLAN and multicast
addresses
interface Overlay0

otv join-interface Ethernet1/27


otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10
no shutdown
SW1
! Enable multicast routing between SW1 and SW2
feature pim
ip pim send-rp-announce loopback0 group-list 224.0.0.0/4
ip pim auto-rp mapping-agent loopback0
ip pim ssm range 232.0.0.0/8
ip pim auto-rp forward listen

! Link to OTV1 (Join interface)


interface Ethernet1/28
ip pim sparse-mode
ip igmp version 3
! Link to SW2
interface Ethernet1/37
ip pim sparse-mode
ip igmp version 3
interface loopback0
ip pim sparse-mode
SW2
! Enable multicast routing between SW1 and SW2
feature pim
ip pim send-rp-announce loopback0 group-list 224.0.0.0/4
ip pim auto-rp mapping-agent loopback0
ip pim ssm range 232.0.0.0/8
ip pim auto-rp forward listen

! Link to OTV2 (Join interface)


interface Ethernet1/36
ip pim sparse-mode
ip igmp version 3
! Link to SW1
interface Ethernet1/38

ip pim sparse-mode
ip igmp version 3
interface loopback0
ip pim sparse-mode
OTV2
feature otv
! Define local OTV identifiers
otv site-vlan 99
otv site-identifier 0x2
! Join interface
interface Ethernet1/48
ip igmp version 3
! Define OTV overlay with join interface, the extended VLAN and multicast
addresses
interface Overlay0
otv join-interface Ethernet1/48
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10
no shutdown
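Before looking at the overlay itself, it can be worth confirming that the multicast transport actually works between SW1 and SW2. A sketch of the checks (whether you need the "vrf OTV" keyword depends on where your PIM-enabled interfaces live in your own setup):

! On SW1: PIM neighborship toward SW2
show ip pim neighbor vrf OTV
! (*,G) and (S,G) state for the OTV control group
show ip mroute 239.1.1.1 vrf OTV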

If you've come this far, you've got the following topology:

At this time, you should have a working OTV adjacency between OTV1 and OTV2. Don't freak out when there's no connectivity at first, as OTV adjacencies can take a while before they are actually forwarding. If there's no connectivity, check the output of this command:
OTV1# sh otv vlan

OTV Extended VLANs and Edge Device State Information (* - AED)


Legend:
(NA) - Non AED, (VD) - Vlan Disabled, (OD) - Overlay Down
(DH) - Delete Holddown, (HW) - HW: State Down
(NFC) - Not Forward Capable
VLAN  Auth. Edge Device                    Vlan State            Overlay
----  -----------------------------------  --------------------  -------
 10   OTV1                                 inactive(33 s left)   Overlay0
If your output is similar to the above output, wait patiently for the countdown
to finish. After that grace period the VLAN status should switch to active.
If everything is working correctly, you should have these types of output:
OTV1# sh otv adjacency
Overlay Adjacency database
Overlay-Interface Overlay0 :
Hostname        System-ID       Dest Addr       Up Time   State
OTV2            001e.bd27.c000  172.16.1.2      00:11:21  UP
OTV1# sh otv vlan
OTV Extended VLANs and Edge Device State Information (* - AED)
Legend:
(NA) - Non AED, (VD) - Vlan Disabled, (OD) - Overlay Down
(DH) - Delete Holddown, (HW) - HW: State Down
(NFC) - Not Forward Capable
VLAN  Auth. Edge Device                    Vlan State            Overlay
----  -----------------------------------  --------------------  -------
 10   OTV1                                 active                Overlay0
If your adjacency is not online, you have some troubleshooting to do. Here are a few things that could help you in that case:
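As a starting point, these are the checks used earlier in this post and in the adjacency-troubleshooting section above (a sketch; run them on the OTV VDCs):

! Overlay state, control/data groups and join interface
show otv
! Did the adjacency form, and is the extended VLAN active on an AED?
show otv adjacency
show otv vlan
! Is the site VLAN up?
show otv site
! Is the control group being joined and are (S,G) entries built in the transport?
show ip mroute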
Something About OTV (Overlay Transport Virtualization)- Basics
Note that VLAN SVIs cannot co-exist on the same router or VDC as OTV transport of those VLANs. The solution is to either do OTV in another router (Nexus 7K or ASR 1000), or do "OTV-on-a-stick". The latter is where you use a trunk for L2 and L3 connectivity from the core router (or other router) to an OTV device or VDC. Or use two links, one at L3 and one at L2. L3 usually ends up on the L3 WAN or site core. L2 may end up there or at the distribution layer, however high in the hierarchical design your server VLANs extend.
(I feel the lack of a diagram here; see the Cisco document above for many diagrams.)
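A minimal sketch of the "OTV-on-a-stick" wiring on the core/aggregation side, assuming a dedicated OTV VDC, VLAN 10 as an extended VLAN and VLAN 99 as the site VLAN (interface numbers and addressing are hypothetical):

! L2 leg: trunk carrying the extended VLAN(s) and the site VLAN to the OTV VDC
interface Ethernet1/1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,99
  no shutdown
! L3 leg: routed link the OTV VDC will use as its join interface toward the WAN/core
interface Ethernet1/2
  ip address 172.16.0.1/30
  no shutdown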

Configure the site ID. It can't hurt, and it's now mandatory (per TFM). As to why, see the next item.

If you have two OTV devices at a site, make sure they can route to each other. OTV now requires both the site VLAN heartbeat and an OTV-side adjacency to operate (required the last time I tested it, anyway). I prefer to have a fairly direct connection for this, i.e. if the L3 side of the OTV device or VDC connects to your site or WAN core, then make sure the two WAN cores are connected to each other at L3. Routing via a site L2 core and crosslink puts a lot of dependencies in the path of that adjacency coming up.


When doing unicast-based OTV, depending on your High Availability requirements, strongly consider having two adjacency servers.

When doing multicast-based OTV, it might be a good idea to (a) verify all OTV site pairs in your WAN support multicast routing and (b) actually verify that multicast gets to the other end (in each direction) with a low packet loss rate, i.e. catch any IPmc problems in your WAN before you end up troubleshooting OTV instability or other odd behavior.

I and some customers are torn between unicast and multicast-based OTV. For many sites, multicast-based OTV has clear benefits. On the other hand, many of us (well, me and a couple of others I've talked to?) feel that IPmc code in general is less mature and likely to be less robust, and that it adds complexity, suggesting there is less overall risk to doing unicast-based OTV in the absence of any factors making IPmc-based OTV more attractive, such as many datacenter sites or a need to transport IPmc flows.

We have been setting the OTV devices up with each having a virtual port channel to the L2 Aggregation switches, and no cross-link between them (no real value to having one). If you cross-connect the OTV pair, you probably want wide vPC from them to the Aggregation pair, assuming the latter are Nexus / Nexus VDC and doing vPC.

The OTV VDC or device can use static routing. Dynamic routing has the virtue of providing logging of adjacency loss when the link goes down, which is usually more conspicuous: links bouncing is normal, while routing adjacency issues tend to get noticed.

Practice troubleshooting in the lab, before your first outage or problem. (This really applies to any technology. Practicing an STP loop can be enlightening and motivating, and it saves a lot of time when you experience your first STP loop.)

Do think about inbound and outbound routing when considering HSRP / FHRP filtering between OTV-connected datacenters. If you have stateful firewalls or load balancers, you may well need to exercise control over default routing, in which case you likely won't always be able to do optimal outbound routing.

Note that the OTV ARP timer should be far less than the MAC aging timer. You want this in general on switches, to avoid or reduce unicast flooding.

Limit which VLANs cross the OTV tunnel(s). Yeah, every-VLAN-everywhere in one datacenter (VLAN entropy) may eventually mean every-VLAN-everywhere-in-every-datacenter. It strikes me as worthwhile risk reduction to fight the battle to limit VLAN scopes as much as possible. It may be a rear-guard action, but it is still worthwhile. YMMV.

Check the current Cisco documentation for the recommended (tested) OTV scalability limits. Exceeding them may bite you. I personally see the MAC address maximum (which is across all extended VLANs) as the limit most sites will hit first.

If you run PIM on the OTV join interface, make sure it is not PIM DR:
adjust the DR priority to ensure this.
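Since the highest priority wins the DR election (and the default is 1), the usual way to guarantee this is to raise the DR priority on the upstream router's interface facing the join interface. A small sketch; the interface name and value are assumptions for illustration:

! On the upstream/core router interface facing the OTV join interface:
! a higher priority makes this router, not the OTV device, the PIM DR
interface Ethernet1/5
  ip pim sparse-mode
  ip pim dr-priority 10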

S-ar putea să vă placă și