
vPC Chalk Talk

Presented By
Srinivas Chaparala
Guru Swaminathan
vPC Overview
vPC Terminology (1 of 2)
vPC Terminology (2 of 2)
Best Practice for vPC component configuration
The vPC peer-keepalive link is:

• A Layer 3 link
• Used at boot-up of the vPC systems
• A guarantee that both peer devices are up before the vPC domain forms, and the mechanism that detects when the vPC peer-link fails to the down state
• Data structure: 32-byte payload, UDP port 3200
• Default timers: keepalive timeout 5 s, keepalive hold-timeout 3 s

• N7k(config-vpc-domain)# peer-keepalive destination ipaddress [source ipaddress | hold-timeout secs | interval msecs {timeout secs}]

• Strong Recommendations:

• When building a vPC peer-keepalive link, use the following in descending order of preference:

• Dedicated link(s) (a 1-Gigabit Ethernet port is enough) configured as L3. A port-channel with 2 × 1G ports is even better.

• Create a dedicated VRF for vPC peer-keepalive link

• As a last resort, route the peer-keepalive link over the Layer 3 infrastructure

• WARNING: Do not configure the vPC peer-keepalive link on top of the vPC peer-link; peer-keepalive messages must not be carried over the vPC peer-link, to avoid fate sharing if the peer-link goes down.
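
As a concrete sketch of these recommendations, a peer-keepalive over a dedicated L3 link in its own VRF might look like this (the interface, domain number, VRF name, and addresses are illustrative, not from the slides):

```
! vPC peer A: dedicated VRF and L3 link for the peer-keepalive
vrf context VPC-KEEPALIVE

interface Ethernet1/48
  no switchport
  vrf member VPC-KEEPALIVE
  ip address 10.255.255.1/30    ! peer B uses 10.255.255.2/30
  no shutdown

vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf VPC-KEEPALIVE
```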
Best Practice for vPC component
configuration
The vPC peer-link is a standard 802.1Q trunk that can perform the following actions:

• Carry vPC and non-vPC VLANs.

• Carry Cisco Fabric Services messages that are tagged with CoS=4 for reliable communication.

• Carry flooded traffic from the other vPC peer device.

• Carry STP BPDUs, HSRP hello messages, and IGMP updates.

Strong Recommendations:

Ensure that member ports are 10-Gigabit Ethernet interfaces.

Use a minimum of two 10-Gigabit Ethernet ports. vPC peer-link member ports can be scaled up to the line card's port-channel capacity (an M1 line card supports up to 8 member ports, while F1 and F2 support up to 16 member ports).

Use at least 2 different line cards to increase the high availability of the peer-link.

Use dedicated 10-Gigabit Ethernet ports with the M1 32-port 10G line card. Do not use shared-mode ports.

Do not insert any device between vPC peers. A peer-link is a point-to-point link.
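
Following the recommendations above, a minimal peer-link built from two 10G ports on different line cards might be configured as follows (port-channel, interface, and VLAN numbers are illustrative):

```
! Member ports spread across two line cards (slots 1 and 2)
interface Ethernet1/1, Ethernet2/1
  channel-group 1 mode active

interface port-channel1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10-20   ! carries vPC and non-vPC VLANs
  vpc peer-link
```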
Stages of vPC initialization

1. VPC manager starts


2. Peer-keepalive comes up (receives keepalives from the peer)
3. Peer-link comes up (data is not passing through yet, just CFS)
4. Primary/Secondary Role resolved
5. Global Consistency check
6. Peer-link is up for data
7. SVIs brought up (VPC + 10 sec)
8. VPCs brought up (SVI + 30 sec)
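
The state reached at each stage can be verified with standard NX-OS show commands:

```
show vpc peer-keepalive   ! stage 2: keepalive status and timers
show vpc role             ! stage 4: primary/secondary role resolution
show vpc brief            ! peer-link, consistency status, and vPC states
```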
vPC Consistency Check

Inconsistency Type    Action                                        Example of inconsistency
Type 1 / Global       VLANs suspended on peer-link; vPCs stay up    Rapid-PVST STP on one peer,
                      with the affected VLANs suspended             MST STP on the other
Type 1 / Interface    VLANs suspended on the affected vPC           MTU mismatch, STP guard
                                                                    config mismatch
Type 2                Syslog message only                           SVI up on one peer, down
                                                                    on the other
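
Type 1 parameters can be compared between peers with the consistency-parameters commands (the port-channel number is illustrative):

```
show vpc consistency-parameters global
show vpc consistency-parameters interface port-channel 10
```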
IP Multicast with vPC

• A receiver sends an IGMP report (join); the access switch sends the join to the right vPC peer.
• The right vPC peer creates (*,G) and adds the vPC to its OIF list (acting as proxy-DR).
• The IGMP join is encapsulated in CFS and sent to the left peer.
• The left peer (the DR) creates (*,G), adding the vPC to its OIF list.
• The DR (left peer) sends a PIM Join to the RP.
• Once (S1,G) traffic starts arriving, the vPC peers resolve which one will be the forwarder for that (S,G): the peer with the best metric to the source, or the primary in a tie. (This mechanism is specific to PIM in vPC mode; normally PIM would use assert.)
• Only the forwarder has OIFs populated in (S,G); the non-forwarder has no vPC SVIs in its OIF list.
• The forwarder sends a copy of each frame to the peer-link for receivers single-connected to the other peer.
• The goal is to let the peer receiving source traffic forward it to receivers behind the vPC without crossing the peer-link (the vPC check would otherwise drop such traffic).

[Diagram: RP and source S1 above the vPC pair; the left peer is primary/DR with (*,G) → VPC and (S1,G) → VPC, the right peer is secondary/proxy-DR with (*,G) → VPC and (S1,G) → null; CFS carries the IGMP join between the peers; the receiver sits below, sending the IGMP join.]
IP Multicast with vPC: Pre-build SPT

• If the DR fails, the proxy-DR becomes DR and copies the OIF list from (*,G) to (S,G), but it also needs to pull traffic from the RP/source, which delays recovery.
• With 'ip pim pre-build-spt', the proxy-DR also sends a PIM Join toward the source/RP to draw the traffic.
• Traffic pulled by the proxy-DR is dropped until it becomes DR; provision uplink and replication bandwidth accordingly.

[Diagram: RP and source S1 above the vPC pair; both peers hold (*,G) → VPC; the forwarder holds (S1,G) → VPC while the other peer holds (S1,G) → null; after the failure the proxy-DR becomes the new DR; the receiver sits below.]
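
The behavior described on this slide is enabled with a single global command on each vPC peer:

```
! NX-OS global configuration, on both vPC peers
ip pim pre-build-spt
```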
IP Multicast with vPC: source behind vPC

• When the source is behind a vPC, both the DR and the proxy-DR add OIFs for the group to (S,G).
• This is because either peer can receive the source traffic and must be able to send it to receivers behind vPCs without crossing the peer-link (otherwise the vPC check would drop the traffic).

[Diagram: RP above the vPC pair; both primary (DR) and secondary (proxy-DR) peers hold (*,G) → VPC2 and (S1,G) → VPC2; source S1 attaches via VPC1, the receiver via VPC2.]

• When vPC is configured on the N7K-F248XP-25 line card (F2) there is no proxy-DR function (due to hardware specifics). Packets are bridged to the DR over the peer-link (the vPC check is modified accordingly for L3 multicast packets on F2 line cards).
Summary: vPC traffic forwarding with Nexus 7000

vPC Enhancements – Graceful Consistency Check
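
The body of this slide is not in the transcript. Graceful consistency check is a vpc-domain option (enabled by default on recent NX-OS releases) that, on a Type 1 mismatch, suspends VLANs only on the secondary peer instead of on both; the domain number below is illustrative:

```
vpc domain 10
  graceful consistency-check
```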
vPC Enhancements – Auto-Recovery
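
Only the title survives here. Auto-recovery lets a peer bring its vPCs up on its own when both peers reload and only one returns; the domain number is illustrative and 240 s is the NX-OS default delay:

```
vpc domain 10
  auto-recovery reload-delay 240
```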
vPC Enhancement – Peer-Gateway
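
Only the title survives here. Peer-gateway lets a vPC peer locally route packets addressed to its peer's router MAC, so such traffic does not need to cross the peer-link; the domain number is illustrative:

```
vpc domain 10
  peer-gateway
```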
No questions, right?
