Student Guide
Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents
This two-day course is designed to introduce the features of the QFX5100 and EX4300 Series Ethernet
Switches including, but not limited to, zero touch provisioning (ZTP), unified in-service software upgrade (ISSU),
multichassis link aggregation (MC-LAG), mixed Virtual Chassis, and Virtual Chassis Fabric (VCF). Students will learn to
configure and monitor these features on the Junos operating system running on the QFX5100 and
EX4300 Series platforms.
Through demonstrations and hands-on labs, students will gain experience configuring, monitoring, and analyzing the
above features of the Junos OS. This course is based on Junos OS Release 13.2X51-D21.1.
Objectives
After successfully completing this course, you should be able to:
• Identify current challenges in today’s data center environments and explain how the QFX5100 system
solves some of those challenges.
• List the various models of QFX5100 Series switches.
• List some data center architecture options.
• Explain the purpose and value of ZTP.
• Describe the components and operations of ZTP.
• Deploy a QFX5100 Series switch using ZTP.
• Explain the purpose and value of ISSU.
• Describe the components and operations of ISSU.
• Upgrade a QFX5100 Series switch using ISSU.
• Explain the purpose and value of MC-LAG.
• Describe the components and operations of MC-LAG.
• Implement an MC-LAG on QFX5100 Series Switches.
• Describe key concepts and components of a mixed Virtual Chassis.
• Explain the operational details of a mixed Virtual Chassis.
• Implement a mixed Virtual Chassis and verify its operations.
• Describe key concepts and components of a Virtual Chassis Fabric.
• Describe the control plane and forwarding plane of a Virtual Chassis Fabric.
• Describe how to use the CLI to configure and monitor a Virtual Chassis Fabric.
• Describe how to provision a Virtual Chassis Fabric using nonprovisioning, preprovisioning, and
autoprovisioning.
• Describe the software requirements and upgrade procedure of Virtual Chassis Fabric.
• Describe how to manage a Virtual Chassis Fabric with Junos Space.
Intended Audience
This course benefits individuals responsible for configuring and monitoring switching features that exist on the Junos OS
running on the QFX5100 and EX4300 Series platforms, including individuals in professional services, sales, and support
organizations, as well as end users.
Course Level
Data Center Switching (DCX) is an intermediate-level course.
Day 1
Chapter 1: Course Introduction
Chapter 2: System Overview
Chapter 3: Zero Touch Provisioning
Zero Touch Provisioning Lab
Chapter 4: In-Service Software Upgrades
In-Service Software Upgrade Lab
Chapter 5: Multichassis Link Aggregation
Multichassis Link Aggregation Lab
Day 2
Chapter 6: Mixed Virtual Chassis
Mixed Virtual Chassis Lab
Chapter 7: Virtual Chassis Fabric
Chapter 8: Virtual Chassis Fabric Management
Virtual Chassis Fabric Lab
• Franklin Gothic – Normal text. Most of what you read in the Lab Guide and Student Guide.
• CLI Input – Text that you must enter. Example: lab@San_Jose> show route
• GUI Input – Text that you must enter. Example: Select File > Save, and type config.ini in the Filename field.
• CLI Undefined – Text where the variable’s value is at the user’s discretion, or where the variable’s value as shown in the lab guide might differ from the value the user must input according to the lab topology. Examples: Type set policy policy-name. ping 10.0.x.y
• GUI Undefined – Same as CLI Undefined, but entered through the GUI. Example: Select File > Save, and type filename in the Filename field.
We Will Discuss:
• Objectives and course content information;
• Additional Juniper Networks, Inc. courses; and
• The Juniper Networks Certification Program.
Introductions
The slide asks several questions for you to answer during class introductions.
Course Contents
The slide lists the topics we discuss in this course.
Prerequisites
The slide lists the prerequisites for this course.
Additional Resources
The slide provides links to additional resources available to assist you in the installation, configuration, and operation of
Juniper Networks products.
Satisfaction Feedback
Juniper Networks uses an electronic survey system to collect and analyze your comments and feedback. Depending on the
class you are taking, please complete the survey at the end of the class, or be sure to look for an e-mail about two weeks after
class completion that directs you to complete an online survey form. (Be sure to provide us with your current e-mail address.)
Submitting your feedback entitles you to a certificate of class completion. We thank you in advance for taking the time to help
us improve our educational offerings.
Courses
You can access the latest Education Services offerings covering a wide range of platforms at
http://www.juniper.net/training/technical_education/.
Junos Genius
The Junos Genius application takes certification exam preparation to a new level. With Junos Genius you can practice for your
exam with flashcards, simulate a live exam in a timed challenge, and even build a virtual network with device achievements
earned by challenging Juniper instructors. Download the app now and Unlock your Genius today!
Find Us Online
The slide lists some online resources to learn and share information about Juniper Networks.
Any Questions?
If you have any questions or concerns about the class you are attending, we suggest that you voice them now so that your
instructor can best address your needs during class.
We Will Discuss:
• Some challenges found in today’s data center;
• The QFX5100 Series switch offerings; and
• Some data center architecture options.
QFX5100-24Q
The QFX5100-24Q is a compact 1 U high-density 40GbE data center access and aggregation switch that includes a base
density of 24 QSFP+ ports. Each QSFP+ socket can be configured to support 40 GbE or as a set of 4 independent 10 GbE
ports using breakout cables. Any of the 24 ports can be configured as either uplink or access ports. The QFX5100-24Q switch
has two module bays for the optional expansion module, QFX-EM-4Q, which can add a total of 8 additional QSFP+ ports to the
chassis. The QFX-EM-4Q ports can also be configured as either access ports or as uplinks. With the two four-port expansion
modules, all 32 ports support wire-speed performance with an aggregate throughput of 2.56 Tbps or 1.44 Bpps per switch.
When fully populated with 2 QFX-EM-4Q Expansion Modules, the QFX5100-24Q has 32 physical QSFP+ ports, which map to
128 logical ports when channelized. However, only 104 logical ports can be used for port channelization. Depending on the
system mode you configure for channelization, different ports are restricted. If you attempt to channelize a restricted port,
the configuration is ignored. The following system modes
are available on the QFX5100-24Q switch:
• Default mode: All 24 QSFP+ ports on the switch (PIC 0) are channelized by default (96 ports). With QFX-EM-4Q
Expansion Modules (PIC 1 and PIC 2), the QSFP+ ports are supported as access or uplink ports, but cannot be
channelized. Ports are oversubscribed in this mode and could be subject to packet loss. You can have one of
two port combinations: 32 40-Gbps QSFP+ ports, or 96 10-Gigabit Ethernet ports plus 8 40-Gbps QSFP+ ports.
Continued on the next page.
{master:0}
user@qfx> request chassis system-mode ?
Possible completions:
mode-104port 104port-mode. This will restart PFE
non-oversubscribed-mode Non-oversubscribed-mode. This will restart PFE
flexi-pic-mode Flexi-pic-mode. This will restart PFE
default-mode Default-mode is oversubscribed mode. This will restart PFE
...
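As a sketch of how these pieces fit together, the system mode is selected with an operational-mode command, while individual QSFP+ ports are channelized under the [edit chassis] hierarchy. Note that either change restarts the PFE, which is disruptive. The mode, FPC, and port numbers below are illustrative only; which ports may be channelized depends on the mode you select:

```
{master:0}
user@qfx> request chassis system-mode non-oversubscribed-mode

{master:0}[edit]
user@qfx# set chassis fpc 0 pic 0 port 0 channel-speed 10g
user@qfx# commit
```

Once channelized, the single 40GbE interface is replaced by four 10GbE interfaces on that port.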
These switches are available with AC or DC power supplies and one of two airflow directions:
• Airflow-in (AFI) – Air comes into the switch through the vents in the field-replaceable units (FRUs).
• Airflow-out (AFO) – Air comes into the switch through the vents in the port panel.
The image on the slide is for an AFI model, which uses blue on the power supplies and fans. The AFO models use orange on
their power supplies and fans. Note that AFI and AFO fans and power supplies cannot be mixed in the same chassis.
For more details on the hardware elements of this switch model, refer to the technical documentation found at: http://
www.juniper.net/techpubs/en_US/release-independent/junos/information-products/pathway-pages/hardware/qfx-series/
qfx5100.html.
QFX5100-48S
The QFX5100-48S model is a compact 1 U 10GbE data center access switch that supports a maximum of 72 logical 10GbE
ports for an aggregate throughput of 1.44 Tbps or 1.08 Bpps. Forty-eight physical ports (0 through 47) support 10 Gbps
small form-factor pluggable plus (SFP+) transceivers. These ports can be configured as access ports. All 48 of these ports
can be used for SFP+ transceivers or SFP+ direct attach copper (DAC) cables. You can use 1-Gigabit Ethernet SFP
transceivers, 10-Gigabit Ethernet SFP+ transceivers, and SFP+ direct attach copper cables in any access port.
The remaining 24 logical ports are available through the six 40 GbE ports (48 through 53) which use QSFP+. Each QSFP+
socket can operate either as a single 40 Gbps port or as a set of 4 independent 10 Gbps ports using QSFP+ breakout cables.
The 40 GbE ports can be configured as either access ports or as uplinks.
These switches are available with AC or DC power supplies and one of two airflow directions:
• Airflow-in (AFI) – Air comes into the switch through the vents in the field-replaceable units (FRUs).
• Airflow-out (AFO) – Air comes into the switch through the vents in the port panel.
The image on the slide is for an AFO model, which uses orange on the power supplies and fans. The AFI models use blue on
their power supplies and fans. Note that AFI and AFO fans and power supplies cannot be mixed in the same chassis.
For more details on the hardware elements of this switch model, refer to the technical documentation found at: http://
www.juniper.net/techpubs/en_US/release-independent/junos/information-products/pathway-pages/hardware/qfx-series/
qfx5100.html.
While not shown on the slide, the QFX5100-48T model offers a similar set of physical characteristics and features. The
primary difference is its integrated RJ-45 ports, which currently support tri-rate speeds of 100 Mbps, 1 Gbps, or 10 Gbps.
QFX5100-96S
The QFX5100-96S switch is a compact 2 U high-density 10GbE aggregation switch with 96 SFP+ and 8 QSFP+ ports. Physical
ports (0 through 95) support 10 Gbps SFP+ transceivers. The eight 40-Gigabit ports (96 through 103) support QSFP+
transceivers and are normally configured as uplinks or Virtual Chassis ports (VCPs).
Although the 104 physical ports of the QFX5100-96S would map to 128 logical ports using channelization, only 104 logical
ports are supported. Because of the 104 port restriction, only two of the eight QSFP+ ports can be channelized. Depending on
how you set the system mode for channelization, the behavior of channelization for the QSFP+ ports changes. The following
system modes are available for the QFX5100-96S switch:
• Non-oversubscribed mode: All 96 SFP+ ports on the switch (PIC 0) are supported. In this mode, the eight QSFP+
ports are not supported and cannot be channelized. There is no packet loss for packets of any size in this mode.
• Default mode: All 96 SFP+ ports on the switch (PIC 0) are supported. QSFP+ ports 96 and 100 can be
channelized. If ports 96 and 100 are channelized, the interfaces on ports 97, 98, 99, 101, 102, and 103 are
disabled.
These switches are available with AC or DC power supplies and one of two airflow directions:
• Airflow-in (AFI) – Air comes into the switch through the vents in the field-replaceable units (FRUs).
• Airflow-out (AFO) – Air comes into the switch through the vents in the port panel.
For more details on the hardware elements of this switch model, refer to the technical documentation found at: http://
www.juniper.net/techpubs/en_US/release-independent/junos/information-products/pathway-pages/hardware/qfx-series/
qfx5100.html.
Resource Allocation
The unified forwarding table (UFT) feature allows you to flexibly allocate the forwarding table space on your QFX5100 Series
switch as you see fit and in a fashion that makes the most sense for your environment. Rather than having a statically defined
space within the table for Layer 2 MAC, Layer 3 host, and longest prefix match (LPM) entries, you can give space to each of the
three categories based on the needs in the network and the role of the device. The Layer 2 MAC category includes all bridge
table entries based on MAC addresses. The Layer 3 host category includes all /32 prefixes included in the route table. The
LPM category includes all prefixes in the route table other than the /32 prefixes.
A number of scenarios have been identified which optimize the forwarding table space for each category. The following output
provides some insight to the available options:
{master:0}[edit]
user@qfx# set chassis forwarding-options ?
Possible completions:
+ apply-groups Groups from which to inherit configuration data
+ apply-groups-except Don't inherit configuration data from these groups
> l2-profile-one MAC: 288K L3-host: 16K LPM: 16K. This will restart PFE
> l2-profile-three (default) MAC: 160K L3-host: 144K LPM: 16K. This will restart PFE
> l2-profile-two MAC: 224K L3-host: 80K LPM: 16K. This will restart PFE
> l3-profile MAC: 96K L3-host: 208K LPM: 16K. This will restart PFE
> lpm-profile MAC: 32K L3-host: 16K LPM: 128K. This will restart PFE
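For example, a deployment that is heavy on Layer 2 MAC entries might select the first profile. A minimal sketch follows; note that, as the CLI warns, committing this change restarts the PFE, which is disruptive:

```
{master:0}[edit]
user@qfx# set chassis forwarding-options l2-profile-one
user@qfx# commit
```

After the PFE restarts, the forwarding table space is reallocated to 288K MAC entries, 16K Layer 3 host entries, and 16K LPM entries.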
Continued on the next page.
Note
While not listed on the slide and not currently supported, the QFX5100 Series switches
will support a number of features that make Software-Defined Networking (SDN)
possible. In future software versions, QFX5100 Series switches will support VXLAN and
OVSDB, which are required in SDN deployments that use VMware’s NSX controllers.
Note
Some features supported on the QFX5100 Series switches, such as BGP,
MPLS, and VCF, require a license. For more details on feature licensing
refer to http://www.juniper.net/techpubs/en_US/junos13.2/topics/
reference/general/qfx-series-software-license-features.html.
We Discussed:
• Some challenges found in today’s data center;
• The QFX5100 Series switch offerings; and
• Some data center architecture options.
Review Questions
1.
2.
3.
We Will Discuss:
• The purpose and value of ZTP;
• The components and operations of ZTP; and
• How to deploy a QFX5100 Series switch using ZTP.
Deploying a Switch
As you deploy a switch in your network, there are a few basic requirements you must meet before that switch provides any
operational value. As illustrated on the slide you must physically rack each switch in its designated location and then connect
it, using the required cables, with other devices on the network. Once the switch is racked and cabled, you then perform the
provisioning tasks required to make the switch ready for network operations.
Some example provisioning tasks include adding the switch to its out-of-band (OOB) management network so it can be
remotely managed using Telnet or SSH, as well as adding other configuration parameters to ensure the switch is a secure and
functional participant on the network. Once the switch can communicate with other devices on the network, you may need to
upgrade its software to ensure it is running the desired version for your specific environment.
Once the basic installation and provisioning tasks are complete, the switch can become a value-added element on the
network. At a quick glance this seems like a simple and straightforward process, right?
Complicating Things
To install and provision a single switch may not be too difficult for a smooth operator like yourself. However, if you are tasked
with installing and provisioning dozens or even hundreds of switches things can get a bit more complicated. With an increased
load of switches that must be properly racked, cabled, and provisioned, you will also see that the time required for such
deployments also increases. You may also find that new and challenging issues are introduced because of improperly
provisioned switches on your network.
Fortunately, as discussed on forthcoming slides, this situation can be simplified in some ways!
We Discussed:
• The purpose and value of ZTP;
• The components and operations of ZTP; and
• How to deploy a QFX5100 Series switch using ZTP.
Review Questions
1.
2.
3.
We Will Discuss:
• The purpose and value of ISSU;
• The components and operations of ISSU; and
• How to upgrade a QFX5100 Series switch using ISSU.
Traditional Upgrades
In the past, software upgrades meant some disruption to service. To limit the impact, software upgrades were, and still are to
a large extent, performed during a maintenance window. These maintenance windows were typically announced to all users
that could potentially be affected by the disruption caused by the upgrade operations. The overall impact to the network and
end-user depended on the role of the devices being upgraded and the resiliency built into the network.
In many networks, especially those where uptime matters, redundant paths exist which allow for a less substantial impact to
the traffic and services supported in the affected network. At a minimum, the active paths associated with the devices being
upgraded are affected while the upgrade is performed, which results in the transit traffic using those paths also being
affected during that time. With the increased demands and expectations on the network, the downtime accepted in
the past has now become unacceptable in most modern network environments.
The Problem
While the introduction of chassis-based systems with redundant Routing Engines along with the various software
enhancements that support high availability have made a significant improvement in some environments, they are not
practical in all environments. One such environment where using a chassis-based system is not practical is the data
center. In data center environments, space in the supporting racks is limited and does not allow for large, chassis-based
switches. The switches used in the data center, often referred to as top of rack (TOR) switches, are small and typically include
a single Routing Engine. This lack of redundancy typically prohibits the use of the software enhancements now available,
including the ability to perform an ISSU available on chassis-based platforms.
A Better Solution
While some vendors claim to support an ISSU-like solution based on a redundant network topology, it is not a true ISSU and
has inherent issues as described on the previous slide. To address this challenge and need in the data center, Juniper
Networks offers, what we consider to be, a better solution using the QFX5100 Series switches.
The QFX5100 Series switches run Junos OS within a virtual machine (VM) on top of a Linux-based host OS. During an ISSU,
Junos OS runs in two separate virtual machines (VMs) in active and standby pairs. The VMs, which represent redundant
Routing Engines, seamlessly move to the newer software version while maintaining operations in the data plane. This true
Topology-Independent ISSU (TISSU), an industry-first software upgrade feature for a fixed-configuration TOR, is supported
across all Layer 2 and Layer 3 protocols and doesn’t need the support of any other switches to perform an image upgrade.
Note
If the ISSU process stops, you can look at the log files to diagnose
the problem. The log files that include ISSU events during a failed
ISSU attempt are located at /var/log/vjunos-log.tgz.
{master:0}[edit]
user@qfx1# set routing-options nonstop-routing
{master:0}[edit]
user@qfx1# commit
configuration check succeeds
commit complete
{master:0}[edit]
user@qfx1# run request system software in-service-upgrade /var/tmp/
jinstall-qfx-5-13.2X51-D21.1-domestic-signed.tgz
Starting ISSU Tue Sep 2 20:50:48 2014
warning: Do NOT use /user during ISSU. Changes to /user during ISSU may get lost!
ISSU: Validating Image
error: 'Non Stop Bridging' not configured
error: aborting ISSU Tue Sep 2 20:50:49 2014
error: ISSU Aborted!
ISSU: IDLE
At this point you would simply need to enable NSB in the configuration. Once all of the required configuration elements are
enabled, the ISSU operations should proceed to the next steps in the ISSU process without any issues.
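The missing statement in the failed validation shown above is Nonstop Bridging (NSB). A minimal sketch of enabling it (the device name follows the earlier examples):

```
{master:0}[edit]
user@qfx1# set protocols layer2-control nonstop-bridging
user@qfx1# commit
```

With NSB, nonstop active routing, and GRES enabled, the ISSU validation step should pass and the upgrade can proceed.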
We Discussed:
• The purpose and value of ISSU;
• The components and operations of ISSU; and
• How to upgrade a QFX5100 Series switch using ISSU.
Review Questions
1.
2.
3.
We Will Discuss:
• The purpose and value of MC-LAGs;
• The components and operations of an MC-LAG; and
• How to implement an MC-LAG on QFX5100 Series switches.
A Potential Problem
While operational continuity is a top priority, it is not guaranteed simply by adding multiple, bundled connections between the
compute resources (servers) and their attached access switch. This design, while improved over a design with a single link,
still includes potential single points of failure including the access switch and the compute device.
While the survivability of compute resources can be handled through the duplication of the impacted resources on some other
physical device in the network, typically done through virtualization technologies, the access switch, in this deployment model,
remains a single point of failure and prohibits the utilization of the attached resources.
A Solution
To eliminate the access switch as being a single point of failure in the data center environment, you can use multichassis link
aggregation. Multichassis link aggregation builds on the standard LAG concept defined in 802.3ad and allows a LAG from one
device, in our example a server, to be spread between two upstream devices, in our example two access switches to which
the server connects. Using multichassis link aggregation avoids the single point of failure scenario related to the access
switches described previously and allows operational continuity for traffic and services, even when one of the two switches
supporting the server fails.
MC-LAG Overview
An MC-LAG allows two similarly configured devices, known as MC-LAG peers, to emulate a logical LAG interface which
connects to a separate device at the remote end of the LAG. The remote LAG endpoint may be a server, as shown in the
example on the slide, or a switch or router depending on the deployment scenario. The two MC-LAG peers appear to the
remote endpoint connecting to the LAG as a single device.
As previously mentioned, MC-LAGs build on the standard LAG concept defined in 802.3ad and provide node-level redundancy
as well as multi-homing support for mission critical deployments. Using MC-LAGs avoids the single point of failure scenario
related to the access switches described previously and allows for operational continuity for traffic and services, even when
one of the two MC-LAG peers supporting the server fails.
MC-LAGs make use of the Inter-Chassis Control Protocol (ICCP), which is used to exchange control information between the
participating MC-LAG peers. We discuss ICCP further on the next slides.
MC-LAG Modes
There are two modes in which an MC-LAG can operate: Active/Standby and Active/Active. Each state type has its own set of
benefits and drawbacks.
Active/Standby mode allows only one MC-LAG peer to be active at a time. Using LACP, the active MC-LAG peer signals to the
attached device (the server in our illustrated example) that its links are available to forward traffic. As you might guess, a
drawback to this method is that only half of the links in the server’s LAG are used at any given time. However, this method is
usually easier to troubleshoot than Active/Active because traffic is not hashed across all links and no shared MAC learning
needs to take place between the MC-LAG peers.
Using the Active/Active mode, all links between the attached device (the server in our illustrated example) and the MC-LAG
peers are active and available for forwarding traffic. Because all links are active, traffic may need to pass between
the MC-LAG peers. The ICL-PL can be used to accommodate the traffic that must pass between the MC-LAG peers. We
demonstrate this on the next slide. Currently, the QFX5100 Series switches only support the Active/Active mode.
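Although the lab covers the full configuration, an MC-AE interface in Active/Active mode is broadly configured along the following lines on each peer. This is a hedged sketch, not the lab configuration: the interface name, LACP system ID, and all IDs below are illustrative assumptions, and both peers must agree on the shared values (LACP system ID, admin key, and MC-AE ID):

```
{master:0}[edit interfaces ae1]
user@qfx1# show
aggregated-ether-options {
    lacp {
        active;
        system-id 00:00:00:00:00:01;
        admin-key 1;
    }
    mc-ae {
        mc-ae-id 1;
        redundancy-group 1;
        chassis-id 0;
        mode active-active;
        status-control active;
    }
}
```

The chassis-id differs between the two peers (0 and 1), and one peer is typically set to status-control active while the other is set to standby.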
{master:0}
user@qfx2> show ethernet-switching table
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static
SE - statistics enabled, NM - non configured MAC, R - remote PE MAC)
Layer 3 Routing
Layer 3 inter-VLAN routing can be provided through MC-LAG peers using IRBs and VRRP. This allows compute devices to
communicate with other devices on different Layer 3 subnets using gateway access through their first-hop infrastructure
device (their directly attached access switch), which can expedite the required communication process.
For simplified Layer 3 gateway services, where Layer 3 routing protocols are not run on the MC-LAG peers, you simply
configure the same Layer 3 gateway IP address on both MC-LAG peers and enable IRB MAC address synchronization. MAC
address synchronization enables MC-LAG peers to forward Layer 3 packets arriving on MC-AE interfaces with either its own
IRB MAC address or its peer’s IRB MAC address. Each MC-LAG peer installs its own IRB MAC address as well as the peer’s IRB
MAC address in its local forwarding table. Each MC-LAG peer treats the packet as if it were its own packet. If IRB MAC address
synchronization is not enabled, the IRB MAC address is installed on the MC-LAG peer as if it was learned on the ICL-PL.
Control packets destined for a particular MC-LAG peer that arrive on an MC-AE interface of its MC-LAG peer are not forwarded
on the ICL-PL interface. Additionally, using the gateway IP address as a source address when you issue a ping,
traceroute, telnet, or FTP request is not supported. To enable IRB MAC address synchronization, issue the set vlan
vlan-name l3_interface irb-name mcae-mac-synchronize command on each MC-LAG peer.
Continued on the next page.
If routing protocols are used on the MC-LAG peers, they should be configured to run on the primary IP address on the peer’s
IRB interface. Any configured protocols run independently on each MC-LAG peer rather than in a unified fashion. To help with
some forwarding operations, the IRB MAC address of each peer is replicated on the other peer and is installed as a MAC
address with the forwarding next hop of the ICL-PL. This is achieved by configuring a static ARP entry for the remote peer as
shown in the following output:
{master:0}[edit interfaces]
user@qfx1# show irb unit 15
family inet {
    address 172.25.15.101/24 {
        arp 172.25.15.102 l2-interface ae0.0 mac dc:38:e1:5d:1c:00;
        vrrp-group 15 {
            virtual-address 172.25.15.1;
            priority 200;
        }
    }
}
Note
The liveness detection intervals determine how often BFD messages are
exchanged and how much time can pass before declaring the remote ICCP
peer as dead. It is recommended that you not use a value less than 200 ms.
The actual value you use will depend on your deployment!
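As a sketch of where these timers live, ICCP peering with BFD liveness detection is configured under the [edit protocols iccp] hierarchy. The addresses and timer values below are illustrative assumptions, not recommendations:

```
{master:0}[edit protocols iccp]
user@qfx1# show
local-ip-addr 10.10.10.1;
peer 10.10.10.2 {
    session-establishment-hold-time 50;
    liveness-detection {
        minimum-interval 1000;
    }
}
```

A mirror-image configuration, with the local and peer addresses swapped, would be applied on the other MC-LAG peer.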
What If...?
In some deployments you may find that the attached server does not support LACP. Because MC-AE interfaces depend on
state information learned through LACP, the MC-AE interfaces will not become operational and therefore cannot carry the
traffic to and from the server. To work around this situation, you can force the member links associated with an MC-AE
interface to the up state using the command illustrated on the slide. Once the interfaces have been forced up using the
illustrated commands, you should see the operational state transition to up, as shown in the following output:
{master:0}[edit]
user@qfx1# commit
configuration check succeeds
commit complete
{master:0}[edit]
user@qfx1# run show interfaces terse | match "ae(1|2)"
et-0/0/50.0 up up aenet --> ae1.0
et-0/0/51.0 up up aenet --> ae2.0
ae1 up up
ae1.0 up up eth-switch
ae2 up up
ae2.0 up up eth-switch
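For reference, the force-up behavior discussed above is applied per member link. A minimal sketch, assuming the member interfaces shown in the output above belong to the MC-AE bundles:

```
{master:0}[edit]
user@qfx1# set interfaces et-0/0/50 ether-options 802.3ad lacp force-up
user@qfx1# set interfaces et-0/0/51 ether-options 802.3ad lacp force-up
user@qfx1# commit
```

Remember that force-up bypasses LACP negotiation, so it should be used only when the attached device genuinely cannot run LACP.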
We Discussed:
• The purpose and value of MC-LAGs;
• The components and operations of an MC-LAG; and
• How to implement an MC-LAG on QFX5100 Series switches.
Review Questions
1.
2.
3.
We Will Discuss:
• Key concepts and components of a mixed Virtual Chassis;
• The operational details of a mixed Virtual Chassis; and
• Implementing a mixed Virtual Chassis and verifying its operations.
CLI Command
For any of the supported switches to support mixed mode, you must enable mixed mode (followed by a reboot) from the CLI.
To enable mixed mode operation, issue the request virtual-chassis mode mixed reboot command. The mode
is set to mixed and the switch then reboots so that the change to both the software and the hardware takes effect.
Behavior Changes
Setting the Virtual Chassis mode to mixed changes both the software and hardware functionality of a Virtual Chassis. In
general, the maximum scaling numbers are reduced to the lowest maximum scaling numbers among the possible member
types. That is, regardless of which switch types (EX4300 or QFX5100) are actually attached to the Virtual Chassis, the
numbers are scaled down significantly. You should set the Virtual Chassis to this mode only if there will be a mix of switch types.
Software Features
In general, you can expect the Junos OS to support the lowest common denominator (LCD) of software features in a mixed
Virtual Chassis. The slide shows the uniform resource locator (URL) where you can view the behavior of many of the software
features when they are used in a mixed scenario.
Components
You can interconnect one to ten EX4300 and QFX Series switches to form a mixed Virtual Chassis.
Each EX4300 and QFX5100 switch has a single Packet Forwarding Engine (PFE). The PFEs in a Virtual Chassis are
interconnected using Virtual Chassis ports (VCPs). Collectively, the PFEs and their connections constitute the Virtual
Chassis backplane.
You can use the built-in QSFP+ VCPs on the rear of the EX4300 switches (or front of QFX5100s) or 10 GbE uplink ports,
converted to VCPs, to interconnect the member switches’ PFEs. To use an uplink port as a VCP, explicit configuration is
required.
Each member switch takes on the role of either master RE, backup RE, or line card. Initially, the two REs are elected to
their roles (see the RE election slide). Once they are elected, the master RE assigns roles to all of the other members. In a
mixed Virtual Chassis environment that includes QFX5100 Series switches, QFX5100s are always elected to the RE role. In
fact, any other type of switch will refuse to become the RE if it knows that a QFX5100 is present in the Virtual Chassis.
Ring Cabling
This slide illustrates the recommended cabling option and provides some related information. The actual cabling distances
are dependent on the cable or optic type and capabilities. Please refer to the latest documentation for your specific platform
to determine actual maximum distances.
In addition to the previously listed features, you can also enable the graceful Routing Engine switchover (GRES) feature as shown
on the slide. GRES enables a device running the Junos OS with redundant REs to continue forwarding traffic even if one RE
fails. GRES preserves interface and kernel information and ensures minimal traffic interruption during a mastership change.
Note that GRES does not preserve the control plane.
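As a hedged sketch, GRES is enabled with a single configuration statement (the prompt and commit step are illustrative):

```
user@switch# set chassis redundancy graceful-switchover
user@switch# commit
```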
Software Requirements
All members of a Virtual Chassis must have the same version of Junos OS installed. Notice that even though the bottom
switch is a member of the Virtual Chassis (member 3), it is placed into the Inactive state. This happens because its version
of software does not match the version on the master RE.
Show Commands
The following slides cover the usage of most of the commands shown on this slide.
VCP Status
Issue the show virtual-chassis vc-port command to determine the status of VCPs. In the output of the command,
each member is represented as an FPC (for example, member 1 equals fpc1). This command shows you how each VCP was
configured and its status, speed, and Virtual Chassis neighbor. Notice that each VCP is assigned a Trunk ID. If the Trunk ID is
-1, that means that it is a non-aggregated VCP. If the Trunk ID of a VCP is a positive integer, that means that it belongs to an
aggregated VCP (two or more VCPs automatically associated with the same Link Aggregation Group).
Network Ports
Neither EX4300 Series nor QFX5100 Series switches have dedicated VCPs. Instead, any 10Gbps SFP+ or 40Gbps QSFP+
interface can be used as either a VCP or network port (standard routable/switchable interface). Use the commands on the
slide to switch from a VCP to a network port and vice versa.
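Conversely, a hedged sketch of returning a VCP to use as a network port (the PIC slot and port number are illustrative):

```
user@switch> request virtual-chassis vc-port delete pic-slot 0 port 48
```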
We Discussed:
• Key concepts and components of a mixed Virtual Chassis;
• The operational details of a mixed Virtual Chassis; and
• Implementing a mixed Virtual Chassis and verifying its operations.
Review Questions
1.
2.
3.
We Will Discuss:
• Key concepts and components of a Virtual Chassis Fabric (VCF); and
• The control and forwarding plane of a VCF.
Overview of VCF
The slide lists the topics we will discuss. We discuss the highlighted topic first.
What Is a VCF?
The Juniper Networks VCF provides a low-latency, high-performance fabric architecture that can be managed as a single
device. VCF is an evolution of the Virtual Chassis feature, which enables you to interconnect multiple devices into a single
logical device, inside of a fabric architecture. The VCF architecture is optimized to support small and medium-sized data
centers that contain a mix of 1-Gbps, 10-Gbps, and 40-Gbps Ethernet interfaces.
A VCF is constructed using a spine-and-leaf architecture. In the spine-and-leaf architecture, each spine device is
interconnected to each leaf device. A VCF supports up to twenty total devices, and up to four devices can be configured as
spine devices. QFX5100 Series switches can be placed in either the Spine or Leaf location while QFX3500, QFX3600, and
EX4300 Series switches should only be wired as Leaf devices in a mixed scenario.
VCF Benefits
The slide shows some of the benefits (similar to Virtual Chassis) of VCF when compared to managing 20 individual switches.
VCF Components
You can interconnect up to 20 QFX5100 Series switches to form a VCF. A VCF can consist of any combination of model
numbers within the QFX5100 family of switches. QFX3500, QFX3600, and EX4300 Series switches are also supported in the
line card role.
Each switch has a Packet Forwarding Engine (PFE). All PFEs are interconnected by Virtual Chassis ports (VCPs). Collectively,
the PFEs and their VCP connections constitute the VCF.
You can use the built-in 40GbE QSFP+ ports or SFP+ uplink ports, converted to VCPs, to interconnect the member switches’
PFEs. To use an uplink port as a VCP, explicit configuration is required.
Spine Nodes
To support the maximum throughput, QFX5100 Series switches should be placed in the Spine positions. It is further
recommended to use the QFX5100-24Q switch in the Spine position. Although any QFX5100 Series switch will work in the
Spine position, the QFX5100-24Q supports up to 32 40GbE QSFP+ ports, which allows for the maximum expansion
possibility (remember that 16 Leaf nodes would take up 16 QSFP+ ports on each Spine). Spines are typically configured in the
RE role (discussed later).
Leaf Nodes
Although not a requirement, it is recommended to use QFX5100 Series devices in the Leaf position. Using a non-QFX5100
Series switch (even just one different switch) requires that the entire VCF be placed into “mixed” mode. When a VCF is placed
into mixed mode, the hardware scaling numbers for the VCF as a whole (MAC table size, routing table size, and many more) are
scaled down to the lowest common denominator among the potential member switches. It is recommended that each Leaf
node has a VCP connection to every Spine node.
Member Roles
The slide shows the different Juniper switches that can participate in a VCF along with their recommended node type (Spine
or Leaf node) as well as their capability to become an RE or line card. It is always recommended to use QFX5100 Series
switches in the Spine position. All other supported switch types should be placed in the Leaf position. In a VCF, only a QFX5100
Series device can assume the RE role (even if you try to make another switch type an RE). Any supported switch type can be
assigned the linecard role.
Master RE
A VCF has two devices operating in the Routing Engine (RE) role—a master Routing Engine and a backup Routing Engine. All
Spine nodes should be configured for the RE role. However, based on the RE election process only two REs will be elected. Any
QFX5100 Series switch that is configured as an RE but is not elected to the master or backup RE role will take on the linecard role.
A QFX5100 Series configured for the RE role but operating in the linecard role can complete all leaf or spine related functions
with no limitations within a VCF.
The device that functions as the master Routing Engine:
• Should (a “must” for Juniper support) be a spine device.
• Manages the member devices.
• Runs the chassis management processes and control protocols.
• Represents all the member devices interconnected within the VCF configuration. (The hostname and other
parameters that you assign to this device during setup apply to all members of the VCF.)
Backup RE
The device that functions as the backup Routing Engine:
• Should (a “must” for Juniper support) be a spine device.
• Maintains a state of readiness to take over the master role if the master fails.
• Synchronizes with the master in terms of protocol states, forwarding tables, and so forth, so that it preserves
routing information and maintains network connectivity without disruption when the master is unavailable.
Linecard Role
The slide describes the functions of the linecard in a VCF.
Deploying VCF
Once you power up any of the switches that support VCF, they all default to Virtual Chassis mode. To operate in VCF mode,
you must set the mode to “fabric”. Once you have enabled “fabric” mode, you then have three choices for provisioning the VCF:
auto-provisioned, preprovisioned, or nonprovisioned (dynamic). These provisioning methods are described briefly on the next
slide. The full details of provisioning are covered in another chapter. This brief discussion is to help you understand the RE
election algorithm that is covered in the next few slides.
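A hedged sketch of the mode change follows; the command reboots the member, and the mixed variant applies only when non-QFX5100 switches will join:

```
# All-QFX5100 VCF:
user@switch> request virtual-chassis mode fabric reboot

# Mixed VCF (for example, with EX4300 Leaf nodes):
user@switch> request virtual-chassis mode fabric mixed reboot
```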
Member ID = FPC #
Every member switch of a VCF is assigned a member ID, a value between 0 and 31. Unless you manually
change a member ID through configuration or a CLI command, once a member is assigned a member ID, the member keeps
that ID forever (even through reboots and power cycles). The member ID value is not only used to identify a member switch;
it is also used as the FPC number. The FPC number is important when configuring interfaces and even when
viewing the output of CLI commands like show chassis hardware.
lab@qfx1> show chassis hardware clei-models
Hardware inventory:
Item Version Part number CLEI code FRU model number
Routing Engine 0 BUILTIN CMMRG00BRA QFX5100-48S-3AFO
Routing Engine 1 BUILTIN CMMRG00BRA QFX5100-48S-3AFO
FPC 0 REV 05 650-056264 CMMRG00BRA QFX5100-48S-3AFO
PIC 0 BUILTIN CMMRG00BRA QFX5100-48S-3AFO
...
FPC 1 REV 05 650-056264 CMMRG00BRA QFX5100-48S-3AFO
PIC 0 BUILTIN CMMRG00BRA QFX5100-48S-3AFO
...
FPC 2 REV 06 650-044936 IPMVU10FRA EX4300-24T
PIC 0 REV 06 BUILTIN IPMVU10FRA EX4300-24T
...
FPC 3 REV 06 650-044936 IPMVU10FRA EX4300-24T
PIC 0 REV 06 BUILTIN IPMVU10FRA EX4300-24T
...
Interface Naming
As a standalone switch, member 2’s highlighted interfaces would be named ge-0/0/0 and ge-0/0/1. As a standalone switch,
member 3’s highlighted interfaces would be named et-0/0/22 and et-0/0/23. However, since each of the switches is a
member of a VCF, all interfaces must be named using the member ID as the FPC number as shown in the slide.
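For instance, a hedged configuration line for one of member 2’s highlighted ports follows; the unit and family values are assumptions for illustration, but the FPC field of the interface name must be the member ID:

```
user@vcf# set interfaces ge-2/0/0 unit 0 family ethernet-switching
```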
Smart Trunks
There are several types of trunks that you will find in a VCF.
1. Automatic Fabric Trunks - When there are two VCPs between members (2x40G between member 4 and 0) they
are automatically aggregated together to form a single logical connection using Link Aggregation Groups (LAGs).
2. Next Hop Trunks (NH-Trunks) - These are directly attached VCPs between the local member and any other
member. In the slide, NHT1, NHT2, NHT3, and NHT4 are the NH-trunks for member 4.
3. Remote Destination Trunks (RD-Trunks) - These are the multiple, calculated paths between one member and a
remote member. These are discussed on the next slide.
RD-Trunks
The slide shows how member 4 is able to determine (using what it learns in the VCCP LSAs) multiple paths to a remote
member (member 15 in the example). The paths do not need to be equal cost paths. All links between members are 40Gbps
except for the link between member 4 and 0 (80Gbps) and the link between member 3 and 15 (10Gbps). Based on the
minimum bandwidth of the path, member 4 will assign a weight to each path. This is shown in the next slide.
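The weighting idea can be sketched in a few lines of Python. This is not the Junos implementation; the path data mirrors the slide’s example topology, and the normalization of weights to fractions is an assumption for illustration:

```python
# Hypothetical sketch of RD-trunk path weighting: each path's share of
# traffic is proportional to the minimum link bandwidth along that path.

def rd_trunk_weights(paths):
    """Map each path name to its normalized traffic share (sums to 1)."""
    # A path is only as fast as its slowest link.
    mins = {name: min(links) for name, links in paths.items()}
    total = sum(mins.values())
    return {name: m / total for name, m in mins.items()}

# Paths from member 4 to member 15: all links are 40 Gbps except the
# member 4 -> 0 trunk (80 Gbps) and the member 3 -> 15 link (10 Gbps).
paths = {
    "via-spine-0": [80, 40],
    "via-spine-1": [40, 40],
    "via-spine-2": [40, 40],
    "via-spine-3": [40, 10],
}
weights = rd_trunk_weights(paths)
```

With these inputs, the path through the 10Gbps link receives one quarter of the share that each 40Gbps-limited path receives.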
Fabric Header
A fabric header is used to pass frames over VCPs. In the case of layer 2 switching, when an Ethernet frame arrives, the
inbound member will perform an Ethernet switching table lookup (based on MAC address) to determine the destination
member and port. After that, the inbound member encapsulates the incoming frame in the fabric header. The fabric header
specifies the destination member and port (among other things). All members along the path will forward the encapsulated
frame by performing lookups on the fabric header only. Once the frame reaches the destination member, the fabric header is
removed and the Ethernet frame is sent out of the destination port without a second MAC table lookup.
Load Balancing
The slide lists two sets of inputs to the hash that is used to load-balance traffic over RD-trunks. There are also inputs for IPv6
packets, which include the Next Header field, source and destination ports, and source and destination IP addresses. These inputs to
the hash algorithm are hard-coded in the PFE and cannot be modified.
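The flow-hashing idea can be sketched as follows. This is purely illustrative: the real hash function and field selection are hard-coded in the PFE, and SHA-256 here is just a stand-in:

```python
# Illustrative flow-hashing sketch: hash selected header fields to pick one
# member link of an RD-trunk, so every packet of a flow takes the same path.
import hashlib

def pick_link(src_ip, dst_ip, proto, src_port, dst_port, num_links):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# The same 5-tuple always maps to the same link...
first = pick_link("10.0.0.1", "10.0.0.2", 6, 5000, 80, 4)
again = pick_link("10.0.0.1", "10.0.0.2", 6, 5000, 80, 4)

# ...while many distinct flows spread across the trunk's links.
spread = {pick_link("10.0.0.1", f"10.0.1.{i}", 6, 5000, 80, 4)
          for i in range(50)}
```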
BUM Forwarding
An Ethernet frame with an unknown destination MAC address arrives on a network port on Member 4. Member 4 forwards the
frame along one of the 16 Bidirectional MDTs so that all members of the fabric receive a copy. Each member then strips the
fabric header and sends the frame out of all interfaces associated with the VLAN.
We Discussed:
• Key concepts and components of a Virtual Chassis Fabric (VCF); and
• The control and forwarding plane of a VCF.
Review Questions
1.
2.
3.
We Will Discuss:
• How to use the CLI to configure and monitor a Virtual Chassis Fabric (VCF);
• How to provision a VCF using nonprovisioning, preprovisioning, and auto-provisioning;
• The software requirements for a VCF; and
• How to manage a VCF with Junos Space.
Configuration Mode
The slide shows the configuration statements that can be applied to a VCF and that will appear in the active and candidate
configurations.
Split VCF
Every member of a VCF has the ability to detect a split in the fabric. A split occurs when one or more switches are unable to
communicate with any of the other members (1 or more) over VCPs. In the example, member 1 has lost VCP connectivity to
the rest of the VCF. In this case, all members (0, 1, 2, and 3) detect the split and begin to form new VCFs. One fabric will form
that includes members 0, 2, and 3. Another VCF will form consisting of only member 1. Once the new fabrics are formed and
a master RE is elected (per fabric), the master RE for each fabric will determine whether its fabric will remain active or be
deactivated. By default, only one fabric will remain in the active state.
Reactivating a VCF
Now that member 1 has had its fabric deactivated, it will remain deactivated until one of two things occurs. First, if the VCP
connectivity is restored between member 1 and the rest of the original VCF (detected using the VCF ID), then member 1 will
automatically reactivate. Second, a deactivated fabric can be reactivated by issuing the request virtual-chassis
reactivate command. Prior to reactivating an inactive fabric, please make sure that no forwarding loops will be formed
once the fabric becomes active.
Renumber a Member
The master switch typically assumes a member ID of 0 because it is the first switch powered on. Member IDs can be assigned
manually using the preprovisioned configuration method or dynamically from the master switch.
If assigned dynamically, the master switch assigns each member added to the VCF a member ID from 1 through 9, making
the complete member ID range 0–9. The master assigns each switch a member ID based on the sequence in which the switch was
added to the VCF system. The member ID associated with each member switch is preserved, for the sake of consistency,
across reboots. This preservation is helpful because the member ID is also a key reference point when naming individual
interfaces. The member ID serves the same purpose as a slot number when configuring interfaces.
The slide shows how you can use the request virtual-chassis renumber command to renumber a particular
member.
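For example, a hedged invocation (the member IDs are illustrative) that renumbers member 3 to member 5:

```
user@vcf> request virtual-chassis renumber member-id 3 new-member-id 5
```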
Disable a VCP
The slide shows how you can disable an individual VCP.
Show Commands
The slide shows the various show commands that are available to determine the status of a VCF. Most of these commands
will be used in the example provisioning slides later in this chapter.
VCF Provisioning
The slide shows the various options for provisioning a VCF. The following slides discuss each method in detail.
Autoprovisioning
Auto-provisioning is similar to preprovisioning except that only the Spines need to be preprovisioned. Once the Spines are
provisioned, Leaf nodes can be added without any changes in configuration mode. Essentially, this allows you to plug and
play with Leaf nodes (as with dynamic provisioning). You can optionally enable LLDP on VCP interfaces (it is enabled on all
network ports in the factory-default state) to enable the members to automatically convert network ports to VCPs.
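A hedged sketch of an auto-provisioned configuration, where only the two Spine members are listed (the serial numbers are placeholders, not real devices):

```
[edit virtual-chassis]
user@vcf# set auto-provisioned
user@vcf# set member 0 role routing-engine serial-number TA0000000001
user@vcf# set member 1 role routing-engine serial-number TA0000000002
```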
Software Upgrade
Any member can be upgraded by issuing upgrade commands on the master RE.
Software Requirements
All members of a Virtual Chassis must have the same version of Junos OS installed. If the master RE detects a new member
that has a mismatched version of software, it places the new member into the Inactive state. This happens because its
version of software does not match the version on the master RE.
Automatic Upgrade
Instead of manually upgrading member switches as they are added to a Virtual Chassis, you can have the master RE upgrade
newly added switches automatically at the moment they are added. The slide shows the configuration necessary to
automatically upgrade newly added member switches.
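A hedged sketch of the statement involved (the package path and filename are illustrative):

```
user@vcf# set virtual-chassis auto-sw-update package-name /var/tmp/jinstall-qfx-5-13.2X51-D21.1-domestic-signed.tgz
```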
All members of the VCF must be running the same version of the Junos software. NSR and graceful Routing Engine switchover
(GRES) must be enabled. Optionally, you can enable NSB. Enabling NSB ensures that all NSB-supported Layer 2 protocols
operate seamlessly during the Routing Engine switchover that is part of the NSSU. Another step that you might want to
consider is to back up the current system software—Junos OS, the active configuration, and log files—on each member to an
external storage device with the request system snapshot command.
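A hedged sketch of the prerequisite statements follows; the location of the NSB statement in the hierarchy varies by platform and release, so treat it as an assumption:

```
# GRES and NSR are required for NSSU:
user@vcf# set chassis redundancy graceful-switchover
user@vcf# set routing-options nonstop-routing

# NSB is optional; statement hierarchy varies by platform/release:
user@vcf# set protocols layer2-control nonstop-bridging

# Optional backup of software, configuration, and logs:
user@vcf> request system snapshot
```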
The remainder of this section provides details of how to discover a VCF through Junos Space along with some of the key
functional areas within the Network Director application used to manage and monitor a VCF.
In addition to the configuration elements shown on the slide, you must also ensure that a VCF and the Junos Space server can
communicate for proper discovery and maintenance tasks.
To initiate the discovery process in Network Director you select the Discover Devices task option and click on the Add
button as shown on the slide.
The Result!
Once the discovery operation has finished you will see a report showing the results of the attempted discovery operation. As
shown on the slide, the sample discovery operation has succeeded.
Build Mode
This slide and the next few slides illustrate the various modes in Network Director. This slide specifically covers the Build
mode, which is where you perform device discovery, inventory verification, configuration creation, configuration validation as
well as other tasks that relate to adding or enhancing a device’s functionality through the Network Director application.
Deploy Mode
This slide covers the Deploy mode, which is where you deploy configurations, reconcile configuration issues that may exist,
manage software images for the managed devices as well as any other tasks that relate to system configuration and software
image management performed through Network Director.
Monitor Mode
This slide covers the Monitor mode, which is where you monitor traffic, system utilization, sessions, and system status. You can
also perform some troubleshooting operations in this mode as well as any other system verification tasks performed through
Network Director.
Fault Mode
This slide covers the Fault mode, which is where you verify and manage fault management of the system and its
components through Network Director.
Report Mode
This slide covers the Report mode, which is where you generate, manage, and run reports for a VCF and its components
through Network Director.
We Discussed:
• How to use the CLI to configure and monitor a Virtual Chassis Fabric (VCF);
• How to provision a VCF using nonprovisioning, preprovisioning, and autoprovisioning;
• The software requirements for a VCF; and
• How to manage a VCF with Junos Space.
Review Questions
1.
2.
3.
www.juniper.net
Data Center Switching
Acronym List
CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .command-line interface
GRES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . graceful Routing Engine switchover
GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .graphical user interface
ICCP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Inter-Chassis Control Protocol
ICL-PL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .interchassis link-protection link
ISSU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . unified in-service software upgrade
JNCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juniper Networks Certification Program
LACP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Link Aggregation Control Protocol
LCD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lowest common denominator
LLDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Link Layer Discovery Protocol
LSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . link-state advertisement
MC-LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Multichassis Link Aggregation
MDT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multicast distribution tree
NSSU. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .nonstop software upgrade
OOB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . out-of-band
PFE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Packet Forwarding Engine
RE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Routing Engine
STP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spanning Tree Protocol
URL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . uniform resource locator
VCCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Chassis Control Protocol
VCF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Chassis Fabric
VCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Chassis port
VME. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual management Ethernet interface
vty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual terminal
ZTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . zero touch provisioning