
Data Center Switching

13.a

Student Guide

Worldwide Education Services

1133 Innovation Way


Sunnyvale, CA 94089
USA
408-745-2000
www.juniper.net

Course Number: EDU-JUN-DCX


This document is produced by Juniper Networks, Inc.
This document or any part thereof may not be reproduced or transmitted in any form under penalty of law, without the prior written permission of Juniper Networks Education
Services.
Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The
Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service
marks are the property of their respective owners.
Data Center Switching Student Guide, Revision 13.a
Copyright © 2014 Juniper Networks, Inc. All rights reserved.
Printed in USA.
Revision History:
Revision 13.a—November 2014
The information in this document is current as of the date listed above.
The information in this document has been carefully verified and is believed to be accurate for Junos OS software Release 13.2X51-D21.1. Juniper Networks assumes no
responsibilities for any inaccuracies that may appear in this document. In no event will Juniper Networks be liable for direct, indirect, special, exemplary, incidental, or
consequential damages resulting from any defect or omission in this document, even if advised of the possibility of such damages.

Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents

Chapter 1: Course Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1

Chapter 2: System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1


Data Center Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
QFX5100 Series Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Architectures and Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-17

Chapter 3: Zero Touch Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1


Understanding Zero Touch Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
ZTP in Action: A Working Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Zero Touch Provisioning Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-29

Chapter 4: In-Service Software Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1


Understanding ISSU on QFX5100 Series Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
ISSU in Action: A Working Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
In-Service Software Upgrade Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22

Chapter 5: Multichassis Link Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Multichassis Link Aggregation Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Multichassis Link Aggregation Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Deploying Multichassis Link Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Multichassis Link Aggregation Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-37

Chapter 6: Mixed Virtual Chassis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1


Overview of Mixed Virtual Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Provisioning a Mixed Virtual Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
Software Requirements and Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-25
Configuring and Monitoring a Mixed Virtual Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-30
Mixed Virtual Chassis Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-38

Chapter 7: Virtual Chassis Fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1


Overview of VCF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
VCF Control and Forwarding Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30

Chapter 8: Virtual Chassis Fabric Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1


Managing a VCF Using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Dynamic Provisioning of a VCF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-19
Preprovisioning and Autoprovisioning a VCF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Software Requirements and Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-52
Managing a VCF with Junos Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-57
Virtual Chassis Fabric Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-75

Acronym List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ACR-1

www.juniper.net Contents • iii


Course Overview

This two-day course introduces the features of the QFX5100 and EX4300 Series Ethernet
Switches, including, but not limited to, zero touch provisioning (ZTP), unified in-service software upgrade (ISSU),
multichassis link aggregation (MC-LAG), mixed Virtual Chassis, and Virtual Chassis Fabric (VCF). Students will learn to
configure and monitor these features of the Junos operating system running on the QFX5100 and
EX4300 Series platforms.
Through demonstrations and hands-on labs, students will gain experience configuring, monitoring, and analyzing these
features of the Junos OS. This course is based on Junos OS Release 13.2X51-D21.1.
Objectives
After successfully completing this course, you should be able to:
• Identify current challenges in today’s data center environments and explain how the QFX5100 system
solves some of those challenges.
• List the various models of QFX5100 Series switches.
• List some data center architecture options.
• Explain the purpose and value of ZTP.
• Describe the components and operations of ZTP.
• Deploy a QFX5100 Series switch using ZTP.
• Explain the purpose and value of ISSU.
• Describe the components and operations of ISSU.
• Upgrade a QFX5100 Series switch using ISSU.
• Explain the purpose and value of MC-LAG.
• Describe the components and operations of MC-LAG.
• Implement an MC-LAG on QFX5100 Series Switches.
• Describe key concepts and components of a mixed Virtual Chassis.
• Explain the operational details of a mixed Virtual Chassis.
• Implement a mixed Virtual Chassis and verify its operations.
• Describe key concepts and components of a Virtual Chassis Fabric.
• Describe the control plane and forwarding plane of a Virtual Chassis Fabric.
• Describe how to use the CLI to configure and monitor a Virtual Chassis Fabric.
• Describe how to provision a Virtual Chassis Fabric using nonprovisioning, preprovisioning, and
autoprovisioning.
• Describe the software requirements and upgrade procedure of Virtual Chassis Fabric.
• Describe how to manage a Virtual Chassis Fabric with Junos Space.
Intended Audience
This course benefits individuals responsible for configuring and monitoring switching features of the Junos OS
running on the QFX5100 and EX4300 Series platforms, including individuals in professional services, sales, and support
organizations, as well as end users.
Course Level
Data Center Switching (DCX) is an intermediate-level course.



Prerequisites
The following are the prerequisites for this course:
• Understanding of the OSI model;
• Junos OS configuration experience—the Introduction to the Junos Operating System (IJOS) course or
equivalent;
• Intermediate routing knowledge—the Junos Routing Essentials (JRE) course or equivalent; and
• Intermediate switching knowledge—the Junos Enterprise Switching Using Enhanced Layer 2 Software
(JEX-ELS) course or equivalent.



Course Agenda

Day 1
Chapter 1: Course Introduction
Chapter 2: System Overview
Chapter 3: Zero Touch Provisioning
Zero Touch Provisioning Lab
Chapter 4: In-Service Software Upgrades
In-Service Software Upgrade Lab
Chapter 5: Multichassis Link Aggregation
Multichassis Link Aggregation Lab
Day 2
Chapter 6: Mixed Virtual Chassis
Mixed Virtual Chassis Lab
Chapter 7: Virtual Chassis Fabric
Chapter 8: Virtual Chassis Fabric Management
Virtual Chassis Fabric Lab



Document Conventions

CLI and GUI Text


Frequently throughout this course, we refer to text that appears in a command-line interface (CLI) or a graphical user
interface (GUI). To make the language of these documents easier to read, we distinguish GUI and CLI text from chapter
text according to the following table.

Style: Franklin Gothic
Description: Normal text.
Usage example: Most of what you read in the Lab Guide and Student Guide.

Style: Courier New
Description: Console text (screen captures and noncommand-related syntax) and GUI text elements (menu names and text field entries).
Usage examples:
  commit complete
  Exiting configuration mode
  Select File > Open, and then click Configuration.conf in the Filename text box.

Input Text Versus Output Text


You will also frequently see cases where you must enter input text yourself. Often these instances will be shown in the
context of where you must enter them. We use bold style to distinguish text that is input versus text that is simply
displayed.

Style: Normal CLI and Normal GUI
Description: No distinguishing variant.
Usage examples:
  Physical interface: fxp0, Enabled
  View configuration history by clicking Configuration > History.

Style: CLI Input and GUI Input
Description: Text that you must enter.
Usage examples:
  lab@San_Jose> show route
  Select File > Save, and type config.ini in the Filename field.

Defined and Undefined Syntax Variables


Finally, this course distinguishes between regular text and syntax variables, and it also distinguishes between syntax
variables where the value is already assigned (defined variables) and syntax variables where you must assign the value
(undefined variables). Note that these styles can be combined with the input style as well.

Style: CLI Variable and GUI Variable
Description: Text where the variable value is already assigned.
Usage examples:
  policy my-peers
  Click my-peers in the dialog.

Style: CLI Undefined and GUI Undefined
Description: Text where the variable's value is at the user's discretion, or where the variable's value as shown in the lab guide might differ from the value the user must input according to the lab topology.
Usage examples:
  Type set policy policy-name.
  ping 10.0.x.y
  Select File > Save, and type filename in the Filename field.



Additional Information

Education Services Offerings


You can obtain information on the latest Education Services offerings, course dates, and class locations from the World
Wide Web by pointing your Web browser to: http://www.juniper.net/training/education/.
About This Publication
The Data Center Switching Student Guide was developed and tested using Junos OS Release 13.2X51-D21.1.
Previous and later software versions might behave differently, so you should always consult the documentation and
release notes for the version of code you are running before reporting errors.
This document is written and maintained by the Juniper Networks Education Services development team. Please send
questions and suggestions for improvement to training@juniper.net.
Technical Publications
You can print technical manuals and release notes directly from the Internet in a variety of formats:
• Go to http://www.juniper.net/techpubs/.
• Locate the specific software or hardware release and title you need, and choose the format in which you
want to view or print the document.
Documentation sets and CDs are available through your local Juniper Networks sales office or account representative.
Juniper Networks Support
For technical support, contact Juniper Networks at http://www.juniper.net/customers/support/, or at 1-888-314-JTAC
(within the United States) or 408-745-2121 (from outside the United States).


Chapter 1: Course Introduction



We Will Discuss:
• Objectives and course content information;
• Additional Juniper Networks, Inc. courses; and
• The Juniper Networks Certification Program.


Introductions
The slide asks several questions for you to answer during class introductions.


Course Contents
The slide lists the topics we discuss in this course.


Prerequisites
The slide lists the prerequisites for this course.


General Course Administration


The slide documents general aspects of classroom administration.


Training and Study Materials


The slide describes Education Services materials that are available for reference both in the classroom and online.


Additional Resources
The slide provides links to additional resources available to assist you in the installation, configuration, and operation of
Juniper Networks products.


Satisfaction Feedback
Juniper Networks uses an electronic survey system to collect and analyze your comments and feedback. Depending on the
class you are taking, please complete the survey at the end of the class, or look for an e-mail about two weeks after
class completion that directs you to complete an online survey form. (Be sure to provide us with your current e-mail address.)
Submitting your feedback entitles you to a certificate of class completion. We thank you in advance for taking the time to help
us improve our educational offerings.


Juniper Networks Education Services Curriculum


Juniper Networks Education Services can help ensure that you have the knowledge and skills to deploy and maintain
cost-effective, high-performance networks for both enterprise and service provider environments. We have expert training
staff with deep technical and industry knowledge, providing you with instructor-led hands-on courses in the classroom and
online, as well as convenient, self-paced eLearning courses.

Courses
You can access the latest Education Services offerings covering a wide range of platforms at 
http://www.juniper.net/training/technical_education/.


Juniper Networks Certification Program


A Juniper Networks certification is the benchmark of skills and competence on Juniper Networks technologies.


Juniper Networks Certification Program Overview


The Juniper Networks Certification Program (JNCP) consists of platform-specific, multitiered tracks that enable participants to
demonstrate competence with Juniper Networks technology through a combination of written proficiency exams and
hands-on configuration and troubleshooting exams. Successful candidates demonstrate a thorough understanding of
Internet and security technologies and Juniper Networks platform configuration and troubleshooting skills.
The JNCP offers the following features:
• Multiple tracks;
• Multiple certification levels;
• Written proficiency exams; and
• Hands-on configuration and troubleshooting exams.
Each JNCP track has one to four certification levels—Associate-level, Specialist-level, Professional-level, and Expert-level. The
Associate-level, Specialist-level, and Professional-level exams are computer-based exams composed of multiple choice
questions administered at Pearson VUE testing centers worldwide.
Expert-level exams are composed of hands-on lab exercises administered at select Juniper Networks testing centers. Please
visit the JNCP website at http://www.juniper.net/certification for detailed exam information, exam pricing, and exam
registration.


Preparing and Studying


The slide lists some options for those interested in preparing for Juniper Networks certification.


Junos Genius
The Junos Genius application takes certification exam preparation to a new level. With Junos Genius you can practice for your
exam with flashcards, simulate a live exam in a timed challenge, and even build a virtual network with device achievements
earned by challenging Juniper instructors. Download the app now and Unlock your Genius today!


Find Us Online
The slide lists some online resources to learn and share information about Juniper Networks.


Any Questions?
If you have any questions or concerns about the class you are attending, we suggest that you voice them now so that your
instructor can best address your needs during class.


Chapter 2: System Overview



We Will Discuss:
• Some challenges found in today’s data center;
• The QFX5100 Series switch offerings; and
• Some data center architecture options.


Data Center Challenges


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Challenges in Traditional Data Center Environments


Data centers built more than a few years ago face one or more of the following challenges:
• The legacy multitier switching architecture cannot provide today’s applications and users with predictable
latency and uniform bandwidth. This problem is made worse when virtualization is introduced, where the
performance of virtual machines (VMs) depends on the physical location of the servers hosting those VMs.
• Managing an ever-growing data center is becoming increasingly taxing from an administrative standpoint.
While the north-to-south boundaries have been fixed for years, the east-to-west boundaries have not stopped
growing. This growth, of both the compute and infrastructure elements, requires a new management approach.
• The power consumed by networking gear represents a significant proportion of the overall power consumed in
the data center. This challenge is particularly important today, when escalating energy costs are putting
additional pressure on budgets.
• The increasing performance and density of modern CPUs have led to an increase in network traffic. The network
is often not equipped to deal with the large bandwidth demands and the increased number of media access control
(MAC) addresses and IP addresses on each network port.
• Separate networks for Ethernet data and storage traffic must be maintained, adding to the training and
management budget. Siloed Layer 2 domains increase the overall costs of the data center environment. In
addition, outages related to the legacy behavior of the Spanning Tree Protocol (STP), which is used to support
these legacy environments, often result in lost revenue and unhappy customers.
Given these challenges, along with others, data center operators are seeking solutions.


Addressing the Challenges


Juniper Networks has introduced the QFX5100 Series switches to offer a solution to many of the aforementioned challenges
found in legacy data center environments. The QFX5100 Series switches come in different hardware configurations and
support several different architectures and deployment options. The subsequent sections provide a closer look at the various
models and some of the architectures and features they support.


QFX5100 Series Switches


The slide highlights the topic we discuss next.


QFX5100 Series Switches


A number of models exist within the family of QFX5100 Series switches. Each model has its own unique set of characteristics,
but all models share some similarities. This slide highlights some of the common characteristics and design elements shared by all
QFX5100 Series switches. We describe some of the unique characteristics of each model on the next few slides.


QFX5100-24Q
The QFX5100-24Q is a compact 1 U high-density 40GbE data center access and aggregation switch that includes a base
density of 24 QSFP+ ports. Each QSFP+ socket can be configured as a single 40GbE port or as a set of four independent
10GbE ports using breakout cables. Any of the 24 ports can be configured as either uplink or access ports. The QFX5100-24Q switch
has two module bays for the optional QFX-EM-4Q expansion module, which can add a total of 8 additional QSFP+ ports to the
chassis. The QFX-EM-4Q ports can also be configured as either access ports or as uplinks. With two four-port expansion
modules installed, all 32 ports support wire-speed performance with an aggregate throughput of 2.56 Tbps or 1.44 Bpps per switch.
When fully populated with two QFX-EM-4Q expansion modules, the QFX5100-24Q has 32 physical ports, which would map to
128 logical ports using channelization. However, only 104 logical ports can be used for port channelization. Depending on the
system mode you configure for channelization, different ports are restricted. If you attempt to channelize a restricted port, the
configuration is ignored. The following system modes are available on the QFX5100-24Q switch:
• Default mode: All 24 QSFP+ ports on the switch (PIC 0) are channelized by default (96 ports). With QFX-EM-4Q
Expansion Modules (PIC 1) and (PIC 2), the QSFP+ ports can be used as access or uplink ports, but cannot be
channelized. Ports are oversubscribed in this mode and can be subject to packet loss. You can have one of
two port combinations: 32 40-Gbps QSFP+ ports, or 96 10-Gigabit Ethernet ports plus 8 40-Gbps QSFP+ ports.
• 104 port mode: All 24 QSFP+ ports on the switch (PIC 0) are channelized (96 ports). Two ports on QFX-EM-4Q
Expansion Module (PIC 1) are also channelized (8 additional). In this mode, ports 0 and 2 are channelized by
default and ports 1 and 3 are disabled. If additional QSFP+ ports are detected in an expansion module (PIC 2),
those ports are ignored.
• Flexi-pic mode: Ports 0 through 3 of the switch cannot be channelized; ports 4 through 23 are channelized by
default (80 ports). With QFX-EM-4Q Expansion Modules (PIC 1) and (PIC 2), the QSFP+ ports can be used as
access or uplink ports, but cannot be channelized (32 additional).
• Non-oversubscribed mode: All 24 QSFP+ ports on the switch (PIC 0) are channelized (96 ports). Expansion
modules on PIC 1 and PIC 2 are not supported and cannot be channelized. There is no packet loss for packets of
any size in this mode.
You can determine the current mode and change the mode using the commands that follow:
{master:0}
user@qfx> show chassis system-mode
fpc0:
--------------------------------------------------------------------------
Current System-Mode Configuration:
default-mode

{master:0}
user@qfx> request chassis system-mode ?
Possible completions:
  mode-104port             104port-mode. This will restart PFE
  non-oversubscribed-mode  Non-oversubscribed-mode. This will restart PFE
  flexi-pic-mode           Flexi-pic-mode. This will restart PFE
  default-mode             Default-mode is oversubscribed mode. This will restart PFE
...
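As the help output above shows, you change the system mode from operational mode with the request chassis system-mode command. The following sketch places the switch in flexi-pic mode and then explicitly channelizes one of the eligible ports with the channel-speed configuration statement; the FPC, PIC, and port numbers are illustrative, and remember that changing the system mode restarts the Packet Forwarding Engine:

{master:0}
user@qfx> request chassis system-mode flexi-pic-mode

{master:0}[edit]
user@qfx# set chassis fpc 0 pic 0 port 4 channel-speed 10g

{master:0}[edit]
user@qfx# commit

After the commit, port 4 operates as four independent 10GbE interfaces (xe-0/0/4:0 through xe-0/0/4:3) rather than as a single 40GbE et-0/0/4 interface.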
These switches are available with AC or DC power supplies and one of two airflow directions:
• Air-flow-in (AFI) – Air comes into the switch through the vents in the field-replaceable units (FRUs)
• Airflow-out (AFO) – Air comes into the switch through the vents in the port panel.
The image on the slide is for an AFI model, which uses blue on the power supplies and fans. The AFO models use orange on
their power supplies and fans. Note that AFI and AFO fans and power supplies cannot be mixed in the same chassis.
For more details on the hardware elements of this switch model, refer to the technical documentation found at: http://
www.juniper.net/techpubs/en_US/release-independent/junos/information-products/pathway-pages/hardware/qfx-series/
qfx5100.html.


QFX5100-48S
The QFX5100-48S model is a compact 1 U 10GbE data center access switch that supports up to a maximum of 72 logical 10
GbE ports for an aggregate throughput of 1.44 Tbps or 1.08 Bpps. Forty-eight physical ports (0 through 47) support 10 Gbps
small form-factor pluggable plus (SFP+) transceivers. These ports can be configured as access ports. All 48 of these ports
can be used for SFP+ transceivers or SFP+ direct attach copper (DAC) cables. You can use 1-Gigabit Ethernet SFP+,
10-Gigabit Ethernet SFP+ transceivers and SFP+ direct attach copper cables in any access port.
The remaining 24 logical ports are available through the six 40 GbE ports (48 through 53) which use QSFP+. Each QSFP+
socket can operate either as a single 40 Gbps port or as a set of 4 independent 10 Gbps ports using QSFP+ breakout cables.
The 40 GbE ports can be configured as either access ports or as uplinks.
These switches are available with AC or DC power supplies and one of two airflow directions:
• Air-flow-in (AFI) – Air comes into the switch through the vents in the field-replaceable units (FRUs)
• Airflow-out (AFO) – Air comes into the switch through the vents in the port panel.
The image on the slide is for an AFO model, which uses orange on the power supplies and fans. The AFI models use blue on
their power supplies and fans. Note that AFI and AFO fans and power supplies cannot be mixed in the same chassis.
For more details on the hardware elements of this switch model, refer to the technical documentation found at: http://
www.juniper.net/techpubs/en_US/release-independent/junos/information-products/pathway-pages/hardware/qfx-series/
qfx5100.html.
While not shown on the slide, the QFX5100-48T model offers a similar set of physical characteristics and features. The
primary difference is its integrated RJ-45 ports, which currently support tri-rate speeds of 100 Mbps, 1 Gbps, and 10 Gbps.


QFX5100-96S
The QFX5100-96S switch is a compact 2 U high-density 10GbE aggregation switch with 96 SFP+ and 8 QSFP+ ports. Physical
ports 0 through 95 support 10 Gbps SFP+ transceivers. The eight 40-Gigabit ports (96 through 103) support QSFP+
transceivers and are normally configured as uplinks or Virtual Chassis ports (VCPs).
Although the 104 physical ports of the QFX5100-96S would map to 128 logical ports using channelization, only 104 logical
ports are supported. Because of the 104 port restriction, only two of the eight QSFP+ ports can be channelized. Depending on
how you set the system mode for channelization, the behavior of channelization for the QSFP+ ports changes. The following
system modes are available for the QFX5100-96S switch:
• Non-oversubscribed mode: All 96 SFP+ ports on the switch (PIC 0) are supported. In this mode, the eight QSFP+
ports are not supported and cannot be channelized. There is no packet loss for packets of any size in this mode.
• Default mode: All 96 SFP+ ports on the switch (PIC 0) are supported. QSFP+ ports 96 and 100 can be
channelized. If ports 96 and 100 are channelized, the interfaces on ports 97, 98, 99, 101, 102, and 103 are
disabled.
These switches are available with AC or DC power supplies and one of two airflow directions:
• Air-flow-in (AFI) – Air comes into the switch through the vents in the field-replaceable units (FRUs)
• Airflow-out (AFO) – Air comes into the switch through the vents in the port panel.
For more details on the hardware elements of this switch model, refer to the technical documentation found at: http://
www.juniper.net/techpubs/en_US/release-independent/junos/information-products/pathway-pages/hardware/qfx-series/
qfx5100.html.


QFX5100 System Architecture


Each QFX5100 features a Linux-based hypervisor capable of supporting up to four virtual machines (VMs). Because of the
VM-based architecture, some tasks, such as reboot operations and system upgrades are much faster than the same tasks
performed on other Junos OS devices.
The primary VM, which is used to implement and manage any third-party or Juniper VMs, runs Junos OS. The standby Junos
OS VM is used when a topology-independent in-service software upgrade (TISSU) is performed. We discuss TISSU on
subsequent slides. Currently, the QFX5100 Series switches support one additional VM outside the primary and standby
Junos OS VMs. One use case of a third-party VM on a QFX5100 Series switch is an analytics VM, which provides visibility into
the data center infrastructure's performance and behavior and is designed for detecting and reporting on microburst traffic.
All communication between the running VMs takes place through the host bridge, which is a virtual switch and part of the
Linux-based hypervisor. The internal communications channel between VMs has an aggregate throughput of approximately
1 Gbps. You can reassign the eth1 management port to an installed guest VM through the VM configuration
statements.
The installation and management of guest VMs is beyond the scope of this course, but you can find additional details in the
technical documentation on guest VMs at: http://www.juniper.net/techpubs/en_US/junos13.2/topics/task/installation/
qfx-series-guest-vmm.html.
Note
To install and run third-party VMs on a QFX5100 Series switch,
you must be running the enhanced-automation image. We
introduce the enhanced-automation image on the next slide.


QFX5100 Software Packages


In addition to a standard image option for the QFX5100 Series switches, you can choose to use the enhanced
automation image, which includes additional packages required for certain environments that use automation.
The additional packages required for some automated environments are highlighted in the following output:
{master:0}
user@qfx# show version
fpc0:
--------------------------------------------------------------------------
Hostname: qfx
Model: qfx5100-48s-6q
...TRIMMED...
Puppet on Junos [2.7.19_1.junos.i386]
Ruby Interpreter [11.10.4_1.junos.i386]
Chef [11.10.4_1.junos.i386]
junos-ez-stdlib [11.10.4_1.junos.i386]
JUNOS Host Software [13.2X51-D21.1]
Junos for Automation Enhancement
These additional packages allow you to perform enhanced automation functions natively and without the Junos SDK.
Both the standard image and the enhanced automation image include the base Linux-based hypervisor image and are
available in the jinstall and install media options. Note that if your intention is to install and run a third-party VM, you must
install the enhanced automation image.
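If you need to move a switch from the standard image to the enhanced automation image, the usual software installation command applies; the file name below is illustrative (enhanced automation images for the QFX5100 include flex in the name):

```
{master:0}
user@qfx> request system software add /var/tmp/jinstall-qfx-5-flex-13.2X51-D21.1-domestic-signed.tgz reboot
```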


QFX5100 Reboot Operations


Because of the architecture of the QFX5100 Series switches, a method to differentiate reboot operations between the
primary Junos OS VM and the chassis was required. To reboot the primary Junos OS VM, you use the same command you
would use on any other Junos OS device, which is the request system reboot command. To reboot the Linux-based
hypervisor and all running VMs, you use the request system reboot hypervisor command.
Note that the request system halt and request system power-off commands are still performed at the chassis
level.
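At the CLI, the two reboot scopes look like this:

```
{master:0}
user@qfx> request system reboot              (reboots only the primary Junos OS VM)

{master:0}
user@qfx> request system reboot hypervisor   (reboots the hypervisor and all VMs)
```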


Resource Allocation
The unified forwarding table (UFT) feature allows you to flexibly allocate the forwarding table space on your QFX5100 Series
switch as you see fit and in a fashion that makes the most sense for your environment. Rather than having a statically defined
space within the table for Layer 2 MAC, Layer 3 host, and longest prefix match (LPM) entries, you can allocate space to each of the
three categories based on the needs of the network and the role of the device. The Layer 2 MAC category includes all bridge
table entries based on MAC addresses. The Layer 3 host category includes all /32 prefixes included in the route table. The
LPM category includes all prefixes in the route table other than the /32 prefixes.
A number of scenarios have been identified which optimize the forwarding table space for each category. The following output
provides some insight to the available options:
{master:0}[edit]
user@qfx# set chassis forwarding-options ?
Possible completions:
+ apply-groups Groups from which to inherit configuration data
+ apply-groups-except Don't inherit configuration data from these groups
> l2-profile-one MAC: 288K L3-host: 16K LPM: 16K. This will restart PFE
> l2-profile-three (default) MAC: 160K L3-host: 144K LPM: 16K. This will restart PFE
> l2-profile-two MAC: 224K L3-host: 80K LPM: 16K. This will restart PFE
> l3-profile MAC: 96K L3-host: 208K LPM: 16K. This will restart PFE
> lpm-profile MAC: 32K L3-host: 16K LPM: 128K. This will restart PFE
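For example, on a switch used primarily for Layer 3 forwarding with many non-/32 prefixes, you might apply the lpm-profile; remember that committing the change restarts the PFE:

```
{master:0}[edit]
user@qfx# set chassis forwarding-options lpm-profile

user@qfx# commit
```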
To determine the currently assigned profile on your switch, you can use the following command:
{master:0}
user@qfx> show chassis forwarding-options
fpc0:
--------------------------------------------------------------------------
Current UFT Configuration:
l2-profile-three. (MAC: 160K L3-host: 144K LPM: 16K) (default)
num-65-127-prefix = 1K
If the system receives more entries in a given category than it is configured to support, the system will not add the new entries
that exceed the limit and will log a message in the messages log file indicating that the table is full. A sample log message
indicating this condition is shown in the following output:
{master:0}
user@qfx1> show log messages | match "Table full"
Oct 20 13:43:02 qfx1 fpc0 (brcm_rt_ip_uc_lpm_install:LPM route add failed) Reason : Table full
Oct 20 13:50:16 qfx1 fpc0 (brcm_rt_ip_uc_lpm_install:LPM route change failed) Reason : Table full
Note that, while a system reboot is not required, the PFE will restart when a UFT profile is changed, which can impact system
operations. A PFE restart takes all operational interfaces down for 30 or more seconds depending on the configuration.


Architectures and Features


The slide highlights the topic we discuss next.


Data Center Architectures


The QFX5100 Series switches can be used as a key building block in many of today’s available data center architectures. As
shown on the slide, these switches can be used in traditional tiered Layer 2 architectures, considered by some to be
legacy, as well as in the emerging Layer 3 Clos overlay networks. In addition to the traditional tiered and the emerging overlay
architectures, the QFX5100 Series switches are key elements in environments that use MC-LAG, Virtual Chassis, Virtual
Chassis Fabric, and the QFabric system as an architectural basis. These switches can operate standalone as infrastructure
Layer 2 elements, or they can work with other QFX Series switches as well as EX4300 Series switches.
Subsequent chapters include details associated with MC-LAG, mixed-mode Virtual Chassis, and Virtual Chassis Fabric
technologies and architectures. The prerequisite course, JEX-ELS, covers the implementation of QFX5100 Series switches in
traditional Layer 2 environments and non-mixed-mode Virtual Chassis.
For training on QFabric systems, consider attending the Configuring and Monitoring QFabric Systems and Troubleshooting
QFabric Systems classes.
For details on Layer 3 Clos deployments using QFX5100 Series switches, you can download an associated whitepaper at
http://www.juniper.net/assets/us/en/local/pdf/whitepapers/2000565-en.pdf.


QFX5100 Feature Highlights


This slide introduces a number of features supported by the QFX5100 Series switches. These features are described here:
• Insight technology for analytics: The QFX5100 provides dynamic buffer utilization monitoring and reporting with
an interval of 10 milliseconds to provide microburst and latency insight. It calculates both queue depth and
latency, and logs messages when configured thresholds are crossed. Interface traffic statistics can be monitored
at two-second granularity. The data can be viewed via CLI, system log, or streamed to external servers for more
analysis. Supported reporting formats include JavaScript Object Notation (JSON), CSV, and TSV. These files
can be consumed by orchestration systems, SDN controllers, or network management applications (such as
Juniper Networks Junos Space Network Director) to make better network design decisions and identify network
hot spots. For more details on this network analytics feature, visit http://www.juniper.net/techpubs/en_US/
junos13.2/topics/concept/analytics-overview.html.
• Intelligent buffer management: The QFX5100 switches have a total of 12 MB shared buffers. While 25% of the
total buffer space is dedicated, the rest is shared among all ports and is user configurable. The intelligent buffer
mechanism in the QFX5100 effectively absorbs traffic bursts while providing deterministic performance,
significantly increasing performance over static allocation. For more details on buffer management on QFX5100
Series switches, visit http://www.juniper.net/techpubs/en_US/junos13.2/topics/concept/
cos-qfx-series-buffer-configuration-understanding.html.
• Unified forwarding table: The QFX5100’s UFT feature, discussed earlier, allows the hardware table to be carved
into configurable partitions of Layer 2 media access control (MAC), Layer 3 host, and Longest Prefix Match (LPM)
tables. In a pure L2 environment, the QFX5100 supports 288,000 MAC addresses. In L3 mode, the table can
support 128,000 host entries, and in LPM mode, it can support 128,000 prefixes. Junos OS provides
configurable options through a command-line interface (CLI) so that each QFX5100 can be optimized for
different deployment scenarios.
• Automation: The QFX5100 switches support a number of features for network automation and plug-and-play
operations. Features include zero-touch provisioning, operations and event scripts, automatic rollback, and
Python scripting. The switch also offers support for integration with VMware NSX Layer 2 Gateway Services,
Puppet, and OpenStack.
• TISSU: With their Intel dual-core processors, the QFX5100 switches allow Junos OS to run within a virtual machine
(VM) on Linux. Junos OS runs in two separate VMs in an active and standby pair; during software upgrade cycles,
the switches seamlessly move to the newer software version while keeping data plane traffic intact. This true
Topology-Independent ISSU, an industry-first software upgrade feature for a fixed-configuration top-of-rack
switch, is supported across all Layer 2 and Layer 3 protocols and doesn’t need the support of any other switches
to perform an image upgrade.
• MPLS: QFX5100 switches support a broad set of MPLS features, including L3 VPN, IPv6 provider edge router
(6PE), RSVP traffic engineering, and LDP to allow standards-based network segmentation and virtualization. The
QFX5100 can be deployed as a low-latency MPLS label-switching router (LSR) or MPLS PE router in smaller scale
environments. The QFX5100 is the industry’s only compact, low-latency, high-density, low-power switch to offer
an MPLS feature set.
• Fibre Channel over Ethernet (FCoE): As an FCoE transit switch, the QFX5100 provides an IEEE data center
bridging (DCB) converged network between FCoE-enabled servers and an FCoE-enabled Fibre Channel storage
area network (SAN). The QFX5100 offers a full-featured DCB implementation that provides strong monitoring
capabilities on the top-of-rack switch for SAN and LAN administration teams to maintain clear separation of
management. In addition, FCoE Initialization Protocol (FIP) snooping provides perimeter protection, ensuring that
the presence of an Ethernet layer does not impact existing SAN security policies. FCoE link aggregation group
(LAG) active/active support is available to achieve resilient (dual-rail) FCoE connectivity.

Note
While not listed on the slide and not currently supported, the QFX5100 Series switches
will support a number of features that make Software-Defined Networking (SDN)
possible. In future software versions, QFX5100 Series switches will support VXLAN and
OVSDB, which are required in SDN deployments that use VMware’s NSX controllers.

Note
Some features supported on the QFX5100 Series switches, such as BGP,
MPLS, and VCF, require a license. For more details on feature licensing
refer to http://www.juniper.net/techpubs/en_US/junos13.2/topics/
reference/general/qfx-series-software-license-features.html.


Managing QFX5100 Series Switches


The QFX5100 Series switches can be managed through Junos Space along with Network Director (a Junos Space application).
Junos Space with the Network Director application allows users to visualize, analyze, and control the entire data center
infrastructure through a single pane of glass. Network Director incorporates sophisticated analytics for real-time intelligence,
trended monitoring, and automation to increase agility and speed the rollout and activation of services.
For cloud deployments, Network Director provides a set of REST APIs that enable on-demand and dynamic network services
by simplifying the consumption of services for multi-tenant environments. With third-party cloud orchestration tool integration,
the Network Director API enables automation and provisioning of Layer 2, Layer 3, and security services in the data center
without the need for manual operator intervention.


We Discussed:
• Some challenges found in today’s data center;
• The QFX5100 Series switch offerings; and
• Some data center architecture options.


Review Questions
1.

2.

3.

Answers to Review Questions
1.
The QFX5100 Series switches, along with their supported architectures and features, provide scalable and flexible design
options, which can make the data center environment more efficient. Through their design and features, they can also automate
provisioning and management efforts, simplifying deployment and ongoing management tasks.
2.
The QFX5100-48S and QFX5100-48T are specifically designed for the access layer, the QFX5100-96S is specifically designed for the
distribution layer, and the QFX5100-24Q is designed for either layer in a tiered Layer 2 environment.
3.
QFX5100 Series switches can be used as building blocks in traditional Layer 2 networks, tiered networks with MC-LAG, Virtual Chassis,
Virtual Chassis Fabric, QFabric systems, and in a standards-based Layer 3 Clos environment.


Chapter 3: Zero Touch Provisioning



We Will Discuss:
• The purpose and value of ZTP;
• The components and operations of ZTP; and
• How to deploy a QFX5100 Series switch using ZTP.


Understanding Zero Touch Provisioning


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Deploying a Switch
As you deploy a switch in your network, there are a few basic requirements you must meet before that switch provides any
operational value. As illustrated on the slide, you must physically rack each switch in its designated location and then connect
it, using the required cables, to other devices on the network. Once the switch is racked and cabled, you then perform the
provisioning tasks required to make the switch ready for network operations.
Some example provisioning tasks include adding the switch to its out-of-band (OOB) management network, so it can be
remotely managed using Telnet or SSH, as well as adding other configuration parameters to ensure the switch is a secure and
functional participant on the network. Once the switch can communicate with other devices on the network, you may need
to upgrade its software to ensure it is running the desired version for your specific environment.
Once the basic installation and provisioning tasks are complete, the switch can become a value-added element on the
network. At a quick glance this seems like a simple and straightforward process, right?


Complicating Things
Installing and provisioning a single switch may not be too difficult for a smooth operator like yourself. However, if you are tasked
with installing and provisioning dozens or even hundreds of switches, things can get a bit more complicated. As the number of
switches that must be properly racked, cabled, and provisioned grows, so does the time required for deployment. You may also
find that new and challenging issues are introduced by improperly provisioned switches on your network.
Fortunately, as discussed on forthcoming slides, this situation can be simplified in some ways!


Making Things Easier


In large-scale deployments, where dozens or hundreds of switches are deployed, Zero Touch Provisioning (ZTP) can be used to
automate and simplify some of the provisioning requirements. Using ZTP can save you time and, if done correctly, reduce
many of the errors and issues often introduced when performing the same provisioning tasks manually on individual
switches.
Note that while ZTP can offload some of the manual administrative load, it is really only part of an emerging approach to
managing infrastructure resources through automation in the data center. In addition to the initial provisioning efforts,
subsequent and ongoing modifications are required as network requirements change. The subsequent and ongoing
modifications required on the switches can be consolidated and simplified through the use of Junos Space and the Network
Director application. These subsequent modifications can also be automated using Puppet.
In addition to simplifying ongoing maintenance efforts through management applications, such as Network Director, or
automation tools, such as Puppet, it is also becoming increasingly more common to orchestrate operations between
infrastructure devices in data centers. OpenStack is emerging as the leading orchestration solution for all infrastructure
devices.
Note that Puppet and OpenStack are supported on many Junos OS devices positioned for the data center, including the
QFX5100 Series switches. For support details of your specific products, check the support documentation. Note that Puppet
and OpenStack are outside the scope of this class.
We cover ZTP in more detail on the next slide and throughout the remainder of this chapter.


What Does It Do?


As previously mentioned, ZTP automates day one provisioning tasks. The specific provisioning tasks automated through ZTP
include loading and installing the correct software image on the EX or QFX Series switches installed in your network and
applying the desired configuration to each installed switch. We describe the operations of ZTP on the next slides.


How Does It Work?


Once a switch, with its factory-default configuration, boots, the ZTP process takes over. The ZTP process does the following:
1. It discovers a DHCP server and obtains an IP address. In the process of obtaining an IP address from the DHCP
server, the switch receives key details, by way of the DHCP options configured on the server, that inform the
switch where provisioning resources are located and how the required files on those resources can be accessed.
2. Once the switch being provisioned through ZTP is on the network and has received sufficient information
from the DHCP server and the options it provided, it verifies the software version it is running against the version
listed in the corresponding DHCP option. If the versions do not match, the image for the target version is
obtained using the specified transfer method and the switch is upgraded.
3. In addition to performing the software verification and, if needed, an upgrade, the switch also locates and
applies its designated configuration as provided by the administrator.
We look at some additional details involved in the illustrated ZTP steps on the next slides.


A Closer Look: Part 1


The first step of ZTP is to add the switch being provisioned to the network. The switch, in its factory-default state, functions as
a DHCP client and, when booted, sends a DHCP request packet out its management Ethernet interface as well as all attached
revenue ports. The management Ethernet interface (me0 on EX Series and most QFX Series switches; em0 on the QFX5100) is
enabled as a DHCP client in the factory-default configuration. The factory-default configuration also includes a single
integrated routing and bridging (IRB) interface enabled as a DHCP client. This IRB interface is assigned to the default VLAN as
a Layer 3 interface along with all revenue ports, which are configured as Layer 2 access ports in the same default VLAN. The
DHCP requests are sent from the switch out all physical interfaces that are connected and operational.
As long as a DHCP server is configured and accessible on either the management network or through a network accessible by
one or more of the revenue ports, the DHCP request sent from the switch should make its way to the DHCP server. Once the
DHCP server receives the request packet, it will respond with a DHCP reply offering its DHCP services. For the target DHCP
server to facilitate a successful ZTP event, it must include DHCP options, which we will discuss on later slides, in its DHCP
reply and offer packets.
The end objective of this step in the ZTP process is that the switch be added to the network and have sufficient information,
through the DHCP options, to advance to the next steps discussed on the subsequent slides.


A Closer Look: Part 2


Once the switch has been added to the network and has received instructions through the DHCP options provided from the
DHCP server, it can then move on to the next step in the ZTP process. The second step in the ZTP process is for the switch to
verify, through the appropriate DHCP options, which software image it should be running. The switch determines the software
image required for the network using DHCP Option 43 and sub option 00 (or alternatively sub option 04). If both sub options
are defined on the DHCP server and a different image name is listed by each, the switch prefers the details included in sub
option 00.
If the switch is already running the specified image, no immediate action is taken and the switch moves on to the next step in
the ZTP process. If the switch is not running the specified image, the switch begins the process of downloading and installing
the specified image. The switch evaluates other DHCP options or sub options to determine how and from where the specified
image should be obtained. Specifically, DHCP option 43 sub option 03 is used to inform the switch of the transfer mode (FTP,
TFTP, or HTTP), and DHCP option 150 (or 66) is used to inform the switch of the storage server’s IP address. If both DHCP
options 150 and 66 are specified with different server IP addresses, DHCP option 150 is preferred.
Note that before the image is actually retrieved and the system is upgraded, step three, which is outlined on the next slide, is
performed. Once the switch evaluates all details included in the DHCP options and determines which files, if any, are
required, it then downloads the identified configuration file, which is part of step three, and then the image file. Once the
required files are downloaded, it first performs the upgrade referenced on this slide and then applies the configuration file
retrieved from the storage server. This download and operations sequence accommodates scenarios where the user wants to
update the configuration file on the switch without also upgrading the software image on the switch.


A Closer Look: Part 3


The last step, before the aforementioned download and application process takes place, is to identify if the switch should
retrieve and apply any unique configuration file. As shown on the slide, DHCP option 43 sub option 01 is used to inform the
switch of the target configuration file it should retrieve and apply. If present, the switch uses the information in DHCP option
43 sub option 01 along with the details learned in DHCP option 43, sub option 03 and DHCP option 150 (or 66) to construct a
work order to retrieve and apply its designated configuration file. If no configuration file is referenced in the DHCP options, the
switch retains its factory default settings.
As previously mentioned, the switch determines which files (software image and configuration file), if any, are required before
it attempts to establish any connection with the storage server and download the identified files. If both a software image and
configuration file are required, they are both downloaded. Once the files are downloaded, the switch performs the upgrade
illustrated as part of step two and then, once the upgrade is complete, applies its newly retrieved configuration file.


The End Result!


Once ZTP processing finishes, the switches being provisioned should be running the desired software image and an updated
configuration making them useful and contributing participants on the network. We describe some of the functional
considerations of ZTP and look at a working example in the next section of this chapter.


Network Director and ZTP


In environments that use Junos Space and the Network Director application, certain administrative aspects of ZTP can be
simplified. Specifically, you can provision the DHCP server with the required DHCP options it uses to inform the switches being
provisioned of the provisioning details discussed earlier. You can also use the Network Director application to manage your
image and configuration file repository located on the storage server in your network. Note that, depending on the software
version used, switches provisioned through ZTP, where the DHCP server’s configuration is created with the help of Network
Director, are automatically added to the list of managed devices within the Network Director application.
For more details regarding Junos Space and the Network Director application, please refer to the technical documentation
available at www.juniper.net or consider attending the training courses that cover Junos Space and Network Director.


ZTP in Action: A Working Example


The slide highlights the topic we discuss next.


Sample Objectives and Topology


This slide introduces the objectives and topology we use to illustrate a working example of ZTP. In this example, we have a pair
of QFX5100 Series switches running a different software version than the desired software version for this network. As shown
on the slide, we have a single server device functioning as the DHCP server and the FTP server. This server is accessible from
the QFX5100 Series switches through their respective ge-0/0/0.0 interfaces. The server is participating on and serving
addresses from the 172.25.10.0/24 subnet.
The qfx1 switch is assigned the 172.25.10.111 IP address while qfx2 is assigned the 172.25.10.222 IP address. These IP
address assignments are statically defined on the DHCP server and are based on the switches’ IRB MAC addresses.
Predefined configurations for each switch, again based on the target switches’ IRB MAC address, are loaded on the server
and are accessible through FTP, which is the selected transfer method used in this example. Note that when the
device-specific configuration files are retrieved and applied during the ZTP process, the IP addresses assigned to the switches
through DHCP change to addresses outside the DHCP address pool.
Note
The illustrated example is not, as you may have realized, truly a zero touch provisioning approach. In
this example, the administrator must retrieve the MAC address of each switch being provisioned, which
requires a touch. There are other ways to work through ZTP that remove the need for this specific
touch. You can, for example, automate the learning of the MAC addresses and subsequently automate
the update of the DHCP configuration used during the provisioning operation. You could also create a
provisioning infrastructure where each switch, requiring automated provisioning, is isolated and
therefore can be predictably provisioned based on its position in the provisioning infrastructure
network. Regardless of your selected approach, there is some administrative trade-off involved.


Sample DHCP Server Configuration


This slide illustrates a sample DHCP server configuration used to assign the desired IP addresses to the QFX Series switches
and to properly distribute the ZTP provisioning details through the appropriate DHCP options and sub options.
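The DHCP server configuration itself appears on the slide. As a sketch of what an equivalent ISC DHCP (dhcpd.conf) configuration might look like — the storage server address (172.25.10.1), the address range, the MAC addresses, and the option-space name are illustrative assumptions, not values taken from the example — consider:

```
# dhcpd.conf sketch for ZTP; the option space and sub option codes
# follow the DHCP option 43 usage described in this chapter
option space NEW_OP;
option NEW_OP.config-file-name code 1 = text;       # sub option 01
option NEW_OP.transfer-mode code 3 = text;          # sub option 03
option NEW_OP.alt-image-file-name code 4 = text;    # sub option 04
option NEW_OP-encapsulation code 43 = encapsulate NEW_OP;
option option-150 code 150 = ip-address;

subnet 172.25.10.0 netmask 255.255.255.0 {
    range 172.25.10.20 172.25.10.100;
    option option-150 172.25.10.1;            # storage server (assumed IP)
    option NEW_OP.transfer-mode "ftp";

    host qfx1 {
        hardware ethernet 00:00:5e:00:53:01;  # qfx1 IRB MAC (placeholder)
        fixed-address 172.25.10.111;
        option NEW_OP.alt-image-file-name
            "/pub/jinstall-qfx-5-13.2X51-D21.1-domestic-signed.tgz";
        option NEW_OP.config-file-name "/pub/qfx1-ZTP.config";
    }
    host qfx2 {
        hardware ethernet 00:00:5e:00:53:02;  # qfx2 IRB MAC (placeholder)
        fixed-address 172.25.10.222;
        option NEW_OP.alt-image-file-name
            "/pub/jinstall-qfx-5-13.2X51-D21.1-domestic-signed.tgz";
        option NEW_OP.config-file-name "/pub/qfx2-ZTP.config";
    }
}
```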


Sample FTP Server Configuration


This slide illustrates a sample FTP server configuration used to accept incoming sessions on port 21 and permit anonymous
FTP access, which is required for ZTP. In this sample configuration you can also see where the files are stored that will be
retrieved by FTP clients, which in our case are the QFX Series switches. You must make sure the software image and
configuration files referenced in the associated DHCP options and sub options are stored in the referenced directory. If the
required files are not stored in the directory, the operation will fail and an error message will be shown on the console. You
must also ensure that the files have sufficient read/write permissions so they can be retrieved.
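The slide shows the FTP server configuration. Assuming the server runs vsftpd (an assumption; any FTP server that permits anonymous access will work), a minimal configuration might look like the following:

```
# /etc/vsftpd/vsftpd.conf sketch -- the daemon and file name are assumptions
listen=YES                 # accept incoming sessions on port 21
anonymous_enable=YES       # ZTP retrieves files as the anonymous user
anon_root=/var/ftp         # files referenced by DHCP live under pub/
```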
You can verify the files are in the correct directory and have sufficient read/write permissions as shown in the following
output:

[root@server ~]# ls -l /var/ftp/pub


total 807468
-rwxr-xr-x 1 root root 414176908 Aug 22 08:38 jinstall-qfx-5-13.2X51-D15.5-domestic-signed.tgz
-rw-rw-rw- 1 root root 411839279 Aug 20 12:35 jinstall-qfx-5-13.2X51-D21.1-domestic-signed.tgz
-rwxrwxrwx 1 root root 1772 Aug 22 12:52 qfx1-ZTP.config
-rwxrwxrwx 1 root root 1772 Aug 22 12:51 qfx2-ZTP.config


Verifying Services and Communications


Once the DHCP and FTP server configurations are properly applied, you must ensure that the associated services are running
on the server. The sample output on the slide illustrates this verification step along with a simple test to ensure the FTP server
accepts incoming FTP sessions from the anonymous user.


Zeroizing the Configuration


By default, EX and QFX Series switches should support ZTP with their factory-default configurations and settings. If the
configuration file has been changed because of a previous deployment, for example, you can zeroize the system, which
returns it to a state in which it can participate in the ZTP process. Using the request system zeroize command, illustrated on
the slide, deletes the current configuration along with the stored rollback configurations. This command also removes all log
files on the system and restores some other default settings. Because this operation removes stored information, you must
confirm the operation by typing in yes, as illustrated on the slide.
Note that if the configuration files have been modified and other default settings have been changed, you must perform the
zeroize operation and cannot simply load the factory-default configuration using the load factory-default command
with a commit while in configuration mode.
Once you zeroize a switch, the system boots up in the Amnesiac mode and is accessible using the root login without a
password. Note that this default state should change by the end of the ZTP process, at which time any further login attempts
will require a known username and password as specified in the configuration file loaded during the ZTP process.
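
For reference, the exchange on the console resembles the following sketch; the exact warning text varies by platform and software release:

```
{master:0}
root@qfx1> request system zeroize
warning: System will be rebooted and may not boot without configuration
Erase all data, including configuration and log files? [yes,no] (no) yes
```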


The Zeroized Configuration: Part 1


This slide and the next several illustrate some key parts of the zeroized configuration.


The Zeroized Configuration: Part 2


This slide illustrates some key parts of the zeroized configuration.


The Zeroized Configuration: Part 3


This slide illustrates some key parts of the zeroized configuration.


The Zeroized Configuration: Part 4


This slide illustrates some key parts of the zeroized configuration.


Monitoring Operations: Part 1


Once the switch boots with the zeroized configuration, you can monitor the ZTP operations through the console connection. This
slide and the ones that follow illustrate some highlights of the ZTP process as monitored through the console.


Monitoring Operations: Part 2


This slide illustrates some of the highlights of the ZTP process monitored through the console.


Monitoring Operations: Part 3


This slide illustrates some of the highlights of the ZTP process monitored through the console.


We Discussed:
• The purpose and value of ZTP;
• The components and operations of ZTP; and
• How to deploy a QFX5100 Series switch using ZTP.


Review Questions
1.

2.

3.


Zero Touch Provisioning Lab


The slide provides the objectives for this lab.

Answers to Review Questions
1.
ZTP loads a predefined configuration and performs a software upgrade, if needed. The predefined configurations along with the
software image used for upgrade operations are stored on a server that is reachable from the switch being provisioned.
2.
Before a switch being provisioned through ZTP can retrieve its configuration or a software image from a storage server on the network,
the switch must be able to communicate on the network, which requires an IP address. Along with being able to communicate on the
network, the switch must receive instructions on how to retrieve the designated files. The instructions indicating how the switch is to
retrieve the designated files and from where they should be retrieved are relayed to the switch through DHCP options during the IP
acquisition process.
3.
One of the DHCP options communicated to the switch during the IP acquisition process informs the switch of the file transfer method.
The three file transfer methods that can be used by ZTP are TFTP, FTP, and HTTP. If no transfer method is specified, TFTP is used by
default.


Chapter 4: In-Service Software Upgrades



We Will Discuss:
• The purpose and value of ISSU;
• The components and operations of ISSU; and
• How to upgrade a QFX5100 Series switch using ISSU.

Chapter 4–2 • In-Service Software Upgrades www.juniper.net



Understanding ISSU on QFX5100 Series Switches


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Traditional Upgrades
In the past, software upgrades meant some disruption to service. To limit the impact, software upgrades were, and still are to
a large extent, performed during a maintenance window. These maintenance windows were typically announced to all users
who could potentially be affected by the disruption caused by the upgrade operations. The overall impact to the network and
end-user depended on the role of the devices being upgraded and the resiliency built into the network.
In many networks, especially those where uptime matters, redundant paths exist that lessen the impact to the traffic and
services in the affected network. At a minimum, the active paths associated with the devices being upgraded are affected
during and after the upgrade, which means the transit traffic using those paths is also affected during that time. With the
increased demands and expectations on the network, the downtime accepted in the past has become unacceptable in most
modern network environments.


A Step in the Right Direction


To help further mitigate the impact of software upgrades, many enhancements have been made over the years. In many
environments, the supporting systems have both hardware and software enhancements that significantly reduce, and in
some cases eliminate, the downtime and impact previously associated with software upgrades.
Many chassis-based systems now support redundant Routing Engines along with complementary software features that
facilitate smooth transitions from the controlling Routing Engine, often called the master Routing Engine, to the backup
Routing Engine during failure and upgrade scenarios. This redundant hardware along with the software features supporting
high availability have brought much stability to network environments throughout the world and have been instrumental in
improving the end user's overall experience.
The enhanced software features that support high availability during times of failure and during software upgrades include
graceful Routing Engine switchover (GRES), nonstop routing (NSR), and nonstop bridging (NSB). These features work together
to support in-service software upgrades (ISSU), which allow a system to be upgraded with little or no impact.


The Problem
While the introduction of chassis-based systems with redundant Routing Engines along with the various software
enhancements that support high availability have made a significant improvement in some environments, they are not
practical in all environments. One such environment where using a chassis-based system is not practical is the data
center. In data center environments, space in the supporting racks is limited and does not allow for large, chassis-based
switches. The switches used in the data center, often referred to as top of rack (TOR) switches, are small and typically include
a single Routing Engine. This lack of redundancy typically prohibits the use of the software enhancements now available,
including the ability to perform an ISSU available on chassis-based platforms.


One Possible Solution


In some data center environments, the applications and associated traffic supported by the network are business critical.
Because of the critical nature of the applications and associated traffic, every precaution is taken to ensure that, if a single
TOR fails or needs to be upgraded, the attached compute devices within the affected racks do not suffer a complete outage.
In these environments a second TOR is placed in each rack and each compute device includes at least one connection to
each of the installed TORs, thereby eliminating the unacceptable single point of failure scenarios.
This solution, while appealing in some ways, does come at a price and does not solve all of the immediate challenges. The
investment cost for such a design, at least for the TORs at the access layer, instantly doubles. In addition to the increased
expense, the time to deploy and maintain also increases significantly. While you can upgrade one of the two installed TORs
without causing a complete outage, some adverse impact to the traffic still occurs. Upgrading one TOR in the rack causes
some traffic loss, specifically to the traffic passing through that TOR once its data plane goes down. Once the upgrade is done
on the first of the two TORs in a given rack, the process is repeated on the second, causing a similar disruption once again. In
addition to the significantly increased cost and administrative overhead of this design, the available bandwidth to and from
the compute devices is effectively cut in half during each upgrade.


A Better Solution
While some vendors claim to support an ISSU-like solution based on a redundant network topology, it is not a true ISSU and
has inherent issues as described on the previous slide. To address this challenge and need in the data center, Juniper
Networks offers what we consider to be a better solution using the QFX5100 Series switches.
The QFX5100 Series switches run Junos OS within a virtual machine (VM) on top of a Linux-based host OS. During an ISSU,
Junos OS runs in two separate virtual machines (VMs) in active and standby pairs. The VMs, which represent redundant
Routing Engines, seamlessly move to the newer software version while maintaining operations in the data plane. This true
Topology-Independent ISSU (TISSU), an industry-first software upgrade feature for a fixed-configuration TOR, is supported
across all Layer 2 and Layer 3 protocols and doesn’t need the support of any other switches to perform an image upgrade.


QFX5100 ISSU Architecture


The QFX5100 features an innovative software architecture, which makes use of a Linux-based hypervisor. This architecture
supports up to four VMs concurrently. TISSU takes advantage of this virtualization technology.
During normal switch operations, Junos OS only runs on one VM (VM-A in the illustrated example on the slide). When TISSU is
initiated, a second VM (VM-B in our example) is launched with the new version of the software. Once VM-B has launched, it
synchronizes protocol states with VM-A. When that synchronization process is complete, VM-B seamlessly takes over switch
operations and VM-A shuts down.
To support the TISSU operations on a QFX5100 Series switch, a number of connections and communications between the two
VMs must exist. The connections, as shown on the slide, are either active or passive. The solid lines represent active
connections while the dotted lines are passive connections. The passive connections become active when the graceful
Routing Engine switchover (GRES) event associated with TISSU is complete.
The em0 and em1 interfaces are management ports. The em2 interface is used by the master Routing Engine (RE) to
communicate with the host machine. The em3 interfaces are used for RE-to-RE communications during a TISSU operation.


Understanding ISSU Requirements


The ISSU requirements on the QFX5100 Series switches are illustrated on the slide. Once the requirements are met, which
include having the required software features (GRES, NSR, and NSB) enabled and access to the target image, you can initiate
an ISSU operation using the request system software in-service-upgrade <package-name> command. If
the referenced command is issued but the ISSU requirements are not met, you will see an error indicating the requirements
are not met. We provide an illustrated example of the steps required to perform an ISSU on subsequent slides.


ISSU Operations: Part 1


When an ISSU operation is initiated on a QFX5100 Series switch, the system performs a number of tasks. This slide and the
next describe the tasks performed by the system and some of its key processes. This slide highlights the first tasks, which
are as follows:
• Checks to ensure the prerequisite features (GRES, NSR, NSB) are enabled. This function is specifically
performed by the management process (MGD).
• Downloads the target software image, if needed, and validates the target software version.
• Spawns a backup RE with the new software version. The backup RE is spawned by the ISSU state machine.
• Synchronizes the state of the master RE to the new backup RE. This function is managed by the ISSU state
machine and makes use of the kernel synchronization process (KSYNCD).


ISSU Operations: Part 2


This slide highlights the next tasks performed by the system during an ISSU, which are as follows:
• Moves mastership from the current master RE to the backup RE running the new, target software image. The
ISSU state machine moves the devices (for example, forwarding ASIC, FPGA, management port and serial
console) from the master RE to the backup RE. At this point the new master RE controls the PFE and all data
plane operations.
• Renames the slot ID, associated with the new master RE, from 1 to 0.
• Shuts down the old master RE.
Because of the architecture and how the connections and communications work on the QFX5100 during an ISSU operation,
the console and management interfaces are both disconnected toward the end of a TISSU. If you are connected and
monitoring the process through the console connection, you simply need to press Enter once all activity stops and log
back in to the system. If you are connected to the management interface when an ISSU is initiated, you will have to reconnect
to the system from your management station once the new RE has taken control of the PFE.


Caveats and Considerations


While ISSU on the QFX5100 Series switches may seem like the perfect solution to a significant challenge faced by operators in
the data center, it does have some restrictions. This slide illustrates some caveats and considerations when performing an
ISSU on the QFX5100 Series switches. You can check the following URL for support and requirement details for ISSU on
QFX5100 Series switches: https://www.juniper.net/techpubs/en_US/junos13.2/topics/reference/requirements/
issu-system-requirements-qfx5100.html.
Note
Because of the additional processing operations required for an ISSU on the QFX5100 Series switches, the
time required for an ISSU is longer than a regular upgrade performed on QFX5100 Series switches.
However, because of the virtualization technologies involved, the overall time for either upgrade operation is
typically less than these same upgrade operations performed on other Junos OS devices. Also note that the
time required for an ISSU varies and is dependent on the size of the system’s configuration.

Note
If the ISSU process stops, you can look at the log files to diagnose
the problem. The log files that include ISSU events during a failed
ISSU attempt are located at /var/log/vjunos-log.tgz.


ISSU in Action: A Working Example


The slide highlights the topic we discuss next.


Sample Objective and Topology


This slide introduces the objective and topology we use to illustrate a working example of ISSU on a QFX5100 Series switch. In
this example, a QFX5100 Series switch (qfx1) connects two routers, R1 and R2, which have an OSPF adjacency and are IBGP
peers. The qfx1 switch needs to be upgraded to a different software version without adversely affecting the protocol sessions
between R1 and R2 or the transit traffic passing through the illustrated connections. We illustrate the steps to perform an
ISSU on qfx1 and verify the impact of the ISSU on the associated network on the next several slides.


Meeting the Prerequisites


This slide illustrates the configuration requirements that must be met before an ISSU operation can be performed on a
QFX5100 Series switch. As previously mentioned, GRES, NSR, and NSB must all be enabled before the system will perform an
ISSU. When NSR is enabled, you must also enable the commit synchronization feature under the [edit system]
hierarchy as shown on the slide. A commit error is generated if you attempt to enable NSR without also enabling the commit
synchronization feature. This point is illustrated below:
{master:0}[edit]
user@qfx1# commit and-quit
[edit routing-options]
'nonstop-routing'
Synchronized commits must be configured with nonstop routing
error: commit failed: (statements constraint check failed)
Similarly, if you attempt to perform an ISSU without having first met the illustrated requirements an error is generated as
shown below:
{master:0}
user@qfx1> request system software in-service-upgrade /var/tmp/
jinstall-qfx-5-13.2X51-D21.1-domestic-signed.tgz
warning: GRES not configured
In the previous example GRES was not enabled, which is the first requirement verified by the system. Assuming GRES was
enabled but NSR and NSB were not, you would see errors at different points during the attempted ISSU operation. The sample
outputs that follow illustrate the errors you might see if NSR and NSB were not also enabled:
{master:0}[edit]
user@qfx1# run request system software in-service-upgrade /var/tmp/
jinstall-qfx-5-13.2X51-D21.1-domestic-signed.tgz
Starting ISSU Tue Sep 2 20:50:10 2014

PRE ISSU CHECK:


---------------
PFE Status : Online
Member Id zero : Valid
VC not in mixed or fabric mode : Valid
Member is single node vc : Valid
BFD minimum-interval check done : Valid
GRES enabled : Valid
NSR not configured : Invalid

error: System not ready for ISSU.

{master:0}[edit]
user@qfx1# set routing-options nonstop-routing

{master:0}[edit]
user@qfx1# commit
configuration check succeeds
commit complete

{master:0}[edit]
user@qfx1# run request system software in-service-upgrade /var/tmp/
jinstall-qfx-5-13.2X51-D21.1-domestic-signed.tgz
Starting ISSU Tue Sep 2 20:50:48 2014

PRE ISSU CHECK:


---------------
PFE Status : Online
Member Id zero : Valid
VC not in mixed or fabric mode : Valid
Member is single node vc : Valid
BFD minimum-interval check done : Valid
GRES enabled : Valid
NSR enabled : Valid

warning: Do NOT use /user during ISSU. Changes to /user during ISSU may get lost!
ISSU: Validating Image
error: 'Non Stop Bridging' not configured
error: aborting ISSU Tue Sep 2 20:50:49 2014
error: ISSU Aborted!
ISSU: IDLE
At this point you would simply need to enable NSB in the configuration. Once all of the required configuration elements are
enabled, the ISSU operations should proceed to the next steps in the ISSU process without any issues.


Initiating and Monitoring ISSU


If you are connected to a system undergoing the ISSU operation and are using the console connection, you should be able to
monitor much of the ISSU operational activity and progress as shown on the slide. If you are connected to the system using
the management Ethernet connection and an access protocol such as Telnet or SSH, you will be disconnected and will not
see the same details.
Once the ISSU state machine has ended the upgrade process and the new master RE is available, you will need to log in to
verify the resulting state and to identify that the new software version is running on the master RE. Remember that once the
new master RE is upgraded and has control of the system components and operations, the RE and associated VM that was
previously active is deactivated. You can perform a basic verification of which version is running on the switch by simply
logging in and noting the displayed version, as shown on the slide, or by using the show version command available in
operational mode.


Verifying the Impact of an ISSU


This slide illustrates the impact the ISSU performed on the qfx1 switch had on the attached R1 and R2 devices. Note that the
outputs are taken from R1. Here we see that the OSPF adjacency and BGP peering session between the two routers are
unaffected by the ISSU performed on qfx1. Note that while there is no absolute guarantee that transit traffic will not be
impacted in some way, most tests result in little to no noticeable impact.
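
This kind of verification typically relies on standard operational commands such as the following:

```
user@R1> show ospf neighbor
user@R1> show bgp summary
```

In the show bgp summary output, an Up/Dwn timer that predates the start of the ISSU indicates the peering session never flapped during the upgrade.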


We Discussed:
• The purpose and value of ISSU;
• The components and operations of ISSU; and
• How to upgrade a QFX5100 Series switch using ISSU.


Review Questions
1.

2.

3.


In-Service Software Upgrade Lab


The slide provides the objectives for this lab.

Answers to Review Questions
1.
When TISSU is initiated, a second VM, functioning as a backup RE, is launched with the new version of the software. Once the new
backup RE is launched with the new version of software, it synchronizes protocol states with current master RE (also a VM). When that
synchronization process is complete, the new VM seamlessly takes over switch operations and the old master RE shuts down.
2.
A QFX5100 Series switch must have GRES, NSR, and NSB enabled for an ISSU operation to succeed.
3.
There are a number of caveats and considerations for ISSU on the QFX5100 Series switches. Some examples include:
• ISSU is supported in Junos OS 13.2X51-D15 and later.
• ISSU upgrades take more time than normal upgrades.
• Downgrade operations using ISSU are not supported.
• Rollback operations using ISSU are not supported.
• ISSU should not be used when transitioning from the standard software image to the enhanced-automation software
image and vice versa.
• During an ISSU, the Junos OS CLI is not accessible.


Chapter 5: Multichassis Link Aggregation



We Will Discuss:
• The purpose and value of MC-LAGs;
• The components and operations of an MC-LAG; and
• How to implement an MC-LAG on QFX5100 Series switches.

Chapter 5–2 • Multichassis Link Aggregation www.juniper.net



Multichassis Link Aggregation Overview


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Data Center Connectivity


Many of today’s data centers carry mission-critical traffic associated with critical business applications. Because of the data
center’s critical role in supporting businesses, every step is taken to ensure the supporting compute and network elements
are as resilient and fault tolerant as possible. To help sustain a functioning data center environment, design architects
ensure that redundant connections and paths exist and that as many single points of failure as possible are eliminated from
the environment.
To ensure that a single infrastructure switch does not significantly disrupt operations, redundant paths, often with redundant
connections, are placed throughout the various tiers of the Layer 2 network. To increase availability and ensure business
continuity when a single link connected to compute resources fails, multiple links are often bundled together in a
link aggregation group (LAG) using IEEE 802.3ad.
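
As a point of reference before the multichassis variant is introduced, a conventional LAG on a QFX Series switch is typically built with statements like these; the interface names and member count are assumptions for illustration:

```
{master:0}[edit]
user@qfx1# set chassis aggregated-devices ethernet device-count 1
user@qfx1# set interfaces xe-0/0/10 ether-options 802.3ad ae0
user@qfx1# set interfaces xe-0/0/11 ether-options 802.3ad ae0
user@qfx1# set interfaces ae0 aggregated-ether-options lacp active
```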


A Potential Problem
While operational continuity is a top priority, it is not guaranteed simply by adding multiple, bundled connections between the
compute resources (servers) and their attached access switch. This design, while improved over a design with a single link,
still includes potential single points of failure including the access switch and the compute device.
While the survivability of compute resources can be handled by duplicating the impacted resources on some other physical
device in the network, typically through virtualization technologies, the access switch in this deployment model remains a
single point of failure whose loss prevents the use of the attached resources.


A Solution
To eliminate the access switch as being a single point of failure in the data center environment, you can use multichassis link
aggregation. Multichassis link aggregation builds on the standard LAG concept defined in 802.3ad and allows a LAG from one
device, in our example a server, to be spread between two upstream devices, in our example two access switches to which
the server connects. Using multichassis link aggregation avoids the single point of failure scenario related to the access
switches described previously and allows operational continuity for traffic and services, even when one of the two switches
supporting the server fails.


Common Positioning Scenarios


As previously mentioned, multichassis link aggregation groups (MC-LAGs) are very useful in a data center when deployed at
the access layer, which connects to the compute resources. These deployment scenarios at the access layer are performed on
select EX Series switches and the QFX Series switches, which are designed for and positioned in the data centers as top of
rack (TOR) switches. In addition to the access layer, MC-LAGs are also commonly deployed at the core layer, which is
commonly supported by the EX9200 Series switches or the MX Series routers.
While the illustrated scenarios are the most common deployments of MC-LAG in the data center, MC-LAGs have also been
used at the distribution layer and in some cases in both the north and south directions. Our focus throughout this chapter is at
the access layer where the QFX5100 Series switches are well positioned as TORs. Note that the CLI syntax used to deploy
MC-LAGs on MX Series routers and QFX Series switches may differ. We recommend that you consult the technical
documentation for support and deployment details when implementing MC-LAG on the MX Series routers.


Multichassis Link Aggregation Operations


The slide highlights the topic we discuss next.


MC-LAG Overview
An MC-LAG allows two similarly configured devices, known as MC-LAG peers, to emulate a logical LAG interface which
connects to a separate device at the remote end of the LAG. The remote LAG endpoint may be a server, as shown in the
example on the slide, or a switch or router depending on the deployment scenario. The two MC-LAG peers appear to the
remote endpoint connecting to the LAG as a single device.
As previously mentioned, MC-LAGs build on the standard LAG concept defined in 802.3ad and provide node-level redundancy
as well as multi-homing support for mission critical deployments. Using MC-LAGs avoids the single point of failure scenario
related to the access switches described previously and allows for operational continuity for traffic and services, even when
one of the two MC-LAG peers supporting the server fails.
MC-LAGs make use of the Inter-Chassis Control Protocol (ICCP), which is used to exchange control information between the
participating MC-LAG peers. We discuss ICCP further on the next slides.


Interchassis Control Protocol Overview


ICCP, which uses TCP/IP, replicates control traffic and forwarding states across the MC-LAG peers and communicates the
operational state of the MC-LAG peers. Because ICCP uses TCP/IP to communicate between the MC-LAG peers, the MC-LAG
peers must be connected. ICCP messages exchange MC-LAG configuration parameters and ensure that both peers use the
correct LACP parameters. The connection used to support the ICCP communications is called the interchassis link-protection
link (ICL-PL).
The ICL-PL provides redundancy when a link failure (for example, an MC-LAG trunk or access port) occurs on one of the active
links. The ICL-PL can be a single Ethernet interface or an aggregated Ethernet interface with multiple member links. It is highly
recommended that the connection be no less than a 10-Gigabit Ethernet interface and ideally an aggregated Ethernet
interface with multiple member links to support the potential throughput requirements and incorporate fault tolerance and
high availability. You can configure only one ICL-PL between the two peers, although you can configure multiple MC-LAGs
between them which are supported by the single ICL-PL connection.
The ICL-PL should allow open communications between the associated MC-LAG peers. If the required communications are
prohibited and ICCP exchanges are not freely sent and received, instabilities with the MC-LAG may be introduced.
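
As a hedged sketch, the ICCP peering between two MC-LAG peers might be configured as follows; the addresses are hypothetical, and the exact statements should be confirmed in the technical documentation for your release:

```
{master:0}[edit]
user@qfx1# set protocols iccp local-ip-addr 10.1.1.1
user@qfx1# set protocols iccp peer 10.1.1.2 session-establishment-hold-time 50
user@qfx1# set protocols iccp peer 10.1.1.2 liveness-detection minimum-interval 1000
```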


ICL-PL High Availability


Using aggregated links for the ICL-PL, over which the ICCP peering session is established, mitigates the possibility of a
split-brain state. A split-brain state occurs when the ICL-PL configured between the MC-LAG peers goes down. To work around
this problem, you enable backup liveness detection. With backup liveness detection enabled, the MC-LAG peers can
communicate through the keepalive link. Backup liveness detection is disabled by default and requires explicit configuration,
which we illustrate on a subsequent slide. It is recommended that the out of band (OOB) management connection be used as
the keepalive link.
During a split-brain state, the standby peer brings down its local member links in the MC-LAG by changing the LACP system ID.
When the ICCP connection is active, both of the MC-LAG peers use the configured LACP system ID. If the LACP system ID is
changed during failures, the server that is connected over the MC-LAG removes the links associated with the standby MC-LAG
peer from its aggregated Ethernet bundle. Note that split-brain states bring down the MC-LAG link completely if the primary
peer member is also down for other reasons. Recovery from the split-brain state occurs automatically when the ICCP
adjacency comes up between the MC-LAG peers.
When the ICL-PL is operationally down and the ICCP connection is active, the LACP state of the links with status control
configured as standby is set to the standby state. When the LACP state of the links is changed to standby, the server
connected to the MC-LAG makes these links inactive and does not use them for sending data, thereby forcing all traffic to flow through the active MC-LAG peer. This behavior avoids a split-brain scenario in which both MC-LAG peers operate in an active role without knowledge of each other.


MC-LAG Modes
There are two modes in which an MC-LAG can operate: Active/Standby and Active/Active. Each mode has its own set of benefits and drawbacks.
Active/Standby mode allows only one MC-LAG peer to be active at a time. Using LACP, the active MC-LAG peer signals to the
attached device (the server in our illustrated example) that its links are available to forward traffic. As you might guess, a
drawback to this method is that only half of the links in the server’s LAG are used at any given time. However, this method is
usually easier to troubleshoot than Active/Active because traffic is not hashed across all links and no shared MAC learning
needs to take place between the MC-LAG peers.
Using the Active/Active mode, all links between the attached device (the server in our illustrated example) and the MC-LAG peers are active and available for forwarding traffic. Because all links are active, traffic might need to pass between the MC-LAG peers. The ICL-PL can be used to accommodate the traffic required to pass between the MC-LAG peers. We demonstrate this on the next slide. Currently, the QFX5100 Series switches support only the Active/Active mode.


Traffic Flow Example: Active/Active


This slide illustrates the available forwarding paths for an MC-LAG deployed in the Active/Active mode. In the Active/Active
mode, the ICL-PL trunk port can be used to forward traffic for all VLANs to which it is assigned. This is especially helpful when
one link in the MC-LAG fails and the traffic’s destination is reachable through the peer with the failed link. In this failure
scenario, the traffic is forwarded through the surviving link to the other MC-LAG peer and then over the ICL-PL connection and
on to its intended destination.
To ensure proper forwarding with a functional MC-LAG deployment, you should be aware of some Spanning Tree Protocol (STP) guidelines that apply when deploying MC-LAG. It is recommended that you enable STP globally on the MC-LAG peers to avoid local mis-wiring loops within a single peer or between both peers. It is also recommended that you disable STP on the ICL-PL link; otherwise, it might block ICL-PL ports and disable protection. When an MC-LAG is deployed between switches in a tiered environment, for example, between the access and distribution layers or between the distribution and core layers, STP should be disabled on the MC-AE interfaces participating in the MC-LAG. In the situation shown on the slide, where the MC-LAG is defined on switches in the access layer, you should mark the MC-AE interfaces as edge interfaces and ensure any STP BPDUs received from the connected device are blocked. Blocking BPDUs on edge ports helps ensure there are no unwanted STP topology changes in your environment.
Note
A loop-free state is maintained between MC-LAG peers through
ICCP and through the block filters maintained on each peer. The
ICCP messages exchanged between MC-LAG peers include state
information, and the block filters determine how traffic is forwarded.


Layer 2 Unicast: MAC Learning and Aging


When a MAC address is learned on a single-homed connection on one of the MC-LAG peers, as shown in the example, that
MAC address is propagated to the other MC-LAG peer using ICCP. The remote peer receiving the MAC address through ICCP,
qfx2 in our example, adds a new MAC entry in its bridge table associated with the corresponding VLAN and associates the
newly added forwarding entry with the ICL-PL link.
Continued on the next page.

Layer 2 Unicast: MAC Learning and Aging (contd.)
The MAC addresses learned locally on an MC-LAG peer include the L flag and the MAC addresses learned from the remote
MC-LAG peer through ICCP include the R flag. This is illustrated in the following output:
{master:0}
user@qfx1> show ethernet-switching table

MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static
           SE - statistics enabled, NM - non configured MAC, R - remote PE MAC)

Ethernet switching table : 3 entries, 3 learned

Routing instance : default-switch
   Vlan        MAC                 MAC       Age    Logical
   name        address             flags            interface
   v15         4c:96:14:e8:c6:fd   DR        -      ae0.0
   v15         4c:96:14:e8:c6:fe   DL        -      et-0/0/51.0
   v15         4c:96:14:e8:f0:21   DL        -      ae1.0

{master:0}
user@qfx2> show ethernet-switching table

MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static
           SE - statistics enabled, NM - non configured MAC, R - remote PE MAC)

Ethernet switching table : 3 entries, 3 learned

Routing instance : default-switch
   Vlan        MAC                 MAC       Age    Logical
   name        address             flags            interface
   v15         4c:96:14:e8:c6:fd   DL        -      et-0/0/50.0
   v15         4c:96:14:e8:c6:fe   DR        -      ae0.0
   v15         4c:96:14:e8:f0:21   DR        -      ae1.0
All learned MAC addresses, regardless of whether they are learned locally or through ICCP from the remote peer, are removed from the bridge table when the aging timer expires on both peers. The default aging timer is 300 seconds. When traffic from a known MAC address is seen by either one of the MC-LAG peers, the aging timer resets.
To prevent an attached device from receiving multiple copies of a frame from both MC-LAG peers, a block mask is used. A
block mask prevents the local peer from forwarding traffic received from the remote peer through the ICL-PL that is destined
to the device attached to the MC-AE interface. The forwarding block mask for a given MC-LAG is cleared when the local
member MC-AE interfaces go down on the peer. To achieve faster convergence, if all local members of the MC-LAG link are
down, outbound traffic on the MC-LAG is redirected to the ICL-PL interface on the data plane.
When all MC-AE interfaces are up and operational, both MC-LAG peers forward packets received through interfaces other than the ICL-PL and destined to the devices attached through the MC-AE interfaces. This is known as local affinity and is the preferred forwarding method. If the MC-AE interfaces are down on the local peer, packets received through interfaces other than the ICL-PL are redirected over the ICL-PL toward the remote MC-LAG peer, which in turn forwards those packets out its local MC-AE interface toward the attached device. In this situation the block mask is reset and a filter is created.
When the MC-AE interfaces recover and return to an operational state, the filter is deleted and the block mask is set to ensure
no further redirection occurs over the ICL-PL. In this recovered, steady state, the local peer once again begins to forward traffic
through its local MC-AE interfaces.


Layer 2 Multicast Support


By default, when multicast traffic is received on an MC-LAG peer (or any Layer 2 switch), it floods that traffic out all interfaces
associated with the VLAN in which the traffic was received. This default behavior is not ideal in most environments because of
the unnecessary resource consumption that occurs. To avoid unnecessary resource consumption, you can enable IGMP
snooping. Note that in an MC-LAG deployment the multicast traffic is always flooded over the ICL-PL connection.
IGMP snooping controls multicast traffic in a switched network. As previously mentioned, when IGMP snooping is not enabled,
a switch floods multicast traffic out all ports assigned to the associated VLAN, even if the hosts on the network do not want
the multicast traffic. With IGMP snooping enabled, a switch monitors the IGMP join and leave messages sent by a switch’s
attached hosts which are destined to a multicast router. This enables the switch to keep track of multicast groups for which it
has interested receivers and the ports assigned to the interested receivers. The switch then uses this information to make
intelligent decisions and to forward multicast traffic to only the interested destination hosts.
In an MC-LAG configuration, IGMP snooping replicates the Layer 2 multicast routes so that each MC-LAG peer has the same
routes. If a device is connected to an MC-LAG peer by way of a single-homed interface, IGMP snooping does not replicate the
join message to its IGMP snooping peer.
To achieve subsecond convergence, IGMP snooping is disabled for 255 seconds on MC-LAG peer reboot and ICCP connection up/down events. This is done to forward traffic while membership information is not yet synchronized on the node. IGMP reports are synchronized as the packets are received.
The clear igmp-snooping membership command clears only the local MC-LAG peer's membership information. If you want all membership records to be cleared on both peers, you must issue the command on both peers.
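Enabling IGMP snooping as described above is a short configuration task. A minimal sketch, applied on each MC-LAG peer (this example enables snooping on all VLANs; you can instead name individual VLANs):

```
set protocols igmp-snooping vlan all
```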


Layer 3 Routing
Layer 3 inter-VLAN routing can be provided through MC-LAG peers using IRBs and VRRP. This allows compute devices to communicate with other devices on different Layer 3 subnets using gateway access through their first-hop infrastructure device (their directly attached access switch), which keeps inter-VLAN traffic close to the access layer and expedites communication.
For simplified Layer 3 gateway services, where Layer 3 routing protocols are not run on the MC-LAG peers, you simply
configure the same Layer 3 gateway IP address on both MC-LAG peers and enable IRB MAC address synchronization. MAC
address synchronization enables MC-LAG peers to forward Layer 3 packets arriving on MC-AE interfaces with either its own
IRB MAC address or its peer’s IRB MAC address. Each MC-LAG peer installs its own IRB MAC address as well as the peer’s IRB
MAC address in its local forwarding table. Each MC-LAG peer treats the packet as if it were its own packet. If IRB MAC address
synchronization is not enabled, the IRB MAC address is installed on the MC-LAG peer as if it was learned on the ICL-PL.
Control packets destined for a particular MC-LAG peer that arrive on an MC-AE interface of its MC-LAG peer are not forwarded
on the ICL-PL interface. Additionally, using the gateway IP address as a source address when you issue either a ping,
traceroute, telnet, or FTP request is not supported. To enable IRB MAC address synchronization, issue the set vlans vlan-name mcae-mac-synchronize command on each MC-LAG peer.
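A hedged sketch of enabling IRB MAC address synchronization for a VLAN (the VLAN name and IRB unit number are assumptions drawn from the examples later in this chapter):

```
set vlans v15 l3-interface irb.15
set vlans v15 mcae-mac-synchronize
```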
Continued on the next page.

Layer 3 Routing (contd.)
Without IRB MAC address synchronization, if one MC-LAG peer sends an ARP request, and the other MC-LAG peer receives the
response, ARP resolution is not successful. With synchronization, the MC-LAG peers synchronize the ARP resolutions by
sniffing the packet at the MC-LAG peer receiving the ARP response and replicating this to the other MC-LAG peer. This ensures
that the entries in ARP tables on the MC-LAG peers are consistent.
When one of the MC-LAG peers restarts, the ARP destinations on its MC-LAG peer are synchronized. Because the ARP
destinations are already resolved, its MC-LAG peer can forward Layer 3 packets out of the MC-AE interface.
For more advanced Layer 3 gateway services, where Layer 3 routing protocols and Layer 3 multicast operations are required on the MC-LAG peers, you configure unique IRB interfaces on each MC-LAG peer and then configure the Virtual Router Redundancy Protocol (VRRP) between the peers in an active/standby role.
Using VRRP, the MC-LAG peers act as virtual routers. The virtual routers share the virtual IP address that corresponds to the
default route configured on the host or server connected to the MC-LAG. This virtual IP address maps to a VRRP MAC address, which is automatically generated based on the VRRP group number. The attached compute device uses the
VRRP MAC address to send any Layer 3 upstream packets. At any time, one of the VRRP routers (MC-LAG peers) is the master
(active), and the other is a backup (standby). Both VRRP active and VRRP backup routers forward Layer 3 traffic arriving on
the MC-AE interface. If the master router fails, all the traffic shifts to the MC-AE link on the backup router.
Note
You must configure VRRP on both MC-LAG peers in order for both the
active and standby members to accept and route packets. Additionally,
configure the VRRP backup router to send and receive ARP requests.

If routing protocols are used on the MC-LAG peers, they should be configured to run on the primary IP address on the peer’s
IRB interface. Any configured protocols run independently on each MC-LAG peer rather than in a unified fashion. To help with
some forwarding operations, the IRB MAC address of each peer is replicated on the other peer and is installed as a MAC
address with the forwarding next hop of the ICL-PL. This is achieved by configuring a static ARP entry for the remote peer as
shown in the following output:
{master:0}[edit interfaces]
user@qfx1# show irb unit 15
family inet {
    address 172.25.15.101/24 {
        arp 172.25.15.102 l2-interface ae0.0 mac dc:38:e1:5d:1c:00;
        vrrp-group 15 {
            virtual-address 172.25.15.1;
            priority 200;
        }
    }
}


Deploying Multichassis Link Aggregation


The slide highlights the topic we discuss next.


Sample Objectives and Topology


This slide introduces the objectives and topology we use to illustrate a working example of MC-LAG. In this example we have a
pair of QFX5100 Series switches functioning as TORs in the access layer of a data center. We will deploy two distinct MC-LAGs
that will support the Layer 2 and Layer 3 traffic from the attached servers, which reside in different VLANs. The slide
highlights all of the topological details associated with this environment as well as the relevant configuration values we will
assign to qfx1 and qfx2, which are serving as MC-LAG peers.


Creating Aggregated Ethernet Interfaces


This slide illustrates the creation of the aggregated Ethernet (AE) interfaces required on both qfx1 and qfx2. Once the AE interfaces are created, they should appear in operational command outputs, as shown on the slide. These AE interfaces will be administratively up but operationally down until they have functional member links assigned. You must also add the needed configuration associated with each AE interface. We associate member links and add the required configuration for these AE interfaces on subsequent slides.
Note
The working example illustrated here and throughout the
remainder of this section is from qfx1’s perspective. A similar
configuration is also required on qfx2 but is not included.
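Creating the AE interfaces typically amounts to setting the aggregated-device count on the chassis; a minimal sketch (the count of 3, covering ae0 through ae2, is an assumption based on this example's topology):

```
set chassis aggregated-devices ethernet device-count 3
```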


Configuring the ICL-PL: Part 1


The ae0 interface has been designated as the ICL-PL connection between qfx1 and qfx2. This slide illustrates part of the
process of configuring ae0 as the ICL-PL. We first configure ae0 as a trunk interface and associate it with the v100 VLAN. The
v100 VLAN is designated as a control VLAN to which we will later assign an IRB that will be used for establishing an ICCP
session between the two MC-LAG peers.
Next, this slide illustrates the process of enabling LACP in the active role along with the association of two member links on
the ae0 interface. The last command illustrated shows the definition of the v100 VLAN, which again is used as the control
VLAN for the ICL-PL.
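The steps described on this slide might be configured as follows; a sketch using the interface and VLAN names from the example (exact values depend on your topology):

```
set interfaces et-0/0/48 ether-options 802.3ad ae0
set interfaces et-0/0/49 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members v100
set vlans v100 vlan-id 100
```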


Configuring the ICL-PL: Part 2


After activating the configuration on both qfx1 and qfx2 using the commit command, you should see, as shown on the slide, the ae0 interface in the operational state. You should also see that the et-0/0/48 and et-0/0/49 member links are now appropriately associated with ae0 and that they are receiving and transmitting LACP packets.


Configuring the ICL-PL: Part 3


This slide illustrates the final configuration and verification steps to enable the ICL-PL. Here we configure the IRB, which is
then assigned to the v100 VLAN as a Layer 3 interface. After activating the configuration changes, we verify that the IRB
interface is up and operational.
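A hedged sketch of the IRB definition and its VLAN assignment described above (the IRB unit number and IP address are assumptions; qfx2 would use a different address in the same subnet):

```
set interfaces irb unit 100 family inet address 10.100.100.1/24
set vlans v100 l3-interface irb.100
```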


Enabling ICCP: Part 1


This and the next slide illustrate the process to enable ICCP on the MC-LAG peers. As part of our ICCP configuration, we
illustrate the definition of the backup liveness detection feature, which in this example is configured over the out-of-band (OOB) management network.
Note
Configuring the session establishment hold time
helps in faster ICCP connection establishment.
The recommended value is 50 seconds.

Note
The liveness detection intervals determine how often BFD messages are
exchanged and how much time can pass before declaring the remote ICCP
peer as dead. It is recommended that you not use a value less than 200 ms.
The actual value you use will depend on your deployment!
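Combining the steps and recommendations above, the ICCP configuration on qfx1 might look like the following sketch (all IP addresses are assumptions; the backup-peer-ip would point at qfx2's OOB management address):

```
set protocols iccp local-ip-addr 10.100.100.1
set protocols iccp peer 10.100.100.2 session-establishment-hold-time 50
set protocols iccp peer 10.100.100.2 backup-liveness-detection backup-peer-ip 172.26.0.2
set protocols iccp peer 10.100.100.2 liveness-detection minimum-interval 1000
```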


Enabling ICCP: Part 2


After committing the configuration changes on both MC-LAG peers, you should see an established ICCP peering session with liveness detection in the up state and the backup liveness peer status in the up state. If you do not see these status indicators at this point, you should verify the configuration settings on both peers and that a clear communications path exists over both the ICL-PL and the OOB management network.


Enabling Multichassis Protection


In preparation for enabling MC-LAGs on qfx1 and qfx2, you must ensure that multichassis protection is enabled and that both
peers reference each other using the IP addresses used for the ICCP peering session. Note that you must also reference the
interface through which the MC-LAG peers will communicate. The referenced interface should be the interface you designate
as the ICL-PL connection.
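From qfx1's perspective, multichassis protection might be enabled as follows (the peer IP address is an assumption matching the ICCP example):

```
set multi-chassis multi-chassis-protection 10.100.100.2 interface ae0
```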


Configuring MC-AE 1: Part 1


On this and the next slides we illustrate the configuration required for MC-AE 1, which corresponds to the ae1 interface we created earlier. This slide shows the definition of the v15 VLAN, which is associated with MC-AE 1. Next, the member link on qfx1 (et-0/0/50) is associated with ae1. The last command on the slide shows ae1 being enabled for Layer 2 operations, as an access port, and associated with the v15 VLAN. The qfx2 switch should have a similar configuration, with the key exception being the member link assigned to the ae1 interface, which, in the case of qfx2, is et-0/0/51, as shown in the topology details on the slide.
The configuration steps to enable ae2 as MC-AE 2 are very similar to the steps for ae1 in this example. The key difference is
the VLAN MC-AE 2 supports (v20) and the MC-AE configuration values, which, in the case of the status control and chassis ID
designations, are reversed from those shown in the MC-AE 1 example here and on the next slides.
Note
In addition to the MC-AE interfaces being associated with their respective
VLANs, you will also need to associate ae0, which is functioning as the
ICL-PL link, with all VLANs for which you want it to carry traffic (v15 and
v20 in our example). This configuration is not illustrated on the slide but
should be done to ensure full reachability through the MC-LAG peers.
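The steps described above, including the ICL-PL VLAN membership mentioned in the note, might be configured on qfx1 as follows (a sketch based on the stated topology values):

```
set vlans v15 vlan-id 15
set interfaces et-0/0/50 ether-options 802.3ad ae1
set interfaces ae1 unit 0 family ethernet-switching interface-mode access
set interfaces ae1 unit 0 family ethernet-switching vlan members v15
set interfaces ae0 unit 0 family ethernet-switching vlan members [ v15 v20 ]
```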


Configuring MC-AE 1: Part 2


This slide shows more of the configuration requirements for the MC-AE 1 interface. Note that some values, specifically the
MC-AE ID and the MC-LAG mode, are common between peers while other values, specifically the chassis ID and status control
designations, must be different between the two peers. In the case of the MC-AE 1, qfx1 is designated with the chassis ID 0
while qfx2 is designated with the chassis ID of 1. In the case of the MC-LAG mode, qfx1 is assigned the active role while qfx2
is assigned the standby role.
Note that the opposite designations are assigned for these two values on the MC-AE 2 configuration, which is not illustrated in
our example. Also note that on the QFX5100 Series switches, only the active-active MC-LAG mode is supported.
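On qfx1, the MC-AE values described above might be expressed as follows (a sketch; qfx2 would use chassis-id 1 and status-control standby for this MC-AE):

```
set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 1
set interfaces ae1 aggregated-ether-options mc-ae chassis-id 0
set interfaces ae1 aggregated-ether-options mc-ae mode active-active
set interfaces ae1 aggregated-ether-options mc-ae status-control active
```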


Configuring MC-AE 1: Part 3


This slide shows the remainder of the MC-AE configuration for MC-AE 1 on qfx1. Here we see that LACP has been enabled in
the active role and that the designated admin key and system ID for MC-AE 1 have been defined. Note that these specific
values must be the same on both peers participating in the same MC-AE. The LACP admin key and LACP system ID values
must be unique between each defined MC-AE. While not shown in this example, those values for MC-AE 2 are 2 and
02:02:02:02:02:02 respectively.
Note that because LACP is required on the MC-AE, the attached server device must, by default, also support LACP. With the
new configuration changes activated using the commit command, as shown on the slide, you should now see the MC-AE
interfaces in an operational state. We verify the state of the MC-AE interfaces on the next slide.
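A sketch of the LACP settings described above (the admin key of 1 and system ID of 01:01:01:01:01:01 are assumptions inferred from the MC-AE 2 values given in the text; these values must match on both peers):

```
set interfaces ae1 aggregated-ether-options lacp active
set interfaces ae1 aggregated-ether-options lacp admin-key 1
set interfaces ae1 aggregated-ether-options lacp system-id 01:01:01:01:01:01
```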


Verifying MC-AE Status


Now that the required configuration has been added and activated, you should see an operational status for the MC-AE
interfaces. Using the show interfaces terse command, you can determine the administrative and operational states.
Using the show interfaces mc-ae command, you can also determine the status control of the local and remote peers for the shared MC-LAG in addition to the interface state of the MC-AE interfaces. In the current state, the MC-LAG is operational and
should be able to support Layer 2 traffic for its associated VLAN. Note that the details from the show interfaces mc-ae
command for MC-AE 2 (ae2) have been trimmed for brevity’s sake but should show a similar output as illustrated for MC-AE 1
(ae1).
You can also determine the operational status of these interfaces through other commands. One such command and its
associated output follows:
{master:0}[edit]
user@qfx1# run show vlans

Routing instance        VLAN name             Tag          Interfaces
default-switch          v100                  100
                                                           ae0.0*
default-switch          v15                   15
                                                           ae0.0*
                                                           ae1.0*
default-switch          v20                   20
                                                           ae0.0*
                                                           ae2.0*


What If...?
In some deployments you might find that the attached server does not support LACP. Because MC-AE interfaces depend on state information learned from LACP, this results in the MC-AE interfaces not becoming operational and therefore being unable to support the traffic to and from the server. To work around this specific situation, you can force the member links associated with an MC-AE up using the command illustrated on the slide. Once the interfaces have been forced up, you should see the operational state transition to the up state as shown in the following output:
{master:0}[edit]
user@qfx1# commit
configuration check succeeds
commit complete

{master:0}[edit]
user@qfx1# run show interfaces terse | match "ae(1|2)"
et-0/0/50.0 up up aenet --> ae1.0
et-0/0/51.0 up up aenet --> ae2.0
ae1 up up
ae1.0 up up eth-switch
ae2 up up
ae2.0 up up eth-switch
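The force-up workaround referenced on the slide is applied per member link; a hedged sketch for qfx1's ae1 member:

```
set interfaces et-0/0/50 ether-options 802.3ad lacp force-up
```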


Configuring InterVLAN Routing: Part 1


One of the objectives of this working example was to ensure that gateway services, facilitating interVLAN routing, are provided
for the attached servers through the MC-LAG peers. To meet this objective, we configure a new IRB interface on each peer
with unique IP addresses in the two subnets associated with the v15 and v20 VLANs. We then configure VRRP for each of the
defined subnets ensuring the master and backup roles within VRRP are assigned to the two peers through priority settings.
The details used to meet this objective are shown on the slide and illustrated through the configuration tasks on this and the
next slide.
This slide specifically shows the creation of the IRB used for MC-AE 1 along with a static ARP definition. The static ARP definition is typically only required when Layer 3 protocols are used on the MC-LAG peers, which is not the case in this example. We include the static ARP definition to illustrate how it is created and to prepare the configuration and environment for any protocol control traffic exchanges that might be required between the peers over their ICL-PL connection.


Configuring InterVLAN Routing: Part 2


This slide shows the remainder of the configuration tasks required to enable VRRP on the IRB associated with the v15 VLAN
along with the association of the IRB with the VLAN itself. Note that a similar configuration is required on qfx2 with the key
differences being the primary IP address assigned to its IRB and the VRRP priority used for this group.
In addition to the configuration shown on this and the previous slide, you must also include a similar configuration, with
different IP addresses and a different priority setting, to support interVLAN routing for the v20 VLAN. Those configuration
commands are not illustrated in this example for brevity’s sake.


Verifying InterVLAN Routing


Once the required configuration is added to both MC-LAG peers for both VLANs, the attached servers (Server A and Server B),
which reside on different Layer 3 subnets, should be able to communicate. Note that the gateway address on the servers
must point to the virtual IP address configured for their respective subnet (172.25.15.1 for v15 and 172.25.20.1 for v20).
In this verification example, qfx1's route table is shown, and communications between the servers, which reside on different subnets, are confirmed. The route table indicates that qfx1 has routes for both of the referenced subnets. These entries are
added based on the IRB configuration performed on the previous slides. To verify communications between the servers, you
can use the ping command. In this example, Server A, which resides on the 172.25.15.0/24 subnet, can successfully ping
Server B, which resides on the 172.25.20.0/24 subnet.


We Discussed:
• The purpose and value of MC-LAGs;
• The components and operations of an MC-LAG; and
• How to implement an MC-LAG on QFX5100 Series switches.


Review Questions
1.

2.

3.


Multichassis Link Aggregation Lab


The slide provides the objectives for this lab.

Answers to Review Questions
1.
The primary benefit of MC-LAG is that, when deployed, link-level redundancy is achieved along with node-level redundancy, and the single point of failure typically found in these deployment scenarios is eliminated.
2.
ICCP replicates control traffic and forwarding states across the MC-LAG peers and communicates the operational state of the MC-LAG
peers. The ICL-PL is the connection used to support the ICCP communications and also serves as an alternate path when MC-AE links
on one of the peers are unavailable.
3.
When defining an MC-AE, the LACP admin key and system ID must match whereas the status control and chassis ID parameters must
be different on the MC-LAG peers.


Chapter 6: Mixed Virtual Chassis



We Will Discuss:
• Key concepts and components of a mixed Virtual Chassis;
• The operational details of a mixed Virtual Chassis; and
• Implementing a mixed Virtual Chassis and verifying its operations.


Overview of Mixed Virtual Chassis


The slide lists the topics we will discuss. We discuss the highlighted topic first.


A Mix of Switch Types


You can connect two or more EX4300 Series or QFX5100 Series switches together to form one unit and manage the unit as a
single chassis, called a Virtual Chassis. The Virtual Chassis system offers you add-as-you-grow flexibility. A Virtual Chassis can
start with two switches and grow, based on your needs, to as many as ten interconnected switches. This ability to grow and
expand within and across wiring closets is a key advantage in many enterprise environments.
A mixed Virtual Chassis is a Virtual Chassis consisting of a mix of different switch types. Only certain mixtures of switches are
supported. It can consist either of EX4200, EX4500, or EX4550 Series switches or it can consist of EX4300, QFX3500,
QFX3600, or QFX5100 Series switches. This chapter covers the QFX-based mixed Virtual Chassis.


CLI Command
For any of the supported switches to support mixed mode, you must enable mixed mode (followed by a reboot) from the CLI.
To enable mixed mode operation, issue the request virtual-chassis mode mixed reboot command. The mode
will be set to mixed and then the switch will reboot so that the change to both the software and hardware take effect.

Behavior Changes
Setting the Virtual Chassis mode to mixed changes both the software and hardware functionality of a Virtual Chassis. In general, the maximum scaling numbers are reduced to the lowest maximum scaling numbers among the possible members. That is, regardless of which actual switch types (EX4300 or QFX5100) are attached to the Virtual Chassis, the numbers are scaled down significantly. You should set the Virtual Chassis to this mode only if there will be a mix of switches.


Benefits of a Mixed Virtual Chassis


The benefits of a mixed Virtual Chassis are the same as those for a non-mixed Virtual Chassis.
In a Virtual Chassis configuration, one of the member switches is elected as the master RE and a second member switch is
elected as the backup RE. This design approach provides control plane redundancy and is a requirement in many enterprise
environments.
Having redundant REs enables you to implement nonstop active routing (NSR) and nonstop bridging (NSB), which allow for a
transparent switchover between REs without requiring a restart of supported routing protocols and supported Layer 2 protocols,
respectively. Both REs are fully active in processing protocol sessions, so each can take over for the other. We discuss both
NSR and NSB in more detail later in this chapter.
You can connect certain EX Series and/or QFX5100 Series switches together to form a Virtual Chassis system, which you then
manage as a single device. Comparatively speaking, managing a Virtual Chassis system is much simpler than managing up to
ten individual switches. For example, when upgrading the software on a Virtual Chassis system, you issue the software
upgrade command only on the master switch. If all members functioned as standalone switches, however, each member's
software would have to be upgraded separately. Also, in a Virtual Chassis scenario, it is not necessary to run the
Spanning Tree Protocol (STP) between the individual members because in all functional aspects, a Virtual Chassis system is a
single device.


Packet Forwarding Engine Scaling


Regardless of which switch types are actually attached to the Virtual Chassis, setting the mode to mixed causes the master
kernel and daemons to set the sizes of hardware tables to the lowest common denominator (LCD). The slide shows some of
the hardware capabilities that are affected.


Software Features
In general, you can expect the Junos OS to support the LCD with regard to software features in a mixed Virtual Chassis. The slide shows
the uniform resource locator (URL) at which you can view the behavior of many of the software features when they are used in
a mixed scenario.


Components
You can interconnect up to ten EX4300 and QFX Series switches to form a mixed Virtual Chassis.
Each EX4300 and QFX switch has a single PFE. The PFEs in a Virtual Chassis are interconnected using Virtual Chassis ports
(VCPs). Collectively, the PFEs and their connections constitute the Virtual Chassis backplane.
You can use the built-in QSFP+ VCPs on the rear of the EX4300 switches (or on the front of QFX5100 switches), or 10 GbE uplink ports
converted to VCPs, to interconnect the member switches’ PFEs. To use an uplink port as a VCP, explicit configuration is
required.
Each member switch takes on the role of master RE, backup RE, or line card. Initially, the two REs are elected to
their roles (see the RE election slide). Once they are elected, the master RE assigns roles to all of the other
members. In a mixed Virtual Chassis environment using QFX5100 Series switches, a QFX5100 is always elected to the RE
role. In fact, any other type of switch will refuse to become the RE if it knows that a QFX5100 is present in the Virtual Chassis.


Ring Cabling
This slide illustrates the recommended cabling option and provides some related information. The actual cabling distances
are dependent on the cable or optic type and capabilities. Please refer to the latest documentation for your specific platform
to determine actual maximum distances.


Master and Backup RE Election


The slide shows the master and backup RE election algorithm that is performed by every switch in the mixed Virtual Chassis.
This is identical to a non-mixed Virtual Chassis except that in a mixed Virtual Chassis an EX4300 can never become an RE.


Provisioning a Mixed Virtual Chassis


The slide highlights the topic we discuss next.


Virtual Chassis Configuration


This slide illustrates the hierarchy used and configurable options available when configuring a Virtual Chassis. A more
detailed explanation for some key configuration options follows:
• auto-sw-update: This option enables the automatic software update feature, which is used to automatically
upgrade member switches that have a software mismatch (for example, when the member switch’s version does not
match the version currently used by the master switch elected for a given Virtual Chassis). This feature is not
enabled by default.
• id: This feature allows you to explicitly assign a Virtual Chassis ID so that, if two Virtual Chassis configurations
merge, the ID you assign takes precedence over the automatically assigned Virtual Chassis IDs and becomes the
ID of the newly merged Virtual Chassis configuration.
• mac-persistence-timer: If the master switch is physically disconnected or removed from the Virtual
Chassis, this feature determines how long the backup (new master) switch continues to use the address of the
old master switch. When the MAC persistence timer expires, the backup (new master) switch begins to use its
own MAC address. No minimum or maximum timer limits exist, and the default timer is 10 minutes.
• no-split-detection: This feature is used to disable the split and merge feature which is enabled by
default. The split and merge feature provides a method to prevent the two parts of a separated Virtual Chassis
from adversely affecting the network. This feature also allows the two parts to merge back into a single Virtual
Chassis configuration.
Continued on the next page.

Virtual Chassis Configuration (contd.)
• preprovisioned: This feature is used to deterministically control both the role and the member ID assigned
to each member switch in a Virtual Chassis. A preprovisioned configuration links the serial number of each
member switch to a specified member ID and role. The serial number must be specified in the configuration file
for the member to be recognized as part of the Virtual Chassis. Using this option, you select two member
switches as eligible for the mastership election process. When you list these two members in the preprovisioned
configuration file, you designate both members as routing-engine. One member will function as the master
switch and the other will function as the backup switch. You designate all other members with the line card role.

In addition to the previously listed features, you can also enable the graceful Routing Engine switchover (GRES) feature, as shown
on the slide. GRES enables a device running the Junos OS with redundant REs to continue forwarding traffic even if one RE
fails. GRES preserves interface and kernel information and ensures minimal traffic interruption during a mastership change.
Note that GRES does not preserve the control plane.
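As an illustrative sketch only, a configuration combining several of these options with GRES might look like the following (the timer value is an arbitrary example):

[edit]
user@switch# set virtual-chassis auto-sw-update
user@switch# set virtual-chassis mac-persistence-timer 20
user@switch# set virtual-chassis no-split-detection
user@switch# set chassis redundancy graceful-switchover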


Dynamic Configuration Process: Part 1


When a QFX5100 or EX4300 switch powers on, it receives the default mastership priority value of 128. When a standalone
switch is connected to an existing Virtual Chassis configuration (which implicitly includes its own master), we recommend that
you explicitly configure the mastership priority of the members that you want to function as the master and backup switches.
We recommend the following guidelines for assigning mastership priority:
• Specify the same mastership priority value for the master and backup switches in a Virtual Chassis configuration.
Doing so helps to ensure a smooth transition from master to backup if the master switch becomes unavailable.
This configuration also prevents the original master switch from retaking control from the backup switch when
the original master switch comes back online, a situation sometimes referred to as flapping or pre-emption that
can reduce the efficiency of system operation.
• Configure the highest possible mastership priority value (255) for the master and backup switches. This
configuration ensures that these members continue to function as the master and backup switches when new
members are added to the Virtual Chassis configuration.
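Following these guidelines, the mastership priority configuration might be sketched as follows (the member numbering is an assumption based on a typical design in which members 0 and 1 are the REs):

[edit virtual-chassis]
user@switch# set member 0 mastership-priority 255
user@switch# set member 1 mastership-priority 255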


Dynamic Configuration Process: Part 2


Enable the VCP 0/48 and 0/51 ports on member 0. Notice that each port goes into the Down state after it is enabled because
the members at the other end of the links are currently powered off. Next, set the Virtual Chassis mode to mixed and reboot the master RE.
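The steps above can be sketched as the following command sequence (port numbers taken from the slide; the prompt varies with your system):

{master:0}
user@switch> request virtual-chassis vc-port set pic-slot 0 port 48
user@switch> request virtual-chassis vc-port set pic-slot 0 port 51
user@switch> request virtual-chassis mode mixed reboot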


Dynamic Configuration Process: Part 3


Once the desired backup RE (a QFX5100 Series switch) is powered on, you must enable the VCPs so that it joins the Virtual
Chassis. Notice that after the VCP port is enabled, the switch joins the Virtual Chassis, is assigned member ID 1, and the console of
member 1 is redirected to the console of the master RE.


Dynamic Configuration Process: Part 4


At this point, if you verify the status of the Virtual Chassis with the show virtual-chassis status command, you will
notice that member 1 is placed in the Inactive state. It is Inactive because you added a QFX5100
(in the default non-mixed mode) to a mixed Virtual Chassis (as determined by the settings of the master RE). All
members must be configured for mixed mode before they become active. At this point, we will not worry about the
Inactive state. Keep in mind that even though a member is Inactive, it is still under the control of the master RE. Once
the rest of the members are added to the Virtual Chassis, we will set all of their modes to mixed at the same time and then
reboot them all at the same time. The final step to add member 1 to the Virtual Chassis is to enable VCP 0/51 on member 1.


Dynamic Configuration Process: Part 5


When you power on the desired member 2 (an EX4300), the master RE automatically assigns it member ID 2. Since member 2
is an EX4300, its QSFP+ ports are automatically enabled as VCPs by default, which allows member 2 to join the Virtual
Chassis as soon as it is powered on.


Dynamic Configuration Process: Part 6


At this point, if you verify the status of the Virtual Chassis with the show virtual-chassis status command, you will
notice that member 1 and member 2 are placed in the Inactive state. They are Inactive because the
master RE is set for a mixed Virtual Chassis but members 1 and 2 are not. All members must be configured for mixed mode
before they become active.


Dynamic Configuration Process: Part 7


When you power on the desired member 3 (an EX4300), the master RE automatically assigns it member ID 3. Since member 3
is an EX4300, its QSFP+ ports are automatically enabled as VCPs by default, which allows member 3 to join the Virtual
Chassis as soon as it is powered on. Notice that all non-master members remain in the Inactive state. This problem is
fixed in the next few slides.


Dynamic Configuration Process: Part 8


To activate the line cards and the backup RE, you must set the mode of the Virtual Chassis to mixed and then reboot all of the
members.


Dynamic Configuration Process: Part 9


Notice that, as a result of the reboot, all of the members are now in mixed mode and they are all Prsnt and active.


Preprovisioning a Mixed Virtual Chassis


This slide illustrates a preprovisioned configuration example along with some related details.
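A minimal preprovisioned configuration for a mixed Virtual Chassis might resemble the following sketch (the serial numbers are hypothetical placeholders; substitute the serial numbers of your own switches):

[edit virtual-chassis]
user@switch# show
preprovisioned;
member 0 {
    role routing-engine;
    serial-number TA0000000001;    # hypothetical QFX5100 serial number
}
member 1 {
    role routing-engine;
    serial-number TA0000000002;    # hypothetical QFX5100 serial number
}
member 2 {
    role line-card;
    serial-number PE0000000003;    # hypothetical EX4300 serial number
}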


Software Requirements and Upgrades


The slide highlights the topic we discuss next.


Software Requirements
All members of a Virtual Chassis must have the same version of the Junos OS installed. Notice that even though the bottom
switch is a member of the Virtual Chassis (member 3), it is placed into the Inactive state. This happens because its
software version does not match the version on the master RE.


Mismatch Software Example


The master RE detects the mismatched version of the Junos OS, so it places member 3 in the Inactive state. In the Inactive
state, member 3 does not forward data but is still under the control of the master RE. The slide shows that, by using
the CLI of the master RE, you can manually upgrade the software of member 3.
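A sketch of such a manual upgrade from the master RE follows (the package file name is a hypothetical example for an EX4300 image):

{master:0}
user@switch> request system software add /var/tmp/jinstall-ex-4300-13.2X51-D21.1-domestic-signed.tgz member 3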


Software Upgrades Using NSSU: Part 1


Nonstop software upgrade (NSSU) can be used to upgrade the software running on all member switches participating in a
Virtual Chassis while minimizing traffic disruption during the upgrade. NSSU is supported on most EX Series and QFX Series
switches that support Virtual Chassis. Refer to specific documentation for your particular platform to verify that the current
Junos OS version supports NSSU.
Before attempting an NSSU, you must ensure that the Virtual Chassis is set up correctly. First, the Virtual Chassis members
must be connected in a ring topology. The ring topology prevents the Virtual Chassis from splitting during the NSSU process.
Next, the master and backup must be adjacent to each other. Adjacency allows the master and backup switches to always be
in sync. Last, ensure that the line cards in the Virtual Chassis are explicitly preprovisioned. During an NSSU, the members
must maintain their roles—the master and backup must maintain their master and backup roles, although mastership will
change, and the other member switches must maintain their line card roles. When you are upgrading a two-member Virtual
Chassis, no-split-detection must be configured so that the Virtual Chassis does not split when an NSSU upgrades a
member.
All members of the Virtual Chassis must be running the same version of the Junos software. NSR and graceful Routing Engine
switchover (GRES) must be enabled. Optionally, you can enable NSB. Enabling NSB ensures that all NSB-supported Layer 2
protocols operate seamlessly during the Routing Engine switchover that is part of the NSSU. Another step that you might want
to consider is to back up the current system software—Junos OS, the active configuration, and log files—on each member to
an external storage device with the request system snapshot command.
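The prerequisite configuration described above can be sketched as follows (these are standard Junos OS statements; verify them against the documentation for your release, and note that commit synchronize is assumed here because NSR requires it):

[edit]
user@switch# set system commit synchronize
user@switch# set chassis redundancy graceful-switchover
user@switch# set routing-options nonstop-routing
user@switch# set protocols layer2-control nonstop-bridging
user@switch# set virtual-chassis no-split-detection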


Software Upgrades Using NSSU: Part 2


The slide describes how to upgrade the software running on all Virtual Chassis members using NSSU. When the upgrade
completes, all members are running the new version of the software. Because a GRES occurs during the upgrade, the original
Virtual Chassis backup is the new master.
The first step is to download the appropriate software packages from Juniper Networks. If you are upgrading the software
running on a mixed Virtual Chassis, download the software packages for both switch types. The next step is to copy the
software package or packages to the master switch of the Virtual Chassis. We recommend that you copy the file to the
/var/tmp directory on the master. Next, log in to the Virtual Chassis using the console connection or the virtual
management Ethernet (VME) interface. Using a console connection allows you to monitor the progress of the master switch
reboot.
The following command should be used to upgrade a Virtual Chassis using NSSU:
{master:0}
user@Switch> request system software nonstop-upgrade /var/tmp/package-name.tgz
The following command should be used to upgrade a Virtual Chassis with mixed platforms using NSSU:
{master:0}
user@Switch> request system software nonstop-upgrade set [/var/tmp/package1-name.tgz /var/tmp/package2-name.tgz]


Configuring and Monitoring a Mixed Virtual Chassis


The slide highlights the topic we discuss next.


Show Commands
The following slides cover the usage of most of the commands shown on this slide.


VCP Status
Issue the show virtual-chassis vc-port command to determine the status of VCPs. In the output of the command,
each member is represented as an FPC (for example, member 1 equals fpc1). This command shows you how each VCP was
configured as well as its status, speed, and Virtual Chassis neighbor. Notice that each VCP is assigned a Trunk ID. A Trunk ID of
-1 means that the VCP is non-aggregated. A positive Trunk ID means that the VCP belongs to an
aggregated VCP (two or more VCPs automatically associated with the same Link Aggregation Group).
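The following abbreviated output is an illustrative sketch only (the column layout and values vary by platform and release); here, ports 0/48 and 0/51 on fpc0 share Trunk ID 7, indicating an automatically aggregated VCP toward member 1:

{master:0}
user@switch> show virtual-chassis vc-port
fpc0:
--------------------------------------------------------------------------
Interface   Type              Trunk  Status       Speed        Neighbor
or                             ID                 (mbps)       ID  Interface
PIC / Port
0/48        Configured         7     Up           10000        1   vcp-255/0/48
0/51        Configured         7     Up           10000        1   vcp-255/0/51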


Virtual Chassis Mode


Issue the request virtual-chassis mode command to change the mode of the Virtual Chassis. Any change made
with this command requires a reboot before it will take effect. If you decide not to immediately reboot using the reboot
modifier, you can check the Virtual Chassis’s current and future mode status with the show virtual-chassis mode
command.
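For example, the sequence without an immediate reboot might look like the following sketch (the all-members option is assumed here to reboot every member at once):

{master:0}
user@switch> request virtual-chassis mode mixed
user@switch> show virtual-chassis mode
user@switch> request system reboot all-members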


Enabling and Disabling VCPs


The slide shows the commands necessary to either disable or enable a VCP.


Network Ports
Neither EX4300 Series nor QFX5100 Series switches have dedicated VCPs. Instead, any 10-Gbps SFP+ or 40-Gbps QSFP+
interface can be used as either a VCP or a network port (a standard routable/switchable interface). Use the commands on the
slide to convert a VCP to a network port and vice versa.
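As a sketch, converting a hypothetical port 0/50 between the two modes looks like the following. To convert the network port to a VCP:

{master:0}
user@switch> request virtual-chassis vc-port set pic-slot 0 port 50

To convert the VCP back to a network port:

{master:0}
user@switch> request virtual-chassis vc-port delete pic-slot 0 port 50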


We Discussed:
• Key concepts and components of a mixed Virtual Chassis;
• The operational details of a mixed Virtual Chassis; and
• Implementing a mixed Virtual Chassis and verifying its operations.


Review Questions
1.

2.

3.


Mixed Virtual Chassis Lab


The slide provides the objective for this lab.

Answers to Review Questions
1.
Issue the command request virtual-chassis mode mixed reboot to change a Virtual Chassis to mixed mode.
2.
Use the auto-sw-update configuration option to automatically upgrade a newly added member.
3.
Use the show virtual-chassis mode command to view the future mode (after reboot) of a Virtual Chassis.


Chapter 7: Virtual Chassis Fabric



We Will Discuss:
• Key concepts and components of a Virtual Chassis Fabric (VCF); and
• The control and forwarding plane of a VCF.


Overview of VCF
The slide lists the topics we will discuss. We discuss the highlighted topic first.


What Is a VCF?
The Juniper Networks VCF provides a low-latency, high-performance fabric architecture that can be managed as a single
device. VCF is an evolution of the Virtual Chassis feature, which enables you to interconnect multiple devices into a single
logical device, inside of a fabric architecture. The VCF architecture is optimized to support small and medium-sized data
centers that contain a mix of 1-Gbps, 10-Gbps, and 40-Gbps Ethernet interfaces.
A VCF is constructed using a spine-and-leaf architecture. In the spine-and-leaf architecture, each spine device is
interconnected to each leaf device. A VCF supports up to twenty total devices, and up to four devices can be configured as
spine devices. QFX5100 Series switches can be placed in either the Spine or Leaf location, while QFX3500, QFX3600, and
EX4300 Series switches should be wired only as Leaf devices in a mixed scenario.


The Big Picture


The slide shows a fully loaded VCF with 4 Spine nodes and 16 Leaf nodes. Remember, to the outside world (anything outside
of the rectangle on the slide), the VCF appears to be a standalone switch. The slide also demonstrates the typical connectivity in
the northbound and southbound directions of a VCF.


Three Stage Clos Fabric


In the 1950s, Charles Clos first wrote about his idea of a non-blocking, multi-stage, telephone switching architecture that
would allow calls to be completed. The switches in his topology are called crossbar switches. A Clos network is based on a
three-stage architecture, an ingress stage, a middle stage, and an egress stage. The theory is that there are multiple paths for
a call to be switched through the network such that calls will always be connected and not "blocked" by another call. The term
Clos “fabric” came about later as people began to notice that the pattern of links looked like threads in a woven piece of cloth.
You should notice that the goal of the design is to provide connectivity from any ingress crossbar switch to any egress crossbar
switch. Notice also that there is no need for connectivity between crossbar switches that belong to the same stage.


VCF is Based on a Clos Fabric


The diagram shows a VCF using Juniper Networks switches. In a VCF, the Ingress and Egress stage crossbar switches are
called Leaf nodes, and the middle stage crossbar switches are called Spine nodes. Most diagrams of a VCF do not present the
topology with 3 distinct stages as shown on this slide; instead, they show a VCF with the Ingress and Egress stages combined
into a single stage. It would be like taking the top of the diagram and folding it over onto itself, with all Spine nodes on top and
all Leaf nodes on the bottom of the diagram (see the next slide).


Spine and Leaf Architecture: Part 1


To maximize the throughput of the fabric, each Leaf node should have a connection to each Spine node. This ensures that
each network-facing interface is always two hops away from any other network-facing interface, creating a highly resilient
fabric with multiple paths to all other devices. An important fact to keep in mind is that a member switch has no idea of its
location (Spine or Leaf) in a VCF; the Spine or Leaf function is simply a matter of the device's physical location in the fabric. It is
highly recommended, and a best practice, to place QFX5100 Series devices (particularly the QFX5100-24q) in the Spine position
because they are designed to handle the throughput load of a fully populated VCF (20 nodes).


Spine and Leaf Architecture: Part 2


The slide shows that there are four distinct paths (one path per Spine node) between Host A and Host B across the fabric. In a
VCF, traffic is automatically load-balanced over those four paths using a hash algorithm (which keeps frames from the same flow on the
same path). This is unlike Juniper’s Virtual Chassis technology, where only one path to the destination is ever chosen for
forwarding data.


VCF Benefits
The slide shows some of the benefits (similar to Virtual Chassis) of VCF when compared to managing 20 individual switches.


Benefits Compared to Virtual Chassis


The slide shows some of the benefits of using VCF over and above the benefits realized by Virtual Chassis.


VCF Similarities to Virtual Chassis


The slide lists the major similarities between VCF and Virtual Chassis.


Differences Between VCF and Virtual Chassis


The slide lists some of the major differences between VCF and Virtual Chassis.


VCF Components
You can interconnect up to 20 QFX5100 Series switches to form a VCF. A VCF can consist of any combination of model
numbers within the QFX5100 family of switches. QFX3500, QFX3600, and EX4300 Series switches are also supported in the
line card role.
Each switch has a Packet Forwarding Engine (PFE). All PFEs are interconnected by Virtual Chassis ports (VCPs). Collectively,
the PFEs and their VCP connections constitute the VCF.
You can use the built-in 40GbE QSFP+ ports or SFP+ uplink ports, converted to VCPs, to interconnect the member switches’
PFEs. To use an uplink port as a VCP, explicit configuration is required.


Spine Nodes
To support the maximum throughput, QFX5100 Series switches should be placed in the Spine positions; it is further
recommended to use the QFX5100-24q switch in the Spine position. Although any QFX5100 Series switch will work in the
Spine position, it is the QFX5100-24q that supports 32 40GbE QSFP+ ports, which allows for the maximum expansion
possibility (remember that 16 Leaf nodes would take up 16 QSFP+ ports on each Spine). Spines are typically configured in the
RE role (discussed later).


Leaf Nodes
Although not a requirement, it is recommended to use QFX5100 Series devices in the Leaf position. Using even one
non-QFX5100 Series switch requires that the entire VCF be placed into “mixed” mode. When a VCF is placed
into mixed mode, the hardware scaling numbers for the VCF as a whole (MAC table size, routing table size, and many more)
are scaled down to the lowest common denominator among the potential member switches. It is recommended that each Leaf
node have a VCP connection to every Spine node.


Member Roles
The slide shows the different Juniper switches that can participate in a VCF along with their recommended node type (Spine
or Leaf node) as well as their capability to become an RE or line card. It is always recommended to use QFX5100 Series
switches in the Spine position. All other supported switch types should be placed in the Leaf position. In a VCF, only a QFX5100
Series device can assume the RE role (a switch of any other type cannot become an RE, even if you try to configure it as one).
Any supported switch type can be assigned the linecard role.


Master RE
A VCF has two devices operating in the Routing Engine (RE) role—a master Routing Engine and a backup Routing Engine. All
Spine nodes should be configured for the RE role. However, based on the RE election process, only two REs are elected. Any
QFX5100 Series switch that is configured as an RE but is not elected to the master or backup RE role takes on the linecard role.
A QFX5100 Series switch configured for the RE role but operating in the linecard role can perform all Leaf- or Spine-related
functions without limitation within a VCF.
The device that functions as the master Routing Engine:
• Should (a “must” for Juniper support) be a spine device.
• Manages the member devices.
• Runs the chassis management processes and control protocols.
• Represents all the member devices interconnected within the VCF configuration. (The hostname and other
parameters that you assign to this device during setup apply to all members of the VCF.)


Backup RE
The device that functions as the backup Routing Engine:
• Should (a “must” for Juniper support) be a spine device.
• Maintains a state of readiness to take over the master role if the master fails.
• Synchronizes with the master in terms of protocol states, forwarding tables, and so forth, so that it preserves
routing information and maintains network connectivity without disruption when the master is unavailable.


Linecard Role
The slide describes the functions of the linecard in a VCF.


Spine Node as a Linecard


If more than two devices are configured for the RE role, not all of those devices will actually take on the RE role.
Instead, two REs (master and backup) are elected, and any other device is placed into the linecard role. The slide
describes the behavior of a Spine node that has been configured for the RE role but has actually taken on the linecard role.


Deploying VCF
Once you power up any of the switches that support VCF, they all default to Virtual Chassis mode. To operate in VCF mode,
you must set the mode to “fabric”. Once you have enabled “fabric” mode, you then have three choices for provisioning the VCF:
auto-provisioned, preprovisioned, or non-provisioned (dynamic). These provisioning methods are described briefly on the next
slide; the full details of provisioning are covered in another chapter. This brief discussion is meant to help you understand the RE
election algorithm that is covered in the next few slides.


Setting the Mode of Operation


The slide shows how to set the mode of a switch to “fabric”.
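Formatted as it appears at the CLI, the command is as follows (add the mixed option as well if the VCF will contain non-QFX5100 members):

{master:0}
user@switch> request virtual-chassis mode fabric reboot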


Provisioning the VCF


The slide provides an overview of the three provisioning methods for a VCF. The full details of each of the provisioning
methods are covered in a different chapter.


General RE Election Rules


An important detail of the RE election rules of a VCF is that a QFX5100 Series switch is the only switch type that can become an
RE in a VCF. Generally, you want your Spine nodes to be configured for the RE role and the Leaf nodes configured for the
linecard role, but it is also possible to configure a Leaf QFX5100 Series switch as an RE. Keep in mind that a member switch has no
idea of its location (Spine or Leaf) in a VCF.

RE Election for Preprovisioned or Autoprovisioned VCFs


The slide lists the simple RE election process when the VCF has been preprovisioned or auto-provisioned.


RE Election for Nonprovisioned—Dynamically Provisioned—VCF


The slide lists the RE election rules for a dynamically provisioned VCF.


Member ID = FPC #
Every member switch of a VCF is assigned a member ID, which is a value between 0 and 31. Unless you manually change a
member ID through configuration or a CLI command, once a member is assigned a member ID, the member keeps that ID
permanently (even through reboots and power cycles). The member ID value is not only used to identify a member switch but
also to represent the FPC number. The FPC number is important when configuring interfaces and when interpreting the
output of CLI commands like show chassis hardware.
lab@qfx1> show chassis hardware clei-models
Hardware inventory:
Item Version Part number CLEI code FRU model number
Routing Engine 0 BUILTIN CMMRG00BRA QFX5100-48S-3AFO
Routing Engine 1 BUILTIN CMMRG00BRA QFX5100-48S-3AFO
FPC 0 REV 05 650-056264 CMMRG00BRA QFX5100-48S-3AFO
PIC 0 BUILTIN CMMRG00BRA QFX5100-48S-3AFO
...
FPC 1 REV 05 650-056264 CMMRG00BRA QFX5100-48S-3AFO
PIC 0 BUILTIN CMMRG00BRA QFX5100-48S-3AFO
...
FPC 2 REV 06 650-044936 IPMVU10FRA EX4300-24T
PIC 0 REV 06 BUILTIN IPMVU10FRA EX4300-24T
...
FPC 3 REV 06 650-044936 IPMVU10FRA EX4300-24T
PIC 0 REV 06 BUILTIN IPMVU10FRA EX4300-24T
...


Interface Naming
As a standalone switch, member 2’s highlighted interfaces would be named ge-0/0/0 and ge-0/0/1. As a standalone switch,
member 3’s highlighted interfaces would be named et-0/0/22 and et-0/0/23. However, since each of the switches is a
member of a VCF, all interfaces must be named using the member ID as the FPC number as shown in the slide.
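For example, when configuring the first highlighted interface on member 2, you would reference FPC 2 rather than FPC 0. A sketch of such a configuration follows (the VLAN name is hypothetical):

[edit]
user@VCF# set interfaces ge-2/0/0 unit 0 family ethernet-switching vlan members v10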


Required for Obtaining Juniper Support


Although it is possible not to follow the best practices listed on this slide, some of them must be in place in order to receive
support from JTAC in regards to your VCF. The best practices that are also required for Juniper support include the following:
• All Spine nodes must be QFX5100 Series switches;
• The RE role must only be assigned to Spine nodes; and
• All Leaf nodes must be configured for the linecard role.

Other Best Practices—Not Required for Juniper Support


The other best practices listed on the slide are highly recommended as part of the design of your VCF.


VCF Control and Forwarding Plane


The slide highlights the topic we discuss next.


Integrated Control Plane


A VCF is controlled by a single switch that is elected as the master RE. One switch will be elected to the backup RE role. You
can also configure other switches to take on the backup RE role in case of a master RE failure. All control plane and
forwarding plane data between members traverses the VCPs. To learn the topology and to ensure a loop-free forwarding
plane, the member switches run the Virtual Chassis Control Protocol (VCCP) between each other. You can think of VCCP as a
modified version of the Intermediate System-to-Intermediate System (IS-IS) protocol. VCCP allows each switch to calculate
shortest-path forwarding paths for unicast data as well as form bidirectional multicast distribution trees (MDTs) for the
forwarding of broadcast, unknown unicast, and multicast (BUM) data. Also, by default, there are special forwarding classes
(queues) enabled specifically for forwarding VCCP data within the fabric.


Topology Discovery: Part 1


All switches participating in the VCF system use the VCCP to discover the system’s topology and ensure the topology is free of
loops. Each member exchanges link-state advertisement (LSA) based discovery messages between all interconnected PFEs
within a VCF system. Based on these LSA-based discovery messages, each PFE builds a member switch topology in addition
to a PFE topology map. These topology maps are used when determining the best paths between individual PFEs.
Once the PFE topology map is built, the individual switches run a shortest-path algorithm for each PFE. This algorithm is
based on hop count and bandwidth. The result is a map table for each PFE that outlines all of the paths to all other PFEs
within the Virtual Chassis system. In the event of a failure, a new SPF calculation is performed.


Topology Discovery: Part 2


The slide illustrates the physical cabling and logical topology of a five-member VCF.


Topology Discovery: Part 3


Using a modified SPF algorithm, each PFE builds its own loop-free, multi-path tree to all other PFEs, based on hop count and
bandwidth. This process is automatic and is not configurable. The slide illustrates the basics of this process.


Smart Trunks
There are several types of trunks that you will find in a VCF.
1. Automatic Fabric Trunks - When there are two VCPs between members (2x40G between member 4 and 0) they
are automatically aggregated together to form a single logical connection using Link Aggregation Groups (LAGs).
2. Next Hop Trunks (NH-Trunks) - These are directly attached VCPs between the local member and any other
member. In the slide, NHT1, NHT2, NHT3, and NHT4 are the NH-trunks for member 4.
3. Remote Destination Trunks (RD-Trunks) - These are the multiple, calculated paths between one member and a
remote member. These are discussed on the next slide.


RD-Trunks
The slide shows how member 4 is able to determine (using what it learns in the VCCP LSAs) multiple paths to a remote
member (member 15 in the example). The paths do not need to be equal cost paths. All links between members are 40Gbps
except for the link between member 4 and 0 (80Gbps) and the link between member 3 and 15 (10Gbps). Based on the
minimum bandwidth of the path, member 4 will assign a weight to each path. This is shown in the next slide.


Weight Calculation Example


The slide shows an approximation of how the weight calculation is performed. First, notice the 4->0->15 path. Even
though NHT1 is 80Gbps, the minimum bandwidth along the path is 40Gbps because the link between member 0 and 15
is 40Gbps. The 4->1->15 and 4->2->15 paths each have a minimum bandwidth along the path of 40Gbps since all links are
40Gbps. The minimum bandwidth along the 4->3->15 path is 10Gbps because the link between member 3 and 15 is
10Gbps. To determine the weight to assign to each RD-Trunk, member 4 simply totals the minimum bandwidths of all paths
and then divides each RD-Trunk's minimum bandwidth by that total. As you can see, an equal amount of traffic will be sent
along RD-Trunks 1, 2, and 3 while RD-Trunk 4 will receive considerably less traffic. This behavior helps to ensure that
the 0->15 and 3->15 links are not over-saturated with traffic.
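Using the bandwidth values above, the arithmetic works out approximately as follows (the percentages are illustrative approximations of the assigned weights):

Total minimum bandwidth = 40 + 40 + 40 + 10 = 130 Gbps
RD-Trunk 1 (4->0->15): 40/130, or roughly 31%
RD-Trunk 2 (4->1->15): 40/130, or roughly 31%
RD-Trunk 3 (4->2->15): 40/130, or roughly 31%
RD-Trunk 4 (4->3->15): 10/130, or roughly 8%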


Fabric Header
A fabric header is used to pass frames over VCPs. In the case of layer 2 switching, when an Ethernet frame arrives, the
inbound member will perform an Ethernet switching table lookup (based on MAC address) to determine the destination
member and port. After that, the inbound member encapsulates the incoming frame in the fabric header. The fabric header
specifies the destination member and port (among other things). All members along the path will forward the encapsulated
frame by performing lookups on the fabric header only. Once the frame reaches the destination member, the fabric header is
removed and the Ethernet frame is sent out of the destination port without a second MAC table lookup.


Load Balancing
The slide lists two sets of inputs to the hash that are used to load balance traffic over RD-trunks. There are also inputs for IPv6
packets, which include the Next Header field, source and destination ports, and source and destination IP addresses. These
inputs to the hash algorithm are hard-coded in the PFE and cannot be modified.


Multicast Distribution Trees


In a 16-member VCF, 16 bidirectional multicast distribution trees (MDTs) are automatically created to deliver BUM traffic to
each member of the VCF. The slide shows the MDT that is rooted at Member 4. However, there are 15 other MDTs (not shown
on the slide but rooted at each of the other member switches) that terminate on Member 4. That means that Member 4
has 16 bidirectional MDTs that it can use to load balance (using a hash based on VLAN ID) any BUM traffic it receives on a
network port and needs to forward onto the fabric. In a VCF, all members receive copies of BUM data.


Layer 2 Learning Example: Part 1


The slide shows how member 4 reacts to receiving a frame with an unknown source MAC address from Host A.


Layer 2 Learning Example: Part 2


Once the master RE learns of the MAC binding (Host A’s MAC address to the inbound interface) it programs the rest of the VCF
members with that same binding.


BUM Forwarding
An Ethernet frame with an unknown destination MAC address arrives on a network port on Member 4. Member 4 forwards the
frame along one of the 16 bidirectional MDTs so that all members of the fabric receive a copy. Each member then strips the
fabric header and sends the frame out of all interfaces associated with the VLAN.


Layer 2 Forwarding: Part 1


The slide shows member 4's response to receiving a frame with a known destination MAC address. It is determined from the
MAC lookup (in member 4's copy of the MAC table) that this frame is destined for member 15. Member 4 must choose one of
its four next hops over which to send the frame. In this example, using the hash algorithm, member 4 chooses NHT3 to
send the frame.


Layer 2 Forwarding: Part 2


Upon receiving the frame, member 2 performs a lookup on the frame header. The header shows that this frame is destined to
member 15. Member 2 forwards the frame over its single NH-trunk going to member 15.


Layer 2 Forwarding: Part 3


Member 15 receives the frame that is destined to itself. Member 15 strips the frame header and forwards the Ethernet frame
out of the outbound port (as it was listed in the frame header).


We Discussed:
• Key concepts and components of a Virtual Chassis Fabric (VCF); and
• The control and forwarding plane of a VCF.


Review Questions
1.

2.

3.

Answers to Review Questions
1.
It is important to realize that a Spine node is unaware that it is a Spine node. When choosing a switch to act as a Spine, ensure that you
choose one that can handle your future traffic needs.
2.
It is highly recommended to have QFX5100 Series devices (particularly the QFX5100-24Q) in the Spine position because they are
designed to handle the throughput load of a fully populated VCF (20 nodes).
3.
Instead of using the bandwidth of the NH-trunks to make the forwarding decision, a member switch takes into consideration the
bandwidth of VCPs downstream as well. Applying a weight to RD-trunks ensures that downstream VCPs will not become
over-saturated with traffic.


Chapter 8: Virtual Chassis Fabric Management



We Will Discuss:
• How to use the CLI to configure and monitor a Virtual Chassis Fabric (VCF);
• How to provision a VCF using nonprovisioning, preprovisioning, and auto-provisioning;
• The software requirements for a VCF; and
• How to manage a VCF with Junos Space.


Managing a VCF Using the CLI


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Single Virtual Console


All member switches participating in a VCF system run virtual console software. This software redirects all console
connections to the master switch regardless of the physical console port through which the communications session is
initiated.
The ability to redirect management connections to the master switch simplifies VCF management tasks and creates a level of
redundancy. Generally speaking, you can obtain all status-related information for the individual switches participating in a
VCF system through the master switch. It is, however, possible to establish individual virtual terminal (vty) connections from
the master switch to individual member switches.
If needed, you can access individual members of a VCF system using the request session operational command as
shown in the following example:
{master:0}
user@Switch1-VCF> request session member 2

--- JUNOS 13.2X51-D21.1 built 2014-05-29 11:49:28 UTC


{linecard:2}
user@Switch1-VCF>


Single Management Interface


The management Ethernet ports on the individual member switches are automatically associated with a management VLAN.
This management VLAN uses a Layer 3 virtual management interface called vme which facilitates communication through the
VCF system to the master switch even if the master switch’s physical Ethernet port designated for management traffic is
inaccessible.
When you set up the master switch, you specify an IP address for the vme interface. This single IP address allows you to
configure and monitor the VCF system remotely through Telnet or SSH regardless of which physical interface the
communications session uses.


Operational Mode Configuration


Many of the configuration-type commands for VCF actually exist in Operational mode. For example, you can enable and
disable VCP ports from Operational mode. Any configuration made to the VCF in Operational mode will not appear in the
actual candidate or active configuration. You must simply use show commands (next few slides) to view the status of the VCF
and these settings. Any configuration changes made to the VCF in Operational mode will be preserved through reboots.

Configuration Mode
The slide shows the configuration statements that can be applied to and will appear in the active and candidate
configurations.
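As a sketch, a preprovisioned VCF configuration stanza might look similar to the following (the serial numbers are placeholders):

[edit virtual-chassis]
user@VCF# show
preprovisioned;
no-split-detection;
member 0 {
    role routing-engine;
    serial-number <spine-0-serial>;
}
member 1 {
    role routing-engine;
    serial-number <spine-1-serial>;
}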


Virtual Chassis Mode


By default, QFX5100 and EX4300 Series switches are enabled for non-mixed Virtual Chassis operation. For one of these types
of switches to participate in a different mode, use the request virtual-chassis mode command. In order for a mode
change to take effect, you must also reboot the switch. This can be done automatically by adding the reboot option to the
end of the request virtual-chassis mode command.
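For example, to place a switch into mixed fabric mode and reboot it in a single step, you might issue a command similar to the following (the prompt is illustrative):

user@switch> request virtual-chassis mode fabric mixed reboot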


Split VCF
Every member of a VCF has the ability to detect a split in the fabric. A split occurs when one or more switches are unable to
communicate with any of the other members (1 or more) over VCPs. In the example, member 1 has lost VCP connectivity to
the rest of the VCF. In this case, all members (0, 1, 2, and 3) detect the split and begin to form new VCFs. One fabric will form
that includes members 0, 2, and 3. Another VCF will form consisting of only member 1. Once the new fabrics are formed and
a master RE is elected (per fabric), the master RE for each fabric will determine whether its fabric will remain active or be
deactivated. By default, only one fabric will remain in the active state.


To Be Active or To Not Be Active, That is the Question


After the split, member 0 will be elected the master RE for the fabric consisting of members 0, 2, and 3. Member 1 will be
elected as the master RE for the remaining fabric. The slide shows the rules that each master RE uses to determine whether
its fabric will remain active or be deactivated. In this case, member 1’s fabric will be deactivated because it contains the
original backup but does not contain at least half of the original quantity (4) of member switches.


Reactivating a VCF
Now that member 1 has had its fabric deactivated, it will remain deactivated until one of two things occurs. First, if the VCP
connectivity is fixed between member 1 and the rest of the original VCF (detected using the VCF ID), then member 1 will
automatically reactivate. Second, a deactivated fabric can be reactivated by issuing the request virtual-chassis
reactivate command. Prior to reactivating an inactive fabric, make sure that no forwarding loops will be formed
once the fabric becomes active.
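On the inactive fabric, the command is issued with no additional arguments, similar to the following (the prompt is illustrative):

user@VCF> request virtual-chassis reactivate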


Reusing Old Member IDs


This slide provides the recommended steps and some related details for replacing a member switch within a VCF when using
the dynamic installation method. If you are using the preprovisioned installation and configuration method, you will simply
modify the configuration on the master switch to account for the new switch’s serial number before adding the replacement
switch. We cover the dynamic, preprovisioned, and auto-provisioned installation methods later in this chapter.


Renumber a Member
The master switch typically assumes a member ID of 0 because it is the first switch powered on. Member IDs can be assigned
manually using the preprovisioned configuration method or dynamically from the master switch.
If assigned dynamically, the master switch assigns each member added to the VCF a member ID from 1 through 31, making
the complete member ID range 0–31. The master assigns each switch a member ID based on the sequence in which the switch was
added to the VCF system. The member ID associated with each member switch is preserved, for the sake of consistency,
across reboots. This preservation is helpful because the member ID is also a key reference point when naming individual
interfaces. The member ID serves the same purpose as a slot number when configuring interfaces.
The slide shows how you can use the request virtual-chassis renumber command to renumber a particular
member.
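For example, renumbering member 1 to member 2 would look similar to the following when issued from the master RE:

{master:0}
user@VCF> request virtual-chassis renumber member-id 1 new-member-id 2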


Disable a VCP
The slide shows how you can disable an individual VCP.
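As a sketch, a command similar to the following is used; the VCP interface name shown is purely illustrative and depends on which port is acting as a VCP:

{master:0}
user@VCF> request virtual-chassis vc-port set interface vcp-255/0/48 disable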


Converting a Network Port to a VCP


All QSFP+ and SFP+ interfaces on QFX5100 and EX4300 Series switches can be configured to act as VCPs. By default, all
QSFP+ ports on EX4300 Series switches are configured as VCPs. Use the request virtual-chassis vc-port set
command to convert or enable a port as a VCP.
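For example, to convert a port on the local member, a command similar to the following is used (the PIC slot and port number are illustrative):

{master:0}
user@VCF> request virtual-chassis vc-port set pic-slot 0 port 48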


Convert a VCP to a Network Port


Use the request virtual-chassis vc-port delete command to convert a VCP to a network port.
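For example (the PIC slot and port number are placeholders):

{master:0}
user@VCF> request virtual-chassis vc-port delete pic-slot 0 port 48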


Disable Split Detection


As discussed earlier, when a fabric split occurs, only one fabric will remain active by default. The other fabric(s) will be
deactivated based on the predetermined rules. To disable the automatic deactivation function that occurs during a fabric
split, configure the no-split-detection statement.
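The statement is configured at the [edit virtual-chassis] hierarchy, similar to the following:

[edit virtual-chassis]
user@VCF# set no-split-detection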


Show Commands
The slide shows the various show commands that are available to determine the status of a VCF. Most of these commands
will be used in the example provisioning slides later in this chapter.
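Commonly used status commands include the following (all issued from the master RE):

user@VCF> show virtual-chassis status
user@VCF> show virtual-chassis vc-port
user@VCF> show virtual-chassis active-topology
user@VCF> show virtual-chassis device-topology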


VCF Provisioning
The slide shows the various options for provisioning a VCF. The following slides discuss each method in detail.


Dynamic Provisioning of a VCF


The slide highlights the topic we discuss next.


Dynamic Provisioning Process, Part 1


Dynamic provisioning occurs when all members are assigned member IDs based upon the order in which they are added to
the VCF. The slide shows the master RE (RE0), backup RE (RE1), two linecards, and their desired member IDs. The first step in
the dynamic provisioning process is to power on the desired master RE. Once powered up, it will automatically assume
member ID 0. You should then go into the configuration and configure the highest possible mastership priority value (255) for
the master and backup switches. This configuration ensures that these members continue to function as the master and
backup switches when new members are added to the Virtual Chassis configuration. Finally, you should configure a
management IP address (vme) and an appropriate hostname for the fabric.
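As a sketch, the initial configuration on the master RE might look similar to the following (the address and hostname are placeholders):

[edit]
user@switch# set virtual-chassis member 0 mastership-priority 255
user@switch# set virtual-chassis member 1 mastership-priority 255
user@switch# set interfaces vme unit 0 family inet address 10.10.10.1/24
user@switch# set system host-name Switch1-VCF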


Dynamic Provisioning Process, Part 2


In the example, QFX5100 Series switches will be used as Spine nodes while EX4300 Series switches will be used as Leaf
nodes. To support this configuration on the master RE, the virtual chassis mode must be set to fabric mixed and then the
chassis must be rebooted. Once the master RE has rebooted, enable the VCP ports as shown in the slide. Notice that the VCPs
appear to be in the Down state. They are in this state because the EX4300 Series switches are powered off.


Dynamic Provisioning Process, Part 3


The next switch to power on is the desired member 2 switch. However, once it is powered on, it will be assigned member ID 1
and automatically become a linecard of the fabric. This happens because, by default, EX4300 Series switches have their
QSFP+ ports enabled as VCPs. The slide shows that the EX4300 switch is now a linecard (fpc1) of the VCF.


Dynamic Provisioning Process, Part 4


Prior to powering on the desired member 1 switch, you should renumber the desired member 2 switch as shown on the slide.
Notice that member 2 remains in the Inactive state during the entire process. Can you think of a reason why?
Notice that member 2 is set to run in non-mixed virtual chassis mode (which conflicts with the master RE). You will eventually
need to change the mode of operation of member 2. However, we will wait until the very end of the process so that we can
change the mode for all non-master members at the same time.


Dynamic Provisioning Process, Part 5


Next, power on the desired member 1 switch. Since it is a QFX5100 Series switch, its QSFP+ ports are not automatically set to
be used as VCPs. Use the commands on the slide to enable the VCPs. Once the VCPs are enabled, the switch will automatically
be assigned member ID 1 and become a member of the fabric.


Dynamic Provisioning Process, Part 6


Notice that because of the mode conflict, member 1 is also placed in the Inactive state.


Dynamic Provisioning Process, Part 7


Power on the desired member 3 switch. Because it is an EX4300 Series switch, its QSFP+ ports have been automatically
enabled as VCPs. Again, the new member is placed in the Inactive state because of the mode conflict.


Dynamic Provisioning Process, Part 8


The slide shows the command that you can use to enable mixed fabric mode on all members of the fabric. A mode
change requires a reboot, which can be performed automatically by appending the reboot option to the request
virtual-chassis mode command.


Dynamic Provisioning Process, Part 9


Once all of the members of the fabric are rebooted into mixed fabric mode, they will become active members of the
fabric. Notice that member 1 has changed from a linecard to the backup RE.


Preprovisioning and Autoprovisioning a VCF


The slide highlights the topic we discuss next.


Preprovisioning Process, Part 1


Preprovisioning lets you control each member's ID assignment and role (RE or linecard) by associating these details with a
switch's serial number. In preprovisioning, every member's serial number must be explicitly listed in the configuration. If a
potential member's serial number is not listed in the configuration, it will not be added to the fabric regardless of its role.
Prior to powering on any members, you should take note of each switch's serial number. You will find the serial number
somewhere on the switch's exterior, usually on a sticker from the factory. Next, power on the desired master RE and configure
it to run in fabric mode (mixed mode also if using non-QFX5100 Series devices).


Preprovisioning Process, Part 2


After the master RE has rebooted, enter configuration mode and configure each of the four members specifying their member
ID, role, and serial number. Also, specify preprovisioned as the provisioning method.
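A sketch of such a configuration follows (the serial numbers are placeholders that you replace with the values recorded from each switch):

[edit virtual-chassis]
user@VCF# set preprovisioned
user@VCF# set member 0 role routing-engine serial-number <spine-0-serial>
user@VCF# set member 1 role routing-engine serial-number <spine-1-serial>
user@VCF# set member 2 role line-card serial-number <leaf-2-serial>
user@VCF# set member 3 role line-card serial-number <leaf-3-serial>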


Preprovisioning Process, Part 3


Configure the management interface (vme) for the fabric as well as an appropriate hostname.


Preprovisioning Process, Part 4


The default configuration for QFX5100 and EX4300 Series devices has the Link Layer Discovery Protocol (LLDP)
enabled on all network ports (non-VCPs). LLDP is enabled by default to allow for the auto-conversion of network ports to VCPs.
We discuss auto-conversion of VCPs in some of the later slides.


Preprovisioning Process, Part 5


Next, enable the VCPs on the master RE. Then, power on the member 2 switch.


Preprovisioning Process, Part 6


Because member 2 is an EX4300 with its QSFP+ ports enabled as VCPs by default, it is automatically added as member 2 of
the fabric. Notice that member 2 is set to run in non-mixed virtual chassis mode (which conflicts with the master RE). You will
eventually need to change the mode of operation of member 2. However, we will wait until the very end of the process so that
we can change the mode for all non-master members at the same time.


Preprovisioning Process, Part 7


Next, power on the desired member 1 switch. Since it is a QFX5100 Series switch, its QSFP+ ports are not automatically set to
be used as VCPs. Use the commands on the slide to enable the VCPs. Once the VCPs are enabled, the switch will automatically
be assigned member ID 1 and become a member of the fabric. Notice that member 1 is set to run in non-mixed virtual
chassis mode (which conflicts with the master RE).


Preprovisioning Process, Part 8


Power on the member 3 switch and enable its VCP ports, if necessary.


Preprovisioning Process, Part 9


At this point, all non-master members of the fabric are in the Inactive state. The next few slides show how to make
them active.


Preprovisioning Process, Part 10


The slide shows the command that you can use to enable mixed fabric mode on all members of the fabric. A mode
change requires a reboot, which can be performed automatically by appending the reboot option to the request
virtual-chassis mode command.


Preprovisioning Process, Part 11


Once all of the members of the fabric are rebooted into mixed fabric mode, they will become active members of the
fabric. Notice that member 1 has changed from a linecard to the backup RE.


Autoprovisioning
Auto-provisioning is similar to preprovisioning except that only the Spines need to be preprovisioned. Once the Spines are
provisioned, Leaf nodes can be added without any changes in configuration mode. Essentially, this allows you to plug
and play with Leaf nodes (like dynamic provisioning). You can optionally enable LLDP on VCP interfaces (it is enabled on all
network ports in the factory-default state) to allow the members to automatically convert network ports to VCPs.
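As a sketch, an auto-provisioned configuration lists only the Spine members (the serial numbers are placeholders):

[edit virtual-chassis]
user@VCF# set auto-provisioned
user@VCF# set member 0 role routing-engine serial-number <spine-0-serial>
user@VCF# set member 1 role routing-engine serial-number <spine-1-serial>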


Automatic VCP Conversion, Part 1


The following example shows a combination of auto-provisioning and automatic VCP conversion. Automatic VCP conversion
can be used when auto-provisioning or preprovisioning a VCF.
Take note of the serial number of each of the Spine nodes before you begin. First, power on all switches; they will be in
Virtual Chassis mode by default.


Automatic VCP Conversion, Part 2


If not already in a factory default state, you can load the factory default configuration. Although this step is not absolutely
necessary, it will cause LLDP to be enabled on all network ports.


Automatic VCP Conversion, Part 3


Ensure LLDP is enabled on all network ports that you would like to be auto-converted to VCPs. Issue the show lldp
neighbors command to view the LLDP neighbor relationships with locally attached switches. If there is a neighbor
relationship with another member of the fabric on an interface that you do not want auto-converted to a VCP, disable
LLDP on that interface.
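For example, assuming a hypothetical interface xe-0/0/10 should remain a network port, you might verify the neighbor relationships and then exclude that interface from LLDP:

{master:0}
user@Switch> show lldp neighbors
user@Switch> configure
user@Switch# set protocols lldp interface xe-0/0/10 disable
user@Switch# commit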


Automatic VCP Conversion, Part 4


Log on to the desired master RE and configure it for either auto-provisioning or preprovisioning.


Automatic VCP Conversion, Part 5


On the master RE, configure the management interface (vme) and an appropriate hostname for the VCF.
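As an illustration (the hostname and address are placeholders), the management configuration might resemble:

{master:0}[edit]
user@Switch# set system host-name VCF-1
user@Switch# set interfaces vme unit 0 family inet address 172.25.11.1/24
user@Switch# commit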


Automatic VCP Conversion, Part 6


Configure the master RE for fabric mode (and mixed if necessary) and then reboot.


Automatic VCP Conversion, Part 7


The slide shows how to verify the status of the VCF. Remember, VCPs are not auto-converted unless all switches are in fabric
mode.
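The mode and member status can be checked with commands such as the following; the exact output varies by platform and Junos OS release:

{master:0}
user@Switch> show virtual-chassis mode
user@Switch> show virtual-chassis status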


Automatic VCP Conversion, Part 8


Log in to member 2, configure it for fabric mode (mixed if necessary), and reboot.


Automatic VCP Conversion, Part 9


The slide shows that the network interfaces between member 0 and member 2 have auto-converted to VCPs. Member 2 is
now a linecard in the fabric.


Automatic VCP Conversion, Part 10


Repeat steps 12 through 14 for member 1 and member 3.


Software Requirements and Upgrades


The slide highlights the topic we discuss next.


Software Upgrade
Any member can be upgraded by issuing upgrade commands on the master RE.

Software Requirements
All members of a Virtual Chassis must have the same version of Junos OS installed. If the master RE detects a new member
whose software version does not match the version on the master RE, it places the new member into the Inactive state.


Automatic Upgrade
Instead of manually upgrading member switches as they are added to a Virtual Chassis, you can have the master RE upgrade
newly added switches automatically at the moment they are added. The slide shows the configuration necessary to
automatically upgrade newly added member switches.
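A sketch of the automatic software update configuration follows; the package path is a placeholder, and on a mixed VCF additional platform-specific package statements may be required (check the documentation for your release):

{master:0}[edit virtual-chassis]
user@Switch# set auto-sw-update package-name /var/tmp/package-name.tgz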


Software Upgrades Using NSSU: Part 1


Nonstop software upgrade (NSSU) can be used to upgrade the software running on all member switches participating in a
VCF while minimizing traffic disruption during the upgrade. NSSU is supported on most EX Series and QFX Series switches
that support Virtual Chassis. Refer to specific documentation for your particular platform to verify that the current Junos OS
version supports NSSU.
Before attempting an NSSU, you must ensure that the VCF is set up correctly. First, the VCF members must be connected in a
spine and leaf topology. Also, ensure that the linecards in the VCF are explicitly preprovisioned. During an NSSU, the members
must maintain their roles—the master and backup must maintain their master and backup roles, although mastership will
change, and the other member switches must maintain their line card roles. When you are upgrading a VCF,
no-split-detection must be configured so that the Virtual Chassis does not split when an NSSU upgrades a member.

All members of the VCF must be running the same version of the Junos software. NSR and graceful Routing Engine switchover
(GRES) must be enabled. Optionally, you can enable NSB. Enabling NSB ensures that all NSB-supported Layer 2 protocols
operate seamlessly during the Routing Engine switchover that is part of the NSSU. Another step that you might want to
consider is to back up the current system software—Junos OS, the active configuration, and log files—on each member to an
external storage device with the request system snapshot command.
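Taken together, the NSSU prerequisites described above might be satisfied with a configuration sketch such as the following; note that NSR requires synchronized commits, and the optional backup is taken with the request system snapshot command:

{master:0}[edit]
user@Switch# set virtual-chassis no-split-detection
user@Switch# set chassis redundancy graceful-switchover
user@Switch# set routing-options nonstop-routing
user@Switch# set protocols layer2-control nonstop-bridging
user@Switch# set system commit synchronize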


Software Upgrades Using NSSU: Part 2


The slide describes how to upgrade the software running on all VCF members using NSSU. When the upgrade completes, all
members are running the new version of the software. Because a GRES occurs during the upgrade, the original backup RE is
the new master.
The first step is to download the appropriate software packages from Juniper Networks. If you are upgrading the software
running on a mixed VCF, download the software packages for all switch types. The next step is to copy the software package or
packages to the Master switch of the Virtual Chassis. We recommend that you copy the file to the /var/tmp directory on the
master. Next, log in to the VCF using the console connection or the virtual management Ethernet (VME) interface. Using a
console connection allows you to monitor the progress of the master switch reboot.
The following command should be used to upgrade a VCF using NSSU:
{master:0}
user@Switch> request system software nonstop-upgrade /var/tmp/package-name.tgz
The following command should be used to upgrade a VCF with mixed platforms using NSSU:
{master:0}
user@Switch> request system software nonstop-upgrade set [/var/tmp/package1-name.tgz /var/tmp/package2-name.tgz]


Managing a VCF with Junos Space


The slide highlights the topic we discuss next.


Junos Space: A Management Option


In addition to the CLI, you can use Junos Space to manage and monitor a VCF. Junos Space is a network management
platform that, along with added applications, can be used to manage devices that run Junos OS as well as devices
running other operating systems. In particular, you use the Network Director application in the Junos Space network
management platform to manage a VCF. Complete coverage of Junos Space and the Network Director application is outside
the scope of this course but can be obtained through other Juniper Networks Education Services course offerings.

The remainder of this section provides details of how to discover a VCF through Junos Space along with some of the key
functional areas within the Network Director application used to manage and monitor a VCF.


Network Director User Interface


Junos Space Network Director provides a simple-to-use, HTML5-based, Web 2.0 user interface that you can access through
standard Web browsers. The user interface is task-oriented, using task-based workflows to help you accomplish
administrative tasks quickly and efficiently. It provides you the flexibility to work with single devices or with multiple devices
grouped by logical relationship, location, or device type. You can filter, sort, and select columns in tables, making it easy to
find specific information.
Use the Network Director banner to select the working mode. You can also use the Network Director banner to perform other
global tasks, such as setting up your preferences or accessing Junos Space. The working modes in Network Director are
discussed in greater detail on subsequent slides.
In the View pane, Network Director provides you a unified, hierarchical view of your wired, wireless, and virtual networks in the
form of an expandable and collapsible tree. For wired and wireless networks, you can choose from five views, or
perspectives, of your network—Logical view, Location view, Device view, Virtual view, and Custom Group view. By selecting both
a view and a node in the tree, you indicate the scope over which you want an operation or task to occur. For example:
• By selecting the Access node in Logical view, you indicate that the scope for a task is all access switches under
the Access node.
• By selecting a floor node in Location view, you indicate that the scope for a task is all devices belonging to that
floor.
• By selecting the EX4200 node in Device view, you indicate that the scope for a task is all EX4200 switches in
your network.
The Tasks pane is available in every mode and lists tasks specific to that mode. In addition to varying according to mode,
tasks listed in the Tasks pane can change as you select different scopes in the View pane. For example, some tasks are
appropriate only at the device level and thus appear only when you have selected an individual device. Clicking a task brings
up task-specific content in the main window. In general, to perform a task in Network Director, you select your mode, your
scope, and then your task.
Note that the location of the Tasks pane changes with mode. In Build and Deploy mode, it is adjacent to the View pane. In
Monitor, Fault, and Report mode, it is located to the right of the main window.
The Alarms pane provides a quick summary of how many critical, major, minor, and info alarms are currently active in the
network and is visible in every mode. To display more information about alarms, click the Expand icon in the upper right
corner. You are automatically placed in Fault mode and the Fault mode monitors are displayed.
The main window or workspace displays the content relevant to the mode, scope, and task you have selected. When you log in
to Network Director, this pane displays the Device Inventory page. The Device Inventory page is the default landing page for
Build and Deploy modes. It contains a list of the devices for your current scope. It includes pie charts that permit you to see at
a glance the connection states, configuration synchronization states, and device-type distribution for your devices.


Configuration Requirements for Managed Devices


This slide shows the configuration requirements for a VCF to be managed by Junos Space.
Also note that using SNMP as a probing method during the discovery process is not strictly required, but it can provide some
additional details related to the system's topology.
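A managed device typically needs SSH and NETCONF enabled, with SNMP as an optional probe; a minimal sketch (the community string shown is a placeholder) might be:

{master:0}[edit]
user@Switch# set system services ssh
user@Switch# set system services netconf ssh
user@Switch# set snmp community public authorization read-only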

In addition to the configuration elements shown on the slide, you must also ensure that a VCF and the Junos Space server can
communicate for proper discovery and maintenance tasks.


Discovering a VCF: Part 1


This and the next few slides illustrate the process for discovering a VCF through the Network Director application. Note that
you can also discover a VCF directly through the Junos Space Management Platform.

To initiate the discovery process in Network Director you select the Discover Devices task option and click on the Add
button as shown on the slide.


Discovering a VCF: Part 2


After clicking on the Add button, you can then add one or more VCFs by selecting the appropriate option. In the example on
the slide we add a single VCF by inserting the appropriate IP address. Once the desired target information has been added,
you click on the Add button. You can then verify the target information details in the resulting window and, when satisfied,
click on the Next button to proceed.


Discovering a VCF: Part 3


The next step is to add the user credentials for the target device or devices and optionally specify the probe types (ping, SNMP,
or both) to use during the discovery process. Once the desired details have been specified, click the Next button to proceed.
When specifying probe options, it is helpful to understand that if a probe option is selected and the remote target
device cannot be reached through ping or contacted through SNMP, the discovery process will fail. If no probe options are
selected, Network Director will immediately attempt to form a NETCONF connection over SSH.


Discovering a VCF: Part 4


The next step is to schedule the discovery operation. As shown on the slide the discovery operation can be scheduled for a
later time or run immediately. Once the desired option is selected, click the Next button to proceed.


Discovering a VCF: Part 5


The final step before initiating the discovery process is to review your settings. Once the details used to discover the target
device or devices have been verified, you click on the Finish button to initiate the discovery operation.


The Result!
Once the discovery operation has finished you will see a report showing the results of the attempted discovery operation. As
shown on the slide, the sample discovery operation has succeeded.


Build Mode
This slide and the next few slides illustrate the various modes in Network Director. This slide specifically covers the Build
mode, which is where you perform device discovery, inventory verification, configuration creation, and configuration validation,
as well as other tasks that relate to adding or enhancing a device's functionality through the Network Director application.


Deploy Mode
This slide covers the Deploy mode, which is where you deploy configurations, reconcile configuration issues that may exist,
manage software images for the managed devices as well as any other tasks that relate to system configuration and software
image management performed through Network Director.


Monitor Mode
This slide covers the Monitor mode, which is where you monitor traffic, system utilization, sessions, and system status. You can
also perform some troubleshooting operations and other system verification tasks in this mode through Network Director.


Fault Mode
This slide covers the Fault mode, which is where you perform fault management for the system and its components
through Network Director.


Report Mode
This slide covers the Report mode, which is where you generate, manage, and run reports for a VCF and its components
through Network Director.


We Discussed:
• How to use the CLI to configure and monitor a Virtual Chassis Fabric (VCF);
• How to provision a VCF using nonprovisioning, preprovisioning, and autoprovisioning;
• The software requirements for a VCF; and
• How to manage a VCF with Junos Space.


Review Questions
1.

2.

3.


Virtual Chassis Fabric Lab


The slide provides the objective for this lab.

Answers to Review Questions
1.
When you log into the console of a leaf node, your session is redirected to the master RE.
2.
To stop a VCF from deactivating during a split, configure no-split-detection.
3.
The three methods of provisioning a VCF are nonprovisioning, preprovisioning, and autoprovisioning.


Resources to Help You Learn More


The slide lists online resources available to learn more about Juniper Networks and technology. These resources include the
following sites:
• Pathfinder: An information experience hub that provides centralized product details.
• Feature Explorer: Junos OS and ScreenOS software feature information to find the right software release and
hardware platform for your network.
• Content Explorer: Technical documentation for Junos OS-based products by product, task, and software release,
and downloadable documentation PDFs.
• Learning Bytes: Concise tips and instructions on specific features and functions of Juniper technologies.
• Installation and configuration courses: Over 60 free Web-based training courses on product installation and
configuration (just choose eLearning under Delivery Modality).
• J-Net Forum: Training, certification, and career topics to discuss with your peers.
• Juniper Networks Certification Program: Complete details on the certification program, including tracks, exam
details, promotions, and how to get started.
• Technical courses: A complete list of instructor-led, hands-on courses and self-paced, eLearning courses.
• Translation tools: Several online translation tools to help simplify migration tasks.

Acronym List
CLI: command-line interface
GRES: graceful Routing Engine switchover
GUI: graphical user interface
ICCP: Inter-Chassis Control Protocol
ICL-PL: interchassis link-protection link
ISSU: unified in-service software upgrade
JNCP: Juniper Networks Certification Program
LACP: Link Aggregation Control Protocol
LCD: lowest common denominator
LLDP: Link Layer Discovery Protocol
LSA: link-state advertisement
MC-LAG: Multichassis Link Aggregation
MDT: multicast distribution tree
NSSU: nonstop software upgrade
OOB: out-of-band
PFE: Packet Forwarding Engine
RE: Routing Engine
STP: Spanning Tree Protocol
URL: uniform resource locator
VCCP: Virtual Chassis Control Protocol
VCF: Virtual Chassis Fabric
VCP: Virtual Chassis port
VME: virtual management Ethernet interface
vty: virtual terminal
ZTP: zero touch provisioning
