
Network Automation Using Contrail Cloud
2.a

Student Guide
Volume 1 of 2

Worldwide Education Services

1133 Innovation Way


Sunnyvale, CA 94089
USA
408-745-2000
www.juniper.net

Course Number: EDU-JUN-NACC


This document is produced by Juniper Networks, Inc.
This document or any part thereof may not be reproduced or transmitted in any form under penalty of law, without the prior written permission of Juniper Networks Education Services.
Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The
Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service
marks are the property of their respective owners.
Network Automation Using Contrail Cloud Student Guide, Revision 2.a
Copyright © 2016 Juniper Networks, Inc. All rights reserved.
Printed in USA.
Revision History:
Revision 2.a—March 2016.
The information in this document is current as of the date listed above.
The information in this document has been carefully verified and is believed to be accurate for software Release 2.21. Juniper Networks assumes no responsibility for any
inaccuracies that may appear in this document. In no event will Juniper Networks be liable for direct, indirect, special, exemplary, incidental, or consequential damages
resulting from any defect or omission in this document, even if advised of the possibility of such damages.

Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents

Chapter 1: Course Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1

Chapter 2: Contrail Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1


SDN Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Contrail Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-19

Chapter 3: Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1


Contrail Components and Building Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Contrail Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Deployment Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-37

Chapter 4: OpenStack and Contrail Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1


Tenant Creation Walkthrough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Creating and Managing Network Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26
Implementing Floating IPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-33
Using Device Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-46
Using OpenStack and Contrail APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-58
Lab: Tenant Implementation and Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-83

Acronym List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ACR-1

www.juniper.net Contents • iii


Course Overview

This two-day course is designed to provide students with the knowledge required to work with the Juniper Contrail
software-defined networking (SDN) solution. Students will gain in-depth knowledge of how to use the OpenStack and
Contrail Web UIs and APIs to perform the required tasks. Through demonstrations and hands-on labs, students will gain
experience with the features of Contrail. This course is based on Contrail Release 2.21.
Course Level
Network Automation Using Contrail Cloud is an intermediate-level course.
Intended Audience
This course benefits individuals responsible for working with software-defined networking solutions in data center,
service provider, and enterprise network environments.
Prerequisites
The prerequisites for this course are as follows:
• Basic TCP/IP skills;
• General understanding of data center virtualization;
• Basic understanding of the Junos operating system;
• Attendance of the Introduction to the Junos Operating System (IJOS) and Juniper Networks SDN
Fundamentals (JSDNF) courses prior to attending this class; and
• Basic knowledge of object-oriented programming and Python scripting is recommended.
Objectives
After successfully completing this course, you should be able to:
• Define basic SDN principles and functionality.
• Define basic OpenStack principles and functionality.
• Define basic Contrail principles and how they relate to OpenStack.
• List and define the components that make up the Contrail solution.
• Explain where Contrail fits into NFV and SDN.
• Describe the functionality of the Contrail control and data planes.
• Describe Nova Docker support in Contrail.
• Describe extending a Contrail cluster with physical routers.
• Describe support for TOR switches and OVSDB.
• Describe the OpenStack and Contrail WebUIs.
• Create a tenant project.
• Create and manage virtual networks.
• Create and manage policies.
• Create and assign floating IP addresses.
• Add an image and launch an instance from it.
• Describe how a tenant is created internally.
• Use Contrail's API to configure OpenStack and Contrail.
• Describe service chaining within Contrail.
• Set up a service chain.
• Explain the use of Heat Templates with Contrail.
• Manipulate the WebUI monitoring section.
• Extract key information regarding the Contrail infrastructure.
• Extract key information regarding traffic flows and packet analysis.
• Create various types of filters and queries to generate data.
• Describe Ceilometer support in a Contrail Cloud.
• Perform TTL configuration for analytics data.
• Use various troubleshooting tools for debugging Contrail.



Course Agenda

Day 1
Chapter 1: Course Introduction
Chapter 2: Contrail Overview
Chapter 3: Architecture
Chapter 4: Basic Configuration
Lab: Tenant Implementation and Management
Day 2
Chapter 5: Service Chaining
Lab: Service Chains
Chapter 6: Contrail Analytics
Chapter 7: Troubleshooting
Lab: Performing Analysis and Troubleshooting in Contrail
Appendix A: Installation
Lab: Installation of the Contrail Cloud (Optional)



Document Conventions

CLI and GUI Text


Frequently throughout this course, we refer to text that appears in a command-line interface (CLI) or a graphical user
interface (GUI). To make the language of these documents easier to read, we distinguish GUI and CLI text from standard
text according to the following table.

Style             Description                     Usage Example

Franklin Gothic   Normal text.                    Most of what you read in the Lab Guide
                                                  and Student Guide.

Courier New       Console text:                   commit complete
                    • Screen captures             Exiting configuration mode
                    • Noncommand-related syntax
                  GUI text elements:              Select File > Open, and then click
                    • Menu names                  Configuration.conf in the Filename
                    • Text field entry            text box.

Input Text Versus Output Text


You will also frequently see cases where you must enter input text yourself. Often these instances will be shown in the
context of where you must enter them. We use bold style to distinguish text that is input versus text that is simply
displayed.

Style        Description                  Usage Example

Normal CLI   No distinguishing variant.   Physical interface:fxp0, Enabled

Normal GUI                                View configuration history by clicking
                                          Configuration > History.

CLI Input    Text that you must enter.    lab@San_Jose> show route

GUI Input                                 Select File > Save, and type config.ini
                                          in the Filename field.

Defined and Undefined Syntax Variables


Finally, this course distinguishes between regular text and syntax variables, and it also distinguishes between syntax
variables where the value is already assigned (defined variables) and syntax variables where you must assign the value
(undefined variables). Note that these styles can be combined with the input style as well.

Style           Description                          Usage Example

CLI Variable    Text where the variable’s value is   policy my-peers
GUI Variable    already assigned.                    Click my-peers in the dialog.

CLI Undefined   Text where the variable’s value is   Type set policy policy-name.
                at the user’s discretion, or where   ping 10.0.x.y
GUI Undefined   the variable’s value as shown in     Select File > Save, and type filename
                the lab guide might differ from the  in the Filename field.
                value the user must input according
                to the lab topology.



Additional Information

Education Services Offerings


You can obtain information on the latest Education Services offerings, course dates, and class locations from the World
Wide Web by pointing your Web browser to: http://www.juniper.net/training/education/.
About This Publication
The Network Automation Using Contrail Cloud Student Guide was developed and tested using software Release 2.21.
Previous and later versions of software might behave differently, so you should always consult the documentation and
release notes for the version of code you are running before reporting errors.
This document is written and maintained by the Juniper Networks Education Services development team. Please send
questions and suggestions for improvement to training@juniper.net.
Technical Publications
You can print technical manuals and release notes directly from the Internet in a variety of formats:
• Go to http://www.juniper.net/techpubs/.
• Locate the specific software or hardware release and title you need, and choose the format in which you
want to view or print the document.
Documentation sets and CDs are available through your local Juniper Networks sales office or account representative.
Juniper Networks Support
For technical support, contact Juniper Networks at http://www.juniper.net/customers/support/, or at 1-888-314-JTAC
(within the United States) or 408-745-2121 (outside the United States).


Chapter 1: Course Introduction



We Will Discuss:
• Objectives and course content information;
• Additional Juniper Networks, Inc. courses; and
• The Juniper Networks Certification Program.


Introductions
The slide asks several questions for you to answer during class introductions.


Course Contents
The slide lists the topics we discuss in this course.


Prerequisites
The slide lists the prerequisites for this course.


General Course Administration


The slide documents general aspects of classroom administration.


Training and Study Materials


The slide describes Education Services materials that are available for reference both in the classroom and online.


Additional Resources
The slide provides links to additional resources available to assist you in the installation, configuration, and operation of
Juniper Networks products.


Satisfaction Feedback
Juniper Networks uses an electronic survey system to collect and analyze your comments and feedback. Depending on the
class you are taking, please complete the survey at the end of the class, or look for an e-mail about two weeks after class
completion that directs you to an online survey form. (Be sure to provide us with your current e-mail address.)
Submitting your feedback entitles you to a certificate of class completion. We thank you in advance for taking the time to
help us improve our educational offerings.


Juniper Networks Education Services Curriculum


Juniper Networks Education Services can help ensure that you have the knowledge and skills to deploy and maintain
cost-effective, high-performance networks for both enterprise and service provider environments. We have expert training
staff with deep technical and industry knowledge, providing you with instructor-led hands-on courses in the classroom and
online, as well as convenient, self-paced eLearning courses.

Courses
You can access the latest Education Services offerings covering a wide range of platforms at
http://www.juniper.net/training/technical_education/.


Juniper Networks Certification Program


A Juniper Networks certification is the benchmark of skills and competence on Juniper Networks technologies.


Juniper Networks Certification Program Overview


The Juniper Networks Certification Program (JNCP) consists of platform-specific, multitiered tracks that enable participants
to demonstrate competence with Juniper Networks technology through a combination of written proficiency exams and
hands-on configuration and troubleshooting exams. Successful candidates demonstrate a thorough understanding of
Internet and security technologies and Juniper Networks platform configuration and troubleshooting skills.
The JNCP offers the following features:
• Multiple tracks;
• Multiple certification levels;
• Written proficiency exams; and
• Hands-on configuration and troubleshooting exams.
Each JNCP track has one to four certification levels—Associate-level, Specialist-level, Professional-level, and Expert-level. The
Associate-level, Specialist-level, and Professional-level exams are computer-based exams composed of multiple choice
questions administered at Pearson VUE testing centers worldwide.
Expert-level exams are composed of hands-on lab exercises administered at select Juniper Networks testing centers. Please
visit the JNCP website at http://www.juniper.net/certification for detailed exam information, exam pricing, and exam
registration.


Preparing and Studying


The slide lists some options for those interested in preparing for Juniper Networks certification.


Junos Genius
The Junos Genius application takes certification exam preparation to a new level. With Junos Genius you can practice for
your exam with flashcards, simulate a live exam in a timed challenge, and even build a virtual network with device
achievements earned by challenging Juniper instructors. Download the app now and Unlock your Genius today!


Find Us Online
The slide lists some online resources to learn and share information about Juniper Networks.


Any Questions?
If you have any questions or concerns about the class you are attending, we suggest that you voice them now so that your
instructor can best address your needs during class.


Chapter 2: Contrail Overview



We Will Discuss:
• Basic software-defined networking (SDN) principles and functionality;
• The four planes of networking software;
• The functions of orchestration; and
• The basic components of Contrail.


SDN Overview
The slide lists the topics we will discuss. We discuss the highlighted topic first.


How Are Networks Currently Built?


Networking software has been a drag on innovation across our industry. Because each network device must be configured
individually (usually manually, literally from a keyboard), networks cannot keep pace with the on-the-fly changes required by
modern cloud systems. Internet companies that dedicate hundreds of engineers to their cloud systems have built their own
solutions to network configuration, but that approach is not reasonable for most companies building a private cloud. As
virtualization and the cloud have revolutionized computing and storage, the network has lagged behind.
In the service provider world, carriers struggle to configure and manage their networks. They have built operational support
systems to configure their networks, but these systems are often over 20 years old and are crumbling from the burden
placed upon them by networking software. For a service provider, the network is their business, so they must look to
networking vendors to introduce new capabilities in order to enable new business opportunities. Here again, networking
software is failing the industry—it is developed as a monolithic, embedded system and there is no concept of an application.
Every new capability requires an update of the entire software stack. Imagine needing to update the OS on your smart phone
every time you load a new application. Yet that is what the networking industry imposes on its customers. What is worse is
that each update often comes with many other changes, and these changes sometimes introduce new problems. Service
providers must carefully and exhaustively test each and every update before they introduce it into their networks.
Another problem that we face with networks today is their decentralized nature. Each network device retains its own control
plane and has no view of what the rest of the network really looks like. Granted, dynamic protocols help network devices gain
some understanding of what the network looks like. However, at best these protocols provide a fragmented picture of the
network, which results in inefficient traffic forwarding and inefficient use of bandwidth.


What Is SDN?
Enterprises and service providers are seeking solutions to their networking challenges. They want their networks to adjust
and respond dynamically, based on their business policy. They want those policies to be automated so that they can reduce
the manual work and personnel cost of running their networks. They want to quickly deploy and run new applications within
and on top of their networks so that they can deliver business results. And they want to do this in a way that allows them to
introduce these new capabilities without disrupting their business. This list is a tall order, but SDN has the promise to deliver
solutions to these challenges. How can SDN do this? To decode and understand SDN, we must look inside networking
software. From this understanding, we can derive the principles for fixing the problems.
The SDN solution must also solve the current challenges of the network. Networks must adjust and respond dynamically to
changes in the network. This network agility can be accomplished through a decoupling of the control and forwarding planes
on individual network devices. Decoupling the two planes also alleviates the need to configure each and every network
device manually.


Separation of Control and Forwarding Planes


One of the key principles of SDN is the separation of the control plane from the network devices. As we discussed on previous
slides, in the current state of the network, an individual control plane and forwarding plane reside on each network device.
This is problematic because each network device at best has only a partial understanding of the network, which leads to
inefficient traffic forwarding and bandwidth utilization.
When you move the control plane to a logically centralized position in the network, and leave the forwarding plane
distributed, you suddenly have an entity that has a complete view of the network. From a high-level perspective, this
centralized device is the SDN controller. The SDN controller is able to make the control plane decisions. In other words, it
tells each network device how and where to forward the traffic. Then, each network device is able to focus on forwarding the
traffic. The end result is efficient traffic forwarding and use of network bandwidth.
Another benefit of decoupling the control plane and the forwarding plane is that the compute-intensive control plane
functions, which are largely redundant on each networking device, are moved to the SDN controller. This movement of control
functions frees the network devices’ resources to focus primarily on forwarding.
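The split can be sketched in a few lines of Python. Everything in this sketch is illustrative (the class names, the sample topology, and the BFS path computation are invented for the example and are not part of Contrail or any controller API): one logically centralized controller sees the whole topology and programs simple forwarding tables into devices that do nothing but forward.

```python
from collections import deque

class ForwardingDevice:
    """A device reduced to its forwarding plane: a table of
    destination -> next hop, populated entirely by the controller."""
    def __init__(self, name):
        self.name = name
        self.fib = {}

    def install_route(self, dest, next_hop):
        self.fib[dest] = next_hop

    def forward(self, dest):
        return self.fib.get(dest)

class SdnController:
    """Logically centralized control plane: holds the complete topology
    and computes every device's next hop (here with plain BFS)."""
    def __init__(self, topology):
        self.topology = topology          # node -> set of neighbor names
        self.devices = {n: ForwardingDevice(n) for n in topology}

    def _shortest_path(self, src, dst):
        prev, queue = {src: None}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nbr in sorted(self.topology[node]):
                if nbr not in prev:
                    prev[nbr] = node
                    queue.append(nbr)
        path, node = [], dst
        while node is not None:
            path.append(node)
            node = prev[node]
        return path[::-1]

    def program_routes(self, dst):
        # Push a forwarding entry toward dst into every other device.
        for src in self.topology:
            if src != dst:
                path = self._shortest_path(src, dst)
                self.devices[src].install_route(dst, path[1])

# A small leaf-spine fabric:
topology = {
    "leaf1": {"spine1", "spine2"},
    "leaf2": {"spine1", "spine2"},
    "spine1": {"leaf1", "leaf2"},
    "spine2": {"leaf1", "leaf2"},
}
controller = SdnController(topology)
controller.program_routes("leaf2")
print(controller.devices["leaf1"].forward("leaf2"))   # spine1
```

Because the controller, not each device, runs the path computation, a topology change requires one recalculation against a global view rather than a distributed convergence among partial views.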


The Reality of SDN Today


As discussed in the JSDNF class, because of the vagueness of, and differing opinions about, exactly what constitutes a
software-defined network, vendors and customers alike have stretched and skewed the original definition of SDN.

SDN Vendors
Many vendors are now offering their “SDN solution” which might fit the classic definition of SDN, might borrow certain
aspects of it, or might stretch the definition so far that it probably shouldn’t even be called SDN.

SDN Flavors
The following is an overview of the three flavors of SDN. While these are not an official standard, they illustrate the general
differences among the three most common types of solutions that carry the software-defined networking name:
• Open SDN
• SDN as an Overlay
• SDN via API


Overlay Networking
Virtual networks (VNs) are implemented using two networks—a physical underlay network and a virtual overlay network. This overlay networking
technique has been widely deployed in the Wireless LAN industry for more than a decade but its application to data center
networks is relatively new. It is being standardized in various forums such as the Internet Engineering Task Force (IETF)
through the Network Virtualization Overlays (NVO3) working group and has been implemented in open source and
commercial network virtualization products from a variety of vendors.
The role of the physical underlay network is to provide an IP fabric—its responsibility is to provide unicast IP connectivity from
any physical device (server, storage device, router, or switch) to any other physical device. An ideal underlay network provides
uniform low-latency, non-blocking, high-bandwidth connectivity from any point in the network to any other point in the
network.
The vRouters running in the hypervisors of the virtualized servers create a virtual overlay network on top of the physical
underlay network using a mesh of dynamic tunnels amongst themselves. In the case of Contrail these overlay tunnels can be
MPLS over generic routing encapsulation (GRE), MPLS over User Datagram Protocol (UDP), or VXLAN tunnels.
The underlay physical routers and switches do not contain any per-tenant state: they do not contain any media access
control (MAC) addresses, IP addresses, or policies for virtual machines. The forwarding tables of the underlay physical routers
and switches only contain the IP prefixes or MAC addresses of the physical servers. Gateway routers or switches that connect
a VN to a physical network are an exception—they do need to contain tenant MAC or IP addresses.
Virtual Overlay
The vRouters, on the other hand, contain per-tenant state. They contain a separate forwarding table (a routing instance) per
VN. That forwarding table contains the IP prefixes (in the case of Layer 3 overlays) or the MAC addresses (in the case of Layer
2 overlays) of the virtual machines. No single vRouter needs to contain all IP prefixes or all MAC addresses for all virtual
machines in the entire data center. A given vRouter only needs to contain those routing instances for VMs that are locally
present on the server.
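A few lines of Python can model the per-tenant state just described. This is an illustrative sketch only; the class, the method names, and the tunnel-endpoint strings are invented and do not reflect Contrail's actual data structures. The point it shows is that a vRouter holds one forwarding table per locally present VN, so overlapping tenant addresses never collide.

```python
class VRouter:
    """Illustrative model of vRouter per-tenant state: one forwarding
    table (routing instance) per VN, created only for VNs that have
    VMs local to this server."""
    def __init__(self, server):
        self.server = server
        self.routing_instances = {}   # VN name -> {address -> tunnel endpoint}

    def add_local_vm(self, vn, vm_addr, tunnel_endpoint):
        # A routing instance exists only because the VN has a local VM.
        self.routing_instances.setdefault(vn, {})[vm_addr] = tunnel_endpoint

    def lookup(self, vn, dest):
        # Per-tenant lookup: identical overlay addresses in different VNs
        # resolve independently, so tenants never collide.
        table = self.routing_instances.get(vn)
        return table.get(dest) if table else None

vr = VRouter("compute-1")
vr.add_local_vm("tenant-a-net", "10.0.1.3/32", "mpls-gre://192.168.0.11")
vr.add_local_vm("tenant-b-net", "10.0.1.3/32", "vxlan://192.168.0.12")

print(vr.lookup("tenant-a-net", "10.0.1.3/32"))   # mpls-gre://192.168.0.11
print(vr.lookup("tenant-b-net", "10.0.1.3/32"))   # vxlan://192.168.0.12
print(vr.lookup("tenant-c-net", "10.0.1.3/32"))   # None: no local VM in that VN
```

The last lookup returning nothing mirrors the statement above: a vRouter carries routing instances only for VNs with VMs locally present, not for every VN in the data center.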


Juniper’s SDN Strategy: 6-4-1


Juniper has a comprehensive SDN strategy, described as the 6-4-1 SDN strategy: it consists of six general principles, four
steps, and one licensing model. We describe these ideas in detail over the next few slides.


Six Principles of SDN—One Through Three


The slide lists the first three principles of Juniper’s SDN strategy that discuss how an SDN solution can benefit your network.
Traditionally, network software contains four planes—management, services, control, and forwarding. Cleanly separating
these four planes permits the optimization of each plane within the network. We discuss separating these four planes in
detail later in this chapter.
Earlier in this chapter, we discussed how centralizing the control plane can be beneficial. To take the concept of
centralization a little further, we can centralize certain aspects of the management and services plane. Centralizing the
management, services, and control plane can help simplify network design and lower operating costs. We discuss
centralizing these planes in detail later in this chapter.
SDN is a huge benefit to cloud providers in that it can provide elastic scale and flexible deployment of services. For
example, a customer using a firewall service from a cloud provider might need only 1 Gbps of firewall throughput for most of
the month, but 5 Gbps on one or two days of the month. Without elastic scaling, the customer must purchase 5 Gbps of
firewall throughput for the entire month. With elastic scaling, the customer receives 1 Gbps of firewall throughput until more
throughput becomes necessary, is then provided with 5 Gbps, and returns to 1 Gbps when the extra throughput is no longer
needed.
You can accomplish elastic scaling with SDN in the previous example by spinning up a virtual machine (VM)-based firewall
that provides 1 Gbps of throughput, and setting a threshold that spins up additional VM-based firewalls when a specific
firewall throughput is reached, providing the additional throughput the customer needs. You would also set a threshold that
spins down the additional VM-based firewalls when they are no longer needed. This permits you to provide the customer with
only what they need, when they need it.
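At its core, this threshold behavior is a small control loop. The sketch below is hypothetical (the capacities, thresholds, and function names are invented for illustration and are not a Contrail feature): it computes how many 1-Gbps firewall VMs keep utilization under a scale-up threshold, and returns how many VMs to spin up or down.

```python
# Hypothetical autoscaling loop for VM-based firewalls: spin instances up
# when measured demand nears capacity, and back down when demand drops.
FIREWALL_CAPACITY_GBPS = 1.0   # assumed throughput of one firewall VM
SCALE_UP_THRESHOLD = 0.8       # add a VM once utilization exceeds 80%

def required_firewalls(demand_gbps):
    """Smallest number of firewall VMs keeping utilization below threshold."""
    count = 1
    while demand_gbps > count * FIREWALL_CAPACITY_GBPS * SCALE_UP_THRESHOLD:
        count += 1
    return count

def reconcile(current_vms, demand_gbps):
    """Return (vms_to_add, vms_to_remove) to match the current demand."""
    target = required_firewalls(demand_gbps)
    if target > current_vms:
        return target - current_vms, 0
    return 0, current_vms - target

# Most of the month the customer needs well under 1 Gbps ...
print(reconcile(1, 0.7))   # (0, 0): one firewall VM suffices
# ... on a peak day demand rises toward 5 Gbps, so VMs are spun up ...
print(reconcile(1, 4.5))   # (5, 0): scale out to six VMs
# ... and spun back down when the peak passes.
print(reconcile(6, 0.7))   # (0, 5)
```

In a real deployment, the measured throughput would come from the orchestrator's telemetry, and the add/remove counts would drive launching or deleting VM instances; here they are simply returned.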


Six Principles of SDN—Four Through Six


The slide lists the next three principles of Juniper’s SDN strategy. The fourth principle, which discusses creating a platform,
refers to the Juniper Contrail product. Contrail permits you to develop applications using its software development kit (SDK)
and application programming interfaces (APIs), tailoring Contrail to your SDN solution. We discuss the Contrail SDK APIs in
detail in another chapter of this course.
The fifth principle, which discusses standardizing protocols, is important because nobody wants to experience vendor lock-in
with their SDN solution. To avoid vendor lock-in it is important that standardized protocols are used which have
heterogeneous support across vendors. An example of this principle is how Juniper uses standardized and mature protocols
to provide communication between the SDN controller and the managed devices—such as the Extensible Messaging and
Presence Protocol (XMPP). Adhering to this principle allows you choice and an overall reduction in the cost of your SDN
deployment.
The last principle describes how broadly applying SDN principles to all areas of networking can improve those areas. For
example, SDN can improve the security of your network by virtualizing your firewall, which allows you to respond more nimbly
to new and existing threats. SDN can help a service provider by allowing it to dynamically implement service chaining based
on the needs of each customer it serves. (Note that service chaining is discussed in detail in another chapter of this course.)
The examples of how SDN can help your network are almost endless; suffice it to say that SDN can optimize and streamline
your network no matter which sector of networking you are in.


The Four Planes of Networking Software


Inside every networking and security device, every switch, router, and firewall, you can separate the software into four layers
or planes. As we move to SDN, these planes need to be clearly understood and cleanly separated. This separation is
absolutely essential in order to build the next-generation, highly scalable network.
The bottom plane, forwarding, does the heavy lifting of sending the network packets on their way. It is optimized to move
data as fast as it can. The forwarding plane can be implemented in software but it is typically built using application-specific
integrated circuits (ASICs) that are designed for that purpose. Third-party vendors supply ASICs for some parts of the
switching, routing, and firewall markets. For high performance and high scale systems, the forwarding ASICs tend to be
specialized and each vendor provides their own, differentiated implementation. Some have speculated that SDN will
commoditize switching, routing, and firewall hardware. However, the seemingly insatiable demand for network capacity
generated by thousands of new consumer and business applications creates significant opportunity for differentiation in
forwarding hardware and networking systems. In fact by unlocking innovation, SDN will allow further differentiation from the
vendors who build these systems.
If the forwarding plane is the brawn of the network, the control plane is the brains. The control plane understands the
network topology and makes the decisions on where the flow of network traffic should go. The control plane is the traffic
officer that understands and decodes the networking protocols and ensures that the traffic flows smoothly. Importantly, the
control plane learns everything it needs to know about the network by talking to its peers in other devices. This is the magic
that makes the Internet resilient to failures, keeping traffic flowing even when a major natural event, such as a large
earthquake, brings down thousands of networking devices.
Sometimes network traffic requires more processing and for this, the services plane does the job. Not all networking devices
have a services plane—you will not find this plane in a simple switch or router. But for many routers and all firewalls, the
services plane does the deep thinking, performing the complex operations on networking data that cannot be accomplished
by the forwarding hardware. The services plane is the place where firewalls stop the bad guys and parental controls are
enforced. This plane enables your smart phone to browse the Web or stream a video, all the while ensuring you are correctly
billed for the privilege.
Like all computers, network devices need to be configured, or managed. The management plane provides the basic
instructions of how the network device should interact with the rest of the network. Where the control plane can learn
everything it needs from the network itself, the management plane must be told what to do. The networking devices of today
are often configured individually. Frequently, they are manually configured using a command-line interface (CLI), understood
by a small number of network specialists. Because the configuration is manual, mistakes are frequent and these mistakes
sometimes have serious consequences—cutting off traffic to an entire data center or stopping traffic on a cross-country
networking highway. Service providers worry about backhoes cutting fiber optic cables but more frequently, their engineers
cut the cable in a virtual way by making a simple mistake in the complex CLI used to configure their network routers or
security firewalls.
While the forwarding plane uses special purpose hardware to get its job done, the control, services, and management planes
run on one or more general purpose computers. These general purpose computers vary in sophistication and type, from very
inexpensive processors within consumer devices to what is effectively a high-end server in larger, carrier-class systems. But
in all cases today, these general purpose computers use special purpose software that is fixed in function and dedicated to
the task at hand.
If you crawled through the software inside a router or firewall today, you’d find all four of the networking planes. But with
today’s software that networking code is built monolithically without cleanly defined interfaces between the planes. What
you have today are individual networking devices, with monolithic software, that must be manually configured. This makes
everything harder than it needs to be.


SDN in Your Network


So how do we go from today’s fully decentralized networks to a new world where some things are centralized with SDN? You
cannot start with a clean sheet of paper because networks are actively running and must continue to function as SDN is
introduced. SDN is like a remodel—you need to do it one step at a time. Like most remodels, there is more than one way to
get to the SDN result, but here is a reasonable set of steps to reach the goal:
1. Centralize management: Management is the best place to start as this provides the biggest bang for the buck.
The key is to centralize network management, analytics, and configuration functionality to provide a single
master that configures all networking devices. This lowers operating cost and allows customers to gain
business insight from their networks.
Centralizing management does several things, each of which provides significant value. You start by creating a
centralized management system. Similar to cloud applications, this centralized management system is
packaged in x86 virtual machines running on industry standard servers. These VMs are orchestrated using one
of the commonly available orchestration systems such as VMware’s vCloud Director, Microsoft System Center,
or OpenStack.
In the case of the service provider, their operational and business systems connect to the centralized
management VMs which configure the network. Similarly within a data center, that same data center
orchestration system (VMware vCloud Director, OpenStack, and so forth) can now directly manage the network.
Configuration is performed through published APIs and protocols—where possible these protocols are
industry-standard. As SDN is still a new technology, industry-standard protocols are still emerging, but it is very
important that these standards continue to be developed moving forward.
Networking and security devices generate huge amounts of data about what is happening across the network.
Much can be learned by analyzing this data and like other aspects of business, analytic techniques applied to
networking and security data can transform our understanding of business.
Pulling management from the network device into a centralized service provides the first step to creating an
application platform. Of greatest urgency is simplifying the connection to the operational systems used by
enterprises and service providers. However, as this platform takes shape, new applications will emerge. The
analytics provides insight into what is happening within the network, enabling better business decisions and
new applications which will dynamically modify the network based on business policy. Centralized management
enables changes to be performed quickly—enabling service providers to try out new applications, packages and
plans, quickly expanding those that work and dropping those that do not. In fact, like other new platforms we
have seen over the years, the possibilities are endless and the most interesting applications will only emerge
once that platform is in place.
2. Extract services to VMs: Extracting services from network and security devices by creating service VMs is a
great next step because services are an area that is terribly underserved by networking. This enables network
and security services to independently scale using industry-standard, x86 hardware based on the needs of the
solution.
Creating a platform that enables services to be built using modern, x86 VMs opens up a whole new world of
possibility. For example, the capacity of a security firewall today is completely limited by the amount of
general-purpose processing power you put into a single networking device—the forwarding plane is faster many
times over. So if you can pull the security services out of the device and then run them on a bank of inexpensive
x86 servers, you dramatically increase capacity and agility.
As a first step, you can tether, or connect these services back to a single networking device. You can put the x86
servers in a rack next to the networking device or they can be implemented as server blades within the same
networking device. Either way, this step opens up the possibilities for a whole new set of network applications.
3. Centralize the controller: The centralized controller enables multiple network and security services to connect in
series across devices within the network. This is called SDN service chaining—using software to virtually insert
services into the flow of network traffic. Service chaining functionality is physically accomplished today using
separate network and security devices. Today's physical approach to service chaining is quite crude: separate
devices are physically connected by Ethernet cables. Then, each device must be individually configured to
establish the service chain. With SDN service chaining, networks can be reconfigured on the fly, allowing them
to dynamically respond to the needs of the business. SDN service chaining will dramatically reduce the time,
cost, and risk required for customers to design, test, and deliver new network and security services.
4. Optimize the hardware: The final step of optimizing network and security hardware can proceed in parallel with
the other three. As services are disaggregated from devices and SDN service chains are established, network
and security hardware can be used to optimize performance based on the needs of the solution. Network and
security hardware will continue to deliver ten times or better forwarding performance than can be accomplished
in software alone. The combination of optimized hardware together with SDN service chaining allows customers
to build the best possible networks.
The separation of the four planes helps to identify functionality that is a candidate for optimization within the forwarding
hardware. This unlocks significant potential for innovation within the ASICs and system design of networking and security
devices. While an x86 processor is general purpose, the ASICs within networking devices are optimized to forward network
traffic at extreme speeds. This hardware will evolve to become more capable—every time you move something from software
into an ASIC, you can achieve a massive performance improvement. This requires close coordination between ASIC design,
hardware systems, and the software itself. As SDN becomes pervasive, the ability to optimize the hardware will create many
opportunities for networking and security system vendors.


Data Center Orchestration


In the data center, the orchestrator (OpenStack, CloudStack, VMware, Microsoft System Center, and so forth) manages many
critical aspects of the data center:
• Compute (virtual machines)
• Storage
• Network
The SDN controller's role is to orchestrate the network and networking services, such as load balancing and security, based
on the needs of the application and its assigned compute and storage resources.
The orchestrator uses the northbound interface of the SDN controller to orchestrate the network at a very high level of
abstraction, for example:
• Create a virtual network for a tenant, within a data center or across data centers.
• Attach a VM to a tenant’s virtual network.
• Connect a tenant’s virtual network to some external network, e.g., the Internet or a virtual private network (VPN).
• Apply a security policy to a group of VMs or to the boundary of a tenant’s network.
• Deploy a network service (for example, a load balancer) in a tenant’s virtual network.
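These high-level operations translate naturally into northbound REST calls. The following Python sketch builds the kind of request an orchestration system might issue to create a tenant virtual network and to attach a VM to it. The URL paths, field names, and payload layout here are illustrative assumptions, not the exact Contrail API schema.

```python
# Illustrative sketch of northbound "create virtual network" and
# "attach VM" requests. Paths and payload fields are assumptions
# for illustration, not the exact Contrail REST schema.

def build_vn_request(tenant, vn_name, subnet):
    """Return (path, payload) for a hypothetical create-VN call."""
    path = "/virtual-networks"
    payload = {
        "virtual-network": {
            # Fully qualified name: domain / tenant (project) / network
            "fq_name": ["default-domain", tenant, vn_name],
            "subnets": [subnet],
        }
    }
    return path, payload

def build_attach_vm_request(tenant, vn_name, vm_id):
    """Return (path, payload) attaching a VM interface to a tenant VN."""
    path = "/virtual-machine-interfaces"
    payload = {
        "virtual-machine-interface": {
            "vm_id": vm_id,
            "network": ["default-domain", tenant, vn_name],
        }
    }
    return path, payload

path, payload = build_vn_request("tenant-a", "vn-a", "10.0.1.0/24")
print(path, payload["virtual-network"]["fq_name"])
```

In a real deployment the orchestrator would send such payloads over HTTP to the controller's API server rather than building them by hand.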
The SDN controller is responsible for translating these requests at a high level of abstraction into concrete actions on the
physical and virtual network devices such as:
• Physical switches, e.g., Top of Rack (ToR) switches, aggregation switches, or single-tier switch fabrics.
• Physical routers.
• Physical service nodes such as firewalls and load balancers.
• Virtual services such as virtual firewalls in a VM.


Contrail Overview
The slide highlights the topic we discuss next.


What Is Contrail?
Juniper’s Contrail is a simple, open, and agile SDN solution that automates and orchestrates the creation of highly scalable
virtual networks. These virtual networks let you harness the power of the cloud—for new services, increased business agility,
and revenue growth. Contrail adheres to the following concepts:
• Simple: Creates virtual networks that integrate seamlessly with existing physical networks, and that are easy to
manage and orchestrate.
• Open: Avoids expensive vendor lock-in with an open architecture that interoperates with a wide range of hypervisors,
orchestration systems, and physical networks.
• Agile: Speeds time to market for new services by automating the creation of virtual networks that interconnect
private, hybrid, and public clouds.

Service providers can use Contrail to enable a range of innovative new services, including cloud-based offerings and virtualized
managed services. For enterprises, Contrail can increase business agility by enabling the migration of applications and IT
resources to more flexible private or hybrid cloud environments.
Network virtualization enables programmatic network and device provisioning and management by abstracting a network
layer that consists of both physical and virtual infrastructure elements. This network virtualization simplifies the physical and
virtualized device management model, resulting in increased network agility and overall cost reduction.
Network programmability uses SDN as a compiler to understand and translate abstract commands into specific rules and
policies that automate provisioning of workloads, configure network parameters, and enable automatic chaining of services.
This concept hides complexities and low level details of underlying elements (ports, virtual LANs (VLANs), subnets, and
others) through abstraction to allow for effortless extensibility and simplified operational execution.
Network Function Virtualization (NFV) provides dynamic service insertion by automatically spinning up and chaining together
Juniper and third-party service instances that dynamically scale out with load balancing. This concept reduces service
time-to-market, improving business agility and mitigating risk by simplifying operations with a more flexible and agile virtual
model.
Open system architecture supports standards-based protocols and open orchestration platforms to enable ultimate
vendor-agnostic interoperability and automation. We discuss the open source OpenContrail project on the next page.
Big data analytics queries, ingests, and interprets structured and unstructured data to expose network knowledge using
representational state transfer (REST) APIs and graphical user interfaces (GUIs). This concept enables better insight,
proactive planning and predictive diagnostics of infrastructure issues employing both near-real-time and historical
information on application usage, infrastructure utilization, system logs, network statistics like flows, latencies, jitter, and
other data.
Visualization provides an exception-based dashboard and user interface with hierarchical (virtual networks down to individual
flows) presentation of real-time and historical network data. This concept simplifies operations and decision-making by
providing a simple, yet comprehensive, view into the infrastructure to help with efficient correlation and orchestration across
physical and overlay network components.
Contrail is an extensible system that can be used for multiple networking use cases but there are two primary drivers of the
architecture:
• Cloud Networking—Private clouds for enterprises or service providers, Infrastructure as a Service (IaaS) and
Virtual Private Clouds (VPCs) for Cloud service providers
• NFV in service provider network—This provides Value Added Services (VAS) for service provider edge networks
such as business edge networks, broadband subscriber management edge networks, and mobile edge
networks.
The Private Cloud, the VPC, and the IaaS use cases all involve multi-tenant virtualized data centers. In each of these use
cases, multiple tenants in a data center share the same physical resources (physical servers, physical storage, physical
network). Each tenant is assigned its own logical resources (virtual machines, virtual storage, virtual networks). These logical
resources are isolated from each other, unless specifically allowed by security policies. The virtual networks in the data
center may also be interconnected to a physical IP VPN or L2 VPN.
The NFV use case involves orchestration and management of networking functions such as firewalls, intrusion detection or
prevention systems (IDS/IPS), deep packet inspection (DPI), caching, and WAN optimization in virtual machines instead of on
physical hardware appliances. The main drivers for virtualization of the networking services in this market are time to market
and cost optimization.


OpenContrail
OpenContrail is an Apache 2.0-licensed project that is built using standards-based protocols and provides all the necessary
components for network virtualization: an SDN controller, a virtual router, an analytics engine, and published northbound APIs. It has
an extensive REST API to configure and gather operational and analytics data from the system. Built for scale, OpenContrail
can act as a fundamental network platform for cloud infrastructure.
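The configuration and analytics data mentioned above are exposed over REST. As a hedged illustration, the Python sketch below composes a GET URL for an analytics query; the host name, port number, and resource path are assumptions for illustration, so consult the OpenContrail API documentation for the actual endpoints.

```python
# Hedged sketch: composing a GET URL for a northbound analytics
# REST query. Host, port, and path are illustrative assumptions.
from urllib.parse import urlencode

def analytics_url(host, resource, filters=None):
    """Build a query URL for a hypothetical analytics endpoint."""
    url = "http://{}:8081/analytics/{}".format(host, resource)
    if filters:
        url += "?" + urlencode(filters)   # optional field filters
    return url

url = analytics_url("controller.example.net", "uves/virtual-networks",
                    {"cfilt": "UveVirtualNetworkAgent"})
print(url)
```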
OpenContrail is designed to operate in an open source cloud environment to provide a fully integrated end-to-end solution:
• The OpenContrail System is integrated with the open source Kernel-based Virtual Machine (KVM) hypervisor.
• The OpenContrail System is integrated with open source virtualization orchestration systems such as
OpenStack and CloudStack.
• The OpenContrail System is integrated with open source physical server management systems such as Chef,
Puppet, Cobbler, and Ganglia.
OpenContrail is available under the permissive Apache 2.0 license—this essentially means that anyone can deploy and
modify the OpenContrail System code without any obligation to publish or release the code modifications.
Juniper Networks also provides commercial versions of the OpenContrail system, which are discussed on one of the following
slides. The open source version of the OpenContrail System is not a teaser; it provides the same full functionality as the
commercial version both in terms of features and in terms of scaling.
OpenContrail also supports the VMware ESXi hypervisor (note that ESXi is not open source). ESXi support is discussed in
more detail in Appendix A.


Contrail Controller
The Contrail system consists of two main components—the Contrail controller and the Contrail vRouter.
The Contrail controller is a logically centralized but physically distributed SDN controller that is responsible for providing the
management, control, and analytics functions of the virtualized network. The Contrail controller provides the logically
centralized control plane and management plane of the system and orchestrates the vRouters.

Contrail vRouter
The Contrail vRouter is a forwarding plane (of a distributed router) that runs in the hypervisor of a virtualized server. It
extends the network from the physical routers and switches in a data center into a virtual overlay network hosted in the
virtualized servers. The Contrail vRouter is conceptually similar to existing commercial and open source vSwitches such as
the Open vSwitch (OVS). It also provides routing and higher layer services including distributed firewall capabilities to
implement policies between virtual networks (hence vRouter instead of vSwitch).


Contrail Within a Data Center


The slide provides a high-level example of Contrail within a data center. Nothing is pre-provisioned. This absence of
pre-provisioning means that no routes are present in the vRouters and no tunnels have been set up between them. Routes
are not installed and tunnels are not set up until a VM is deployed. In the example on the slide, when the two VMs that
belong to VN-A are deployed, routes are installed in the vRouters and an overlay tunnel, such as an MPLS over GRE tunnel,
is formed between the two vRouters. Exactly the same process occurs when the two VMs that belong to VN-B are deployed,
except that no new GRE tunnel is created. Only one GRE tunnel is created between the vRouters, even though each vRouter
connects to VMs from different VNs. The tunnel can be reused by different tenants because each tenant's traffic carries a
different MPLS label. The next chapter discusses tunneling in more detail.
Security policies must be defined to permit traffic to flow between VMs in different VNs. However, security policies are
unnecessary for traffic that flows between VMs of the same VN.
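The tunnel sharing described above can be pictured as a two-stage lookup: the vRouter pair identifies the single shared GRE tunnel, and the inner MPLS label selects the tenant network on the receiving side. The Python sketch below is a conceptual model only; the tunnel names, labels, and interfaces are invented for illustration, and this is not vRouter code.

```python
# Conceptual model: one GRE tunnel per vRouter pair, with the MPLS
# label demultiplexing tenants. All names and numbers are invented.

tunnels = {("vrouter-1", "vrouter-2"): "gre-tunnel-1"}  # one shared tunnel

# Receiving side: MPLS label -> (virtual network, destination interface)
label_map = {
    16: ("VN-A", "vm-a2-interface"),
    17: ("VN-B", "vm-b2-interface"),
}

def forward(src, dst, label):
    """Look up the shared tunnel, then let the label pick the tenant."""
    tunnel = tunnels[(src, dst)]      # same tunnel for every tenant
    vn, interface = label_map[label]  # label selects the tenant VN
    return tunnel, vn, interface

# Both tenants traverse the same tunnel but land in different VNs.
print(forward("vrouter-1", "vrouter-2", 16))
print(forward("vrouter-1", "vrouter-2", 17))
```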


Inter-Data Center Example


The concepts of the intra-data center example that we discussed (VM deployment, vRouter route insertion, and tunnel
creation) also apply to this slide. The example on the slide takes it to the next level by adding a third VN, VN-C. The key
points to take away from this example are that each vRouter pair requires only a single point-to-point tunnel, and that the
VMs for each VN do not need to be contained within a single data center.


Different Options for Contrail


As we have already seen, OpenContrail is an Apache-2.0 licensed open source product that can be used for free by anyone.
Juniper Networks also provides commercial versions of the OpenContrail system, named Contrail Networking and Contrail Cloud
Platform.
The difference between the two is that Contrail Cloud Platform includes commercial support for the entire software stack
(both the Contrail SDN controller and the OpenStack orchestrator), while Contrail Networking only includes support for Contrail itself.


Contrail Service Orchestration


Contrail Service Orchestration is a comprehensive management and orchestration platform that runs on top of Contrail
Cloud and delivers virtualized network services built on an open framework. By allowing service providers to selectively
centralize or distribute the service creation process, Contrail Service Orchestration addresses the needs of small to midsize
businesses as well as large enterprises with a single and elegant point-and-click interface. Product managers get a clean
and polished service design experience; service management and troubleshooting are streamlined for administrators; and
customers have a personalized self-service portal to select the services that best meet their evolving business
requirements.
Juniper Networks Contrail Service Orchestration empowers service providers to drastically reduce service delivery times,
transforming a several month truck roll experience into a near real-time mouse-click experience by automating the entire
service delivery life cycle. It also reduces the operational costs associated with creating new services while significantly
enhancing customer satisfaction, leading to long-term revenue growth.
Contrail Service Orchestration is built from the ground up to seamlessly integrate with Contrail Cloud Platform, creating a
vertically integrated Network Functions Virtualization (NFV) management system and orchestration software stack that
addresses a multitude of NFV use cases. Multiple NFV use cases are supported, including Juniper’s Cloud CPE solution, in a
centralized, distributed, or overlay deployment model extending to the customer premises.
Contrail Service Orchestration must be installed on a separate server from Contrail Cloud Platform. More details are
available on the Juniper Networks website.


The Juniper Solution: Cloud CPE


Juniper automates service creation and delivery with the first commercially scalable cloud CPE solution. At a high level,
Juniper Networks Cloud CPE solution involves three main components:
• Management and orchestration (MANO): Juniper Networks Contrail Networking and Contrail Cloud Platform;
• VNFs: Juniper Networks vSRX virtual firewall and vMX virtual router, which deliver a wide range of routing and
security VNFs; and
• NFV infrastructure (NFVI): Compute, storage, and networking.

Juniper Cloud CPE Deployment Models


Juniper’s Cloud CPE solution consists of two flexible deployment models: distributed (uCPE) and centralized (vCPE).
The uCPE model consists of an off-the-shelf x86 appliance that is installed at the customer site. The appliance supports
VNFs on demand from an existing service catalog. Enterprise IT staff has substantial visibility into and control over the
service, including the ability to set policies, role-based access, security, and quality of service (QoS).
Distributed CPE optimizes service deployment. A single device can support multiple VNFs; this flexibility eliminates
traditional service silos and enables enterprise customers to evolve their network services without new appliances while
addressing business requirements.
Centralized CPE abstracts network services from the on-premises equipment and automates the entire service delivery chain
in the service provider’s network. New services can be ordered through a self-care portal or triggered on demand. vCPE
dramatically simplifies the deployment of managed services, allowing service providers to rapidly offer differentiated,
scalable, and tiered services.


We Discussed:
• Basic software-defined networking (SDN) principles and functionality;
• The four planes of networking software;
• The functions of orchestration; and
• The basic components of Contrail.


Review Questions
1.

2.

3.

Answers to Review Questions
1.
The vRouter installs networking information in its routing table to service any VMs and VNs that are connected to it. It also creates
any necessary tunnels that connect any VMs and VNs to ones that are on remote vRouters.
2.
The main benefit of decoupling the control and forwarding planes from the networking devices is that one device, an SDN controller,
understands the complete view of the network and can make better routing decisions based on this view.
3.
A tunnel is only created after VM deployment has occurred, not before VM deployment.



Chapter 3: Architecture

We Will Discuss:
• Contrail components, in depth;
• Contrail’s control and data planes; and
• Additional options for deploying Contrail.


Contrail Components and Building Blocks


The slide lists the topics we will discuss. We discuss the highlighted topic first.


Sample Logical Topology for Contrail


Because of Contrail’s complexities, it is often easier to understand the overall solution if it is first broken down into building
blocks. You can then use these building blocks to devise a strategy that fits your network or accomplishes your main goals.
Within a typical environment, there are virtual machines (VMs), which could be anything from tenant VMs to VMs designed
for some network function such as firewalling or deep packet inspection. Virtual networks are an integral building block as
they connect the aforementioned VMs together. Finally, gateway devices are used to bridge virtual networks to physical
networks.
Before we get into details, this slide shows what we want to achieve with SDN. Logically, we have virtual machines (VMs)
connected to virtual networks (VNs). VNs can be connected to each other by routing or through a service chain. Connections
to a physical network through a gateway device are also possible.
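These building blocks (VMs, VNs, service chains, and gateways) can be sketched as a small object model. The Python sketch below is purely illustrative; every name in it is invented.

```python
# Minimal object model of the logical topology: VMs attach to VNs;
# VNs interconnect via routing or a service chain; a gateway bridges
# a VN to a physical network. All names are invented.

topology = {
    "virtual_networks": {
        "VN-A": {"vms": ["vm-1", "vm-2"]},
        "VN-B": {"vms": ["vm-3"]},
    },
    "connections": [
        {"between": ("VN-A", "VN-B"), "via": "service-chain:firewall"},
        {"between": ("VN-B", "physical:Internet"), "via": "gateway:mx-router"},
    ],
}

def peers_of(vn):
    """List what a virtual network connects to, and through what."""
    out = []
    for conn in topology["connections"]:
        a, b = conn["between"]
        if vn in (a, b):
            out.append((b if vn == a else a, conn["via"]))
    return out

print(peers_of("VN-B"))
```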


Contrail Solution Overview


What is the role of Contrail in this stack? In the example, you have a typical orchestration system where the orchestrator is at
the top and communicates with the various parts within the orchestration system: network, compute, and storage. Typically,
when the network application programming interface (API) is called, a Layer 2 infrastructure is built. What Contrail does is
intercept that call to build the network. However, Contrail builds its own kind of network because, at the abstracted layer in
the orchestration system, the request is defined as asking for a network without describing how that network should be built.
It is up to the underlying systems to decide how that network should be built. For the most part, the orchestration system
does not care how a network is built, just that it is built.


Contrail Controller Components


The Contrail Controller is logically centralized but physically distributed. What this means is that the overall Contrail solution
is made up of multiple types of nodes and each type of node can have multiple instances for increased high availability (HA)
and scaling. Deployment of these node instances is extremely flexible. That is, they can be deployed on separate virtual
machines or physical servers and you can combine them as necessary. In fact, it is possible to run the entire Contrail
solution on a single server. The three main Contrail Controller node types are as follows:
• Configuration Nodes: Responsible for the management layer. Using a northbound representational state
transfer (REST) API, this type of node provides the interface that is used to configure the system or to retrieve
operational status data from the controller. Configuration nodes utilize a transformation engine (compiler) to
convert the high-level service data model you configure into a low-level technology data model. This technology
data model is then published to the control nodes using the Interface to Metadata Access Point (IF-MAP)
protocol.
• Control Nodes: Implement the logically centralized portion of the control plane. The control nodes take the
technology data model from the configuration nodes and use it to make the actual state of the network equal to
the desired state of the network. Several protocols are used by the control nodes, depending on whether they are
communicating with a Contrail vRouter or a physical router. The Extensible Messaging and Presence
Protocol (XMPP) is used to control the Contrail vRouters and a combination of the BGP and the Network
Configuration (NETCONF) protocols is used to communicate with physical routers. Furthermore, the control
nodes also use BGP for state synchronization among each other when there are multiple instances of the
control node for HA and scaling.
• Analytic Nodes: Collect, process, and present information for use in troubleshooting and understanding what is
going on in the network. Event records are stored in databases using a format that is optimized for time-series
analysis and queries. This data includes statistics, logs, events, and errors and is provided through a
northbound REST API.
Although not shown in the slide, the separate Database Node can be used for storage of analytics and configuration data in
the Contrail system.


Other Contrail Components


Several other node types make up the overall Contrail solution. They are as follows:
• Compute Nodes: General-purpose servers that host VMs. The hosted VMs can be of any type, for example, customer VMs or service VMs such as virtual firewalls. Within each Compute Node, Contrail inserts a vRouter kernel
module. This vRouter implements the forwarding plane and the distributed part of the control plane.
• Gateway Nodes: Physical routers or switches that connect virtual networks to physical networks. That is, they
connect virtual networks to such things as the Internet, customer VPNs, another data center, or to other,
non-virtualized servers.
• Service Nodes: Physical devices which provide any number of network services including, but not limited to,
deep packet inspection (DPI), Intrusion Detection and Prevention (IDP), and load balancing. Service nodes can
be combined into what is known as a service chain and these chains can be a combination of virtual services
(deployed as VMs on a Compute Node) or physical services (deployed on the Service Nodes themselves).


Gateway to L3VPN
In this use case, tenants connect to the Internet or an enterprise network through a virtual private network (VPN). The VPN
can be a Layer 3 VPN, a Layer 2 VPN, a Secure Sockets Layer (SSL) VPN, an IP Security (IPsec) VPN, and so forth. The data
center gateway function is responsible for connecting the tenant networks to the Internet or the VPNs. The gateway function
can be implemented in software or in hardware (for example, using a gateway router).


Comparison of the Contrail System to MPLS VPNs


In the previous slide you saw a gateway router terminating both overlay tunnels and traditional MPLS VPNs. The architecture
of the Contrail system is, in many respects, similar to the architecture of MPLS VPNs, as shown in the present slide.
The parallels between the two architectures include the following:
• Underlay switches in the Contrail system correspond to provider (P) routers in an MPLS VPN. Since the Contrail
system uses MPLS over GRE or VXLAN as the encapsulation protocol, there is no requirement that the underlay
network supports MPLS. The only requirement is that it knows how to forward unicast IP packets from one
physical server to another.
• vRouters in the Contrail system correspond to provider edge (PE) routers in an MPLS VPN. They have multiple
routing instances just like physical PE routers.
• VMs in the Contrail system correspond to customer edge (CE) routers in an MPLS VPN. In the Contrail system
there is no need for a PE-CE routing protocol because CE routes are discovered through other mechanisms.
• MPLS over GRE tunnels and VXLAN tunnels in the Contrail system correspond to MPLS over MPLS in MPLS
VPNs.
• The XMPP protocol in the Contrail system combines the functions of two different protocols in an MPLS VPN: a)
XMPP distributes routing information similar to what IBGP does in MPLS VPNs; b) XMPP pushes certain kinds of
configuration (for example, routing instances) similar to what DMI does in MPLS VPNs.
• Centralized control is similar to a BGP route reflector in an MPLS VPN.
• Contrail supports both Layer 3 overlays, which are the equivalent of MPLS L3VPNs, and Layer 2 overlays, which are the equivalent of MPLS EVPNs.
The fact that the Contrail system uses control plane and data plane protocols that are very similar to the protocols used for
MPLS L3VPNs and EVPNs has multiple advantages. These technologies are mature and known to scale, and they are widely
deployed in production networks and supported in multivendor physical gear that allows for seamless interoperability
without the need for software gateways.
Another analogy for a Contrail system (with a different set of imperfections) is to compare the control VM to a routing engine
and to compare a vRouter to a line card.


The Importance of Abstraction: Part 1


These slides demonstrate the importance of abstraction when manipulating complex scenarios such as what is depicted on
the slide. Traditionally, setting up such complex scenarios involves lots of configuration on lots of devices. This can cause
headaches and can be very prone to error. In other words, you are telling the system how you want to configure something. If you
abstract it, you tell the system what you want to configure and then let the system do the work for you.


The Importance of Abstraction: Part 2


This is much easier! Instead of worrying about how something is achieved, you only worry about what you want to achieve.


Contrail Stack
The slide highlights the topic we discuss next.


Contrail Stack: OpenStack


The slide shows the overall stack of all the components, and we start by reviewing the components of the OpenStack orchestrator.


OpenStack Architecture Review


OpenStack is a number of small projects interacting with each other to make up the cloud orchestration system. OpenStack
components are discussed in detail in the JSDNF class, so here we just review them.
• Horizon (dashboard): Graphical interface for OpenStack.
• Neutron (networking): Provides networking capabilities for OpenStack. Used for configuring and managing connectivity and for creating virtual networks (VNs).
• Nova (compute): Primarily used for deploying and managing virtual machines.
• Glance (image storage): Stores virtual hard disk images for deploying VM instances. Stores images such as
QCOW2, IMG, VMware VMDK, etc.
• Keystone (identity): Manages user roles and their mappings to various services.
• Cinder (block storage): Stores data in a similar way to a traditional disk drive, used for high speed data
transactions.
• Swift (object storage): Horizontally scalable storage of individual objects.
• Ceilometer (not shown on the slide): Provides telemetry services, keeps a count of each user’s system usage.
Used for metering and reporting.
• Heat (not shown on the slide): Orchestration component, allows for “templating” of an application and its
dependencies for deployment of that application.


Contrail Stack: Compute Node and vRouter


The slide shows the overall stack of all the components, and we are going to take a detailed look at a compute node and its
vRouter.


Compute Node: Overview


A Compute Node, which runs the vRouter, is a system with a base operating system running a hypervisor, typically KVM or Hyper-V. The vRouter kernel module is implemented in place of the traditional Linux bridge or OVS module in the hypervisor
kernel. Each VM is mapped into one or more Virtual Routing and Forwarding tables (VRFs) on the vRouter and the vRouter
Agent populates the VRFs using information learned over XMPP. The interface that connects the VM and the vRouter is the
network tap interface of the virtual network interface card (vNIC) that was created when the VM was spun up. The vNIC is
associated with a particular routing instance, or virtual network, based on which virtual network was chosen when the VM
was instantiated. That information is passed over to Contrail through Neutron and is used to build the network around the
VM.
The vRouter performs some additional functionalities, including enforcement of security policies, multicast, packet mirroring
(useful for either providing services, diagnostics, or debugging an application environment) and load balancing. It also
performs Network Address Translation (NAT), which is required to do things like floating IP addresses. All this functionality is
embedded within the vRouter. Thus, there is no need for extra Service Nodes or L2/L3 gateways. Furthermore, even though
it’s called a vRouter and does routing duties, the vRouter is not something you log into and configure manually.
Regarding Layer 3 VPNs (L3VPNs), routing occurs between them without having to go through an external gateway. The reason for this is that every L3VPN has a route target assigned to it. Essentially, an extranet is built between two or more routing instances simply by importing the route target of the virtual network into which traffic is sent. For example,
assume Tenant A wants to route to Tenant B. All that needs to happen is for the Tenant A to import the route target of
Tenant B and vice versa. Routes are then shared between those particular VRFs. When Tenant A wants to send traffic to
Tenant B, it sends the traffic to the vRouter because the vRouter is its default route. The routing instance within the vRouter
would have the routes for the Tenant B VRF so it does a simple lookup and sends the traffic along.
Now, you might ask yourself, “How does one determine route targets and the like?” This is where the aforementioned
abstraction comes in. The person defining the network communication policies doesn’t necessarily need to know anything
about route targets. They just need to know how they want their VMs to connect and, if they are on different virtual networks,
how would they then connect those VMs. In other words, “We need Tenant A to communicate bi-directionally with Tenant B.”
Within Contrail, this is accomplished by creating a simple policy using the Web user interface (UI). That policy is then
transformed by the Configuration Node into a schema that the underlying infrastructure understands, which is essentially to
import route target A into VRF B and import route target B into VRF A.
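That transformation can be sketched as follows; the tenant names and route-target values are hypothetical, and the real Configuration Node emits a much richer schema than this two-line rule.

```python
# Hypothetical sketch: compile a bidirectional "A talks to B" policy into
# route-target import rules, in the spirit of the Configuration Node's
# schema transformation. All names and values here are invented.

def compile_policy(policy_pairs, route_targets):
    """Map each virtual network to the set of route targets it imports."""
    imports = {vn: set() for vn in route_targets}
    for a, b in policy_pairs:                 # each pair may exchange traffic
        imports[a].add(route_targets[b])      # VRF A imports B's route target
        imports[b].add(route_targets[a])      # VRF B imports A's route target
    return imports

route_targets = {"tenant-a": "target:64512:1", "tenant-b": "target:64512:2"}
imports = compile_policy([("tenant-a", "tenant-b")], route_targets)
print(sorted(imports["tenant-a"]))   # ['target:64512:2']
```

Once each VRF imports the other's route target, routes are shared between the two routing instances and traffic flows without any external gateway.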


Compute Node: Forwarding


This slide describes how IP traffic flows between VMs. First, the guest OS ARPs for the destination address. The vRouter
receives the ARP request and responds back to the VM. In other words, the vRouter acts as an Address Resolution Protocol (ARP) proxy. The guest OS now sends traffic to that MAC address.
Next, the vRouter gets that packet, does a lookup, and finds the route for the destination IP address or MAC address (depending on the overlay type), which has a next hop of the physical address of the other server. The vRouter then tunnels the packet: in the case of the MPLSoGRE encapsulation shown on the slide, it adds an MPLS shim header with an MPLS label and a generic routing encapsulation (GRE) header to the packet and forwards it on.
At this point, the packet traverses the physical infrastructure until it arrives at the destination server where, essentially, the
packet is decapsulated. That is, the GRE header is stripped, the MPLS label is checked, a table lookup performed by the
destination vRouter, and the packet is forwarded on to the target virtual network and VM.
This event flow is also discussed in greater detail in one of the subsequent slides.
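A simplified sketch of these data-plane steps, with made-up addresses, label, and MAC values: the vRouter proxy-replies to the guest's ARP, looks up the per-VRF route, and wraps the packet for the tunnel.

```python
# Toy model of the forwarding flow described above. The vRouter MAC,
# route entry, and label are illustrative values, not real system state.

VROUTER_MAC = "00:00:5e:00:01:01"                 # hypothetical vRouter MAC
vrf_routes = {"10.1.1.2": ("151.10.10.1", 17)}    # dest VM -> (server, label)

def proxy_arp(target_ip):
    # The vRouter answers the guest's ARP request itself; nothing is flooded.
    return VROUTER_MAC

def tunnel(packet):
    server, label = vrf_routes[packet["dst"]]     # per-VRF route lookup
    return {"gre_dst": server, "mpls_label": label, "inner": packet}

assert proxy_arp("10.1.1.2") == VROUTER_MAC
frame = tunnel({"src": "10.1.1.1", "dst": "10.1.1.2"})
print(frame["gre_dst"], frame["mpls_label"])      # 151.10.10.1 17
```

At the destination server, the reverse happens: the GRE header is stripped, the label selects the VRF, and a final lookup delivers the packet to the target VM.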


Contrail Forwarding Plane


The forwarding plane is implemented using an overlay network. The overlay network can be a Layer 3 (IP) overlay network or
a Layer 2 (Ethernet) overlay network. For Layer 3 overlays, both IPv4 and IPv6 are supported. Layer 3 overlay networks
support both unicast and multicast. Proxies are used to avoid flooding for DHCP, ARP, and certain other protocols.
The system supports multiple overlay encapsulations. This slide shows packet format for L2 overlay (MPLS over GRE and
VXLAN encapsulations). Packet format for L3 overlay and MPLS over GRE encapsulation was shown previously.
MPLS L3VPNs and EVPNs typically use MPLS over MPLS encapsulation, but they can use MPLS over GRE as well if the core
is not MPLS enabled. Contrail uses the MPLS over GRE (or UDP) and not the MPLS over MPLS for several reasons. First,
underlay switches and routers in a data center often don’t support MPLS. Second, even if they did, the operator might not
want the complexity of running MPLS in the data center. Third, there is no need for traffic engineering inside the data center
because the bandwidth is overprovisioned.
For L2 overlays, Contrail also supports VXLAN encapsulation. One of the main advantages of the VXLAN encapsulation is that
it has better support for multipath in the underlay by virtue of putting entropy (a hash of the inner header) in the source UDP
port of the outer header.
Contrail’s implementation of VXLAN differs from the VXLAN IETF draft in two significant ways. First, it only implements the
packet encapsulation part of the IETF draft—it does not implement the flood-and-learn control plane. Instead, it uses the
XMPP-based control plane described in this section. As a result, it does not require multicast groups in the underlay. Second,
the Virtual Network Identifier (VNI) in the VXLAN header is locally unique to the egress vRouter instead of being globally
unique.
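The entropy mechanism can be illustrated with a short sketch; the hash function and port range below are illustrative choices, not the exact ones a vRouter uses.

```python
# Sketch: derive the outer UDP source port from a hash of the inner flow
# headers so underlay routers can spread different flows across equal-cost
# paths, while packets of one flow stay on one path.
import zlib

def vxlan_source_port(src_ip, dst_ip, proto, sport, dport):
    flow = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return 49152 + (zlib.crc32(flow) % 16384)   # stay in the dynamic range

p1 = vxlan_source_port("10.1.1.1", "10.1.1.2", 6, 33000, 80)
p2 = vxlan_source_port("10.1.1.1", "10.1.1.2", 6, 33001, 80)
same_flow = vxlan_source_port("10.1.1.1", "10.1.1.2", 6, 33000, 80)
print(p1 == same_flow)   # True: one flow always hashes to one port
```

Because the outer header is plain UDP, any underlay device that hashes on the 5-tuple gets per-flow multipath for free, without inspecting the encapsulated payload.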
Contrail supports a third encapsulation, namely MPLS over UDP. It is a cross between the MPLS over GRE and the VXLAN
encapsulation. It supports both L2 and L3 overlays, uses an “inner” MPLS header with a locally significant MPLS label to
identify the destination routing instance (similar to MPLS over GRE), but it uses an outer UDP header with entropy for
efficient multipathing in the underlay (like VXLAN).


Control Stack: Control Nodes and Protocols


The next layer in the stack is the control layer.


Control Node: Overview


The Control Node subscribes to information provided by the Configuration Nodes, but it also needs to get that information to other parts of the Contrail solution. As shown in the slide, Control Nodes use several communication protocols, as follows:
• XMPP: Used for communication with the vRouters (Compute Nodes).
• BGP + NETCONF: Used for communication with other Control Nodes, Service Nodes, and physical gateway
devices.
• IF-MAP: Used for communication with Configuration Nodes.
One item to note with the control plane is that BGP is used extensively, as it provides a well-established scaling model akin to a typical BGP route reflector mesh. Because the control plane is completely federated using BGP, the model easily extends
outside the data center as well as across data centers. Furthermore, each Control Node connects to multiple Configuration
Nodes for redundancy.
Finally, because shutting down a data center is, typically, not an option, rolling upgrades are commonplace. For this reason,
when it comes time to update Contrail, additional Control Nodes running newer versions of software can be launched and introduced into the environment while the current Control Nodes continue to run the existing code version. Load is then moved
onto the new Control Nodes and the older Control Nodes are simply shut down and decommissioned.


Route Distribution
The slide demonstrates how the route distribution works. A general flow of the events involved is as follows:
1. After a VM is instantiated, the vRouter advertises a route to the VM’s IP address with a next hop of the physical
server. A label is also included. In our example, VM-A has been instantiated and a route to 10.1.1.1, a next hop
of 70.10.10.1, and a VRF label of 39 has been communicated by the vRouter, through XMPP, to the Control
Nodes.
2. VM-B has been turned up and its route of 10.1.1.2, next hop of 151.10.10.1, and VRF label of 17 has been
distributed to the Control Nodes.
3. Now, let’s assume VM-A wants to get a packet to VM-B. This is a standard IPv4 packet with a source address,
destination address, and some payload.
4. The vRouter gets this IPv4 packet and does a lookup for the destination address. It finds a next hop of
151.10.10.1 and a label to use of 17.
5. The vRouter pushes a GRE header onto the IPv4 packet using the public IP addresses 70.10.10.1 and 151.10.10.1 for source and destination, respectively, and an MPLS label of 17.
6. The packet then traverses the GRE overlay tunnel and arrives at Server 2. Server 2 decapsulates the GRE header and does a lookup on label 17.
7. At this point, Server 2 does a standard IPv4 lookup against 10.1.1.2 and forwards the packet on to the correct
destination VM.
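As a toy model, the event flow above can be sketched in a few lines; the addresses and labels come from the example, while the classes stand in for real XMPP sessions and per-VRF tables.

```python
# Sketch: vRouters advertise VM routes to the control node (over XMPP in
# the real system), which reflects them to every subscribed vRouter. A
# sending vRouter then resolves a destination VM to (server, label).

class ControlNode:
    def __init__(self):
        self.routes = {}            # VM IP -> (physical server, MPLS label)
        self.peers = []             # subscribed vRouters

    def advertise(self, prefix, nexthop, label):
        self.routes[prefix] = (nexthop, label)
        for vrouter in self.peers:  # reflect to every subscribed vRouter
            vrouter.routes[prefix] = (nexthop, label)

class VRouter:
    def __init__(self, cn):
        self.routes = {}
        cn.peers.append(self)

cn = ControlNode()
vr1, vr2 = VRouter(cn), VRouter(cn)
cn.advertise("10.1.1.1", "70.10.10.1", 39)    # VM-A comes up on server 1
cn.advertise("10.1.1.2", "151.10.10.1", 17)   # VM-B comes up on server 2
nexthop, label = vr1.routes["10.1.1.2"]       # vRouter 1 looks up VM-B
print(nexthop, label)                         # 151.10.10.1 17
```

The lookup result is exactly what step 4 describes: the sending vRouter learns the remote server and the label to push, without any data-plane flooding or discovery.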


Route Distribution: L3VPN


This slide shows the case when the Contrail cluster spans two separate data centers (DC-1 and DC-2). Although there are
several possible ways to connect data centers in such a scenario, here we consider the case when a transit MPLS provider is
used. This particular scenario is referred to as Multi-AS Backbone, option B (RFC 4364, “EBGP redistribution of labeled
VPN-IPv4 routes from AS to neighboring AS”).
Route distribution in this scenario is as follows:
1. The vRouter in DC-2 advertises a route to VM-B’s IP address 10.1.1.2, with a next hop of physical server 151.10.10.1 and MPLS label 17, through XMPP to Contrail’s Control Node in DC-2.
2. The Control Node works as a route reflector and sends the route, without changing the next hop or label, to the gateway router GW2 through I-MBGP.
3. GW2 sends the route to the Service Provider’s edge router PE2 via E-MBGP. The next hop is changed to the address of GW2, 160.20.20.1, and the label is changed to some other value, which is 117 in our case.
4. The Service Provider propagates the route through its AS via I-MBGP. In our case, the label changes to a value of 217 and the next hop is 100.1.1.1 (the address of PE2).
5. The Service Provider’s edge router connected to DC-1, PE1, receives the route and announces it to the gateway router GW1, changing both the label (to 317) and the next hop (to 200.1.1.1, which is the address of PE1).
6. GW1 sends the route to the Control Node with a next hop of 80.20.20.1, which is its own address, and label 417.
7. The Control Node reflects the route to DC-1’s vRouter. Now this vRouter has a route to VM-B in DC-2.
Note that the route target (RT) and route distinguisher (RD) are not modified during the route propagation process, which allows the vRouter to install the route in the correct VRF.
Note also that, at every step, the BGP protocol next hop is resolved to a forwarding next hop, which shows where to forward packets that match this route. In particular, the vRouter in DC-1 and the GW2 router resolve the next hop to a GRE tunnel. PE1 resolves it to an LSP connecting the PE1 and PE2 routers, and the packet is forwarded using that LSP (hence the “MPLS Outer Label” in the slide). PE2 and GW1 use direct next hops.
Forwarding happens as follows:
1. VM-A sends the packet to 10.1.1.2, route lookup happens, the packet is encapsulated in GRE and label 417 is
added.
2. The GRE tunnel gets the packet from vRouter in DC-1 to GW1.
3. GW1 strips GRE encapsulation and swaps MPLS label 417 to 317, and then forwards the packet to PE1.
4. PE1 swaps label 317 to 217 and then pushes another label on top of the label stack (“MPLS Outer Label” in the
slide).
5. The packet is forwarded between PE1 and PE2 in the MPLS LSP, and the outer label is swapped across the MPLS network as in a normal service provider environment. However, the inner label 217 is still the same when the packet reaches PE2.
6. PE2 swaps the label 217 to 117 and forwards the packet to GW2 which swaps MPLS label to 17.
7. From here, GW2 uses the GRE tunnel to forward the packet to the destination Compute Node and, ultimately, to
the destination, that is, VM-B.
Although not shown in this slide, routing and forwarding in the reverse direction happens in a similar way.
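The label operations in steps 3 through 6 can be traced with a small sketch; the per-hop swap tables below simply restate the values from the slide.

```python
# Sketch: trace the VPN label as it is swapped hop by hop from the DC-1
# gateway to the DC-2 gateway, using the labels from the example. The
# outer transport label pushed by PE1 is omitted for brevity.

swap_tables = {
    "GW1": {417: 317},   # strips GRE, swaps 417 -> 317 toward PE1
    "PE1": {317: 217},   # swaps, then pushes the outer transport label
    "PE2": {217: 117},   # swaps toward GW2
    "GW2": {117: 17},    # swaps to the label the DC-2 vRouter advertised
}

label = 417                                  # label pushed by DC-1's vRouter
for hop in ("GW1", "PE1", "PE2", "GW2"):
    label = swap_tables[hop][label]
print(label)   # 17, the label VM-B's vRouter expects
```

Ending up with label 17 is exactly what allows the destination vRouter to map the packet into VM-B's VRF, since 17 is the label it originally advertised in step 1.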


Route Distribution: E-VPN


The Contrail system is inspired by and conceptually very similar to standard MPLS L3VPNs (for L3 overlays) and MPLS EVPNs
(for L2 overlays).
The route distribution with E-VPN is very similar to the case of L3 overlay considered previously, except it uses MACs instead
of IP addresses. As before, a GRE header is pushed onto the IP version 4 (IPv4) packet, sent to the destination,
decapsulated, and a VPN label lookup performed. However, instead of a destination IP address, a destination MAC address
is used to forward the packet on to the destination VM.
Note that E-VPN should not be confused with VPLS. One of the enhancements that EVPN brings over VPLS is that MAC learning is performed in the control plane rather than in the data plane.
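A minimal sketch of that difference, with invented MAC addresses, tunnel endpoint, and label: the table is filled by control-plane advertisements, and unknown destinations are not flooded.

```python
# Sketch: control-plane MAC learning (EVPN style). Entries come from
# advertisements rather than data-plane flood-and-learn, so an unknown
# MAC results in no flooding at all.

mac_table = {}   # MAC -> (tunnel endpoint, label), learned via control plane

def advertise_mac(mac, tunnel_endpoint, label):
    mac_table[mac] = (tunnel_endpoint, label)

def forward_l2(dst_mac):
    # Known MACs are tunneled directly; unknown MACs are not flooded.
    return mac_table.get(dst_mac, "unknown-no-flood")

advertise_mac("02:aa:bb:cc:dd:01", "151.10.10.1", 23)
print(forward_l2("02:aa:bb:cc:dd:01"))   # ('151.10.10.1', 23)
print(forward_l2("02:aa:bb:cc:dd:02"))   # unknown-no-flood
```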


Extending a Contrail Cluster to Physical Routers


Device Manager is a configuration node daemon used to manage physical routers in the Contrail system. The Device
Manager daemon listens to configuration events from the API server, creates any necessary configurations for all physical
routers it is managing, and programs those physical routers.
In Contrail Release 2.10 and later, it is possible to extend a cluster to include physical Juniper Networks MX Series routers
and other physical routers that support the Network Configuration (NETCONF) Protocol. You can configure physical routers to
be part of any virtual networks configured in the Contrail cluster, facilitating communication between the physical routers
and the Contrail control nodes. Contrail policy configurations can be used to control this communication. The Contrail Web
user interface can be used to configure a physical router into the Contrail system.
On a Juniper MX Series device and other physical routers, Device Manager in Contrail Release 2.10 and later can do the
following:
• Create configurations for physical interfaces and logical interfaces as needed.
• Create VRF table entries as needed by the configuration.
• Add interfaces to VRF tables as needed.
• Create public VRF tables corresponding to external virtual networks.
• Create BGP protocol configuration for internal or external BGP groups as needed, adding iBGP and eBGP peers
in appropriate groups.
• Program route-target import and export rules as needed by policy configurations.
• Create policies and firewalls as needed.
• Configure Ethernet VPNs (EVPNs).
Device Manager is discussed in more detail in the following chapter of this class.
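For illustration only, the kind of Junos routing-instance snippet that Device Manager pushes over NETCONF might be generated as follows; the template and all names and values below are invented, not Device Manager's actual code.

```python
# Hypothetical sketch: render a Junos VRF routing-instance snippet from
# the desired state, the sort of text a management daemon could push to
# an MX router over NETCONF. All names and values are illustrative.

def vrf_config(name, rd, rt, interface):
    return (
        "routing-instances {\n"
        f"    {name} {{\n"
        "        instance-type vrf;\n"
        f"        interface {interface};\n"
        f"        route-distinguisher {rd};\n"
        f"        vrf-target {rt};\n"
        "    }\n"
        "}\n"
    )

snippet = vrf_config("tenant-a-vrf", "64512:1", "target:64512:1", "ge-0/0/1.100")
print(snippet)
```

Generating configuration from a single desired-state model is what lets Device Manager keep many physical routers consistent without anyone editing them by hand.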


Contrail Stack: Configuration Nodes


The next layer in the stack is the Configuration Node.


Configuration Node: Overview


The slide shows a detailed look at the inner workings of a typical Configuration Node. Multiple Configuration Nodes in
Active-Active High Availability mode are supported.
As before, the API server provides a northbound REST interface and the orchestration system uses this for provisioning. That
is, this mechanism converts the schema from a high-level data model to a low-level data model. After the data model is
published, that data is converted into more schemas that other parts of the system can understand. At this point, an IF-MAP
server publishes the appropriate schemas to the Control Nodes. The Control Nodes, in turn, use that information to actually
distribute the intent, defined up in the orchestration system, to the actual devices doing the packet forwarding.
Some software components of Configuration Nodes include:
• A Redis message bus that facilitates communications among internal components.
• A Cassandra database that provides persistent storage of configuration. Cassandra is a fault-tolerant and horizontally scalable database. Note that in the slide “DHT” stands for “distributed hash table.”
• ZooKeeper, which is used for allocation of unique object identifiers and to implement transactions.


SDN as a Compiler: Part 1


This slide shows a broad overview of how software-defined networking (SDN) can be thought of as a compiler. We go into
greater detail on the next slide.


SDN as a Compiler: Part 2


You can think of the Configuration Node as a large schema transformer. In other words, it acts much like a compiler does
when programming. Typically, a compiler takes source code and transforms it into something that is machine executable.
The Configuration Node does much the same thing. It takes an abstract data model you have designed using a language
that’s common and easy to understand (the source code) and transforms (compiles) that information into something the
underlay infrastructure can understand. For instance, let’s assume you have defined a network within the OpenStack UI. You
give the network a name and associate a net block with it and you click, more or less, a create button. It’s important to note
that you never describe HOW the network gets built. You simply define some of the parameters and click create.
Now, when you click that create button, the Configuration Node intercepts that request, interprets it, and creates all the
necessary schema needed to create, for example, an L3VPN, which is a virtual network. The key idea here is that the person
configuring this network entity has no idea how it is being created—the creator just needs to know that it was created, so
VMs can be attached to that entity. In short, the job of the Configuration Node is to take information from northbound and
transform it into something southbound can understand.
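A toy version of this compilation step, assuming invented field names and allocation rules (in the real system, for example, unique identifiers come from ZooKeeper and the output is a full low-level schema rather than a flat dictionary):

```python
# Sketch of "SDN as a compiler": the user supplies only a name and a net
# block; the compiler emits the low-level objects (routing instance,
# route target, VNI) the underlay needs. Values here are illustrative.
import itertools

_net_ids = itertools.count(1)   # stand-in for a cluster-wide ID allocator

def compile_network(request, asn=64512, vni_base=1000):
    net_id = next(_net_ids)
    return {
        "routing_instance": f"{request['name']}-ri",
        "route_target": f"target:{asn}:{net_id}",
        "vxlan_vni": vni_base + net_id,
        "subnet": request["cidr"],
    }

low = compile_network({"name": "blue", "cidr": "10.1.1.0/24"})
print(low["route_target"], low["vxlan_vni"])   # target:64512:1 1001
```

The user only stated the "what" (a network named blue with a net block); everything in the output, the "how", was derived by the compiler.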


Contrail Stack: Analytics Engine


The next part of the stack is the analytics engine.


Analytics Node: Overview


The Analytics Nodes are responsible for the collection of system state information, usage statistics, and debug information
from all of the software modules across all of the nodes of the system. The Analytics Nodes store the data gathered across
the system in a database that is based on the Apache Cassandra open source distributed database management system.
The database is queried by means of an SQL-like language and REST APIs. The Sandesh protocol, which is XML over TCP, is
used by all the nodes to deposit data in the NoSQL database. For example, all vRouters dump their data into the database in
this manner. Every packet, every byte, and every flow is collected, so there is 100% visibility of all traffic going through the
vRouters. At any point, it is possible to view all live flows traversing a vRouter. Furthermore, once the flow data has been
dumped into the Cassandra database, you can use the aforementioned query interfaces to mine the data for historical or
debugging purposes, capacity management, billing, heuristics, etc. System state information collected by the analytics
nodes is aggregated across all of the nodes, and comprehensive graphical views allow the user to get up-to-date system
usage information easily.
Debug information collected by the analytics nodes includes the following types:
• System log (syslog) messages: Informational and debug messages generated by system software components.
• Object log messages: Records of changes made to system objects such as virtual machines, virtual networks,
service instances, virtual routers, BGP peers, routing instances, and the like.
• Trace messages: Records of activities collected locally by software components and sent to analytics nodes only
on demand.
• Statistics information related to flows, CPU and memory usage, and the like is also collected by the analytics
nodes and can be queried at the user interface to provide historical analytics and time-series information. The
queries are performed using REST APIs.
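A sketch of the kind of time-range flow query described above, using an in-memory list as a stand-in for the Cassandra-backed store and made-up flow records:

```python
# Sketch: select flow records in a time window that match field filters,
# the shape of query the analytics REST API answers. The records and the
# in-memory "database" are invented stand-ins.

flow_records = [
    {"t": 100, "src": "10.1.1.1", "dst": "10.1.1.2", "bytes": 1200},
    {"t": 160, "src": "10.1.1.1", "dst": "10.1.1.2", "bytes": 800},
    {"t": 400, "src": "10.1.1.2", "dst": "10.1.1.1", "bytes": 300},
]

def query(records, start, end, **match):
    """Return records with start <= t < end whose fields match the filters."""
    return [r for r in records
            if start <= r["t"] < end
            and all(r.get(k) == v for k, v in match.items())]

hits = query(flow_records, 0, 200, src="10.1.1.1")
print(sum(r["bytes"] for r in hits))   # 2000 bytes in the first window
```

Because records are keyed for time-series access, the same pattern supports the historical, billing, and capacity-planning queries mentioned above.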


Deployment Options
The slide highlights the topic we discuss next.


Nova Docker Support in Contrail


Operating-system-level virtualization is a server-virtualization method where the kernel of an operating system allows for
multiple isolated user-space instances, instead of just one. Such instances (often called containers) may look and feel like a
real server from the point of view of its users and applications.
Docker is an open-source project that automates the deployment of applications inside software containers, by providing an
additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker uses resource
isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent containers to run within
a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
Starting with Contrail Release 2.20, it is possible to configure a compute node in a Contrail cluster to support Docker
containers.
OpenStack Nova Docker containers can be used instead of virtual machines for specific use cases. DockerDriver is a driver
for launching Docker containers. Contrail creates a separate Nova availability zone (nova/docker) for compute nodes
deployed with DockerDriver.
The following are limitations of deploying Nova Docker with Contrail:
• The Docker containers cannot be seen in the virtual network controller console because console access is not
supported with Nova Docker.
• Docker images must be saved in Glance using the same name as used in the “docker images” command. If the
same name is not used, you will not be able to launch the Docker container.
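The second limitation can be illustrated with a trivial check. The image names below are hypothetical; the point is only that the Glance image name must exactly match the name Docker reports, or the container launch fails.

```python
# Sketch of the Glance/Docker naming constraint described above. The image
# names are hypothetical; the Glance name must match the name Docker reports
# (as listed by `docker images`), or the container cannot be launched.

def can_launch(glance_image_name, local_docker_images):
    """Return True when the Glance image name matches a Docker image name."""
    return glance_image_name in local_docker_images

# Names as they would appear in the output of `docker images`:
docker_images = {"ubuntu:14.04", "nginx:latest"}

print(can_launch("ubuntu:14.04", docker_images))  # matching name: launch works
print(can_launch("my-ubuntu", docker_images))     # mismatched name: launch fails
```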


Contrail Storage with Cinder and Ceph DB


Starting with Contrail Release 2.00, Contrail provides a storage support solution using OpenStack Cinder configured to work
with Ceph. Ceph is a unified, distributed storage system whose infrastructure provides storage services to Contrail. The
Contrail solution provides a validated Network File System (NFS) storage service; however, it is not based on the CephFS
distributed file system.
The Contrail storage solution has the following features:
• Provides storage class features to Contrail clusters, including replication, reliability, and robustness.
• Uses open source components.
• Uses Ceph block and object storage functionality.
• Integrates with OpenStack Cinder functionality.
• Does not require virtual machines (VMs) to configure mirrors for replication.
• Allows nodes to provide both compute and storage services.
• Provides easy installation of basic storage functionality based on Contrail roles.
• Provides services necessary to perform virtual machine migrations between compute nodes, and supports both
migratable and non-migratable virtual machines.
• Provides a Contrail-integrated user interface from which the user can monitor Ceph components and drill down
for more information about components.
The following are basic interaction points between Contrail and the storage solution:
• Cinder volumes must be manually configured prior to installing the Contrail storage solution. The Cinder
volumes can be attached to virtual machines (VMs) to provide additional storage.
• The storage solution stores virtual machine boot images and snapshots in Glance, using Ceph object storage
functionality.
• All storage nodes can be monitored through a graphical user interface (GUI).
• It is possible to migrate virtual machines that have ephemeral storage in Ceph.
The Contrail storage solution is supported only with the Ubuntu operating system. Before installing a compute storage node,
ensure that the following software has been downloaded:
• The storage Debian package: contrail-storage-packages_x.xx-xx~xxxxxx_all.deb.
• NFS VM qcow2 image from Juniper.


Support for TOR Switches and OVSDB


Contrail Release 2.1 and later supports extending a cluster to include bare metal servers and other virtual instances
connected to a top-of-rack (ToR) switch that supports the Open vSwitch Database Management (OVSDB) Protocol. The bare
metal servers and other virtual instances can belong to any of the virtual networks configured in the Contrail cluster,
facilitating communication with the virtual instances running in the cluster. Contrail policy configurations can be used to
control this communication.
The OVSDB protocol is used to configure the ToR switch and to import dynamically learned addresses. VXLAN encapsulation is
used in the data plane communication with the ToR switch.
A new node, the ToR services node (TSN), is introduced and provisioned as a new role in the Contrail system. The TSN acts as
the multicast controller for the ToR switches. The TSN also provides DHCP and DNS services to the bare metal servers or
virtual instances running behind ToR ports.
The TSN receives all the broadcast packets from the ToR, and replicates them to the required compute nodes in the cluster
and to other EVPN nodes. Broadcast packets from the virtual machines in the cluster are sent directly from the respective
compute nodes to the ToR switch.
The TSN can also act as the DHCP server for the bare metal servers or virtual instances, leasing IP addresses to them, along
with other DHCP options configured in the system. The TSN also provides a DNS service for the bare metal servers. Multiple
TSN nodes can be configured in the system based on the scaling needs of the cluster.
A ToR agent provisioned in the Contrail cluster acts as the OVSDB client for the ToR switch, and all of the OVSDB interactions
with the ToR switch are performed by using the ToR agent. The ToR agent programs the different OVSDB tables onto the ToR
switch and receives the local unicast table entries from the ToR switch. The typical practice is to run the ToR agent on the
TSN node.
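The TSN replication behavior described above can be sketched as follows. Node names and the membership table are hypothetical, and a real TSN works from its EVPN state rather than a simple dictionary.

```python
# Sketch of the TSN replication logic described above: broadcast frames from a
# ToR switch are replicated to the compute nodes (and other EVPN nodes) that
# participate in the same virtual network. All names are hypothetical.

def replicate_broadcast(source_tor, vn, members):
    """Return the nodes a TSN forwards a ToR broadcast to.

    members maps a virtual network name to the set of nodes
    (compute nodes and other EVPN nodes, including ToRs) in it.
    """
    return sorted(n for n in members.get(vn, set()) if n != source_tor)

members = {"vn-blue": {"tor-1", "compute-1", "compute-2", "tor-2"}}
print(replicate_broadcast("tor-1", "vn-blue", members))
# the compute nodes and the other ToR receive the flooded frame; tor-1 does not
```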


Using OVSDB with Contrail


The routes to MAC addresses are stored in OVSDB tables in each physical switch; OVSDB has a built-in replication capability
that is used to copy routes between switches via a new Contrail component called the ToR Service Node (TSN). Each TSN
mediates between Contrail’s XMPP-based protocol and the OVSDB protocol on switches so that MAC routes can be
exchanged between Contrail virtual networks and physical networks configured in switches.
The Contrail TSN provides mediation for route exchanges between OVSDB (used on network devices) and XMPP (used by
Contrail). In addition, the destination for broadcast frames in the VTEPs in each switch is set to the IP address of a TSN,
which provides proxy services for ARP, DHCP, and DNS. Such packets are sent to the TSN in VXLAN encapsulation. Since
TSNs only deal with control plane interactions and not data plane traffic, they scale well. A single TSN can support OVSDB
sessions with up to 128 switches. The scalability of the TSN solution contrasts with software gateway implementations that
pass user traffic.
A TSN contains four types of software components:
• OVSDB Client: Each client maintains a session with the OVSDB server on a switch. Route updates and
configuration changes are sent over the OVSDB session.
• ToR Agent: The ToR agent maintains an XMPP session with the Contrail Controller and mediates between the
Contrail XMPP messages and OVSDB.
• vRouter Forwarder: The forwarding component of a vRouter traps and responds to broadcast packets that
VTEPs in switches direct towards the IP address of a TSN inside VXLAN encapsulation.
• ToR Control Agent: The ToR control agent is a vRouter that provides proxy services (DHCP, DNS, and ARP) for
broadcast traffic arriving over VXLAN from servers attached to switches. Response data is provided by either the
ToR Control Agent itself, or by the Contrail Controller via an XMPP session.
Each TSN instance is implemented on an OpenStack compute node with Contrail Networking installed, but with the Nova
compute service disabled, preventing VMs from being launched on these servers.
When a switch learns a new MAC address on an interface that is configured in a VTEP, it creates a bridge table entry for the
MAC on that interface. A corresponding entry in the OVSDB table is created, which causes a MAC route to be sent via OVSDB
protocol to the TSN. The route specifies a VXLAN encapsulation tunnel where the next hop is the IP address of the switch,
and the VNI will be that of the VTEP on the switch to which the server is connected. When a route arrives at the TSN, it is
converted to an XMPP message that is sent to the Contrail Controller, which sends the route to vRouters that have VRFs with
matching VNI. The TSN also sends the routes to other switches that are running OVSDB and have VTEPs with the same VNI.
Similarly, when VMs are created using OpenStack, routes to the new VMs are sent by the Contrail Controller via the TSN to
each switch with a VTEP with matching VNI. The routes specify VXLAN tunnels with the next hop being the IP address of the
vRouter where the destination VM is running, and where the VNI value of the Contrail virtual network is being used. When
VXLAN traffic arrives at a vRouter, the VNI is used to identify which VRF should be used for MAC lookup in order to find the
virtual interface to which the inner Ethernet frame should be sent.
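The translation step at the heart of this exchange can be sketched as below. The field names are illustrative, not the real OVSDB or XMPP schemas: a MAC learned via OVSDB becomes a route whose next hop is the switch's VTEP address and whose VNI identifies the virtual network, and the route is distributed only where the VNI matches.

```python
# Sketch of the OVSDB-to-XMPP translation described above. Field names are
# illustrative simplifications of the real OVSDB tables and XMPP messages.

def ovsdb_to_route(mac, switch_vtep_ip, vni):
    """Translate an OVSDB unicast-MAC entry into an EVPN-style route."""
    return {
        "mac": mac,
        "encapsulation": "vxlan",
        "next_hop": switch_vtep_ip,   # VXLAN tunnel endpoint: the switch
        "vni": vni,                   # the VTEP's VNI on the switch
    }

def receivers(route, vrf_vni_by_vrouter):
    """vRouters whose VRF carries the same VNI receive the route."""
    return sorted(v for v, vni in vrf_vni_by_vrouter.items()
                  if vni == route["vni"])

route = ovsdb_to_route("52:54:00:aa:bb:cc", "10.0.0.5", 5001)
print(receivers(route, {"vrouter-1": 5001, "vrouter-2": 7002}))
# only the vRouter with a matching VNI learns the MAC route
```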


We Discussed:
• Contrail components, in depth;
• Contrail's control and data planes; and
• Additional options for deploying Contrail.


Review Questions
1.

2.

3.

Answers to Review Questions
1.
The three main components of Contrail are the configuration node, control node, and the analytics node.
2.
MPLS over GRE, MPLS over UDP, or VXLAN can be used for the overlay tunnels.
3.
Horizon (dashboard), Neutron (networking), Nova (compute), Glance (image storage), Keystone (identity), Cinder (block storage),
Swift (object storage), Ceilometer (telemetry), and Heat (orchestration).


Chapter 4: OpenStack and Contrail Configuration



We Will Discuss:
• How a tenant is created internally;
• How to manage network policies;
• How to create and assign floating IPs;
• The Device Manager; and
• Using OpenStack and Contrail APIs.


Tenant Creation Walkthrough


The slide lists the topics we will discuss. We discuss the highlighted topic first.


OpenStack and Contrail GUI Review


We start by reviewing the fact that the Contrail solution uses two GUIs: the OpenStack Dashboard and the Contrail web GUI
itself. Some tasks can be performed from both GUIs, which raises the question—which one should you use? There is no
universal answer; however, because the Contrail GUI typically has more features where the functionality overlaps, it is
reasonable to prefer it for all tasks except those that can only be performed elsewhere. For example, network policies can be
managed from both GUIs, but options for enabling Services or Mirroring are only available in the Contrail GUI.


VM Creation Zoom-In
Contrail and OpenStack work together in the overall SDN solution. The slide shows details about component interaction
during the process of VM creation.
Note that because OpenStack’s Neutron is built to be a pluggable open architecture, plug-ins can be used to add
functionality to OpenStack's networking platform. One example of a vendor plug-in is Contrail, which interfaces directly with
Neutron.
The steps of VM creation are as follows:
1. When a user requests a new VM, OpenStack submits a VM creation request to the Nova agent on the server
where the VM is to reside.
2. The Nova agent, in turn, requests network configuration via the Neutron API in OpenStack.
3. This request is passed to the Contrail Controller via the Contrail plug-in for Neutron. The plug-in translates the
request from the OpenStack object model and API to the object model and API of Contrail. Each API call
specifies the host (physical server) on which the VM will be created, and the network to which each VM
interface should belong.
4. The Contrail Controller determines whether a new VRF must be created to support a new network on a host,
configures each VRF to be attached to specific virtual interfaces of the new VM, and installs routes in each VRF
to which a new interface has been attached. The installed routes provide connectivity to other systems as
allowed by network policies defined in the Contrail Controller.
5. Once the virtual machine is created, the Nova agent informs the vRouter agent, which configures the virtual
network for the newly created virtual machine (e.g., new routes in the routing instance).
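The steps above can be condensed into a small sketch. The function below is a deliberately simplified model: the component boundaries and names are hypothetical, and real Contrail installs routes per VM interface and applies network policies, not a bare per-VM entry.

```python
# Condensed sketch of the five VM-creation steps above. Components are modeled
# as one function; names are hypothetical, and authentication, scheduling, and
# policy evaluation are elided.

def create_vm(vm_name, host, network, vrfs):
    """Simulate Nova -> Neutron -> Contrail plug-in -> controller -> vRouter."""
    # Steps 1-3: the request reaches the Contrail controller, carrying the
    # target host and the network for the VM's interface.
    vrf_key = (host, network)
    # Step 4: the controller creates the VRF on that host if it is missing...
    routes = vrfs.setdefault(vrf_key, set())
    # ...and installs a route for the new interface.
    routes.add(vm_name)
    # Step 5: the vRouter agent on the host now knows how to reach the VM.
    return vrfs

vrfs = {}
create_vm("vm-a1", "server-1", "vn-a", vrfs)
create_vm("vm-a2", "server-2", "vn-a", vrfs)
print(vrfs)  # one VRF per (host, network), each holding its local VM routes
```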


Tenant Creation: Logical Topology


The slide shows the logical topology that we are going to be using over the next couple of slides. Note how there are multiple
VMs for VN A and VN B, and also how a virtual firewall separates the two VNs. Then, finally we will discuss how the VNs are
connected to the physical gateway router to reach external physical networks.
This topology is common in data centers that use a multi-tier topology to provide services. For example, the VMs in VN B
might be the front-end servers that host the Web server portion of the service, and the VMs in VN A might be the back-end
servers that house the databases.
Also note that you must create network policies to connect VNs together.


Tenant Creation: Physical Topology


We begin with a physical topology, which includes virtualized servers with hypervisors that have a Contrail vRouter. These
virtualized servers are basically compute nodes from Contrail’s perspective. Next, we have the OpenStack and Contrail
controller nodes.
The underlay network, which includes Layer 2 switches in our example, provides connectivity between the virtualized
servers, the OpenStack and Contrail controller nodes, and the gateway router. Note that although the diagram on the slide
shows the underlay switches physically connecting only the bottom two virtualized servers and the gateway router, they
actually provide physical connectivity for all devices in the diagram. Additional underlay switches and connections
were left out of the diagram for brevity.


Mapping of Logical to Virtual Topology


Now that we have established the physical and logical topologies, we must map the logical topology virtually within the
physical topology. The next couple of pages go into detail on the internal workings of how Contrail completes this mapping.


Creating VN A
We start with an empty topology, and we must first create VN A. As you will see later in this chapter, you can define VNs using
the Contrail Web UI. An important detail to be aware of at this point is that you cannot launch a VM instance without creating
a VN first, but the VN is not instantiated anywhere in the infrastructure until a VM instance has been deployed. This
means that you define the whole topology ahead of time, along with all the rules that connect it; none of it is
programmed into the infrastructure until a VM that needs it is spun up. Notice how the slide shows that
even though we have defined VN A, it is not programmed on any of the virtualized servers in the topology, but it is in the
Contrail controller because you must reference it when you deploy a VM instance. This concept applies to the entire topology,
no matter which VM instance you are deploying.


Spinning Up VM A-1
Now that you have created VN A, you can create VM A-1. The slide shows how the process of deploying a VM instance occurs.
First, you must create a VM and attach a VN to it through OpenStack. You cannot instantiate a VM instance using the Contrail
Web UI—you must use the OpenStack Web UI. Once you launch a VM instance through OpenStack, the Nova component,
which handles the compute orchestration, goes through its iterations, and chooses a server to spin up the VM. Note that
Nova uses an internal algorithm to choose a virtualized server that has more resources than the other available servers.
However, you can manually specify the name of the compute node to choose which virtualized server spins up the VM.


Assigning the VN
OpenStack now communicates with the Contrail controller, through the Neutron plug-in, and tells it to associate VM A-1 with
VN A. Then, through the Extensible Messaging and Presence Protocol (XMPP), Contrail configures the vRouter to associate
VM A-1 with VN A. Currently, the only routing information in VN A is how to get to VM A-1.


Spinning Up VM A-2
Now VM A-2 is spun up. This process is exactly the same as spinning up VM A-1 in which you must launch a VM instance and
associate it with a network. In this case, you are spinning up VM A-2 and associating it with VN A. The only difference in this
case is that OpenStack uses Nova to assign the VM to a virtualized server, or compute node, that does not have
any VMs currently running.


Assigning the VN to VM A-2


Just as with VM A-1, OpenStack communicates with the Contrail controller, through the Neutron plug-in, and tells it to
associate VM A-2 with VN A. Then, Contrail uses XMPP to configure the vRouter to associate VM A-2 with VN A. However, VM
A-1 and VM A-2 cannot communicate with each other just yet.


Exchanging Routes and Creating Tunnels


Now that two VMs in VN A have been instantiated, they exchange routes between their respective vRouters through XMPP by
advertising those routes to the Contrail control node. In turn, the Contrail control node advertises the corresponding routes
to the necessary vRouter through XMPP. Next, the vRouters create a tunnel connecting the two vRouters together to facilitate
communication between the VMs. Note that the tunnel is a stateless tunnel which only requires connectivity between the
two vRouters. This connectivity is provided by the physical underlay network.
Note that communication within a VN does not require a policy that permits the traffic. A policy is only required when traffic
must travel between VNs. At this point, VM A-1 and VM A-2 can communicate with each other without further steps being
taken.
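The per-network route distribution described above can be sketched as follows. The names are hypothetical, and the real exchange uses XMPP messages and per-VRF state rather than a flat list; the point is that the control node reflects a route only to vRouters that participate in the same virtual network.

```python
# Sketch of the control node reflecting VM routes between vRouters: a route
# is advertised only to other vRouters with an interface in the same VN.
# All names are hypothetical.

def reflect(advertisements):
    """advertisements: list of (vrouter, vn, vm_route) tuples.
    Returns {vrouter: set of remote routes it learns}."""
    learned = {v: set() for v, _, _ in advertisements}
    for src, vn, route in advertisements:
        for dst, dst_vn, _ in advertisements:
            if dst != src and dst_vn == vn:
                learned[dst].add(route)
    return learned

ads = [("vrouter-1", "vn-a", "vm-a1"),
       ("vrouter-2", "vn-a", "vm-a2"),
       ("vrouter-3", "vn-b", "vm-b1")]
print(reflect(ads))
# vrouter-1 and vrouter-2 learn each other's VN A routes; vrouter-3 learns none
```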


Spinning Up VM A-3
Now VM A-3 is spun up. This process is exactly the same as spinning up VM A-1 and VM A-2 in which you must launch a VM
instance and associate it with a network. In this case, you are spinning up VM A-3 and associating it with VN A. Again,
OpenStack uses Nova to assign the VM to a virtualized server, or compute node, that does not have any VMs currently
running.


Assigning the VN to VM A-3


Just as with VM A-1 and VM A-2, OpenStack communicates with the Contrail controller, through the Neutron plug-in, and tells
it to associate VM A-3 with VN A. Then, Contrail uses XMPP to configure the vRouter to associate VM A-3 with VN A. However,
VM A-3 cannot communicate with VM A-1 and VM A-2 yet. Just as with the process to set up communication between VM A-1
and VM A-2, routes must be exchanged and tunnels must be created before communication can occur.


Exchanging Routes and Creating Tunnels


Now that all three VMs in VN A have been instantiated, they exchange routes between their respective vRouters through
XMPP by advertising those routes to the Contrail control node. In turn, the Contrail control node advertises the corresponding
routes to the necessary vRouters through XMPP. Next, the vRouters create tunnels connecting the vRouters of VM A-1 to VM
A-3 and VM A-2 to VM A-3 to facilitate communication between the VMs.
Note how the Logical portion of the slide now shows that VN A, and its associated VMs, are now set up and ready to
communicate with other devices. Note that even if VN B were up and running, VN A could not communicate with VN B without a
policy in place that permits it to do so. This lack of communication also means that VN A cannot communicate with the
gateway router or the outside world.


Creating Customer B
Customer B is much like customer A; you must first create VN B, then launch the associated VMs. The VMs associated with
VN B are placed on the compute node, or virtualized servers. Then, routes are exchanged through XMPP, and the tunnels are
created to provide reachability between the VMs in VN B.
Note that even at this point a policy has not been created to facilitate communication between the two VNs. This
absence of a policy stops VN A from communicating with VN B; it is not what prevents VN B from reaching hosts on the
Internet. A network policy is used to permit inter-VN communication and to modify intra-VN traffic—intra-VN
communication does not need a policy to permit the traffic. However, further steps that do not relate to a policy
must be taken to facilitate the communication between VN B and hosts on the Internet.


Connecting Virtual Networks with the Firewall VM


Now that both A and B virtual networks have been spun up on the compute nodes, you must spin up the virtualized firewall—
VM FW. The process for spinning up VM FW is similar to spinning up any other VM in this scenario in which you launch the VM
instance and associate VNs with the VM—VN A and VN B.


Attaching the VNs to the VM FW Instance


Now that the VM FW instance has been launched, the Neutron plug-in directs the Contrail controller to attach the instance to
VN A and VN B. Then, the Contrail controller creates the necessary routing instance on the vRouter that is associated with
the VM FW instance. These actions result in the VM FW instance having two interfaces—one in VN A, and one in VN B.
However, traffic cannot flow through the VM FW instance yet.


Directing Traffic Through the VM FW Instance


To direct traffic through the VM FW instance you must create a policy that says, if traffic needs to go between VN A and VN B,
the traffic must first go through the service instance called VM FW. Sending inter-VN traffic through a service like this is
called service chaining (the topic of service chaining is covered in detail in another chapter of this course).
The slide depicts the application of the policy that permits traffic between VN A and VN B, and applies the VM FW service.
The application of the VM FW service means that any traffic that passes between the two networks must go through the
VM FW instance first.
Once the policy is in place, the vRouters on each of the compute nodes are programmed to create the necessary tunnels and
exchange the necessary routes. As a result, for any VM in VN A to send traffic to another VM in
VN B, the next hop is the VM FW instance.
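The forwarding decision this produces can be sketched as below. The names are hypothetical, and real service chaining operates on routes and route targets rather than a lookup table; the sketch only shows that inter-VN traffic is steered through the service instance while intra-VN traffic goes direct.

```python
# Sketch of the next-hop decision produced by the service-chaining policy
# described above. All names are hypothetical.

def next_hop(src_vn, dst_vn, dst_vm, service_chains):
    """Return where a packet is sent first."""
    chain = service_chains.get(frozenset((src_vn, dst_vn)))
    if src_vn != dst_vn and chain:
        return chain      # inter-VN: the service instance is the next hop
    return dst_vm         # intra-VN: tunnel straight to the remote vRouter

# frozenset keys make the chain apply in both directions, matching a
# bidirectional policy rule:
chains = {frozenset(("vn-a", "vn-b")): "vm-fw"}
print(next_hop("vn-a", "vn-b", "vm-b3", chains))  # via the firewall
print(next_hop("vn-a", "vn-a", "vm-a2", chains))  # direct
```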


Traffic Flows Between VNs


Now that the routes have been exchanged and all the necessary tunnels have been created, traffic can flow between the two
VNs. The slide shows an example of sending traffic from the VM A-1 instance to the VM B-3 instance. The traffic is first sent
to the VM FW instance before it is sent on to the VM B-3 instance. Note how complex this process is when looking at how
these actions are accomplished on the Physical section of the slide, and how simple this process is when examining how it
works on the Logical section of the slide. This example shows how important abstraction is. Manually configuring each
tunnel and all route exchanges for this example would take an incredible amount of time and work, whereas using the
automated features of Contrail reduces the time and amount of work required immensely.
Another huge benefit of this type of abstraction is that VMs can be physically moved from one virtualized server to another,
and the virtual topology moves with the VMs. This means that even though a move occurs in the physical network,
no such move occurs in the logical network. This flexibility is a massive benefit to everyone involved whenever a
change must occur in the network.


Connect Virtual Networks with the Physical Network


The last part of the example is to connect VN A and VN B to the physical network. The slide depicts beginning this step by
first configuring the gateway router as a BGP neighbor on the Contrail controller. Next, you must configure the dynamic GRE
tunnel information and the routing instance. Note that the routing instance should be a virtual routing and forwarding (VRF)
instance, which allows the gateway router to receive the MPLS traffic that is being sent through the overlay network. We
discuss this configuration in more detail in a later section of this chapter.
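A minimal sketch of the gateway-side Junos configuration this step implies is shown below. All group names, addresses, route-distinguisher, and target values are placeholders, and the exact statements may differ by platform and release; treat this as an outline, not a working configuration.

```
/* Sketch only: names, addresses, and target values are placeholders */
set protocols bgp group contrail type internal
set protocols bgp group contrail local-address 10.1.1.1
set protocols bgp group contrail family inet-vpn unicast
set protocols bgp group contrail neighbor 10.1.1.10

/* Dynamic GRE tunnels toward the compute nodes' vRouters */
set routing-options dynamic-tunnels contrail source-address 10.1.1.1
set routing-options dynamic-tunnels contrail gre
set routing-options dynamic-tunnels contrail destination-networks 10.1.1.0/24

/* VRF routing instance that imports the virtual network's routes */
set routing-instances public instance-type vrf
set routing-instances public interface ge-0/0/0.0
set routing-instances public route-distinguisher 10.1.1.1:100
set routing-instances public vrf-target target:64512:10000
set routing-instances public vrf-table-label
```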


Exchanging Routes and Tunnel Creation


The gateway router forms a BGP session with the Contrail controller and exchanges BGP routes with it. Next, the gateway
router begins to form the tunnels with the necessary vRouters. In our example, the gateway router only needs to form tunnels
with the vRouters that are directly connected to VM instances of VN B.


Exchanging Routes Through XMPP


The Contrail controller exchanges the routes it learned from the gateway router with the vRouters that are associated with
VN B. Then, the tunnel formation that the gateway router began can finish. Now, the VM instances in both VNs can
communicate with hosts on the Internet.


Creating and Managing Network Policies


The slide highlights the topic we discuss next.


Network Policies
A network policy describes which traffic is permitted, or not permitted, to pass between virtual networks. All traffic between
VNs is denied by default, and intra-VN traffic is allowed. Note that when you create a network policy you must then associate
it with a VN to have any effect (several policies may be associated with one VN at the same time). Each policy contains a list
of rules that are evaluated in the top-down fashion, and evaluation ends when the first match is found (this is also known as
“terminal behavior”).
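This first-match, terminal evaluation can be sketched as below, with the rule fields simplified to protocol and destination port (real rules also carry direction, networks, and source ports).

```python
# Minimal sketch of the policy evaluation order described above: rules are
# checked top-down and the first match wins (terminal behavior). Rule fields
# are simplified to protocol plus destination port.

def evaluate(rules, proto, dport):
    """Return the action of the first matching rule, or 'deny' if none match."""
    for rule in rules:                        # top-down
        if rule["proto"] in ("any", proto) and rule["dport"] in ("any", dport):
            return rule["action"]             # terminal: stop at first match
    return "deny"                             # inter-VN default

rules = [
    {"proto": "tcp", "dport": 22,    "action": "deny"},
    {"proto": "tcp", "dport": "any", "action": "pass"},
]
print(evaluate(rules, "tcp", 22))  # first rule wins even though rule 2 matches
print(evaluate(rules, "tcp", 80))  # falls through to rule 2
print(evaluate(rules, "udp", 53))  # no match: default deny
```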


Creating Network Policies in OpenStack


To begin creating a network policy within the OpenStack Web UI, first make sure you select the correct project. Network
policies are project specific. Then, under the Networking workspace, you must select the Network Policies tab—as
shown on the slide. Next, you can click the Create Network Policy button to begin. Note that when you click the
Create Network Policy button, you are only able to name the policy in the resulting window. After you create the
policy, you can add rules to tell the policy how to treat traffic that matches the policy.


Adding Rules
Once you create a policy you can add rules to the policy by clicking the Edit Rules button. Then, you are presented with
the Edit Network Policy Rules window, where you can create rules that pass or deny traffic based on
the criteria in each rule. Note that you can develop very specific rules and add multiple rules to each policy. The rules in a
policy are processed in top-down order, and if the traffic matches one rule, further rule processing does not occur. For
example, if traffic passes through a policy with two rules and matches on the first rule, the action of the first rule is applied to
the traffic and the traffic is not processed by the second rule.
A description of each of the fields for a rule is listed below.
• Sequence Id: This field lets you define the order in which to apply the current rule.
• Action: Define the action to take with traffic that matches the current rule. The options of Pass and Deny permit
or silently drop the traffic, respectively.
• Direction: Define the direction in which to apply the rule, for example, to traffic moving in and out, or only to
traffic moving in one direction. The options of Bidirectional or Unidirectional are available.
• IP Protocol: Select from a list of available protocols: ANY, TCP, UDP, ICMP.
• Source Network: Select the source network for this rule. Choose Local (any network to which this policy is
associated), Any (all networks created under the current project) or select from a list of all sources available
displayed in the drop-down list, in the form: domain-name:project-name:network-name.
• Source Ports: Accept traffic from any port or enter a specific port, a list of ports separated with commas, or a
range of ports in the form nnnn-nnnnn.
• Destination Network: Select the destination network for this rule. Choose Local (any network to which this policy
is associated), Any (all networks created under the current project) or select from a list of all destinations
available displayed in the drop-down list, in the form: domain-name:project-name:network-name.
• Destination Ports: Send traffic to any port or enter a specific port, a list of ports separated with commas, or a
range of ports in the form nnnn-nnnnn.
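The port fields above accept several formats: any, a single port, a comma-separated list, or a range in the form nnnn-nnnnn. The helper below is a hypothetical illustration of how such a specification can be interpreted; it is not Contrail's actual parser.

```python
# Sketch of interpreting a policy port specification: "any", a single port,
# a comma-separated list, or a range in the form nnnn-nnnnn.

def port_matches(spec, port):
    """Return True if a port number satisfies a policy port specification."""
    if spec.strip().lower() == "any":
        return True
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:                       # range: nnnn-nnnnn
            lo, hi = part.split("-")
            if int(lo) <= port <= int(hi):
                return True
        elif part and int(part) == port:      # single port
            return True
    return False

print(port_matches("any", 12345))        # any port
print(port_matches("22,80,443", 443))    # comma-separated list
print(port_matches("8000-8999", 8080))   # range
```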


Network Policies in Contrail


Creating network policies in the Contrail Web UI is very similar to creating network policies in the OpenStack Web UI, with a
few additional features.
To begin, you first must click the Configure button, then navigate to the Policies workspace, which is under the
Networking workspace. Network policies are project specific, so make sure you are within the correct project before you
begin creating any network policies.
With network policies in the Contrail Web UI, you might notice that there are two additional fields that are not present in the
OpenStack Web UI—the Services and Mirror fields. The Services field allows you to select from a list of available
services to apply to this policy. The services are applied in the order in which they are selected. There is a restricted set of
options that can be selected when applying services. To implement service chaining, you must use this field. Service
chaining is covered in detail in another chapter of this course. The Mirror field allows you to port mirror traffic to another
flow collector device to analyze the traffic at a later time. We cover this feature in more detail in another chapter of this
course.


Security Groups
Security groups are another security mechanism of the OpenStack/Contrail solution; in addition to policies, they are used to
filter traffic to or from a VM instance. They can be configured in either the Contrail or the OpenStack GUI.
A security group is a named collection of network access rules that are used to limit the types of traffic that have access to
VM instances. When you launch an instance, you can assign one or more security groups to it. If you do not create security
groups, new instances are automatically assigned to the default security group, unless you explicitly specify a different
security group.
The associated rules in each security group control the traffic to instances in the group. Any incoming traffic that is not
matched by a rule is denied access by default. You can add rules to or remove rules from a security group, and you can
modify rules for the default and any other security group.
Instances that use the default security group cannot, using default settings, be accessed from any IP address outside of the
cloud. If you want those IP addresses to access the instances, you must modify the rules for the default security group.
The slide shows the predefined rules in the default security group (this group is created automatically for every new project).
Selecting a security group as the source allows any instance in that security group to access any other instance through this
rule; consequently, external hosts cannot access VM instances by default.
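The default-deny behavior described above can be sketched in a few lines. This is a conceptual model only, not actual OpenStack code; the rule fields are simplified to a CIDR and a port range:

```python
# Conceptual sketch of security-group evaluation: any ingress packet that
# matches no rule is denied. Rules are simplified to {cidr, from_port, to_port}.
from ipaddress import ip_address, ip_network  # Python 3 standard library

def allowed(rules, src_ip, port):
    """Return True if any rule permits traffic from src_ip to the given port."""
    for rule in rules:
        if (ip_address(src_ip) in ip_network(rule["cidr"])
                and rule["from_port"] <= port <= rule["to_port"]):
            return True
    return False  # no rule matched: default deny

# Like the modified default group shown later: SSH open to the world
default_group = [
    {"cidr": "0.0.0.0/0", "from_port": 22, "to_port": 22},
]
```

With this rule set, SSH from any source is permitted, while any other port is dropped by the implicit default deny.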


Implementing Floating IPs


The slide highlights the topic we discuss next.


What Are Floating IPs?


Floating IPs are where Network Address Translation (NAT) comes in, in most cases, although we can use NAT for other things
as well. In a data center environment, you typically have private IP addresses assigned to the actual interfaces on your
VMs. But what happens if you need to route traffic on a particular interface in a VM to the outside world? Do you
actually want to assign an external, publicly routable, IP address to that VM, and then have to go log into that VM to change it
all the time, or do you want something that is essentially going to float? In this case, when you want something to float, you
assign it virtually to that interface that is on a particular VM, but the IP address is essentially held in the VRF on the vRouter.
That external IP address is advertised to the upstream neighbor, typically another autonomous system (AS), which allows the
route associated with the floating IP to be advertised to the Internet. Then external, Internet based, traffic that needs to
reach the floating IP address now has the routing information to do so. Any traffic received by the vRouter that is sent
to the floating IP address is then translated to the private IP address assigned to the VM. This NAT mapping is
performed in a static one-to-one manner, which means that not only can the external hosts reach the VM by initiating a
traffic flow to the floating IP address, but the VM can also use the floating IP address to initiate traffic flows to external hosts.
This advertisement of the floating IP address is sent from the vRouter to the gateway router using BGP. To this end, you must
configure an L3VPN routing instance on the gateway router, and dynamic GRE tunnels (this can be done manually or by
Contrail Device Manager). We cover the configuration of the L3VPN routing instance and the dynamic GRE tunnels in the
Junos OS in a few pages.
One caveat that you need to be aware of is that it is best practice to only advertise the floating IP address to the gateway
router and not the internal IP address associated with the VM.
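Conceptually, the static one-to-one mapping described above behaves like the following sketch. This is a simplified model, not vRouter code; the addresses match the case study later in this chapter:

```python
# Conceptual sketch of static one-to-one floating IP NAT on the vRouter.
# 100.100.100.3 is the floating IP and 10.0.0.3 the VM's fixed IP.
FLOATING_TO_FIXED = {"100.100.100.3": "10.0.0.3"}
FIXED_TO_FLOATING = {fixed: flt for flt, fixed in FLOATING_TO_FIXED.items()}

def translate_inbound(dst_ip):
    """Traffic arriving for a floating IP is delivered to the VM's fixed IP."""
    return FLOATING_TO_FIXED.get(dst_ip, dst_ip)

def translate_outbound(src_ip):
    """Traffic the VM initiates appears to come from its floating IP."""
    return FIXED_TO_FLOATING.get(src_ip, src_ip)
```

Because the mapping works in both directions, external hosts can initiate flows to the VM and the VM can initiate flows outward, exactly as described above.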


Managing Floating IPs Case Study


The slide gives the requirements for the case study that involves using a floating IP to reach the VM instance. We cover these
steps in detail over the next few slides. Then, we verify the operation of the floating IP.


Creating the Public VN


The first step that you must complete is creating the Public VN, which is configured under the Networks workspace. The
Public VN is a special VN in that it is not associated with any VM instance. Also, do not associate a policy with this VN. This
VN is very similar to defining a NAT pool on a security device. The NAT pool is defined and internal hosts are associated with
that NAT pool. Remember that VNs are project specific, so make sure that you select the correct project before proceeding.
Begin by naming the VN. Then define the 100.100.100.0/24 IP block. Next, define the floating IP pool of Public, and
associate it with the Customer-A project. Floating IP pools are project specific, but you can add multiple projects to a
single floating IP pool. Then the IPs in the pool can be used among those projects.
Next, add the route target that you will configure on the gateway router in the L3VPN routing instance. Route target is an
extended BGP community attribute that is used to tag routes so that they can be properly matched and imported on provider
edge (PE), or gateway, routers. Typically, each customer will have its own route target assigned. For simplicity's sake, we are
just using the route target of 111:111 in this example.
Finally, remember to hit the Save button. If you forget to click Save, you lose all of your work.


Allocating Floating IPs


To allocate a floating IP address from the Public pool you must navigate to the Manage Floating IPs workspace, as
shown on the slide. Then, make sure that you are within the Customer-A project, and click the Allocate Floating
IP button. After you click the Allocate Floating IP button you are presented with the Allocate Floating IP
window. Then, the drop-down menu displays the available floating IP pools by listing the project, VN name, and floating IP
pool name, separated by colons. In our case, the project name is Customer-A, the VN's name is Public, and the floating
IP pool’s name is Public.
The Allocate Floating IP window also allows you to select the Allocation Type: Dynamic or Specific IP. If
the Dynamic option is selected, you must enter the number of IP addresses to allocate in the Number of IP
Addresses field. If the Specific IP option is selected, you must enter an IP address from the floating IP pool’s subnet
to allocate.


Associating Floating IPs


The next step is to associate the floating IP address to the VM instance. Note how the slide displays the floating IP address of
100.100.100.3, but the Mapped Fixed IP Address field is blank. You can begin associating the floating IP with the VM
instance by clicking the gear icon for the floating IP address, and selecting the Associate Port option. Then, you are
presented with the Associate Floating IP window. In this window you will see a dropdown box that lists the available
VM instances. The contents of the dropdown box might seem a bit confusing because only the IP address and the universally
unique identifier (UUID) of the VM instance are shown. However, in our case study, we can easily determine that the 10.0.0.3
address belongs to the VM instance in question.


BGP Router Configuration


You must configure the BGP peering with the control nodes through the Contrail Web UI. You can accomplish this step by
navigating to the BGP Routers workspace as shown on the slide. Then, you must configure the BGP peering parameters
for the gateway. In our example we are using the AS of 64512, peering address of 10.10.10.240, the default BGP port of
179, and both control nodes for BGP peering.
Note that the control nodes simply act as BGP route reflectors and reflect BGP routes between the gateway router and the
vRouters.


Gateway Router BGP Configuration


The slide shows the BGP configuration for the gateway router in the Junos OS command-line interface (CLI). Some points of
interest, as called out on the slide, are the internal BGP (IBGP) configuration, the local IBGP peering address, the INET VPN
BGP family, and the prefix used to peer with the control nodes. Note that instead of using the allow command, you can
specifically list each control node by using the neighbor command followed by the IP address of the control node.


Gateway Router Dynamic GRE Tunnel Configuration


The slide displays the dynamic GRE tunnel configuration in the Junos OS CLI. Some points of interest, as pointed out on the
slide, are the source address used by the gateway router for dynamic GRE tunnel formation, the GRE encapsulation, and the
prefix used for the compute nodes, which house the vRouters and VM instances.


Gateway Router Routing Instance Configuration


The slide displays the configuration for the routing instance on the gateway router in the Junos OS CLI. Some points of
interest, as noted on the slide, are the VRF instance type, the interface pointing toward the external hosts, and the route
target configuration. Of special interest is the routing information, which can be provided through static route configuration,
as shown on the slide, or through a dynamic routing protocol, such as BGP or OSPF. The routing information that points
toward the next upstream neighbor is vitally important because without it the next hop of the route advertised to the control
nodes through BGP is not resolvable.


Verifying Routing Information


You should verify that the routing information does appear on the gateway router within the correct routing table. In the
output on the slide we can see that the route that represents the floating IP is present in the Customer-A.inet.0 routing table.
Note that there are two versions of the route, one from each control node; remember that the control nodes act as BGP route
reflectors. Also, look at the next hops associated with the routes. From examining the next hops you can see that the gateway
router is using the GRE interface and is pushing an MPLS label of 16 on the packet when it is sent toward the vRouter.
If the route did not show up in the Customer-A.inet.0 routing table, you would want to examine your BGP and dynamic GRE
tunnel configurations to make sure that they are correct.


Security Groups Configuration


As we discussed earlier, if you want external IP addresses to access the instances, you must modify the rules for the default
security group.
The slide shows how we modified the predefined rule in the default security group, changing the source address from
“default” (meaning any host in the default security group for this tenant) to 0.0.0.0/0. Only after this modification will the
pings on the following slide start going through.


Verifying Communication
As with any network-related situation, no client or customer really cares what happens in the background; they just want the
application to work well. To that end, before you can call any project a success, you must verify that communication can occur
the way you expect it to. By examining the routing information and testing communication on the remote host we can
determine that communication can occur between the remote host and the internal VM instance using the floating IP
address.
To confirm the communication further, you could examine the current flows for the specific vRouter that is servicing the VM.
If the expected flows are present you can confirm that communication is occurring.


Using Device Manager


The slide highlights the topic we discuss next.


Device Manager Review


We have already mentioned in the Architecture chapter that there is a configuration node daemon named Device Manager, used
to manage physical routers in the Contrail system. In Contrail Release 2.10 and later, it is possible to extend a cluster to
include physical Juniper Networks MX Series routers and other physical routers that support the Network Configuration
(NETCONF) Protocol. You can configure physical routers to be part of any of the virtual networks configured in the Contrail
cluster.
Contrail’s Device Manager uses the NETCONF protocol to remotely configure parameters such as BGP, VRFs, and routing
policies on the physical router.


Device Manager Case Study


In our case study, we will configure a physical router (a Juniper MX Series router) for BGP and announce the floating IP
address to it. The configuration requirements are the same as in the “Floating IP Case Study” earlier in this chapter. It is
assumed that the Public subnet (100.100.100.0/24) and the floating IP have already been configured in the same way as in
the “Floating IP Case Study”. What is left is the configuration of the gateway router. We want Contrail to configure it for us.


Using Device Manager


First of all, in the Global Config settings, configure IP Fabric Subnets: a list of subnets in which your overlay tunnels may
terminate. These subnets are used for the automatic configuration of dynamic GRE tunnels on the routers.


Create a Physical Router


The Contrail Web user interface can be used to configure a physical router into the Contrail system. Select Configure >
Physical Devices > Physical Routers to create an entry for the physical router and provide the router's
management IP address, user credentials, and VTEP address (the overlay tunnel termination point). Also provide the Vendor
and Model names for the device (you can use mx for a Juniper MX Series router; be careful, because Device Manager will
refuse to work properly with the wrong device model).
Before using Device Manager to manage the configuration for an MX Series device, use the following Junos CLI commands
to enable NETCONF on the device:
set system services netconf ssh
set system services netconf traceoptions file nc
set system services netconf traceoptions flag all
(the last two commands enable traceoptions and are optional).
If there is any failure during a Device Manager configuration, the failed configuration is left on the MX Series device as a
candidate configuration. An appropriate error message is logged in the local system log by the Device Manager. The log level
in the Device Manager configuration file should be set to INFO for logging NETCONF XML messages sent to physical routers.
The Device Manager config file is /etc/contrail/contrail-device-manager.conf and the log file is
/var/log/contrail/contrail-device-manager.log.


Create a BGP Router


Now create a BGP Router. In this case, this is going to be a logical entity that is tied to a physical router, and the actual config
will be posted to the physical router using NETCONF.
You must configure Gateway IP, Router Id, Autonomous System, BGP address families, physical router and select BGP peers
(they are going to be your Control nodes or other BGP routers).
When Device Manager detects the BGP router configuration and its association with a physical router, it configures BGP
groups on the physical router. After this step completes, the BGP and dynamic tunnel configuration is pushed to the device.
The complete config pushed to the device is shown a couple of slides later.


Set External Option for Public Network


When a public network is extended to a physical router, a static route is configured on the MX Series router that copies the
next hop from the public.inet.0 routing table to the default inet.0 routing table, and a forwarding table filter is configured
that redirects lookups from the inet.0 routing table to the public.inet.0 routing table. The filter is applied to all packets being
looked up in the inet.0 routing table and matches destinations that are in the subnet(s) of the public virtual network; the
action is to perform the lookup in the public.inet.0 routing table.


The Resulting Config


The main parts of the final configuration pushed to the router in this example are shown on the slide. You can see that
dynamic-tunnels, policies, routing-instances, and BGP have all been configured automatically by Contrail using the NETCONF
protocol. Some static routes and a firewall filter have also been configured; they are used to forward traffic between the VRF
and the master routing instance.
All configuration information is placed in the configuration group __contrail__. It is then applied at the global level.
The complete configuration uploaded by Contrail to the Juniper MX80 router in this example is as follows:
groups {
    __contrail__ {
        forwarding-options {
            family inet {
                filter {
                    input redirect_to_public_vrf_filter;
                }
            }
        }
        routing-options {
            static {
                route 100.100.100.0/24 discard;
            }
            router-id 10.10.10.240;
            route-distinguisher-id 10.10.10.240;
            autonomous-system 64512;
            dynamic-tunnels {
                __contrail__ {
                    source-address 10.10.10.240;
                    gre;
                    destination-networks {
                        10.10.10.0/24;
                        10.10.10.240/32;
                        10.10.10.231/32;
                        10.10.10.232/32;
                    }
                }
            }
        }
        protocols {
            bgp {
                group __contrail__ {
                    type internal;
                    multihop;
                    local-address 10.10.10.240;
                    hold-time 90;
                    keep all;
                    family inet-vpn {
                        unicast;
                    }
                    family inet6-vpn {
                        unicast;
                    }
                    family evpn {
                        signaling;
                    }
                    family route-target;
                    neighbor 10.10.10.231 {
                        peer-as 64512;
                    }
                    neighbor 10.10.10.232 {
                        peer-as 64512;
                    }
                }
                group __contrail_external__ {
                    type external;
                    multihop;
                    local-address 10.10.10.240;
                    hold-time 90;
                    keep all;
                    family inet-vpn {
                        unicast;
                    }
                    family inet6-vpn {
                        unicast;
                    }
                    family evpn {
                        signaling;
                    }
                    family route-target;
                }
            }
        }
        policy-options {
            policy-statement _contrail_l3_6_Public-export {
                term t1 {
                    then {
                        community add target_111_111;
                        community add target_64512_8000003;
                        accept;
                    }
                }
            }
            policy-statement _contrail_l3_6_Public-import {
                term t1 {
                    from community [ target_111_111 target_64512_8000003 ];
                    then accept;
                }
                then reject;
            }
            community target_111_111 members target:111:111;
            community target_64512_8000003 members target:64512:8000003;
        }
        firewall {
            family inet {
                filter redirect_to_public_vrf_filter {
                    term term-_contrail_l3_6_Public {
                        from {
                            destination-address {
                                100.100.100.0/24;
                            }
                        }
                        then {
                            routing-instance _contrail_l3_6_Public;
                        }
                    }
                    term default-term {
                        then accept;
                    }
                }
            }
        }
        routing-instances {
            _contrail_l3_6_Public {
                instance-type vrf;
                interface irb.6;
                vrf-import _contrail_l3_6_Public-import;
                vrf-export _contrail_l3_6_Public-export;
                vrf-table-label;
                routing-options {
                    static {
                        route 0.0.0.0/0 next-table inet.0;
                        route 100.100.100.0/24 discard;
                    }
                    auto-export {
                        family inet {
                            unicast;
                        }
                    }
                }
            }
        }
    }
}
apply-groups __contrail__;


Test Communication
The slide shows a ping initiated from a Host routing instance (that emulates a remote device in the lab) to a floating IP
address. You can see that the ping is successful.


Using OpenStack and Contrail APIs


The slide highlights the topic we discuss next.


Application Programming Interfaces


Until this point in the discussion, we have only mentioned that the Contrail SDN solution allows for network programmability
using a set of Application Programming Interfaces, or APIs. It is time to examine this further.
Our goal is to be able to perform any SDN-related task from a program (or script), without manually clicking the GUI buttons.
The good news is that it is definitely possible, because the GUIs themselves use the underlying APIs to perform all of their tasks.
On the following slides, you will become familiar with both the OpenStack and Contrail APIs. Both are REST (or RESTful)
APIs, where REST stands for “Representational State Transfer” and is a simpler concept than it might sound. Typically, REST
uses HTTP(S); each object accessible from the API corresponds to a unique URL on the server, while the HTTP verbs (such as
GET and POST), together with data in the request body, determine the action performed.
CRUD is a frequently used acronym that stands for create, read, update, and delete. APIs are often described as being able
to perform CRUD functions to manipulate objects such as virtual networks or policies.
REST is considered to be stateless, because no client context is being stored on the server between requests. Each request
from any client contains all the information necessary to process the request, and session state is held in the client.
A REST API can be accessed using any program or programming language capable of generating HTTP requests. For a quick
test, a web browser (possibly with a suitable plug-in) or curl utility can be used. For actual programming, libraries have
been developed for many programming languages that simplify the generation of REST API requests (such as the requests
package for Python).
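As an illustration of this object-per-URL model, the sketch below maps CRUD operations onto HTTP verbs. The base URL and resource name here are purely hypothetical, not real OpenStack or Contrail endpoints:

```python
# Sketch: how CRUD operations map onto HTTP verbs in a REST API.
# The base URL and resource names are hypothetical examples.
import json

BASE_URL = "http://api.example.com:8082"

# Conventional CRUD-to-verb mapping used by REST APIs
CRUD_TO_VERB = {
    "create": "POST",
    "read": "GET",
    "update": "PUT",
    "delete": "DELETE",
}

def build_request(operation, resource, body=None):
    """Return (verb, url, json_body) for a CRUD operation on a resource."""
    verb = CRUD_TO_VERB[operation]
    url = "{0}/{1}".format(BASE_URL, resource)
    return verb, url, json.dumps(body) if body is not None else None

# Example: creating an object corresponds to POSTing a JSON body to its URL
verb, url, payload = build_request("create", "virtual-networks",
                                   {"virtual-network": {"name": "VN-A"}})
print(verb, url)
```

A client library such as requests would then send the resulting verb, URL, and body as a single HTTP request.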
For high-level work with a particular API, there are also libraries that work on top of a specific REST API. Such libraries exist
for both OpenStack and Contrail, and we will be using them during this class. Although libraries exist for several languages,
including Python, Java, and JavaScript, Python is arguably the most popular language in the field. We will be using Python.


OpenStack APIs
We start with a discussion of OpenStack’s APIs. It turns out that each OpenStack component (such as Nova, Neutron,
Keystone) has its own API. Several API versions are supported for backward compatibility, so you can safely install the latest
library packages even if, for some reason, you are using an older API version.
On the following slides we will discuss several examples of using the OpenStack API. The complete API reference is available
on the OpenStack website.


OpenStack: Keystone API


This slide demonstrates some OpenStack Keystone API calls.
The first half of the slide lists a sampling of the many available API calls in Keystone.

Generating an API Token


The second half of the slide provides an example of a request for a token by supplying a username and password specific to
the demo project. If the authentication is successful, Keystone will send back HTTP 200 with a token that will expire in
approximately 1 hour.
The options used to execute the curl tool have the following meanings:
• -s makes curl silent (no progress meter or error messages are shown)
• -H adds an HTTP header
• -X sets the HTTP method (GET is the default)
• -d is the data (body) of the request
• python -m json.tool is used to validate and pretty-print the JSON data in the HTTP reply.
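The same token request can also be built from Python. The sketch below constructs the Keystone v2.0 request body; the endpoint address and credentials are lab placeholders, and the commented-out requests call shows how it would be sent:

```python
# Sketch of a Keystone v2.0 token request expressed in Python instead of
# curl. The credentials and endpoint address are placeholders.
import json

def build_token_request(username, password, tenant):
    """Return (URL path, headers, JSON body) for a Keystone v2.0 token request."""
    body = {
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {
                "username": username,
                "password": password,
            },
        }
    }
    headers = {"Content-Type": "application/json"}
    return "/v2.0/tokens", headers, json.dumps(body)

path, headers, body = build_token_request("admin", "lab123", "demo")

# Sending it for real is then one call (equivalent to the curl command):
#   import requests
#   r = requests.post("http://<KEYSTONE-IP>:5000" + path,
#                     headers=headers, data=body)
#   token = r.json()["access"]["token"]["id"]
```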


Using Keystone API Example


The first curl command presented on the slide gets the authentication token (output is abbreviated).
The second command gets a list of tenants (note how the token from the previous output is now used as input). In
particular, we get the internal ID for the demo tenant, which may be needed for other API calls (it will be used on one of the
following slides).


OpenStack: Glance API


This slide demonstrates some OpenStack Glance API calls. The first half of the slide lists a sampling of the many available
API calls in Glance.

Listing Images
The second half of the slide is an example API call that is used to list the images available.


OpenStack: Nova API


This slide demonstrates some OpenStack Nova API calls.
The first half of the slide lists a sampling of the many available API calls in Nova.

Listing VMs for a Tenant


The second half of the slide provides an example of an API call that will list running VMs. Notice that the X-Auth-Token HTTP
header must be included for authentication purposes. Both the token and the tenant_id used in this API call are obtained from
API calls to Keystone, which we discussed previously.
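The reply to this call is JSON; the sketch below walks a hand-made, abbreviated example of such a reply. The field names follow the Nova v2 API, but the values are invented:

```python
import json

# Hand-made, abbreviated example of a Nova "list servers" JSON reply;
# field names follow the Nova v2 API, values are illustrative only.
sample_reply = """
{"servers": [
  {"id": "1a2b", "name": "vm-a1", "status": "ACTIVE",
   "addresses": {"VN-A": [{"addr": "10.0.0.3"}]}},
  {"id": "3c4d", "name": "vm-b1", "status": "SHUTOFF",
   "addresses": {"VN-B": [{"addr": "10.0.1.3"}]}}
]}
"""

reply = json.loads(sample_reply)
for server in reply["servers"]:
    # Collect the addresses per virtual network for each VM
    nets = {net: [a["addr"] for a in addrs]
            for net, addrs in server["addresses"].items()}
    print(server["name"], server["status"], nets)
```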


Python Library for OpenStack


The OpenStack Python Software Development Kit (SDK) is used to write Python automation scripts that create and manage
resources in your OpenStack cloud. The SDK creates Python bindings to the OpenStack API, which enables you to perform
automation tasks in Python by making calls on Python objects rather than making REST calls directly.
Note that all OpenStack command-line tools (which are discussed in the Troubleshooting chapter) are implemented using
the Python SDK.
Use pip to install the OpenStack clients on a Linux, Mac OS X, or Microsoft Windows system. It is easy to use and ensures
that you get the latest version of the client from the Python Package Index. Also, pip enables you to update or remove a
package. Install each client separately by using the following command:
pip install python-PROJECTclient
Replace PROJECT with the lowercase name of the client to install, such as nova. Repeat for each client. The following values
are valid:
• barbican - Key Manager Service API
• ceilometer - Telemetry API
• cinder - Block Storage API and extensions
• cloudkitty - Rating service API
• glance - Image service API
• gnocchi - Telemetry API v3
• heat - Orchestration API
• magnum - Containers service API
• manila - Shared file systems API
• mistral - Workflow service API
• monasca - Monitoring API
• murano - Application catalog API
• neutron - Networking API
• nova - Compute API and extensions
• sahara - Data Processing API
• senlin - Clustering service API
• swift - Object Storage API
• trove - Database service API
• tuskar - Deployment service API
• openstack - Common OpenStack client supporting multiple services
• keystone - Identity service API and extensions (this last package is currently deprecated in favor of openstack,
the Common OpenStack client supporting multiple services).


Nova Python Client Example


This slide shows an example script that connects to a Nova API server to get a list of available virtual machine instances
(here they are called “servers”).
The list is then printed, first simply by using Python's built-in conversion of any object to a string, and then by using a for
loop to get more details from the VMs object. The slide shows that each list element representing a VM has the attributes
name, status, and networks. The last one is a dictionary representing the virtual networks to which the VM is connected.
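A minimal sketch in the spirit of the slide's script, assuming the python-novaclient package. The helper function here is hypothetical and accepts any client object; the commented-out connection call uses placeholder credentials:

```python
def describe_vms(nova):
    """Return (name, status, networks) for every VM visible to the client."""
    return [(vm.name, vm.status, vm.networks) for vm in nova.servers.list()]

# Connecting for real with python-novaclient of that era would look like:
#   from novaclient import client
#   nova = client.Client("2", "admin", "lab123", "demo",
#                        "http://<KEYSTONE-IP>:5000/v2.0")
#   for name, status, networks in describe_vms(nova):
#       print(name, status, networks)
```

Because the helper only relies on the `servers.list()` call and the three attributes the slide names, it works with any object that provides them.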


Create a VM
This script uses the Nova API to create a new VM instance in OpenStack.
First, we provide credentials and initialize the client object. Then we obtain, using the find() methods shown, references
to the image “Core-Linux-Image”, the flavor “Linux-Core”, and the virtual network “VN-A”. Finally, we use the
create() method to instantiate a VM, passing the image, flavor, and VN as arguments.
Note that the examples provided here do not perform any error checking. If an API server is unavailable, a VM already
exists, or a required image or flavor cannot be found, the script will fail. To make the scripts more robust, you typically use
try/except statements to catch and process exceptions.
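One way to add such error handling, sketched as a hypothetical helper around the same kind of novaclient calls the slide uses. The function name and its arguments are illustrative, not part of any library:

```python
def create_vm_safely(nova, name, image_name, flavor_name, net_name):
    """Boot a VM and return it, or return None if a lookup or the boot fails."""
    try:
        image = nova.images.find(name=image_name)
        flavor = nova.flavors.find(name=flavor_name)
        network = nova.networks.find(label=net_name)
        return nova.servers.create(name=name, image=image, flavor=flavor,
                                   nics=[{"net-id": network.id}])
    except Exception as err:  # e.g., a "not found" or connection exception
        print("VM creation failed:", err)
        return None
```

Catching the exception lets the script report the problem and continue (or retry) instead of terminating with a traceback.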


Functions of APIs in Contrail


We will now discuss the Contrail API. The slide shows the different functions of Contrail’s API.


Contrail API Documentation


The documentation for the Contrail API is available on the juniper.net website. It is also available from an installed Contrail
server (the second URL at the bottom of the slide).
Analytics API documentation is available from the Contrail server as well, using the following URL:
http://<CONTRAIL-IP>:8081/documentation/index.html
The abbreviation VNC refers to Virtual Network Controller: an SDN controller based on BGP/L3VPN, which is the Contrail
controller in our case.


Contrail API Python Library Installation


The slide shows the steps needed to install the Contrail API Python library on an Ubuntu Linux host.
Note that all required packages are already installed on nodes running Contrail, so these steps are not needed if you plan to
run your scripts directly from Contrail nodes.


API Library Methods


The Contrail API Python library has a set of classes and methods used to manipulate different objects such as virtual
networks and policies. Once an object is constructed, you can use its methods to get or set its attributes, including
references to other objects.
Note that these methods do not communicate with the Contrail API server by themselves; communication is handled by the
VncApi class discussed next. This distinction will become clearer when we get to the script examples.
On the right, the slide shows a screenshot of a bpython session. bpython is an advanced interface to the interactive Python
interpreter; one of its features is showing command-completion options. Modern integrated development environments
(IDEs) such as Eclipse provide even more features that help with script development. When using only a regular Python
interpreter, a list of an object’s properties and methods can still be obtained with functions such as dir() and help().
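The dir() and help() technique works on any Python object. The sketch below uses a small stand-in class for illustration; with the real library you would inspect an actual vnc_api object (such as a VirtualNetwork instance) in exactly the same way.

```python
class VirtualNetworkDemo(object):
    """Toy stand-in mimicking one method of vnc_api.VirtualNetwork."""
    def add_network_ipam(self, ipam, subnets):
        """Attach an IPAM and its subnets to this virtual network."""
        pass

vn = VirtualNetworkDemo()
# dir() returns all attribute names; filter out the dunder names for readability
methods = [name for name in dir(vn) if not name.startswith('_')]
print(methods)                              # ['add_network_ipam']
help(VirtualNetworkDemo.add_network_ipam)   # prints the method's docstring
```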


Main Library Class


VncApi is the main library class. It has CRUD-style methods that communicate with the Contrail API server and are used to
apply configuration to, or read state from, the server. We will discuss some examples on the following slides.
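As a hedged sketch of this CRUD naming pattern: each object type exposes <type>_create/_read/_update/_delete methods on VncApi. The credentials, API server address, and network name below are placeholders, and running the script requires a reachable Contrail API server.

```python
from vnc_api import vnc_api

vnc_lib = vnc_api.VncApi(username='admin', password='secret',
                         tenant_name='admin', api_server_host='192.0.2.10')

# Read an existing object by its fully qualified name
vn = vnc_lib.virtual_network_read(fq_name=['default-domain', 'admin', 'VN-A'])

# Modify the local object, then push the change to the API server
vn.display_name = 'VN-A-renamed'
vnc_lib.virtual_network_update(vn)

# Delete removes the object on the server
vnc_lib.virtual_network_delete(fq_name=['default-domain', 'admin', 'VN-A'])
```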


Using Python Library


In this example, we first perform the required imports, then instantiate the main VncApi class and assign the result to the
vnc_lib variable. We then use the virtual_networks_list() method of VncApi to obtain and print the list of virtual networks.
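A minimal sketch of such a listing script follows; the credentials and API server address are placeholders for your own environment, and the script assumes a reachable Contrail API server.

```python
from pprint import pprint
from vnc_api import vnc_api

# Placeholder credentials and controller address
vnc_lib = vnc_api.VncApi(username='admin', password='secret',
                         tenant_name='admin', api_server_host='192.0.2.10')

# Returns a dictionary describing all virtual networks known to the controller
vn_list = vnc_lib.virtual_networks_list()
pprint(vn_list)
```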


Virtual Network Creation


The slide shows a sample script for creating a virtual network using the Contrail API. The vnc_api.VirtualNetwork() call
instantiates a new object of the VirtualNetwork class; however, nothing is created on the API server until the
virtual_network_create() method is called at the end of the script. A reference to the project is needed; otherwise,
the virtual network will be created in the default project, which is not accessible from the Contrail GUI.
The add_network_ipam() method of the VirtualNetwork class (the particular instance is called vn in our case) is used to
add an IPAM (the default one, with no settings) and a subnet.
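A hedged sketch of such a creation script is shown below. The VN name, subnet, credentials, and server address are placeholders, and a live Contrail API server is assumed.

```python
from vnc_api import vnc_api

vnc_lib = vnc_api.VncApi(username='admin', password='secret',
                         tenant_name='admin', api_server_host='192.0.2.10')

# Read the target project so the VN is not placed in the default project
project = vnc_lib.project_read(fq_name=['default-domain', 'admin'])

# Construct the VN locally; nothing is sent to the server yet
vn = vnc_api.VirtualNetwork('VN-C', parent_obj=project)

# Attach a default IPAM and a 10.0.3.0/24 subnet
# (object hierarchy: VnSubnetsType > IpamSubnetType > SubnetType)
subnet = vnc_api.IpamSubnetType(subnet=vnc_api.SubnetType('10.0.3.0', 24))
vn.add_network_ipam(vnc_api.NetworkIpam(), vnc_api.VnSubnetsType([subnet]))

# Only this call actually creates the object on the API server
vnc_lib.virtual_network_create(vn)
```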

Virtual Network Creation (contd.)
When adding a subnet to a virtual network, the hierarchy of object references is, schematically,
VnSubnetsType > IpamSubnetType > SubnetType. This hierarchy, like similar ones elsewhere in the library, follows
directly from the API XML schema: many of the library classes are autogenerated from it using metaprogramming methods.
The schema file represents the data model and can be found in the GitHub repository (contrail-controller/src/schema/
vnc_cfg.xsd). In this example, the relevant parts of the schema are the following:
<xsd:complexType name="VnSubnetsType">
<xsd:all>
<xsd:element name="ipam-subnets" type="IpamSubnetType" maxOccurs="unbounded"/>
<xsd:element name="host-routes" type="RouteTableType"/>
</xsd:all>
</xsd:complexType>

<xsd:complexType name="IpamSubnetType">
<xsd:all>
<xsd:element name="subnet" type="SubnetType"/>
<xsd:element name="default-gateway" type="IpAddressType"/>
<xsd:element name="dns-server-address" type="IpAddressType"/>
<xsd:element name="subnet-uuid" type="xsd:string"/>
<xsd:element name="enable-dhcp" type="xsd:boolean" default="true"/>
<xsd:element name="dns-nameservers" type="xsd:string" maxOccurs="unbounded"/>
<xsd:element name="allocation-pools" type="AllocationPoolType" maxOccurs="unbounded"/>
<xsd:element name="addr_from_start" type="xsd:boolean"/>
<xsd:element name="dhcp-option-list" type="DhcpOptionsListType"/>
<xsd:element name="host-routes" type="RouteTableType"/>
<xsd:element name="subnet-name" type="xsd:string"/>
</xsd:all>
</xsd:complexType>

<xsd:complexType name="SubnetType">
<xsd:all>
<xsd:element name="ip-prefix" type="xsd:string"/>
<xsd:element name="ip-prefix-len" type="xsd:integer"/>
</xsd:all>
</xsd:complexType>
As the schema shows, VnSubnetsType indeed references IpamSubnetType, which in turn references SubnetType.


Create and Apply Policy


In this example, a policy between virtual networks VN-A and VN-B is created and applied to those virtual networks. It is
assumed that VN-A and VN-B already exist in the system. Again, to keep the example as concise as possible, no error
checking is done in this script. Use try/except statements to catch exceptions in production scripts. Also, use script
parameters rather than hardcoding values in the scripts.
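A hedged sketch of the flow this slide describes follows. The policy name, VN names, credentials, and server address are placeholders; the rule shown simply passes any traffic in both directions between the two VNs, and a live Contrail API server is assumed.

```python
from vnc_api import vnc_api

vnc_lib = vnc_api.VncApi(username='admin', password='secret',
                         tenant_name='admin', api_server_host='192.0.2.10')
project = vnc_lib.project_read(fq_name=['default-domain', 'admin'])
vn_a = vnc_lib.virtual_network_read(fq_name=['default-domain', 'admin', 'VN-A'])
vn_b = vnc_lib.virtual_network_read(fq_name=['default-domain', 'admin', 'VN-B'])

# One rule that passes any protocol in both directions between VN-A and VN-B
rule = vnc_api.PolicyRuleType(
    direction='<>', protocol='any',
    action_list=vnc_api.ActionListType(simple_action='pass'),
    src_addresses=[vnc_api.AddressType(virtual_network=vn_a.get_fq_name_str())],
    dst_addresses=[vnc_api.AddressType(virtual_network=vn_b.get_fq_name_str())],
    src_ports=[vnc_api.PortType(-1, -1)],   # -1/-1 means "any port"
    dst_ports=[vnc_api.PortType(-1, -1)])

policy = vnc_api.NetworkPolicy(
    'policy-A-B', parent_obj=project,
    network_policy_entries=vnc_api.PolicyEntriesType([rule]))
vnc_lib.network_policy_create(policy)

# Attach the policy to both VNs, then push the updates to the server
attach = vnc_api.VirtualNetworkPolicyType(sequence=vnc_api.SequenceType(0, 0))
for vn in (vn_a, vn_b):
    vn.add_network_policy(policy, attach)
    vnc_lib.virtual_network_update(vn)
```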


Create and Apply Policy—Result


The slide shows the results of running the script that we just considered. The policy policy-A-B has been created and
applied to both VNs.


Associate Floating IP
This example shows the scripting commands needed to allocate a floating IP address from an existing pool and associate it
with a particular VM interface. The last command is, as usual, a call to a VncApi method and applies the changes on the API server.
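A hedged sketch of those commands is shown below. The pool location, project, interface UUID, credentials, and server address are all placeholders, and a live Contrail API server with an existing floating IP pool is assumed.

```python
from vnc_api import vnc_api

vnc_lib = vnc_api.VncApi(username='admin', password='secret',
                         tenant_name='admin', api_server_host='192.0.2.10')

# Read the existing floating IP pool (here assumed under the VN "Public-VN")
pool = vnc_lib.floating_ip_pool_read(
    fq_name=['default-domain', 'admin', 'Public-VN', 'Public-pool'])

# Allocate a floating IP from the pool and tie it to the project
fip = vnc_api.FloatingIp('fip-1', parent_obj=pool)
project = vnc_lib.project_read(fq_name=['default-domain', 'admin'])
fip.add_project(project)

# Associate the floating IP with the VM's interface, looked up by UUID
vmi_uuid = 'REPLACE-WITH-VMI-UUID'
vmi = vnc_lib.virtual_machine_interface_read(id=vmi_uuid)
fip.add_virtual_machine_interface(vmi)

# As usual, only the final VncApi call applies the changes on the server
vnc_lib.floating_ip_create(fip)
```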


We Discussed:
• How a tenant is created internally;
• How to manage network policies;
• How to create and assign floating IPs;
• The Device Manager; and
• Using OpenStack and Contrail APIs.


Review Questions
1.

2.

3.


Lab: Tenant Implementation and Management


The slide lists the objective for this lab.

Answers to Review Questions
1.
Routes are exchanged between vRouters and the Contrail control nodes using XMPP.
2.
Network policies allow you to control which traffic travels between VNs.
3.
The HTTP protocol is used for transport of REST requests.



Acronym List
ACL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .access control list
ADC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Application Delivery Controller
API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application programming interface
ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Address Resolution Protocol
AS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .autonomous system
ASIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application-specific integrated circuit
CE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . customer edge
CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .command-line interface
DHCP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dynamic Host Configuration Protocol
DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Domain Name System
DPI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . deep packet inspection
EOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . end-of-life
EVPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ethernet virtual private network
GRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .generic routing encapsulation
GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .graphical user interface
HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .high availability
IaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Infrastructure as a Service
IBGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . internal BGP
IDE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integrated development environment
IDP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intrusion Detection and Prevention
IDS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion detection service
IDS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion detection system
IETF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Internet Engineering Task Force
IF-MAP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Interface to Metadata Access Point
IPAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP address management
IPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion prevention system
IPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . intrusion protection system
IPsec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP Security
IPv4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP version 4
ISO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . International Organization for Standardization
JNCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juniper Networks Certification Program
KVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kernel-based Virtual Machine
L3VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Layer 3 virtual private network
LBaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Load Balancing as a Service
MAC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . media access control
MANO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . management and orchestration
MPLSoGRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Multiprotocol Label Switching over Generic Routing Encapsulation
NAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Address Translation
NETCONF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Configuration Protocol
NFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Network File System
NFVI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . NFV infrastructure
NFV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Functions Virtualization
NTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Network Time Protocol
NVO3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Network Virtualization Overlays
OVS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Open vSwitch
OVSDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Open vSwitch Database Management Protocol
P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . provider
PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . provider edge
QE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . query expansion
QoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . quality of service
RD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . route distinguisher
REST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . representational state transfer
RT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . route target
SCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Secure Copy Protocol
SDK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . software development kit

SDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .software-defined networking
SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Structured Query Language
SSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Secure Sockets Layer
STP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spanning Tree Protocol
TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Transmission Control Protocol
ToR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Top of Rack
TSN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ToR services node
UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .User Datagram Protocol
UI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . user interface
UUID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .universal unique identifier
UVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . User-Visible Entity
VAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Value Added Services
vif. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vRouter interface
VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual LAN
VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual machine
VN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual network
vNIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual network interface card
VPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual path connection
VPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Virtual Private Cloud
VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual private network
VPNaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . VPN as a Service
VRF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual routing and forwarding table
XMPP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Extensible Messaging and Presence Protocol
