
Tech Pensieve: All stuff virtualization, cloud and technology

29th December 2015

Running VLAN, VXLAN and GRE together using Neutron & Openstack
There are numerous blogs out there with step-by-step instructions for setting
up OVS networking for OpenStack Neutron to run various types of networks. I
came up with this post to best explain how everything works in Neutron land
and what a typical OpenStack deployment looks like. If you are looking for a
step-by-step procedure to set things up, this post is a good place to start:
it covers the networking concepts and design behind all the commands listed
on other blogs.

A Neutron OpenStack deployment, or any cloud environment these days,
typically consists of a network controller (SDN controller) and a cluster of
compute hosts or servers. The type of networking you choose to connect all of
these together is entirely up to you, and I've seen it done in various ways.
There is no single correct way of doing this; it's whatever works for you.
But if you don't have anything set up already and are planning on starting
afresh, you could use this design as a template for your networking.

[Figure: Private Cloud Networking diagram,
http://2.bp.blogspot.com/-sMhvgfoxNPM/VoGmRC0IblI/AAAAAAAA0t0/kuQWdljFUrk/s1600/Private%2BCloud%2BNetworking.png]

http://blog.arunsriraman.com/2015/12/running-vlan-vxlan-and-gre-together.html

What you see above are three servers: one network node and two compute
nodes. First we'll go through the design outlined here; further on we can
discuss other possibilities on a case-by-case basis.

Network Node/Server: (4 interfaces/NICs)

1 management interface: typically used to log in to the machine, set it
up and monitor its health.
1 external interface: carries traffic leaving this cloud. Typically this
is the intranet.
2 data interfaces: one carrying VLAN traffic and the other carrying
tunneling traffic. (You can use a single interface for both, or even run
everything described above over just one interface, and so on.)
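As a sketch of the layout above, the network node's non-management NICs could be wired into OVS like this. The bridge names match the diagram in this post; the eth1-eth3 names are placeholders for your actual NICs, and note that Neutron's OVS agent expects the integration bridge to be called br-int by default:

```shell
# Create the bridges shown in the diagram (names are the ones used in
# this post; adapt them to what your Neutron agent expects).
ovs-vsctl add-br integration-switch
ovs-vsctl add-br external-switch
ovs-vsctl add-br data-switch
ovs-vsctl add-br tunnel-switch

# Enslave the physical NICs. The management interface (eth0 here)
# stays outside any bridge.
ovs-vsctl add-port external-switch eth1   # external/intranet traffic
ovs-vsctl add-port data-switch eth2       # VLAN-tagged tenant traffic
ovs-vsctl add-port tunnel-switch eth3     # GRE/VXLAN tunnel endpoint
```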

Functionality:

Provide inter-network routing within the cloud.
Provide dynamic IPs to VMs and route external traffic to a particular VM.
SNAT/DNAT
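Neutron implements this routing and NAT functionality inside per-router network namespaces on the network node. A quick way to see it in action (the UUID below is a placeholder for your router's actual ID):

```shell
# List the namespaces Neutron created: you should see a qrouter-<uuid>
# per virtual router and a qdhcp-<uuid> per network with DHCP enabled.
ip netns list

# Inspect a router namespace (substitute your router's UUID):
ip netns exec qrouter-<uuid> ip addr              # the router's interfaces
ip netns exec qrouter-<uuid> iptables -t nat -S   # its SNAT/DNAT rules
```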

Compute Node/Server (3 interfaces/NICs)

1 management interface: typically used to log in to the machine, set it
up and monitor its health.
2 data interfaces: one carrying VLAN traffic and the other carrying
tunneling traffic. (As above, you can use a single interface for both, or
run everything over just one interface, and so on.)
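The compute node wiring is the same sketch minus the external switch (again, bridge names follow this post and eth1/eth2 are placeholders):

```shell
# Compute node: no external bridge, since external traffic is routed
# through the network node.
ovs-vsctl add-br integration-switch
ovs-vsctl add-br data-switch
ovs-vsctl add-br tunnel-switch

ovs-vsctl add-port data-switch eth1     # VLAN-tagged tenant traffic
ovs-vsctl add-port tunnel-switch eth2   # GRE/VXLAN tunnel endpoint
```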


Functionality:

Run Virtual Machines on the server

All of the non-management interfaces are placed inside a virtual switch.
This switch can be a Linux bridge, Open vSwitch (OVS) or any other virtual
switch running on Linux. For this post I've used OVS and, for simplicity, I
have used the same switch names and interface assignments on every node. In
reality these can be jumbled up across hosts (though I would personally
recommend keeping them uniform for ease of debugging).

The data interface carrying VLAN traffic needs to be trunked all the way
between the servers and the physical switches that form this cloud network.
The reason we do this is to allow virtual machines on different VLANs to
communicate over the same interface of the hypervisor. VLAN tagging and
un-tagging is done by the integration switch. The virtual switches are
connected to each other by virtual patch cables (e.g. between the
integration switch and the data switch).
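Such a patch cable is an OVS patch-port pair, one end on each bridge, each pointing at the other as its peer. A minimal sketch (the port names are illustrative; Neutron's OVS agent normally creates its own):

```shell
# Patch pair connecting the integration switch and the data switch.
ovs-vsctl add-port integration-switch patch-to-data \
    -- set interface patch-to-data type=patch options:peer=patch-to-int
ovs-vsctl add-port data-switch patch-to-int \
    -- set interface patch-to-int type=patch options:peer=patch-to-data
```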

The tunnel interface carrying tunnel traffic, i.e. GRE/VXLAN, can either sit
on a switch or just remain a plain interface. In my case I have put it in the
tunnel switch. You can definitely have the tunnel traffic and VLAN traffic
share the same interface: to do this you simply use the data switch and don't
create a separate interface/bridge for tunnel traffic. This is possible and I
have seen people do it too. Using a single interface for both tunneling
(overlays) and VLANs reduces the number of NICs required per server.
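To actually run VLAN, VXLAN and GRE side by side, the ML2 plugin and the OVS agent both need to know about all three types. A hedged configuration sketch follows; the file paths, physical network name and ID ranges are examples (and older releases use ovs_neutron_plugin.ini instead of openvswitch_agent.ini):

```shell
# ML2 fragment: enable all three tenant network types at once.
cat >> /etc/neutron/plugins/ml2/ml2_conf.ini <<'EOF'
[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vlan,vxlan,gre

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200

[ml2_type_vxlan]
vni_ranges = 1000:2000

[ml2_type_gre]
tunnel_id_ranges = 1:1000
EOF

# OVS agent fragment: map the VLAN physical network to the data switch
# and terminate tunnels on the tunnel switch's IP.
cat >> /etc/neutron/plugins/ml2/openvswitch_agent.ini <<'EOF'
[ovs]
local_ip = 192.0.2.11
bridge_mappings = physnet1:data-switch

[agent]
tunnel_types = vxlan,gre
EOF
```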

Note: The IP will need to be configured on the external-switch instead of
the physical interface. Whenever an interface is enslaved by a
bridge/switch, the layer 3 addressability needs to go onto the
bridge/switch.
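Moving the address onto the bridge looks something like this (the addresses and interface names are examples; do this from the console or the management interface, since it briefly drops connectivity on that NIC):

```shell
# Remove the IP from the enslaved NIC and put it on the bridge instead.
ip addr flush dev eth1
ip addr add 10.1.0.5/24 dev external-switch
ip link set external-switch up

# Re-add the default route if it previously pointed out of eth1.
ip route add default via 10.1.0.1
```

This works because an OVS bridge has an internal port of the same name that can carry a layer 3 address.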

Note: Big enterprises use bonded interfaces for high availability and link
aggregation. In this case there would be more than one Ethernet interface
"bonded" together into a Linux bond interface. The architecture diagram
above will still hold good, but with a bond0 or bond1 interface added to
the bridge instead of the physical Ethernet interfaces eth0 and eth1 shown
above.
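A minimal bonding sketch, assuming an LACP-capable switch (mode, NIC names and bridge name are examples):

```shell
# Create a Linux bond from two NICs, then enslave the bond to the bridge
# in place of a single physical interface.
ip link add bond0 type bond mode 802.3ad
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0
ip link set bond0 up
ovs-vsctl add-port data-switch bond0
```

Alternatively, OVS can bond natively with `ovs-vsctl add-bond`, which keeps the aggregation inside the virtual switch.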

IP Addresses:
- 1 IP for the management interface.
- 1 IP for external traffic on the external-switch (network node only).
- 1 IP for VLAN data traffic on the data-switch (optional, not required).
- 1 IP for tunnel traffic on the tunnel interface or tunnel switch.

Finally, do keep in mind that the network node is definitely a single point
of failure in this design. This can be mitigated by using an active-standby
setup (having multiple network nodes) or by going one step further and
moving the network node's functionality out to the compute nodes. I'll talk
about how to set these interfaces up in a separate article, and describe the
network node internals and how to debug them in another one.
Posted 29th December 2015 by Arun
Labels: network design, neutron, openstack, overlays, SDN, server requirements, vlan

