
Container Networking with Cumulus Linux Validated Design Guide

HOST PACK ADVERTISES THE DOCKER BRIDGE

CONTENTS

INTRODUCTION

LAYER 3 CONTAINER NETWORKING ARCHITECTURES
    Spine/Leaf Architecture Using Cumulus Linux for Container Networking
    Build Steps for a Spine/Leaf Architecture with Cumulus Linux for Container Networking
        1. Set up the physical network and basic configuration of all switches.
        2. Configure IP addresses and BGP on leaf and spine switches.
        3. Install ifupdown2 on servers (Applicable for Ubuntu Only).
        4. Install Docker-CE on servers.
    Container Networking with the Cumulus Host Pack Advertising the Docker Bridge Subnet
    Build Steps for a Container Network with Host Pack Advertising Bridge Subnet
        1. Configure a BGP neighbor on the leaf's server-facing interfaces.
        2. Configure each server with a loopback address.
        3. Create a user-defined bridge network within each server.
        4. Create a Host Pack layer 3 configuration file.
        5. Install containerized Host Pack layer 3 connectivity.
        6. Install apache containers on the user-defined Docker bridge.
        7. Test the container reachability.

CONCLUSION

APPENDIX A - Configurations
    Spine Switches
        Spine01 Configuration
        Spine02 Configuration
    Leaf Switches
        Leaf01 Configuration
        Leaf02 Configuration
        Leaf03 Configuration
        Leaf04 Configuration
    Servers
        Server01 Configuration
        Server02 Configuration
        Server03 Configuration
        Server04 Configuration

About Cumulus Networks


INTRODUCTION

As containers and microservices become increasingly popular, many developers and network
managers are deploying them in their networks. The application of containers, including their use
in the development of distributed microservices, is described in the whitepaper Introduction to
Containers: Where Linux Host Networking Meets Network Infrastructure, which also provides an
overview of a variety of deployment options.

Some container deployments use a traditional layer 2 overlay on top of the layer 3 network fabric.
Additionally, Network Address Translation (NAT) is often used on the host to allow containers to
reach destinations outside the environment. A native layer 3 deployment, however, in which all
containers are advertised within the routing protocol itself, will likely become the "gold standard" as
containers become more broadly deployed. An all-layer 3 design increases flexibility and scale,
and it reduces the domain (or blast radius) of an outage should one ever occur. A layer 3 design
without NAT also increases performance and simplifies troubleshooting, and it allows application
and networking teams to largely avoid troubleshooting spanning tree and multi-chassis link
aggregation (MLAG) issues.

In this validated design guide, we explore one way to extend the benefits of layer 3 networking
through the rack to the hosts and containers with Cumulus Networks. Cumulus Host Pack
provides layer 3 connectivity and enables key advantages that contribute to web-scale efficiency
at any network size.

LAYER 3 CONTAINER NETWORKING ARCHITECTURES

This design guide offers a solution utilizing the Cumulus Host Pack with the Docker bridge. It uses
the Host Pack with BGP unnumbered for configuration simplicity throughout the data center. We
advertise the Docker bridge's subnet directly from the host. This solution is useful if you use
routable private IP addresses throughout your domain, or if you are less concerned about
conserving IP address space and want to let Docker IPAM address your containers.

This solution uses a web-scale architecture built on a spine/leaf (Clos) network. First, we discuss
how to set up the spine/leaf architecture using eBGP unnumbered, and then explore the
individual solution built on top of this architecture.

Spine/Leaf Architecture Using Cumulus Linux for Container Networking

Figure 1 depicts the spine/leaf architecture, which is also available virtually using Cumulus VX with
Vagrant. This architecture outlines a deployment with the switches running Cumulus Linux 3.3.2,
the servers running Ubuntu 16.04, and Docker CE 17.05.0-ce. However, the same solution can
be constructed with other Linux variants on the servers, such as CentOS and RHEL.

Note: The solution was tested virtually using Cumulus VX 3.3.2, Vagrant, Ubuntu 16.04 and Docker CE 17.05.0-ce.
An actual setup with hardware may behave differently depending on the server hardware and software used.


Figure 1 - Spine/Leaf Architecture

As seen above, this solution requires an out-of-band management network with
connectivity to all servers and switches.

The following steps are assembled sequentially such that each step builds on the previous one
and includes only the portions of the configuration that are relevant for the given step. If at any
point it is unclear what configuration should be present, Appendix A includes the complete
configurations for all the leaves, spines, and hosts; you can apply these configurations directly to a
test environment or download the Ansible playbook from GitHub. The build steps demonstrate why
each piece of configuration is necessary.

The steps to build this architecture are outlined below.


Build Steps for a Spine/Leaf Architecture with Cumulus Linux for Container Networking

BUILD ORDER

Step 1. Set up the physical network and basic configuration of the switches
● Rack and cable all network switches and hosts
● Install Cumulus Linux and license if not already installed
● Configure out-of-band management
● Configure the management VRF
● Set hostname
● Configure DNS
● Configure NTP
● Configure MTU if desired
● Bring up interfaces and verify connectivity

Step 2. Configure IPs and BGP on leaf and spine switches
● Configure loopback address
● Configure BGP unnumbered on the leaf's spine-facing interfaces and the spine's leaf-facing interfaces

Step 3. Install ifupdown2 on servers
● Install ifupdown2 on all the Ubuntu servers for easy configuration and troubleshooting

Step 4. Install docker-ce on servers
● Install docker-ce on all the Linux servers that will host containers


In a greenfield environment, the order of configuring spine or leaf switches does not matter, so
step 2 can be done in any order. However, it is recommended to configure the spines first so that
BGP peering can be checked as the leaf switches come up. In a brownfield environment, start with the
leaf switches to minimize network service disruption. The build order is further explained in the
following steps.

1. Set up the physical network and basic configuration of all switches.

After racking and cabling all switches on the data plane and to the out-of-band management
network, install the Cumulus Linux OS and license on each switch. Refer to the Cumulus Linux
documentation for more information.

Our example network is wired per Tables 1 and 2 below; you may need to adjust the
instructions if your network is wired differently:

Table 1 - Spine Switch Connectivity

SPINE SWITCH PHYSICAL CONNECTIVITY

SPINE01                         SPINE02
iface   Connected to            iface   Connected to
swp1    Leaf01-swp51            swp1    Leaf01-swp52
swp2    Leaf02-swp51            swp2    Leaf02-swp52
swp3    Leaf03-swp51            swp3    Leaf03-swp52
swp4    Leaf04-swp51            swp4    Leaf04-swp52

Table 2 - Leaf Switch Connectivity

LEAF SWITCH PHYSICAL CONNECTIVITY

        LEAF01            LEAF02            LEAF03            LEAF04
iface   Connected to      Connected to      Connected to      Connected to
swp51   Spine01-swp1      Spine01-swp2      Spine01-swp3      Spine01-swp4
swp52   Spine02-swp1      Spine02-swp2      Spine02-swp3      Spine02-swp4
swp1    Server01-eth1     Server01-eth2     Server03-eth1     Server03-eth2
swp2    Server02-eth1     Server02-eth2     Server04-eth1     Server04-eth2


Next, configure out-of-band management. By default, Cumulus Linux configures the eth0
interface for DHCP. The eth0 interface of every switch and server should be connected to the
out-of-band management switch.

Configure the management VRF on all of the switches. The commands below configure the
management VRF and place the eth0 interface into it. Performing the net commit after this
command activates the VRF, which causes your SSH session to disconnect. More information
on the management VRF can be found in the Cumulus Linux user guide.

cumulus@cumulus:~$ net add vrf mgmt
cumulus@cumulus:~$ net commit

After reconnecting to the switch, set the hostname of each switch using the following command,
replacing <hostname> with the proper hostname. You must log out of the switch and back in to
see the prompt change:

cumulus@cumulus:mgmt-vrf:~$ net add hostname <hostname>
cumulus@cumulus:mgmt-vrf:~$ net commit

Configure the IP address of the domain name server (DNS) if desired. In our example, we use a
server at 192.168.0.254 for DNS:

cumulus@leaf01:mgmt-vrf:~$ net add dns nameserver 192.168.0.254
cumulus@leaf01:mgmt-vrf:~$ net commit

Cumulus Linux enables Network Time Protocol (NTP) and provides default time servers out of the
box. However, the server and/or the source interface can be changed if needed. More information on
configuring NTP can be found in the Cumulus Linux technical documentation. If the NTP server is
reached via the mgmt VRF, the NTP daemon must be moved to the mgmt VRF context. Instructions
are located in the Cumulus Linux User Guide.
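
As a brief sketch, on recent Cumulus Linux releases this is typically done by switching to the
VRF-aware instance of the NTP service (the exact unit names are an assumption; confirm against
the user guide for your release):

cumulus@leaf01:mgmt-vrf:~$ sudo systemctl stop ntp
cumulus@leaf01:mgmt-vrf:~$ sudo systemctl disable ntp
cumulus@leaf01:mgmt-vrf:~$ sudo systemctl enable ntp@mgmt
cumulus@leaf01:mgmt-vrf:~$ sudo systemctl start ntp@mgmt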

To configure a different NTP server than the default, perform the following commands. In our
example, we use an internal server at 192.168.0.254 for the NTP server:

cumulus@leaf01:mgmt-vrf:~$ net add time ntp server 192.168.0.254
cumulus@leaf01:mgmt-vrf:~$ net commit

By default, NTP sources from eth0. If your NTP server is reachable over the switch ports, you will
need to change the source. Use the following commands to change the source if necessary,
replacing <swpx> with the appropriate interface:

cumulus@leaf01:mgmt-vrf:~$ net add time ntp source <swpx>
cumulus@leaf01:mgmt-vrf:~$ net commit

Create the interfaces to be used. The loopback is up by default. Use the following commands to
enable the switch ports so connectivity can be checked in the next step. We use a leaf as an
example; in the case of a spine, add interfaces swp1-4:

cumulus@leaf01:mgmt-vrf:~$ net add interface swp1-2,swp51-52
cumulus@leaf01:mgmt-vrf:~$ net commit

To verify all cables are connected and functional, check the link state. To check the link state of a
switch port, run the net show interface command, which displays the physical link, administrative
state, and LLDP neighbor of the interface. An example from leaf01 is shown below after all the
switch interfaces in the network are brought up.

cumulus@leaf01:mgmt-vrf:~$ net show interface

       Name     Master    Speed     MTU   Mode            Remote Host       Remote Port   Summary
-----  -------  --------  ------  -----   --------------  ----------------  ------------  --------------------------
UP     lo       None      N/A     65536   Loopback                                        IP: 127.0.0.1/8, ::1/128
UP     eth0     None      1G       1500   Mgmt            oob-mgmt-switch   swp6          IP: 192.168.0.11/24(DHCP)
UP     swp1     None      10G      1500   NotConfigured
UP     swp2     None      10G      1500   NotConfigured
UP     swp51    None      40G      1500   NotConfigured   spine01           swp1
UP     swp52    None      40G      1500   NotConfigured   spine02           swp1
UP     mgmt     None      N/A     65536   Interface/L3


An example from spine01 is shown below after all network interfaces have been brought up:

cumulus@spine01:mgmt-vrf:~$ net show interface

       Name     Master    Speed     MTU   Mode            Remote Host       Remote Port   Summary
-----  -------  --------  ------  -----   --------------  ----------------  ------------  --------------------------
UP     lo       None      N/A     65536   Loopback                                        IP: 127.0.0.1/8, ::1/128
UP     eth0     None      1G       1500   Mgmt            oob-mgmt-switch   swp10         IP: 192.168.0.21/24(DHCP)
UP     swp1     None      40G      1500   NotConfigured   leaf01            swp51
UP     swp2     None      40G      1500   NotConfigured   leaf02            swp51
UP     swp3     None      40G      1500   NotConfigured   leaf03            swp51
UP     swp4     None      40G      1500   NotConfigured   leaf04            swp51
UP     mgmt     None      N/A     65536   Interface/L3                                    IP: 127.0.0.1/8

Look for:
●  The correct switch ports to be in the UP state. Refer to the Interface Configuration and
   Management chapter of the Cumulus Linux documentation for more information.
●  Cables that are properly connected according to your network topology diagram.

Alternatively, Cumulus Prescriptive Topology Manager (PTM) can be used to easily verify the
entire topology. More information on PTM can be found in the Cumulus Linux User's Guide.
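
As a brief sketch (the file path is the PTM default and the entries shown are an assumption drawn
from Table 1), PTM compares a graphviz-style topology file such as /etc/ptm.d/topology.dot
against what LLDP actually sees on each link:

graph G {
    "leaf01":"swp51" -- "spine01":"swp1";
    "leaf01":"swp52" -- "spine02":"swp1";
}

The results can then be inspected on the switch with a command such as sudo ptmctl.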

The MTU on the switch interfaces should be at least as large as the Docker bridge MTU to prevent
fragmentation. By default, the MTU on the switch ports is 1500. To change the interface MTU on
the switches to 9216 bytes, use the following commands. An example is shown for a leaf
switch and a spine switch.

cumulus@leaf01:mgmt-vrf:~$ net add interface swp1-2,51-52 mtu 9216
cumulus@leaf01:mgmt-vrf:~$ net commit

cumulus@spine01:mgmt-vrf:~$ net add interface swp1-4 mtu 9216
cumulus@spine01:mgmt-vrf:~$ net commit

2. Configure IP addresses and BGP on leaf and spine switches.

Table 3 below depicts the IP addresses and BGP autonomous system numbers used for each
switch in this example.

Table 3 - Network IP Address Scheme

SWITCH IP ADDRESS SCHEME

SWITCH      LOOPBACK ADDRESS    BGP AS
leaf01      10.0.0.11/32        65011
leaf02      10.0.0.12/32        65012
leaf03      10.0.0.13/32        65013
leaf04      10.0.0.14/32        65014
spine01     10.0.0.21/32        65020
spine02     10.0.0.22/32        65020

First, configure a unique loopback IP address on each switch, replacing the IP address with the
appropriate one found in Table 3:

cumulus@leaf01:mgmt-vrf:~$ net add loopback lo ip address 10.0.0.11/32
cumulus@leaf01:mgmt-vrf:~$ net commit

Next, configure BGP unnumbered on each of the spine-facing or leaf-facing interfaces. Using the
topology in Figure 1 and referencing Table 3, leaf01's spine-facing interfaces are swp51 and
swp52. In the case of the spines, the leaf-facing interfaces are swp1-4. Table 3 identifies the BGP
autonomous systems in our example. The leaf01 configuration in our example network is shown
below.

cumulus@leaf01:mgmt-vrf:~$ net add bgp autonomous-system 65011
cumulus@leaf01:mgmt-vrf:~$ net add bgp router-id 10.0.0.11
cumulus@leaf01:mgmt-vrf:~$ net add bgp bestpath as-path multipath-relax
cumulus@leaf01:mgmt-vrf:~$ net add bgp neighbor swp51-52 interface remote-as external
cumulus@leaf01:mgmt-vrf:~$ net add routing route-map LOCAL_ROUTES permit 10 match interface lo
cumulus@leaf01:mgmt-vrf:~$ net add bgp redistribute connected route-map LOCAL_ROUTES
cumulus@leaf01:mgmt-vrf:~$ net commit

In our example, the spine switches' neighbors are on swp1-4. The spine01 configuration in our
example is shown below:

cumulus@spine01:mgmt-vrf:~$ net add bgp autonomous-system 65020
cumulus@spine01:mgmt-vrf:~$ net add bgp router-id 10.0.0.21
cumulus@spine01:mgmt-vrf:~$ net add bgp bestpath as-path multipath-relax
cumulus@spine01:mgmt-vrf:~$ net add bgp neighbor swp1-4 interface remote-as external
cumulus@spine01:mgmt-vrf:~$ net add routing route-map LOCAL_ROUTES permit 10 match interface lo
cumulus@spine01:mgmt-vrf:~$ net add bgp redistribute connected route-map LOCAL_ROUTES
cumulus@spine01:mgmt-vrf:~$ net commit

Note that BGP peer groups can also be used for simplicity and optimization if additional BGP
commands are required.
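
As a minimal sketch (the peer-group name fabric is illustrative and not part of this design), a
leaf's spine-facing neighbors could be grouped so that additional options only need to be applied
once to the group:

cumulus@leaf01:mgmt-vrf:~$ net add bgp neighbor fabric peer-group
cumulus@leaf01:mgmt-vrf:~$ net add bgp neighbor fabric remote-as external
cumulus@leaf01:mgmt-vrf:~$ net add bgp neighbor swp51-52 interface peer-group fabric
cumulus@leaf01:mgmt-vrf:~$ net commit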

Check the BGP peering. We use spine01 as an example:

cumulus@spine01:mgmt-vrf:~$ net show bgp summ

show bgp ipv4 unicast summary
=============================
BGP router identifier 10.0.0.21, local AS number 65020 vrf-id 0
BGP table version 4
RIB entries 7, using 952 bytes of memory
Peers 4, using 84 KiB of memory

Neighbor        V         AS MsgRcvd MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd
leaf01(swp1)    4      65011     293     294        0    0    0 00:14:23            1
leaf02(swp2)    4      65012     156     158        0    0    0 00:07:31            1
leaf03(swp3)    4      65013      79      81        0    0    0 00:03:40            1
leaf04(swp4)    4      65014       6       8        0    0    0 00:00:00            1

Total number of neighbors 4

show bgp ipv6 unicast summary
=============================
No IPv6 neighbor is configured

show bgp evpn summary
=====================
No L2VPN neighbor is configured

Check the routing table. Reachability should exist to all the leaf loopback addresses from the
spine switches. In our example, the spine01 routing table is shown below:

cumulus@spine01:mgmt-vrf:~$ net show route

show ip route
=============
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, P - PIM, T - Table, v - VNC,
       V - VPN,
       > - selected route, * - FIB route

B>* 10.0.0.11/32 [20/0] via fe80::4638:39ff:fe00:53, swp1, 00:08:36
B>* 10.0.0.12/32 [20/0] via fe80::4638:39ff:fe00:28, swp2, 00:06:15
B>* 10.0.0.13/32 [20/0] via fe80::4638:39ff:fe00:4f, swp3, 00:05:45
B>* 10.0.0.14/32 [20/0] via fe80::4638:39ff:fe00:3b, swp4, 00:05:15
C>* 10.0.0.21/32 is directly connected, lo

show ipv6 route
===============
Codes: K - kernel route, C - connected, S - static, R - RIPng,
       O - OSPFv6, I - IS-IS, B - BGP, T - Table, v - VNC,
       V - VPN,
       > - selected route, * - FIB route

C * fe80::/64 is directly connected, swp4
C * fe80::/64 is directly connected, swp2
C * fe80::/64 is directly connected, swp1
C>* fe80::/64 is directly connected, swp3

3. Install ifupdown2 on servers (Applicable for Ubuntu Only).

Optionally, install ifupdown2 on the servers to ease troubleshooting and configuration.

cumulus@server01:~$ sudo apt-get update
cumulus@server01:~$ sudo apt-get install -y ifupdown2
cumulus@server01:~$ # Adds a RemainAfterExit line under the [Service] section
cumulus@server01:~$ sudo sed -i '/\[Service\]/a RemainAfterExit=yes' /lib/systemd/system/networking.service
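
With ifupdown2 in place, interface changes can be applied and inspected with its tooling; a brief
sketch (both are standard ifupdown2 commands):

cumulus@server01:~$ sudo ifreload -a     # re-apply /etc/network/interfaces
cumulus@server01:~$ sudo ifquery -a      # print the interface configuration ifupdown2 is using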

4. Install Docker-CE on servers.

Install Docker CE on all the Ubuntu 16.04 servers. First, add Docker's public GPG key to
authenticate the packages. Then add the Docker repository, update the package index so the
Ubuntu server knows which applications are in the repositories, and install Docker CE. An example
of this procedure on Ubuntu is shown below.

cumulus@server01:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
cumulus@server01:~$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
cumulus@server01:~$ sudo apt-get update
cumulus@server01:~$ sudo apt-get install -y docker-ce

Verify Docker is installed:

cumulus@server01:~$ sudo systemctl status docker

* docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-03-20 17:52:57 UTC; 3h 14min ago
     Docs: https://docs.docker.com
 Main PID: 2649 (dockerd)

Container Networking with the Cumulus Host Pack Advertising the Docker Bridge Subnet

In this solution, a user-defined Docker bridge within the server is used to communicate between
local containers and the outside world. The containers running applications on the bridge use a
routable IP address, eliminating the performance impact and complexity of NAT on the server.

In addition to the specific application containers, the Cumulus Host Pack layer 3 connectivity is
also run in a container on each server. The subnet of the user-defined Docker bridge is
advertised via eBGP unnumbered directly from the server on which it resides into the leaf layer
and beyond. This provides layer 3 connectivity throughout the entire data center, adding
redundancy for the containers and eliminating the issues of deploying a layer 2 network.

The containerized version of the Host Pack layer 3 connectivity must be configured in privileged
mode with access to Docker's host network, which means the container shares the host kernel's
network namespace and has deep access to, and can affect, the host's running kernel.
This allows the routes learned within the container to pass directly into the kernel for use by other
applications and containers.

Figure 2 depicts the solution architecture, and a demo is available virtually at Host Pack
Redistributing Docker Bridges. Actual configurations for this solution are available in Appendix A.


Figure 2 - Container Networking Advertising the Docker Bridge

Our example is configured with the server loopback IP addresses and BGP AS numbers
outlined in Table 4.

Table 4 - Server Address Scheme

SERVER IP ADDRESS SCHEME

SERVER      LOOPBACK ADDRESS    BGP AS
server01    10.0.0.31/32        65031
server02    10.0.0.32/32        65032
server03    10.0.0.33/32        65033
server04    10.0.0.34/32        65034

Table 5 depicts the IP addresses used within the servers in our example:


Table 5 - Container IP Address Scheme

CONTAINER IP ADDRESSES

SERVER      BRIDGE GATEWAY IP   CONTAINER SUBNET    CONTAINER DEFAULT IP ASSIGNMENTS
server01    172.16.1.62/26      172.16.1.0/26       Container 1: 172.16.1.1
                                                    Container 2: 172.16.1.2
                                                    Container 3: 172.16.1.3
server02    172.16.2.62/26      172.16.2.0/26       Container 1: 172.16.2.1
                                                    Container 2: 172.16.2.2
                                                    Container 3: 172.16.2.3
server03    172.16.3.62/26      172.16.3.0/26       Container 1: 172.16.3.1
                                                    Container 2: 172.16.3.2
                                                    Container 3: 172.16.3.3
server04    172.16.4.62/26      172.16.4.0/26       Container 1: 172.16.4.1
                                                    Container 2: 172.16.4.2
                                                    Container 3: 172.16.4.3

Build Steps for a Container Network with Host Pack Advertising Bridge Subnet

Follow the build steps for the generic spine/leaf architecture as described in the Build Steps for a
Spine/Leaf Architecture section above. The final steps for building out a containerized network with
the Host Pack are as follows.

BUILD STEPS

Per-leaf switch:
1. Configure a BGP neighbor on the leaf's server-facing interfaces

Per-server:
2. Configure a loopback address on each server
3. Create a user-defined Docker bridge
4. Create a Host Pack layer 3 configuration file
5. Install containerized Host Pack layer 3 connectivity
6. Install containers on the user-defined bridge
7. Test container reachability


1. Configure a BGP neighbor on the leaf's server-facing interfaces.

Add the following commands to each leaf switch, assuming swp1 and swp2 are the server-facing
switch ports. Note that the neighbor does not come up until BGP is configured on the server:

cumulus@leaf01:mgmt-vrf:~$ net add bgp neighbor swp1-2 interface remote-as external
cumulus@leaf01:mgmt-vrf:~$ net commit

2. Configure each server with a loopback address.

If they are up, bring the loopback and Ethernet interfaces down by performing the following
commands:

cumulus@server01:~$ sudo ifdown lo
cumulus@server01:~$ sudo ifdown eth1
cumulus@server01:~$ sudo ifdown eth2

Edit the server's /etc/network/interfaces file to add a static loopback address and the Ethernet
interfaces if not already configured. If the MTU of the leaf's server-facing switch port was changed
to 9216, change the server's MTU as well. The configuration below assumes ifupdown2 is installed;
replace <loopback IP> with the server's loopback address from Table 4:

auto lo
iface lo inet loopback
    address <loopback IP>/32

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1
    mtu 9216

auto eth2
iface eth2
    mtu 9216

Bring up the loopback interface and Ethernet interfaces by executing the following ifup commands:

cumulus@server01:~$ sudo ifup lo
cumulus@server01:~$ sudo ifup eth1
cumulus@server01:~$ sudo ifup eth2

Verify the interfaces are up by performing the following commands for all the server interfaces:

cumulus@server01:~$ ip link show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
cumulus@server01:~$ ip link show eth1
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
     link/ether 44:38:39:00:00:03 brd ff:ff:ff:ff:ff:ff
cumulus@server01:~$ ip link show eth2
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
     link/ether 44:38:39:00:00:17 brd ff:ff:ff:ff:ff:ff

If the interfaces are not up, perform the following command, replacing "eth1" with the appropriate
interface:

cumulus@server01:~$ sudo ifup eth1

3. Create a user-defined bridge network within each server.

The IP subnet and the default gateway are specified when the bridge is created. Other variables,
such as the bridge name, can be specified here as well. More information on creating a user-defined
bridge can be found under User-Defined Networks.

By default, Docker enables masquerade (NAT/PAT). Masquerade needs to be disabled since the
reachable IP will be advertised directly from the server, and we set the MTU equal to that of the
host's physical interfaces. The gateway and subnet settings are unique to each host and are based
on Table 5.


cumulus@server01:~$ sudo docker network create --subnet=172.16.1.0/26 \
     --gateway=172.16.1.62 \
     --opt "com.docker.network.bridge.name"="apache_network" \
     --opt "com.docker.network.bridge.enable_ip_masquerade"="false" \
     --opt "com.docker.network.driver.mtu"="9216" \
     apache_network

View the new bridge created within the server:

cumulus@server01:~$ sudo docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
2a989a1ff773        apache_network      bridge              local
69355b79e7fd        bridge              bridge              local
31655a3e4155        host                host                local
3397ff8ba588        none                null                local

View the information about the new bridge:

cumulus@server01:~$ sudo docker network inspect apache_network

[
    {
        "Name": "apache_network",
        "Id": "2a989a1ff77355362540d8c72b92f1f089a7e35407068a5ce95fcf7ded1e653f",
        "Created": "2017-08-08T23:59:32.779858438+03:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.16.1.0/26",
                    "Gateway": "172.16.1.62"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.enable_ip_masquerade": "false",
            "com.docker.network.bridge.name": "apache_network",
            "com.docker.network.driver.mtu": "9216"
        },
        "Labels": {}
    }
]

4. Create a Host Pack layer 3 configuration file.

Create a layer 3 configuration file for each server. This file will be used during the initial container
deployment as well as on any subsequent reboots. For our example, the file for server01 is shown
below. Each server needs a different AS number and router ID, as depicted in Table 4.

The Quagga configuration file configures eBGP unnumbered and a route-map that advertises
only the container subnet and the loopback address into the BGP network. The LOCAL_ROUTES
route-map is applied to the redistribute connected command. The as-path access-list
HOST_ORIGINATED_ROUTES permits only routes originated by this server to be advertised into
the domain, preventing the server from acting as a transit router.

!
interface eth1
 ipv6 nd ra-interval 10
 no ipv6 nd suppress-ra
!
interface eth2
 ipv6 nd ra-interval 10
 no ipv6 nd suppress-ra
!
router bgp 65031
 bgp router-id 10.0.0.31
 bgp bestpath as-path multipath-relax
 neighbor eth1 interface remote-as external
 neighbor eth2 interface remote-as external
 !
 address-family ipv4 unicast
  redistribute connected route-map LOCAL_ROUTES
  neighbor eth1 filter-list HOST_ORIGINATED_ROUTES out
  neighbor eth2 filter-list HOST_ORIGINATED_ROUTES out
 exit-address-family
!
ip as-path access-list HOST_ORIGINATED_ROUTES permit ^$
!
route-map LOCAL_ROUTES permit 10
 match interface lo
!
route-map LOCAL_ROUTES permit 20
 match interface apache_network
!
line vty
!
end

Save this file on the local server01, as Quagga_server01.conf for example. We will apply this
configuration to the container during container boot up in the next step.

cumulus@server01:~$ ls Quagga_server01.conf

Quagga_server01.conf
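
As a point of reference, the corresponding file for server02 would differ only in the AS number and
router ID taken from Table 4; a sketch of the lines that change:

router bgp 65032
 bgp router-id 10.0.0.32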

5. Install containerized Host Pack layer 3 connectivity.

IPv6 must be enabled on the servers so that the IPv6 link-local addresses used by eBGP
unnumbered can be assigned. IPv6 forwarding is not necessary unless IPv6 routing is required.
This can be checked on Ubuntu using the command below; the disable_ipv6 values should all
be 0, as shown.

cumulus@server01:~$ sudo sysctl -a |grep disable_ipv6

net.ipv6.conf.all.disable_ipv6 = 0
sysctl: reading key "net.ipv6.conf.all.stable_secret"
net.ipv6.conf.default.disable_ipv6 = 0
sysctl: reading key "net.ipv6.conf.default.stable_secret"
net.ipv6.conf.docker0.disable_ipv6 = 0
sysctl: reading key "net.ipv6.conf.docker0.stable_secret"
net.ipv6.conf.eth0.disable_ipv6 = 0
sysctl: reading key "net.ipv6.conf.eth0.stable_secret"
net.ipv6.conf.eth1.disable_ipv6 = 0
sysctl: reading key "net.ipv6.conf.eth1.stable_secret"
net.ipv6.conf.eth2.disable_ipv6 = 0
sysctl: reading key "net.ipv6.conf.eth2.stable_secret"
net.ipv6.conf.lo.disable_ipv6 = 0
sysctl: reading key "net.ipv6.conf.lo.stable_secret"


If the values are "1", they can be changed as follows. You must be logged in as the root user to
perform these commands:

root@server01:/# echo "net.ipv6.conf.all.disable_ipv6=0" >> /etc/sysctl.conf
root@server01:/# sysctl -p /etc/sysctl.conf

Install the containerized Host Pack layer 3 connectivity as shown below. We create a privileged
container on the host network, named cumulus-roh. During installation, we apply the
configuration created in Step 4. By applying the configuration when running the container, we
do not need to configure it manually, and the configuration is saved and re-applied should the
server be rebooted. Since the image is not found locally, Docker automatically downloads it from
Docker Hub.

cumulus@server01:~$ sudo docker run -t -d --net=host --privileged --name cumulus-roh \
> --restart unless-stopped \
> -v /home/cumulus/Quagga_server01.conf:/etc/quagga/Quagga.conf \
> cumulusnetworks/quagga:latest

Unable​ ​to​ ​find​ ​image​ ​'cumulusnetworks/quagga:latest'​ ​locally


latest:​ ​Pulling​ ​from​ ​cumulusnetworks/quagga
d54efb8db41d:​ ​Pull​ ​complete
f8b845f45a87:​ ​Pull​ ​complete
e8db7bf7c39f:​ ​Pull​ ​complete
9654c40e9079:​ ​Pull​ ​complete
6d9ef359eaaa:​ ​Pull​ ​complete
3273e9db3614:​ ​Pull​ ​complete
0818725e8884:​ ​Pull​ ​complete
86251753504b:​ ​Pull​ ​complete
ecbf6572b698:​ ​Pull​ ​complete
e1b197ea51f6:​ ​Pull​ ​complete
3a75f3c35041:​ ​Pull​ ​complete
3fb44c1ebfec:​ ​Pull​ ​complete
5e99c37848bc:​ ​Pull​ ​complete
Digest:​ ​sha256:b47cca55a334862b325836c15a8fb54ff6f4d0a3203f9d40298acc742b9c7dd4
Status:​ ​Downloaded​ ​newer​ ​image​ ​for​ ​cumulusnetworks/quagga:latest
e4d88d7d9b8be76c63366756c6a8ad0b6f0f286d4461d46d060a53612774575c

Check to be sure the container is installed and active:

cumulus@server01:~$ sudo docker ps -a

CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS               NAMES
e4d88d7d9b8b        cumulusnetworks/quagga:latest   "/usr/lib/quagga/s..."   9 seconds ago       Up 9 seconds                            cumulus-roh

The procedures for installing the containerized Host Pack layer 3 connectivity are found in the
Installing the Cumulus Quagga Package in a Docker Container section of the Cumulus Linux
technical documentation.

After all servers are configured, check a leaf switch to be sure the server BGP neighbors are up.

cumulus@leaf01:mgmt-vrf:~$ net show bgp summ

show bgp ipv4 unicast summary
=============================
BGP router identifier 10.0.0.11, local AS number 65011 vrf-id 0
BGP table version 12
RIB entries 19, using 2584 bytes of memory
Peers 4, using 84 KiB of memory

Neighbor        V         AS MsgRcvd MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd
server01(swp1)  4      65031    1147    1165        0    0    0 00:16:25            1
server02(swp2)  4      65032    1245    1257        0    0    0 01:02:08            1
spine01(swp51)  4      65020    1419    1421        0    0    0 01:10:22            6
spine02(swp52)  4      65020    1419    1421        0    0    0 01:10:22            6
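
Once the sessions are established, the container subnets from Table 5 should also appear in the
leaf's routing table; a quick spot-check (a sketch using leaf01 and server01's subnet):

cumulus@leaf01:mgmt-vrf:~$ net show route 172.16.1.0/26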

Cumulus Networks recommends configuring a prefix list on the host so that only a default route is
accepted inbound. Care must be taken to ensure the host does not act as a transit router. More
information on the prefix list recommendations can be found in the BGP chapter of the Cumulus
Linux technical documentation.
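
A minimal sketch of such an inbound filter in the server's Quagga configuration, assuming the
fabric advertises a default route toward the hosts (the list and route-map names are illustrative):

ip prefix-list DEFAULT_ONLY seq 5 permit 0.0.0.0/0
!
route-map ACCEPT_DEFAULT permit 10
 match ip address prefix-list DEFAULT_ONLY
!
router bgp 65031
 address-family ipv4 unicast
  neighbor eth1 route-map ACCEPT_DEFAULT in
  neighbor eth2 route-map ACCEPT_DEFAULT in
 exit-address-family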

6. Install apache containers on the user-defined Docker bridge.

The pre-built php:5.6-apache container image is used. To test that the correct container is
accessed, create some short HTML files. Later, these files will be copied into the appropriate
container upon container bootup.


cumulus@server01:~$ echo "This is server01 container1." > /home/cumulus/container1.html
cumulus@server01:~$ echo "This is server01 container2." > /home/cumulus/container2.html
cumulus@server01:~$ echo "This is server01 container3." > /home/cumulus/container3.html

Run the same command three times, changing the container name in the text and the HTML file name.

Next, add three containers running apache to the bridge apache_network. Perform the following
step three times, changing the HTML file each time.

cumulus@server01:~$ sudo docker run --net=apache_network \
     -v /home/cumulus/container1.html:/var/www/html/index.html:ro \
     -d -p 80 \
     -it \
     php:5.6-apache

More information about the options in a docker run command can be found in the Docker run
reference.
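
For reference, the second and third runs differ only in the mounted HTML file:

cumulus@server01:~$ sudo docker run --net=apache_network \
     -v /home/cumulus/container2.html:/var/www/html/index.html:ro \
     -d -p 80 -it php:5.6-apache
cumulus@server01:~$ sudo docker run --net=apache_network \
     -v /home/cumulus/container3.html:/var/www/html/index.html:ro \
     -d -p 80 -it php:5.6-apache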

Note how Docker has adjusted the iptables rules to allow specific access to the containers running
Apache. For example, traffic arriving on one of the server's published ports, such as port 32771, is
translated by iptables to port 80 on 172.16.1.1, which corresponds to a specific container on the host.

cumulus@server01:~$ sudo iptables-save

# Generated by iptables-save v1.6.0 on Wed Aug  9 00:08:16 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.16.1.1/32 -d 172.16.1.1/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A POSTROUTING -s 172.16.1.2/32 -d 172.16.1.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A POSTROUTING -s 172.16.1.3/32 -d 172.16.1.3/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i apache_network -p tcp -m tcp --dport 32771 -j DNAT --to-destination 172.16.1.1:80
-A DOCKER ! -i apache_network -p tcp -m tcp --dport 32772 -j DNAT --to-destination 172.16.1.2:80
-A DOCKER ! -i apache_network -p tcp -m tcp --dport 32773 -j DNAT --to-destination 172.16.1.3:80
COMMIT
# Completed on Wed Aug  9 00:08:16 2017
# Generated by iptables-save v1.6.0 on Wed Aug  9 00:08:16 2017
*filter
:INPUT ACCEPT [119:7660]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [75:11576]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o apache_network -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o apache_network -j DOCKER
-A FORWARD -i apache_network ! -o apache_network -j ACCEPT
-A FORWARD -i apache_network -o apache_network -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.16.1.1/32 ! -i apache_network -o apache_network -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER -d 172.16.1.2/32 ! -i apache_network -o apache_network -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER -d 172.16.1.3/32 ! -i apache_network -o apache_network -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION -i docker0 -o apache_network -j DROP
-A DOCKER-ISOLATION -i apache_network -o docker0 -j DROP
-A DOCKER-ISOLATION -j RETURN
COMMIT

We can look at the final routing table:

cumulus@server01:~$ sudo ip route show

default via 192.168.0.254 dev eth0
10.0.0.11 via 169.254.0.1 dev eth1  proto 186  metric 20 onlink
10.0.0.12 via 169.254.0.1 dev eth2  proto 186  metric 20 onlink
10.0.0.13  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
10.0.0.14  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
10.0.0.21  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
10.0.0.22  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
10.0.0.32  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
10.0.0.33  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
10.0.0.34  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
172.16.1.0/26 dev apache_network  proto kernel  scope link  src 172.16.1.62
172.16.2.0/26  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
172.16.3.0/26  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
172.16.4.0/26  proto 186  metric 20
        nexthop via 169.254.0.1  dev eth1 weight 1 onlink
        nexthop via 169.254.0.1  dev eth2 weight 1 onlink
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1 linkdown
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.31
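
The entries marked proto 186 are the BGP-learned routes installed by the routing daemon in the
cumulus-roh container, confirming that the learned fabric and container routes have been pushed
into the host kernel. A quick way to list only those routes (a sketch, relying on the protocol ID
shown in the output above):

cumulus@server01:~$ ip route show proto 186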

7. Test the container reachability.

Curl to the Apache containers from a remote server:

cumulus@server02:~$ curl 172.16.1.1
This is server01 container1.
cumulus@server02:~$ curl 172.16.1.2
This is server01 container2.
cumulus@server02:~$ curl 172.16.1.3
This is server01 container3.

Curl to a remote container on server01 from a container on server02:

cumulus@server02:~$ sudo docker exec -it con02 /bin/bash

root@de36e38a74c4:/var/www/html# curl 172.16.1.1
This is server01 container1.
root@de36e38a74c4:/var/www/html# curl 172.16.1.2
This is server01 container2.
root@de36e38a74c4:/var/www/html# curl 172.16.1.3
This is server01 container3.

Here is the local ARP table on server01 with the containers running:

cumulus@server01:~$ arp
Address                  HWtype  HWaddress           Flags Mask            Iface
169.254.0.1              ether   44:38:39:00:00:15   CM                    eth2
oob-mgmt-server          ether   44:38:39:00:00:57   C                     eth0
172.16.1.2               ether   02:42:ac:10:01:02   C                     apache_network
169.254.0.1              ether   44:38:39:00:00:03   CM                    eth1
172.16.1.3               ether   02:42:ac:10:01:03   C                     apache_network
172.16.1.1               ether   02:42:ac:10:01:01   C                     apache_network

In summary, the Container Networking with the Host Pack Advertising the Bridge Subnet solution
advertises the Docker bridge subnet directly into the routing domain without the use of NAT. In
addition, this scenario can also be deployed with multiple bridges, with each bridge hosting an
application, for example. Each bridge would then need to be advertised via Quagga.
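
As a sketch of that variation (the second bridge's name and subnet here are illustrative and not
part of the validated example), an additional user-defined bridge would be created the same way
and then matched by an extra clause in the Quagga route-map:

cumulus@server01:~$ sudo docker network create --subnet=172.16.101.0/26 \
     --opt "com.docker.network.bridge.name"="web_network" \
     --opt "com.docker.network.bridge.enable_ip_masquerade"="false" \
     web_network

route-map LOCAL_ROUTES permit 30
 match interface web_network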

CONCLUSION

Container networking is vital in any modern containerized data center. The infrastructure needs to
support large amounts of traffic and reachability between containers. Cumulus Linux is in a
unique position to offer optimal reachability with offerings such as Host Pack.

This document outlined a solution that provides an ideal environment for container networking. Try
it out for yourself on a laptop using Cumulus VX with Vagrant or Cumulus in the Cloud.


APPENDIX A - Configurations

This appendix includes the configurations for Host Pack connectivity, along with advertising
Docker's user-defined bridge subnet.

Spine​ ​Switches

Spine01​ ​Configuration

NCLU​ ​Commands:

net​ ​add​ ​interface​ ​swp1-4​ ​mtu​ ​9216


net​ ​add​ ​loopback​ ​lo​ ​ip​ ​address​ ​10.0.0.21/32
net​ ​add​ ​interface​ ​eth0​ ​ip​ ​address​ ​dhcp
net​ ​add​ ​interface​ ​eth0​ ​vrf​ ​mgmt
net​ ​add​ ​hostname​ ​spine01
net​ ​add​ ​routing​ ​log​ ​syslog
net​ ​add​ ​routing​ ​route-map​ ​LOCAL_ROUTES​ ​permit​ ​10​ ​match​ ​interface​ ​lo
net​ ​add​ ​bgp​ ​autonomous-system​ ​65020
net​ ​add​ ​bgp​ ​router-id​ ​10.0.0.21
net​ ​add​ ​bgp​ ​bestpath​ ​as-path​ ​multipath-relax
net​ ​add​ ​bgp​ ​neighbor​ ​swp1-4​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​ipv4​ ​unicast​ ​redistribute​ ​connected​ ​route-map​ ​LOCAL_ROUTES
net​ ​add​ ​time​ ​ntp​ ​server​ ​192.168.0.254​ ​iburst
net​ ​add​ ​dns​ ​nameserver​ ​ipv4​ ​192.168.0.254​ ​vrf​ ​mgmt
net​ ​commit


cumulus@spine01:mgmt-vrf:~$ net show configuration

interface lo
   address 10.0.0.21/32

interface eth0
   vrf mgmt
   address dhcp

interface swp2
   ipv6 nd ra-interval 10
   mtu 9216
   no ipv6 nd suppress-ra

interface swp3
   ipv6 nd ra-interval 10
   mtu 9216
   no ipv6 nd suppress-ra

interface swp1
   ipv6 nd ra-interval 10
   mtu 9216
   no ipv6 nd suppress-ra

interface swp4
   ipv6 nd ra-interval 10
   mtu 9216
   no ipv6 nd suppress-ra

interface mgmt
   address 127.0.0.1/8
   vrf-table auto

hostname spine01

username cumulus nopassword

service integrated-vtysh-config

log syslog

router bgp 65020
   bgp router-id 10.0.0.21
   bgp bestpath as-path multipath-relax
   neighbor swp1 interface remote-as external
   neighbor swp2 interface remote-as external
   neighbor swp3 interface remote-as external
   neighbor swp4 interface remote-as external

   address-family ipv4 unicast
     redistribute connected route-map LOCAL_ROUTES

route-map LOCAL_ROUTES permit 10
   match interface lo

line vty

dot1x
   mab-activation-delay 30
   eap-reauth-period 0

   radius
     accounting-port 1813
     shared-secret
     authentication-port 1812

time

   zone
     Etc/UTC

   ntp

     servers
       192.168.0.254 iburst

     source
       eth0

dns

   nameserver
     192.168.0.254 # vrf mgmt

Spine02​ ​Configuration

NCLU​ ​commands:

net​ ​add​ ​interface​ ​swp1-4​ ​mtu​ ​9216


net​ ​add​ ​loopback​ ​lo​ ​ip​ ​address​ ​10.0.0.22/32
net​ ​add​ ​interface​ ​eth0​ ​ip​ ​address​ ​dhcp
net​ ​add​ ​interface​ ​eth0​ ​vrf​ ​mgmt
net​ ​add​ ​hostname​ ​spine02
net​ ​add​ ​routing​ ​log​ ​syslog
net​ ​add​ ​routing​ ​route-map​ ​LOCAL_ROUTES​ ​permit​ ​10​ ​match​ ​interface​ ​lo
net​ ​add​ ​bgp​ ​autonomous-system​ ​65020
net​ ​add​ ​bgp​ ​router-id​ ​10.0.0.22
net​ ​add​ ​bgp​ ​bestpath​ ​as-path​ ​multipath-relax
net​ ​add​ ​bgp​ ​neighbor​ ​swp1-4​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​ipv4​ ​unicast​ ​redistribute​ ​connected​ ​route-map​ ​LOCAL_ROUTES
net​ ​add​ ​time​ ​ntp​ ​server​ ​192.168.0.254​ ​iburst
net​ ​add​ ​dns​ ​nameserver​ ​ipv4​ ​192.168.0.254​ ​vrf​ ​mgmt
net​ ​commit


cumulus@spine02:mgmt-vrf:~$​ ​net​ ​show​ ​configuration

interface lo
   address 10.0.0.22/32

interface​ ​eth0
​ ​ ​vrf​ ​mgmt
​ ​ ​address​ ​dhcp

interface​ ​swp2
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp3
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp1
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp4
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​mgmt
​ ​ ​address​ ​127.0.0.1/8
​ ​ ​vrf-table​ ​auto

hostname​ ​spine02

username​ ​cumulus​ ​nopassword

service​ ​integrated-vtysh-config

log​ ​syslog

router bgp 65020
   bgp router-id 10.0.0.22
   bgp bestpath as-path multipath-relax
   neighbor swp1 interface remote-as external
   neighbor swp2 interface remote-as external
   neighbor swp3 interface remote-as external
   neighbor swp4 interface remote-as external

   address-family ipv4 unicast
      redistribute connected route-map LOCAL_ROUTES

route-map LOCAL_ROUTES permit 10
   match interface lo

line​ ​vty

dot1x
   mab-activation-delay 30
   eap-reauth-period 0
   radius
      accounting-port 1813
      shared-secret
      authentication-port 1812

time
   zone
      Etc/UTC
   ntp
      servers
         192.168.0.254 iburst
      source
         eth0

dns
   nameserver
      192.168.0.254 # vrf mgmt


Leaf​ ​Switches

Leaf01​ ​Configuration

NCLU​ ​Commands:

net​ ​add​ ​interface​ ​swp1-2,51-52​ ​mtu​ ​9216


net​ ​add​ ​loopback​ ​lo​ ​ip​ ​address​ ​10.0.0.11/32
net​ ​add​ ​interface​ ​eth0​ ​ip​ ​address​ ​dhcp
net​ ​add​ ​interface​ ​eth0​ ​vrf​ ​mgmt
net add hostname leaf01
net add routing log syslog
net add routing route-map LOCAL_ROUTES permit 10 match interface lo
net​ ​add​ ​bgp​ ​autonomous-system​ ​65011
net​ ​add​ ​bgp​ ​router-id​ ​10.0.0.11
net​ ​add​ ​bgp​ ​bestpath​ ​as-path​ ​multipath-relax
net​ ​add​ ​bgp​ ​neighbor​ ​swp1-2​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​neighbor​ ​swp51-52​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​redistribute​ ​connected​ ​route-map​ ​LOCAL_ROUTES
net​ ​add​ ​time​ ​ntp​ ​server​ ​192.168.0.254​ ​iburst
net​ ​add​ ​dns​ ​nameserver​ ​ipv4​ ​192.168.0.254​ ​vrf​ ​mgmt
net​ ​commit


cumulus@leaf01:mgmt-vrf:~$​ ​net​ ​show​ ​configuration

interface lo
   address 10.0.0.11/32

interface​ ​eth0
​ ​ ​vrf​ ​mgmt
​ ​ ​address​ ​dhcp

interface​ ​swp2
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp1
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp51
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp52
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​mgmt
​ ​ ​address​ ​127.0.0.1/8
​ ​ ​vrf-table​ ​auto

hostname​ ​leaf01

username​ ​cumulus​ ​nopassword

service​ ​integrated-vtysh-config

log​ ​syslog

router bgp 65011
   bgp router-id 10.0.0.11
   bgp bestpath as-path multipath-relax
   neighbor swp1 interface remote-as external
   neighbor swp2 interface remote-as external
   neighbor swp51 interface remote-as external
   neighbor swp52 interface remote-as external

   address-family ipv4 unicast
      redistribute connected route-map LOCAL_ROUTES

route-map LOCAL_ROUTES permit 10
   match interface lo

line​ ​vty

dot1x
   mab-activation-delay 30
   eap-reauth-period 0
   radius
      accounting-port 1813
      shared-secret
      authentication-port 1812

time
   zone
      Etc/UTC
   ntp
      servers
         192.168.0.254 iburst
      source
         eth0

dns
   nameserver
      192.168.0.254 # vrf mgmt


Leaf02​ ​Configuration

NCLU​ ​Commands:

net​ ​add​ ​interface​ ​swp1-2,51-52​ ​mtu​ ​9216


net​ ​add​ ​loopback​ ​lo​ ​ip​ ​address​ ​10.0.0.12/32
net​ ​add​ ​interface​ ​eth0​ ​ip​ ​address​ ​dhcp
net​ ​add​ ​interface​ ​eth0​ ​vrf​ ​mgmt
net​ ​add​ ​hostname​ ​leaf02
net​ ​add​ ​routing​ ​log​ ​syslog
net​ ​add​ ​routing​ ​route-map​ ​LOCAL_ROUTES​ ​permit​ ​10​ ​match​ ​interface​ ​lo
net​ ​add​ ​bgp​ ​autonomous-system​ ​65012
net​ ​add​ ​bgp​ ​router-id​ ​10.0.0.12
net​ ​add​ ​bgp​ ​bestpath​ ​as-path​ ​multipath-relax
net​ ​add​ ​bgp​ ​neighbor​ ​swp1-2​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​neighbor​ ​swp51-52​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​ipv4​ ​unicast​ ​redistribute​ ​connected​ ​route-map​ ​LOCAL_ROUTES
net​ ​add​ ​time​ ​ntp​ ​server​ ​192.168.0.254​ ​iburst
net​ ​add​ ​dns​ ​nameserver​ ​ipv4​ ​192.168.0.254​ ​vrf​ ​mgmt
net​ ​commit


cumulus@leaf02:mgmt-vrf:~$​ ​ ​net​ ​show​ ​configuration

interface lo
   address 10.0.0.12/32

interface​ ​eth0
​ ​ ​vrf​ ​mgmt
​ ​ ​address​ ​dhcp

interface​ ​swp2
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp1
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp51
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp52
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​mgmt
​ ​ ​address​ ​127.0.0.1/8
​ ​ ​vrf-table​ ​auto

hostname​ ​leaf02

username​ ​cumulus​ ​nopassword

service​ ​integrated-vtysh-config

log​ ​syslog

router bgp 65012
   bgp router-id 10.0.0.12
   bgp bestpath as-path multipath-relax
   neighbor swp1 interface remote-as external
   neighbor swp2 interface remote-as external
   neighbor swp51 interface remote-as external
   neighbor swp52 interface remote-as external

   address-family ipv4 unicast
      redistribute connected route-map LOCAL_ROUTES

route-map LOCAL_ROUTES permit 10
   match interface lo

line​ ​vty

dot1x
   mab-activation-delay 30
   eap-reauth-period 0
   radius
      accounting-port 1813
      shared-secret
      authentication-port 1812

time
   zone
      Etc/UTC
   ntp
      servers
         192.168.0.254 iburst
      source
         eth0

dns
   nameserver
      192.168.0.254 # vrf mgmt


Leaf03​ ​Configuration

NCLU​ ​Commands:

net​ ​add​ ​interface​ ​swp1-2,51-52​ ​mtu​ ​9216


net​ ​add​ ​loopback​ ​lo​ ​ip​ ​address​ ​10.0.0.13/32
net​ ​add​ ​interface​ ​eth0​ ​ip​ ​address​ ​dhcp
net​ ​add​ ​interface​ ​eth0​ ​vrf​ ​mgmt
net​ ​add​ ​hostname​ ​leaf03
net​ ​add​ ​routing​ ​log​ ​syslog
net​ ​add​ ​routing​ ​route-map​ ​LOCAL_ROUTES​ ​permit​ ​10​ ​match​ ​interface​ ​lo
net​ ​add​ ​bgp​ ​autonomous-system​ ​65013
net​ ​add​ ​bgp​ ​router-id​ ​10.0.0.13
net​ ​add​ ​bgp​ ​bestpath​ ​as-path​ ​multipath-relax
net​ ​add​ ​bgp​ ​neighbor​ ​swp1-2​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​neighbor​ ​swp51-52​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​ipv4​ ​unicast​ ​redistribute​ ​connected​ ​route-map​ ​LOCAL_ROUTES
net​ ​add​ ​time​ ​ntp​ ​server​ ​192.168.0.254​ ​iburst
net​ ​add​ ​dns​ ​nameserver​ ​ipv4​ ​192.168.0.254​ ​vrf​ ​mgmt
net​ ​commit


cumulus@leaf03:mgmt-vrf:~$​ ​net​ ​show​ ​configuration

interface lo
   address 10.0.0.13/32

interface​ ​eth0
​ ​ ​vrf​ ​mgmt
​ ​ ​address​ ​dhcp

interface​ ​swp2
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp1
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp51
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp52
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​mgmt
​ ​ ​address​ ​127.0.0.1/8
​ ​ ​vrf-table​ ​auto

hostname​ ​leaf03

username​ ​cumulus​ ​nopassword

service​ ​integrated-vtysh-config

log​ ​syslog

router bgp 65013
   bgp router-id 10.0.0.13
   bgp bestpath as-path multipath-relax
   neighbor swp1 interface remote-as external
   neighbor swp2 interface remote-as external
   neighbor swp51 interface remote-as external
   neighbor swp52 interface remote-as external

   address-family ipv4 unicast
      redistribute connected route-map LOCAL_ROUTES

route-map LOCAL_ROUTES permit 10
   match interface lo

line​ ​vty

dot1x
   mab-activation-delay 30
   eap-reauth-period 0
   radius
      accounting-port 1813
      shared-secret
      authentication-port 1812

time
   zone
      Etc/UTC
   ntp
      servers
         192.168.0.254 iburst
      source
         eth0

dns
   nameserver
      192.168.0.254 # vrf mgmt


Leaf04​ ​Configuration

NCLU​ ​Commands:

net​ ​add​ ​interface​ ​swp1-2,51-52​ ​mtu​ ​9216


net​ ​add​ ​loopback​ ​lo​ ​ip​ ​address​ ​10.0.0.14/32
net​ ​add​ ​interface​ ​eth0​ ​ip​ ​address​ ​dhcp
net​ ​add​ ​interface​ ​eth0​ ​vrf​ ​mgmt
net​ ​add​ ​hostname​ ​leaf04
net​ ​add​ ​routing​ ​log​ ​syslog
net​ ​add​ ​routing​ ​route-map​ ​LOCAL_ROUTES​ ​permit​ ​10​ ​match​ ​interface​ ​lo
net​ ​add​ ​bgp​ ​autonomous-system​ ​65014
net​ ​add​ ​bgp​ ​router-id​ ​10.0.0.14
net​ ​add​ ​bgp​ ​bestpath​ ​as-path​ ​multipath-relax
net​ ​add​ ​bgp​ ​neighbor​ ​swp1-2​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​neighbor​ ​swp51-52​ ​interface​ ​remote-as​ ​external
net​ ​add​ ​bgp​ ​ipv4​ ​unicast​ ​redistribute​ ​connected​ ​route-map​ ​LOCAL_ROUTES
net​ ​add​ ​time​ ​ntp​ ​server​ ​192.168.0.254​ ​iburst
net​ ​add​ ​dns​ ​nameserver​ ​ipv4​ ​192.168.0.254​ ​vrf​ ​mgmt
net​ ​commit


cumulus@leaf04:mgmt-vrf:~$​ ​net​ ​show​ ​configuration

interface lo
   address 10.0.0.14/32

interface​ ​eth0
​ ​ ​vrf​ ​mgmt
​ ​ ​address​ ​dhcp

interface​ ​swp2
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp1
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp51
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​swp52
​ ​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​ ​mtu​ ​9216
​ ​ ​no​ ​ipv6​ ​nd​ ​suppress-ra

interface​ ​mgmt
​ ​ ​address​ ​127.0.0.1/8
​ ​ ​vrf-table​ ​auto

hostname​ ​leaf04

username​ ​cumulus​ ​nopassword

service​ ​integrated-vtysh-config

log​ ​syslog

router bgp 65014
   bgp router-id 10.0.0.14
   bgp bestpath as-path multipath-relax
   neighbor swp1 interface remote-as external
   neighbor swp2 interface remote-as external
   neighbor swp51 interface remote-as external
   neighbor swp52 interface remote-as external

   address-family ipv4 unicast
      redistribute connected route-map LOCAL_ROUTES

route-map LOCAL_ROUTES permit 10
   match interface lo

line​ ​vty

dot1x
   mab-activation-delay 30
   eap-reauth-period 0
   radius
      accounting-port 1813
      shared-secret
      authentication-port 1812

time
   zone
      Etc/UTC
   ntp
      servers
         192.168.0.254 iburst
      source
         eth0

dns
   nameserver
      192.168.0.254 # vrf mgmt


Servers

Server01 Configuration

/etc/network/interfaces
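
The interfaces file below is processed by ifupdown2; after any change it can be reloaded without a reboot, for example with:

sudo ifreload -a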

auto​ ​lo
iface​ ​lo​ ​inet​ ​loopback
​ ​ ​address​ ​10.0.0.31/32

auto​ ​eth0
iface​ ​eth0​ ​inet​ ​dhcp

auto eth1
iface eth1
   mtu 9216

auto eth2
iface eth2
   mtu 9216


server01#​ ​sh​ ​run


Building​ ​configuration...

Current​ ​configuration:
!
no​ ​ipv6​ ​forwarding
username​ ​cumulus​ ​nopassword
!
service​ ​integrated-vtysh-config
!
log​ ​file​ ​/var/log/quagga/quagga.log
!
log​ ​timestamp​ ​precision​ ​6
!
interface​ ​eth1
​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​no​ ​ipv6​ ​nd​ ​suppress-ra
!
interface​ ​eth2
​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​no​ ​ipv6​ ​nd​ ​suppress-ra
!
router bgp 65032
 bgp router-id 10.0.0.31
 bgp bestpath as-path multipath-relax
 neighbor eth1 interface remote-as external
 neighbor eth2 interface remote-as external
 !
 address-family ipv4 unicast
  redistribute connected route-map LOCAL_ROUTES
  neighbor eth1 filter-list HOST_ORIGINATED_ROUTES out
  neighbor eth2 filter-list HOST_ORIGINATED_ROUTES out
 exit-address-family
!
ip​ ​as-path​ ​access-list​ ​HOST_ORIGINATED_ROUTES​ ​permit​ ​^$
!
route-map​ ​LOCAL_ROUTES​ ​permit​ ​10
​ ​match​ ​interface​ ​lo
!
route-map​ ​LOCAL_ROUTES​ ​permit​ ​20
​ ​match​ ​interface​ ​apache_network
!
line​ ​vty
!
end
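
To confirm what this host is originating, the routing suite's shell can be queried directly. The commands below are a quick check, assuming the FRR/Quagga vtysh shell shipped with Host Pack is installed on the server:

sudo vtysh -c "show ip bgp summary"
sudo vtysh -c "show ip bgp neighbors eth1 advertised-routes"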


Server02​ ​Configuration
/etc/network/interfaces

auto​ ​lo
iface​ ​lo​ ​inet​ ​loopback
​ ​ ​address​ ​10.0.0.32/32

auto​ ​eth0
iface​ ​eth0​ ​inet​ ​dhcp

auto eth1
iface eth1
   mtu 9216

auto eth2
iface eth2
   mtu 9216


server02#​ ​sh​ ​run


Building​ ​configuration...

Current​ ​configuration:
!
no​ ​ipv6​ ​forwarding
username​ ​cumulus​ ​nopassword
!
service​ ​integrated-vtysh-config
!
log​ ​file​ ​/var/log/quagga/quagga.log
!
log​ ​timestamp​ ​precision​ ​6
!
interface​ ​eth1
​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​no​ ​ipv6​ ​nd​ ​suppress-ra
!
interface​ ​eth2
​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​no​ ​ipv6​ ​nd​ ​suppress-ra
!
router bgp 65032
 bgp router-id 10.0.0.32
 bgp bestpath as-path multipath-relax
 neighbor eth1 interface remote-as external
 neighbor eth2 interface remote-as external
 !
 address-family ipv4 unicast
  redistribute connected route-map LOCAL_ROUTES
  neighbor eth1 filter-list HOST_ORIGINATED_ROUTES out
  neighbor eth2 filter-list HOST_ORIGINATED_ROUTES out
 exit-address-family
!
ip​ ​as-path​ ​access-list​ ​HOST_ORIGINATED_ROUTES​ ​permit​ ​^$
!
route-map​ ​LOCAL_ROUTES​ ​permit​ ​10
​ ​match​ ​interface​ ​lo
!
route-map​ ​LOCAL_ROUTES​ ​permit​ ​20
​ ​match​ ​interface​ ​apache_network
!
line​ ​vty
!
end


Server03​ ​Configuration
/etc/network/interfaces

auto​ ​lo
iface​ ​lo​ ​inet​ ​loopback
​ ​ ​address​ ​10.0.0.33/32

auto​ ​eth0
iface​ ​eth0​ ​inet​ ​dhcp

auto eth1
iface eth1
   mtu 9216

auto eth2
iface eth2
   mtu 9216


server03#​ ​sh​ ​run


Building​ ​configuration...

Current​ ​configuration:
!
no​ ​ipv6​ ​forwarding
username​ ​cumulus​ ​nopassword
!
service​ ​integrated-vtysh-config
!
log​ ​file​ ​/var/log/quagga/quagga.log
!
log​ ​timestamp​ ​precision​ ​6
!
interface​ ​eth1
​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​no​ ​ipv6​ ​nd​ ​suppress-ra
!
interface​ ​eth2
​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​no​ ​ipv6​ ​nd​ ​suppress-ra
!
router​ ​bgp​ ​65032
​ ​bgp​ ​router-id​ ​10.0.0.33
​ ​bgp​ ​bestpath​ ​as-path​ ​multipath-relax
​ ​neighbor​ ​eth1​ ​interface​ ​remote-as​ ​external
​ ​neighbor​ ​eth2​ ​interface​ ​remote-as​ ​external
​ ​!
​ ​address-family​ ​ipv4​ ​unicast
​ ​ ​redistribute​ ​connected​ ​route-map​ ​LOCAL_ROUTES
​ ​ ​neighbor​ ​eth1​ ​filter-list​ ​HOST_ORIGINATED_ROUTES​ ​out
​ ​ ​neighbor​ ​eth2​ ​filter-list​ ​HOST_ORIGINATED_ROUTES​ ​out
​ ​exit-address-family
!
ip​ ​as-path​ ​access-list​ ​HOST_ORIGINATED_ROUTES​ ​permit​ ​^$
!
route-map​ ​LOCAL_ROUTES​ ​permit​ ​10
​ ​match​ ​interface​ ​lo
!
route-map​ ​LOCAL_ROUTES​ ​permit​ ​20
​ ​match​ ​interface​ ​apache_network
!
line​ ​vty
!
end


Server04​ ​Configuration

/etc/network/interfaces

auto​ ​lo
iface​ ​lo​ ​inet​ ​loopback
​ ​ ​address​ ​10.0.0.34/32

auto​ ​eth0
iface​ ​eth0​ ​inet​ ​dhcp

auto eth1
iface eth1
   mtu 9216

auto eth2
iface eth2
   mtu 9216


server04#​ ​sh​ ​run


Building​ ​configuration...

Current​ ​configuration:
!
no​ ​ipv6​ ​forwarding
username​ ​cumulus​ ​nopassword
!
service​ ​integrated-vtysh-config
!
log​ ​file​ ​/var/log/quagga/quagga.log
!
log​ ​timestamp​ ​precision​ ​6
!
interface​ ​eth1
​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​no​ ​ipv6​ ​nd​ ​suppress-ra
!
interface​ ​eth2
​ ​ipv6​ ​nd​ ​ra-interval​ ​10
​ ​no​ ​ipv6​ ​nd​ ​suppress-ra
!
router bgp 65032
 bgp router-id 10.0.0.34
 bgp bestpath as-path multipath-relax
 neighbor eth1 interface remote-as external
 neighbor eth2 interface remote-as external
 !
 address-family ipv4 unicast
  redistribute connected route-map LOCAL_ROUTES
  neighbor eth1 filter-list HOST_ORIGINATED_ROUTES out
  neighbor eth2 filter-list HOST_ORIGINATED_ROUTES out
 exit-address-family
!
ip as-path access-list HOST_ORIGINATED_ROUTES permit ^$
!
route-map LOCAL_ROUTES permit 10
 match interface lo
!
route-map​ ​LOCAL_ROUTES​ ​permit​ ​20
​ ​match​ ​interface​ ​apache_network
!
line​ ​vty
!
end


About​ ​Cumulus​ ​Networks


Cumulus Networks is leading the transformation of bringing web-scale networking to enterprise cloud. Its network operating system,
Cumulus Linux, is the only solution that allows you to affordably build and efficiently operate your network like the world's
largest​ ​data​ ​center​ ​operators,​ ​unlocking​ ​vertical​ ​network​ ​stacks.​ ​By​ ​allowing​ ​operators​ ​to​ ​use​ ​standard​ ​hardware
components,​ ​Cumulus​ ​Linux​ ​offers​ ​unprecedented​ ​operational​ ​speed​ ​and​ ​agility,​ ​at​ ​the​ ​industry’s​ ​most​ ​competitive​ ​cost.
Cumulus​ ​Networks​ ​has​ ​received​ ​venture​ ​funding​ ​from​ ​Andreessen​ ​Horowitz,​ ​Battery​ ​Ventures,​ ​Sequoia​ ​Capital,​ ​Peter
Wagner​ ​and​ ​four​ ​of​ ​the​ ​original​ ​VMware​ ​founders.​ ​For​ ​more​ ​information​ ​visit​ ​cumulusnetworks.com​​ ​or​ ​follow
@cumulusnetworks​.

©2017​ ​Cumulus​ ​Networks.​ ​All​ ​rights​ ​reserved

CUMULUS,​ ​the​ ​Cumulus​ ​Logo,​ ​CUMULUS​ ​NETWORKS,​ ​and​ ​the​ ​Rocket​ ​Turtle​ ​Logo​ ​(the​ ​“Marks”)​ ​are​ ​trademarks​ ​and​ ​service​ ​marks​ ​of
Cumulus​ ​Networks,​ ​Inc.​ ​in​ ​the​ ​U.S.​ ​and​ ​other​ ​countries.​ ​You​ ​are​ ​not​ ​permitted​ ​to​ ​use​ ​the​ ​Marks​ ​without​ ​the​ ​prior​ ​written​ ​consent​ ​of
Cumulus​ ​Networks.​ ​The​ ​registered​ ​trademark​ ​Linux​®​​ ​is​ ​used​ ​pursuant​ ​to​ ​a​ ​sublicense​ ​from​ ​LMI,​ ​the​ ​exclusive​ ​licensee​ ​of​ ​Linus​ ​Torvalds,
owner​ ​of​ ​the​ ​mark​ ​on​ ​a​ ​worldwide​ ​basis.​ ​All​ ​other​ ​marks​ ​are​ ​used​ ​under​ ​fair​ ​use​ ​or​ ​license​ ​from​ ​their​ ​respective​ ​owners.
