
Enterprise Cloud Platform

Administration Course 5.0


Networking
Objectives

After completing this module, you will be able to:


• Discuss Open vSwitch
• Discuss the default network factory configuration
• List best practices for configuring the network
• View the network configuration
• Discuss virtual network segmentation

2
Layer 2 Network Management with Open vSwitch

3
Layer 2 Network Management

AHV uses Open vSwitch (OVS) to connect the CVM, the hypervisor, and the guest VMs to each other and to the physical network.

The OVS package is installed by default on each AHV node, and the OVS services start automatically when you start a node.

4
AHV Networking Overview

5
About Open vSwitch

Open vSwitch is an open-source software switch implemented in the Linux kernel and designed to work in a multiserver virtualization environment.

• By default, OVS behaves like a Layer 2 learning switch that maintains a MAC address learning table.
• The hypervisor host and VMs connect to virtual ports on the switch.
• Nutanix uses the OpenFlow protocol to configure and communicate with Open vSwitch.
• Each hypervisor hosts an OVS instance, and all OVS instances combine to form a single switch.

6
About Open vSwitch

Bridges Ports Bonds

Each AHV server maintains an OVS instance, and all OVS instances combine to form a single logical switch. Constructs called bridges manage the switch instances residing on the AHV hosts. As an example, the following diagram shows OVS instances running on two hypervisor hosts.
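To see which bridges exist on a given host, the standard Open vSwitch CLI can be queried directly. This is a read-only check shown only as an illustration; on a default installation you would typically expect at least br0 to be listed.

root@ahv# ovs-vsctl list-br
br0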

7
About Open vSwitch

Bridges Ports Bonds

Ports are logical constructs created in a bridge that represent connectivity to the virtual switch. Nutanix uses several port types, including internal, tap, VXLAN, and bond.
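To list the ports created in a bridge, the Open vSwitch CLI can again be used. This is an illustrative, read-only query against the default bridge br0; the exact ports returned depend on the VMs and bonds present on the host.

root@ahv# ovs-vsctl list-ports br0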

8
About Open vSwitch

Bridges Ports Bonds

• An internal port with the same name as the default bridge (br0) provides access for the AHV host.
• Tap ports act as bridge connections for virtual NICs presented to virtual machines.
• VXLAN ports are used for the IP Address Management functionality provided by Acropolis.
• Bonded ports provide NIC teaming for the physical interfaces of the AHV host.

9
About Open vSwitch

Bridges Ports Bonds

Bonded ports aggregate the physical interfaces on the AHV host. By default, a bond named bond0 is created in bridge br0. After the node imaging process, all interfaces are placed within a single bond, which is a requirement for the Foundation imaging process.
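Because Foundation leaves every NIC in bond0, a common post-imaging step is to rebuild the bond with only the 10 GbE interfaces, in line with the best practices covered later in this module. The following is a hedged sketch using the manage_ovs shorthand for 10 GbE interfaces; flag names and defaults can vary between AOS versions, so verify against the documentation for your release before running it.

nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks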

10
Open vSwitch

The following diagram shows OVS instances running on two hypervisor hosts.

11
Default Factory Configuration

12
Default Factory Configuration

The default factory configuration includes an OVS bridge named br0 and a native Linux bridge called virbr0.
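The native Linux bridge can be inspected from the AHV host with standard Linux tooling. This is only an illustrative check; virbr0 typically carries the internal CVM-to-host network, and the addressing in your environment may differ.

root@ahv# ip addr show virbr0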

13
Best Practices for Configuring Networking
in an Acropolis Cluster

14
Best Practices for Configuring Networking

The following are a few of the best practices for network components:

Open vSwitch
• Do not modify the OpenFlow tables that are associated with the default OVS bridge br0.

VLANs
• Add the CVM and the Acropolis hypervisor to the same VLAN.
• Do not add any other device, including guest VMs, to the VLAN to which the CVM and hypervisor host are assigned.
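The OpenFlow tables on br0 can still be viewed without modifying them. The following is a hedged, read-only illustration using the standard Open vSwitch tooling:

root@ahv# ovs-ofctl dump-flows br0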

15
Best Practices for Configuring Networking

Virtual bridges
• Do not delete or rename OVS bridge br0.
• Do not modify the native Linux bridge virbr0.

OVS bonded port (bond0)
• Aggregate the 10 GbE interfaces on the physical host to an OVS bond on the default OVS bridge br0 and trunk these interfaces on the physical switch.

1 GbE and 10 GbE interfaces (physical host)
• Do not include the 1 GbE interfaces in the same bond as the 10 GbE interfaces.

IPMI port on the hypervisor host
• Do not trunk switch ports that connect to the IPMI interface.
• Configure the switch ports as access ports for management simplicity.
16
Best Practices for Configuring Networking

Upstream physical switch
• Nutanix recommends the use of 10 Gbps, line-rate, non-blocking switches with larger buffers for production workloads.

Physical network layout
• Add all the nodes that belong to a given cluster to the same Layer 2 network segment.

CVM
• Do not remove the CVM from either the OVS bridge br0 or the native Linux bridge virbr0.

17
Recommended Network Configuration

18
Viewing the Network Configuration

19
Viewing the Network Configuration

Show link speed and status

List the physical interfaces to show interface properties such as link speed and status:

nutanix@cvm$ manage_ovs show_interfaces

Output similar to the following is displayed:

name  mode   link  speed
eth0  1000   True  1000
eth1  1000   True  1000
eth2  10000  True  10000
eth3  10000  True  10000
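To gather the same information from every node at once, the command can be wrapped with allssh from any CVM. This is an illustrative variant, not part of the original slide output:

nutanix@cvm$ allssh manage_ovs show_interfaces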

20
Viewing the Network Configuration

Show Ports and Interfaces

List the uplink configuration to show the ports and interfaces that are configured as uplinks:

nutanix@cvm$ manage_ovs --bridge_name br0 show_uplinks
Output similar to the following is displayed:
Uplink ports: bond0
Uplink ifaces: eth1 eth0


21
Viewing the Network Configuration

Show the Virtual Switching Configuration

List the configuration of Open vSwitch to show the virtual switching configuration:

root@ahv# ovs-vsctl show

22
Viewing the Network Configuration

Show the Virtual Switching Configuration


Output similar to the following is displayed:
9ce3252-f3c1-4444-91d1-b5281b30cdba
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "vnet0"
            Interface "vnet0"
        Port "br0-arp"
            Interface "br0-arp"
                type: vxlan
                options: {key="1", remote_ip="192.168.5.2"}
        Port "bond0"
            Interface "eth3"
            Interface "eth2"
        Port "bond1"
            Interface "eth1"
            Interface "eth0"
        Port "br0-dhcp"
            Interface "br0-dhcp"
                type: vxlan
                options: {key="1", remote_ip="192.0.2.131"}
    ovs_version: "2.3.1"
23
Viewing the Network Configuration

Viewing the configuration of an OVS bond

List the bond to show the configuration of an OVS bond:

root@ahv# ovs-appctl bond/show <bond_name>

24
Viewing the Network Configuration

Viewing the configuration of an OVS bond


Output similar to the following is displayed:
---- bond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
active slave mac: 0c:c4:7a:48:b2:68(eth0)

slave eth0: enabled
    active slave
    may_enable: true

slave eth1: disabled
    may_enable: false
25
Virtual Network Segmentation with VLANs

26
Virtual Network Segmentation with VLANs

You can set up a segmented virtual network on an Acropolis node by assigning the ports on Open vSwitch bridges to different VLANs.

VLAN port assignments are configured from the CVM that runs on each node, as shown in the example below.

Note: Follow the best practices when assigning a VLAN.
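As a hedged sketch of how that assignment typically looks, the host port is tagged on the AHV host and the CVM VLAN is set from the CVM itself. VLAN ID 10 is only an example, and command availability can vary by AOS/AHV version:

root@ahv# ovs-vsctl set port br0 tag=10
nutanix@cvm$ change_cvm_vlan 10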

27
Access Mode
Configuring a Virtual NIC to Operate in Access or Trunk
Mode

By default, a virtual NIC on a guest VM operates in access mode.

In this mode, the virtual NIC can send and receive traffic only
over its own VLAN, which is the VLAN of the virtual network
to which it is connected.

If restricted to using access mode interfaces, a VM running an application on multiple VLANs (such as a firewall application) must use multiple virtual NICs, one for each VLAN.
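As an illustrative sketch (the network name vlan.10 and the VM name myVM are hypothetical), an access-mode NIC is created by attaching the VM to a network that carries a single VLAN:

nutanix@cvm$ acli net.create vlan.10 vlan=10
nutanix@cvm$ acli vm.nic_create myVM network=vlan.10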

28
Trunk Mode
Configuring a Virtual NIC to Operate in Access or Trunk
Mode

A virtual NIC in trunk mode can send and receive traffic over
any number of VLANs in addition to its own VLAN.

• You can trunk specific VLANs or trunk all VLANs.

• You can also convert a virtual NIC from trunk mode to access mode, as shown in the sketch after this list.
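The following is a hedged sketch of creating a trunked NIC and converting it back to access mode with aCLI. The VM name, MAC address, and VLAN IDs are placeholders, and the exact parameter names may differ between AOS releases:

nutanix@cvm$ acli vm.nic_create myVM network=vlan.10 vlan_mode=kTrunked trunked_networks=10,20,30
nutanix@cvm$ acli vm.nic_update myVM 50:6b:8d:xx:xx:xx update_vlan_trunk_info=true vlan_mode=kAccess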

29
Before and After Host Network Configuration

[Diagram: Before, a single bridge br0 with bond0 containing eth0 through eth3. After, br0 with bond0 containing the 10 GbE interfaces and a second bridge br1 with bond1 containing the 1 GbE interfaces.]

30
Lab Exercise: Host Network Configuration

31
