
NSX 6.2 Technical Overview
NSBU TPM Team
October 2015
NSX vSphere 6.2
Release Priorities

Extend within and beyond the data center

Easy to operate at scale

Enhance Network & Security Services

Delivering a platform that
Extends NSX Control beyond vCenter and Data Center boundaries
Cross-VC NSX Use Cases
Increase the span of NSX logical networks to enable:
Capacity Pooling across multiple vCenter Servers
Non-disruptive migrations
Cloud and VDI deployments

[Diagram: Web/App/DB workloads distributed across vCenter Server A, B and C]

Cross-VC NSX Use Cases
Centralized security policy management
A single place to manage firewall rules
Rules enforced regardless of VM location and vCenter

[Diagram: a single Universal Firewall Policy]

Cross-VC NSX Use Cases
NSX 6.2 supports new mobility boundaries in vSphere 6
Enables Cross-VC and Long Distance vMotion
On existing networks, with no new hardware required

[Diagram: vCenter-A (VDS-A) and vCenter-B (VDS-B) connected over an L3 VXLAN transport network and an L3 vMotion network, with <= 150 ms RTT]

Cross-VC NSX Use Cases
Enhances NSX Multi-Site support:
Active-Active (from metro to 150 ms RTT)
Disaster Recovery
[Diagram: two sites, each with its own N-S connectivity, vCenter (A/B), NSX Manager (A/B) and SRM (A/B), running Web/App/DB workloads and separated by <= 150 ms RTT]
Cross-VC NSX Logical Networks
[Diagram: Universal object configuration (NSX UI & API) on the Primary NSX Manager (vCenter & NSX Manager A) is synchronized to the Secondary NSX Managers (vCenter & NSX Manager B through H), each of which keeps its own local VC inventory. The Universal Controller Cluster serves the Universal Logical Switches, the Universal Distributed Logical Router and the Universal DFW.]
Cross-VC NSX Design Guidelines: Networking
Universal Controller Cluster size remains at 3 nodes
NSX Controllers always run within a single vCenter Server and a single site
The Universal Controller Cluster continues to manage local VXLAN/DLR objects in addition to Universal objects
The Transport Zone determines whether Logical Switches are Local or Universal
Cross-VC vMotion is validated with NSX 6.2 for Universal Logical Networks (L2 and L3)

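Since the transport zone determines the scope, creating a Logical Switch in the universal transport zone (on the Primary NSX Manager) makes it Universal. A minimal sketch via the NSX REST API, assuming the universal transport zone ID universalvdnscope; the hostname and switch name are illustrative:

curl -k -u admin -X POST \
  'https://nsxmgr-a.example.local/api/2.0/vdn/scopes/universalvdnscope/virtualwires' \
  -H 'Content-Type: application/xml' \
  -d '<virtualWireCreateSpec>
        <name>Universal-Web-LS</name>
        <tenantId>default</tenantId>
      </virtualWireCreateSpec>'

Creating the same switch against a local transport zone ID instead would produce a Local Logical Switch.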
Cross-VC NSX: Universal Distributed Firewall
NSX 6.2 also supports a Cross-VC Distributed Firewall for centralized management of firewall policy
This is configured through a Universal section in the DFW rule table on the Primary NSX Manager
The Universal section is automatically synchronized to all Secondary NSX Managers
There is a maximum of one Universal section for General rules and one Universal section for Ethernet rules
Universal DFW supports both VXLAN- and VLAN-backed deployments
vMotion across VCs with Universal DFW policy is fully supported
[Diagram: the Primary NSX Manager synchronizing the Universal section to the Secondary NSX Managers]
Cross-VC NSX: Universal Distributed Firewall Rules
The following Universal grouping objects are available:
Universal Security Groups
Universal IP Sets
Universal MAC Sets
Universal Services & Service Groups
Universal DFW rules are based on these Universal objects only; VC inventory remains local to an NSX Manager
IP-based rules are the standard approach when applying policy across VC boundaries
Universal Security Groups and IP/MAC Sets can also be used in Local sections

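As an illustration of IP-based universal policy, Universal grouping objects are created on the Primary NSX Manager under the universal scope rather than globalroot-0. A hedged sketch of creating a Universal IP Set, assuming the universal scope ID universalroot-0 (hostname, name and addresses are illustrative):

curl -k -u admin -X POST \
  'https://nsxmgr-a.example.local/api/2.0/services/ipset/universalroot-0' \
  -H 'Content-Type: application/xml' \
  -d '<ipset>
        <name>Universal-Web-IPs</name>
        <value>10.10.10.0/24,10.20.10.11</value>
      </ipset>'

The returned IP set ID can then be referenced as a source or destination in rules within the Universal DFW section.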
Cross-VC NSX: vSphere Considerations
vSphere 6 is currently required for Universal Logical Switches, Distributed Logical Routers and Distributed Firewall
Cross-VC NSX does not depend on a specific Platform Services Controller deployment model; both Embedded and External modes are supported
Benefits of an External PSC include:
Enhanced Linked Mode (centralized management of NSX)
Cross-VC vMotion from the vSphere Web Client UI
[Diagram: Embedded model, where each vCenter Server (virtual machine or physical server) runs its own Platform Services Controller, versus the External model, where the vCenter Servers share an external Platform Services Controller; in both models the Primary and Secondary NSX Managers pair with their respective vCenter Servers]
Cross-VC NSX Design Guidelines: General
Support for up to 8 NSX Managers initially:
One Primary
Seven Secondary
Cross-VC NSX control-plane latency limit increased to 150 ms RTT, aligning with Long Distance vMotion
Universal synchronization is a full-sync operation:
Performs a differential between the Primary and Secondary NSX Managers, then synchronizes the differences
A simple and reliable approach, but the overhead increases with the number of universal objects
NSX Manager now has a high-scale configuration (8 vCPU, 24 GB of RAM)
NSX Controller, UDLR Control VM and Edge Services Gateway migration across vCenter Servers is not supported
Cross-VC NSX Key Benefits
Provides a comprehensive Cross-vCenter network and security solution covering L2, L3 and firewalling
Decoupled from the underlying physical network
A fully integrated software-based solution, not hardware-centric
No need to span L2 for Cross-VC/Long Distance vMotion or workload migration
In-place upgrade and migration for existing NSX deployments
Integration with other VMware SDDC components
Enhances NSX Multi-Site and Disaster Recovery capabilities
Addresses the customer pain point of vCenter Server being a hard boundary
Delivering a platform that is
Easy to operate at scale
NSX 6.2: What's New in Ops & Troubleshooting
Central CLI
Traceflow
Communication Channel Health
Includes new APIs
General Operations related improvements
IPFIX Netflow now includes blocked flows
Active-Standby Edge status
Log Improvement for OSPF/BGP on NSX Edge
Audit Log improvements
Central CLI for NSX
Reduces troubleshooting time for distributed network functions

Overview
Central CLI for monitoring and troubleshooting
Show commands available for Logical Switches, Logical Routers, NSX Edges and the Distributed Firewall
[Diagram: the NSX Manager providing central CLI access to the NSX vSwitch on each hypervisor]

Benefits
Simplifies troubleshooting
Reduces time to resolution
Central access to distributed network functions

Central CLI Overview
Read-only commands
Available centrally on the NSX Manager via SSH, console or API
Leverages the existing message-bus channel to gather data from remote nodes
Commands are categorized by function:
Logical Switch
Logical Router (DLR)
Distributed Firewall (DFW)
Edge

Depending on the command executed, data may be queried from the following sources:
Local Config Database from NSX Manager
Controller
Host
Edge
Central CLI Syntax (1 of 2)
show
show logical-switch
show logical-router
show dfw
show edge
show dlb
show vm
show vnic
show cluster
show host
show controller
Central CLI Data Plane
Reduces troubleshooting time for distributed network functions

manager# show dfw vnic 501f02cc-9078-22ad-0186-4a05d7f18978.000
manager# show dfw host <host-id> filter nic-311161-eth0-vmware-sfw.2 stats

Stats:
rule 2036: 845 evals, in 0 out 0 pkts, in 0 out 0 bytes
rule 1428: 845 evals, in 0 out 0 pkts, in 0 out 0 bytes
rule 1004: 845 evals, in 18 out 0 pkts, in 1152 out 0 bytes
rule 1004: 672 evals, in 18 out 0 pkts, in 1296 out 0 bytes
rule 1003: 823 evals, in 252 out 0 pkts, in 83420 out 0 bytes
rule 1003: 131 evals, in 0 out 0 pkts, in 0 out 0 bytes
rule 1002: 785 evals, in 4646 out 0 pkts, in 3068616 out 0 bytes
rule 1001: 310 evals, in 253030 out 0 pkts, in 15596640 out 0 bytes

manager# show logical-router LDR-1 host host-1 routes

VDR LDR-2 Route Table


Legend: [U: Up], [G: Gateway], [C: Connected], [I: Interface]
Legend: [H: Host], [F: Soft Flush] [!: Reject] [E: ECMP]

Destination GenMask Gateway Flags Ref Origin UpTime Interface


----------- ------- ------- ----- --- ------ ------ ---------
10.10.10.0 255.255.255.0 0.0.0.0 UCI 1 MANUAL 743 lif1
20.20.20.0 255.255.255.0 0.0.0.0 UCI 1 MANUAL 735 lif2

[Diagram: data-plane hosts participating in VXLAN 5000 and VXLAN 5001]
Central CLI Calling via API
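The same read-only commands can be invoked remotely through the NSX Manager REST API, which returns the command output as plain text. A minimal example (hostname illustrative):

curl -k -u admin -X POST \
  'https://nsxmgr.example.local/api/1.0/nsx/cli?action=execute' \
  -H 'Content-Type: application/xml' \
  -d '<nsxcli><command>show logical-switch list all</command></nsxcli>'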
L2 and L3 Traceflow
Tests connectivity through logical and physical paths

Overview
Ability to trace a packet through the virtual network
Shows where the packet is dropped
Supports L2 and L3 traceflow
[Diagram: an IP packet injected at Web1 and traced hop-by-hop across Logical Switch 1 to Web2]

Benefits
Helps identify problems in the virtual network
Enhanced supportability and troubleshooting
User-defined packet headers help validate and troubleshoot FW rules
L2 and L3 Traceflow
Quickly identify problems in the logical vs. physical network and pinpoint issues in the NSX data path
[Diagram: the same Web1-to-Web2 trace across Logical Switch 1]
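Traceflow can also be driven programmatically: one call creates the trace, a second retrieves the per-hop observations. A rough sketch, assuming the NSX 6.2 endpoints POST /api/2.1/vdn/traceflow and GET /api/2.1/vdn/traceflow/{id}/observations; the request body (abridged here, see the NSX API guide for the full schema) specifies the source vNIC and the user-defined packet headers:

# Inject the synthetic packet; the response carries a traceflow ID
curl -k -u admin -X POST 'https://nsxmgr.example.local/api/2.1/vdn/traceflow' \
  -H 'Content-Type: application/xml' -d @traceflow-request.xml

# Retrieve the hop-by-hop observations (forwarded, delivered or dropped)
curl -k -u admin 'https://nsxmgr.example.local/api/2.1/vdn/traceflow/<traceflow-id>/observations'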
Enhancements to NSX APIs
UI and APIs to retrieve runtime state from the Controller, Hypervisor and Edge

Overview
New APIs that provide additional Controller, Hypervisor and Edge information (for example, a logical switch's VTEP and MAC tables as seen by a given hypervisor)
UI and APIs to detect the health of communication channels
[Diagram: API access to runtime data; the control plane exposes LS and LR runtime information, and the data-plane hosts (HV1, HV2) expose LS, LR and DFW runtime information]

Benefits
VMware and partner management tools can provide details using these APIs
Supports additional troubleshooting workflows through vROps
Communication Channel Health And Recovery
Checks the communication channel status between:
NSX Manager and the Firewall Agent (vsfwd)
NSX Manager and the Network Control Plane Agent (netcpa)
The host and all Controller nodes it should connect to

Heartbeats are exchanged between NSX Manager and each host
The connection status from each host to the controllers is monitored locally; the health status is then reported by each host to NSX Manager over the message bus
When invoked, the Communication Health Check API on NSX Manager polls an internal database for the last time it heard from vsfwd and netcpa on the selected host
If vsfwd is down on a host, the NSX Manager-to-netcpa channel will be shown as Down (since heartbeats are lost); however, the host-to-controller state will be Unknown, since the host may still be communicating with the Controller nodes
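A sketch of invoking the health check described above, assuming the NSX 6.2 endpoint GET /api/2.0/vdn/inventory/host/{hostId}/connection/status (host ID and hostname illustrative); the response reports the NSX Manager-to-vsfwd, NSX Manager-to-netcpa and host-to-controller channel states:

curl -k -u admin \
  'https://nsxmgr.example.local/api/2.0/vdn/inventory/host/host-42/connection/status'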
Delivering a platform that
Provides distributed network & security services
With a choice of partner integrations
Improved IP Discovery Mechanisms for Virtual Machines
Operational improvements to the Distributed Firewall

Overview
Provides an alternate mechanism to associate VMs/vNICs with IP addresses when VMware Tools is not present
Introduces 2 new dynamic methods for IP learning: ARP snooping and DHCP snooping
In addition to Secure Manual Mode, adds a Secure Trust-on-First-Use Mode
[Diagram: the REST API driving the NSX vSwitch on each hypervisor]

Benefits
Prevents the security risk that arises when VMware Tools is not present
IP Discovery Mechanisms available pre-6.2
Two categories of IP discovery mechanisms are available in pre-6.2 NSX releases.

Automated method (VMware Tools)
VMware Tools needs to be installed on every guest virtual machine
VMware Tools reports the IP addresses of a particular VM

Manual method (SpoofGuard)
Use SpoofGuard to either:
Manually authorize IP addresses for specific VMs (VIO uses this method), or
Trust on First Use (TOFU) on that VM

IP/MAC-based rules do not require any discovery
Improved IP Discovery Mechanisms in 6.2
Two new automated IP Discovery Mechanisms
DHCP Snooping
ARP Snooping

DHCP Snooping
Tracks DHCP Protocol Messages and updates the IP based on confirmation.
Tracks both IPv4 & IPv6 addresses for a vNIC.

ARP Snooping
ARP messages from the guest VM are snooped.

Neither method relies on VMware Tools

Improved IP Discovery Mechanisms in 6.2
Components

Switch Security Module [dvfilter-switch-security], at vNIC slot 1:
An existing DVFilter agent that learns from VM TX traffic, and can also use the snooped addresses to enforce basic L2/L3 security such as SpoofGuard
Currently no security features are enabled in this module; it primarily learns VM IPs and MACs

DFW, at vNIC slot 2:
Used for IP discovery (DHCP snooping and ARP snooping) [new with 6.2]
Enforces IP SpoofGuard

[Diagram: the per-vNIC DVFilter chain on a DVS port group or logical switch, with the switch-security module in slot 1 and the DFW in slot 2]
Routing Enhancements
Enhanced routing configuration and troubleshooting

Overview
Support for administrative distance on static routes
Support for exact match in redistribution rules
Enable/disable strict uRPF interface checks on the Edge
Show the AS path in the show ip bgp route CLI command
Do not announce the management interface from the DLR Control VM
Automatic consistency check for logical routing
Support for relays in the DHCP server
Support for /31 subnet masks

Benefits
Ease of configuration and troubleshooting
Enhanced routing functionality and operations
Routing Enhancements: Admin distance for static routes (1 of 2)
Floating static routes as a backup for routing protocols

Overview
In pre-6.2, static routes have an administrative distance of 1 and it is not configurable.
In 6.2, you can change this administrative distance, which allows a static route to be used as a backup; one use case is a static route with a high AD to reduce convergence time in case of a DLR Control VM failover.

Edge forwarding table (dynamic routes learned via OSPF):
Edge> show ip route
O 172.16.1.0/24 via 192.168.1.1
O 172.16.2.0/24 via 192.168.1.1
O 172.16.3.0/24 via 192.168.1.1

[Diagram: an Edge at 192.168.1.2 peering with an Active/Standby DLR at 192.168.1.1, which fronts the Web (172.16.1.0/24), App (172.16.2.0/24) and DB (172.16.3.0/24) networks]

Benefits
Improved convergence time for DLR Control VM failover
Improved flexibility in the supported routing topologies
Routing Enhancements: Admin distance for static routes (2 of 2)
Floating static routes as a backup for routing protocols

Overview
When the active DLR Control VM fails and the OSPF routes are withdrawn, the floating static route (AD 250) takes over in the Edge forwarding table:
Edge> show ip route
S 172.16.0.0/16 via 192.168.1.1, AD 250

[Diagram: the same topology during a Control VM failover; the Edge falls back to the static route toward the DLR]

Benefits
Improved convergence time for DLR Control VM failover
Improved flexibility in the supported routing topologies
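A hedged sketch of configuring the floating static route from this example through the Edge routing API, assuming the 6.2 static-route element accepts an adminDistance field (edge ID, hostname and addresses follow the example topology and are illustrative):

curl -k -u admin -X PUT \
  'https://nsxmgr.example.local/api/4.0/edges/edge-1/routing/config/static' \
  -H 'Content-Type: application/xml' \
  -d '<staticRouting>
        <staticRoutes>
          <route>
            <network>172.16.0.0/16</network>
            <nextHop>192.168.1.1</nextHop>
            <adminDistance>250</adminDistance>
          </route>
        </staticRoutes>
      </staticRouting>'

While OSPF routes are present (lower AD), the static route stays out of the forwarding table; when a Control VM failover withdraws them, the Edge falls back to it immediately.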
Routing Enhancements: No DLR Control VM with static routing
Reduces the NSX VM footprint

Overview
In pre-6.2, a DLR Control VM is spun up even if only static routing is used between the DLR and the Edge.
In 6.2, you can configure static routes on the DLR without spinning up a DLR Control VM (the NSX-v with VIO use case; for example, a default static route via 192.168.1.2). There is no failure-detection mechanism between the DLR and the Edge, so the link between them is assumed to be highly reliable; see the notes for more details.

Viewed from the ESXi host kernel:
net-vdr -l --route Default+Edge-1
S 0.0.0.0 via 192.168.1.2

Benefits
DLR Control VMs are now optional
Routing Enhancements: Disable uRPF check per interface
Fine-tuning the Edge configuration

Overview
This feature adds the ability to disable the uRPF check via a documented API and UI.
uRPF remains enabled by default.
The API offers other fine-tuning parameters; see the notes for more details.

[Diagram: the effect of strict uRPF with ECMP; with two paths (A and B) between the DLR and the external network, asymmetric return traffic arriving on the "wrong" Edge interface is dropped by the strict uRPF check]

Benefits
Improved flexibility in ECMP topologies, with the option to disable the uRPF check
Routing Enhancements: Exact match for redistribution filters

Overview
In pre-6.2, the redistribution filter uses longest-prefix match; for example, the statement deny 10.0.0.0/16 would also deny the route 10.0.1.0/24.
In 6.2, the redistribution filter has the same matching algorithm as an ACL: exact prefix match by default (except when the le or ge options are used).

[Diagram: a DLR redistributing OSPF routes, with filters, into E-BGP toward Edges E1 through E8 and the external network]

Benefits
Aligns the configuration with the rest of the industry
Routing Enhancements: Do not announce the DLR HA interface

Overview
The management interface has been renamed to the HA interface.
In pre-6.2, if the DLR is configured to redistribute connected routes into a dynamic routing protocol, the HA/management interface prefix is announced with the forwarding IP address as the next hop, as seen in the Edge forwarding table:
Edge> show ip route
O 172.16.1.0/24 via 192.168.1.1
O 192.168.2.0/24 via 192.168.1.1
In 6.2, even if connected routes are redistributed, the HA interface of the Control VM is not advertised.

[Diagram: a DLR Control VM whose HA/management interface (192.168.2.0/24, .3) sits alongside its OSPF peering with the Edge over 192.168.1.0/24]

Benefits
Avoids exchanging incorrect routing information
Avoids confusion
Routing Enhancements: Display the AS path in show ip bgp
Enhanced routing troubleshooting

Overview
The full AS path is displayed for every prefix in the output of the show ip bgp CLI command.

[Diagram: a DLR in AS 65000 behind Edges E1 through E8 in AS 65001 through 65003, with an upstream AS 8228]

Benefits
Enhanced CLI to ease troubleshooting operations
Support /31 subnet masks and /32 host routes

Overview
Support for /31 subnet masks on interface IPs of the ESG and DLR
A /31 subnet mask can be used for the ESG uplink, the DLR uplink and the DLR HA interface (for example, a 10.0.0.2/31 link between a physical router at .2 and the Edge at .3)
/32 static host routes can be added:
Edge> show ip route
S 172.16.10.11/32 via 192.168.1.2

Benefits
Conserves IP addresses
Provides VM-level granularity for routing (host routes can also be redistributed into a dynamic routing protocol)
Support relays in the DHCP server

Overview
The static binding or pool no longer has to be directly connected to the ESG where the DHCP server is configured.
Supports a subnet mask for DHCP static bindings and pools in the API and UI.

[Diagram: an ESG DHCP server at .2, reached through a DHCP relay on the DLR at .1, serving clients on 172.16.2.0/24]

Benefits
Use the ESG DHCP server with a DHCP relay in place
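A rough sketch of the corresponding relay configuration on the DLR through the NSX API, assuming the 6.2 endpoint PUT /api/4.0/edges/{edgeId}/dhcp/config/relay (edge ID, vNIC index and addresses follow the slide's topology and are illustrative):

curl -k -u admin -X PUT \
  'https://nsxmgr.example.local/api/4.0/edges/edge-2/dhcp/config/relay' \
  -H 'Content-Type: application/xml' \
  -d '<relay>
        <relayServer>
          <ipAddress>192.168.1.2</ipAddress>
        </relayServer>
        <relayAgents>
          <relayAgent>
            <vnicIndex>10</vnicIndex>
          </relayAgent>
        </relayAgents>
      </relay>'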
Load Balancer & L2VPN Enhancements
Improved scalability & usability

Overview
Scale: increases the number of supported VIPs from 64 to 1024
Operations: provides LB monitor information on failure (last check, last status change and failure reason)
LB feature: supports VIP and pool port ranges
Future enablement support for additional 3rd-party LBaaS deployment options and form factors
Read/write CLI for L2VPN
Enhanced workflow for routing to non-stretched networks using L2VPN
Distributed L4 LB Tech Preview

[Diagram: an L2VPN stretching L2 segments across an Internet/WAN L3 boundary between two sites]

Benefits
Tighter integrations with 3rd-party load-balancer services
Improved ease of use and troubleshooting
NSX Load-Balancing Health Monitoring

NSX 6.0 / 6.1:
LB health-check monitoring configured with intervals and retries
No granular info on individual health-check failures, the last status change or the failure reason

NSX 6.2:
Highly granular health monitoring:
Reports information on failure
Keeps track of the last health check and status change
Reports the failure reason (e.g. TCP handshake failure, SSL handshake failure, HTTP request failed, and so on)
Status of a pool member available via UI & CLI

Benefit: enhanced visibility and troubleshooting
NSX LB VIP and Pool Port Range

NSX 6.0 / 6.1:
A VIP is the combination of a VIP IP address and a single port (UDP or TCP)
Some applications listen on a range of ports (e.g. from 5000 to 6000 for Exchange RPC access)
Others may work on any port (e.g. any TCP/UDP port should be load-balanced to the pool)

NSX 6.2:
LB feature: supports VIP and pool port ranges
Supports applications that require multiple ports

Benefit: more applications supported natively by the NSX LB
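As an illustration of the port-range feature, a virtual server definition can carry a range rather than a single port. A hedged sketch via the Edge load-balancer API (endpoint and fields per the NSX API guide; IDs, names and the exact range syntax should be verified against the 6.2 documentation):

curl -k -u admin -X POST \
  'https://nsxmgr.example.local/api/4.0/edges/edge-1/loadbalancer/config/virtualservers' \
  -H 'Content-Type: application/xml' \
  -d '<virtualServer>
        <name>exchange-rpc-vip</name>
        <ipAddress>10.0.0.100</ipAddress>
        <protocol>tcp</protocol>
        <port>5000-6000</port>
        <defaultPoolId>pool-1</defaultPoolId>
      </virtualServer>'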
Distributed Logical Router and Bridging Integration

NSX 6.0 / 6.1:
A given logical switch could NOT simultaneously participate in distributed routing and extend layer 2 to a VLAN
A DB logical switch extended to a VLAN (via a bridging instance to a physical server) must use the NSX Edge for routing, while an APP logical switch not extended to a VLAN can use the DLR

NSX 6.2:
On a given logical switch, distributed logical routing can coexist with bridging
A DB logical switch extended to a VLAN now uses the DLR for routing
Benefit: optimizes traffic flow by eliminating the need to route through a central NSX Edge

[Diagram: before/after topologies with the NSX Edge, DLR, APP-LS, DB-LS, DB-VLAN, a bridging instance and a physical server]
Software Layer 2 Gateway Form Factor
A native capability of NSX
A high-performance VXLAN-to-VLAN gateway in the hypervisor kernel

Scale-up:
Rides the x86 performance curve
Encapsulation & encryption offloads
Scale-out as you grow: a single gateway can handle all P/V traffic, then additional gateways can be introduced

Flexibility & Operations:
Rich set of stateful services
Multi-tier logical routing
Advanced monitoring

[Diagram: the gateway bridging logical networks to VLAN 10, VLAN 20 and VLAN 30]
Physical Services Integration via NSX Hardware VTEPs
Provides connectivity to physical workloads and services

Overview
NSX Hardware VTEP-enabled physical appliances
Attach any physical services appliance
Extensible (schema-based)
Not dependent on multicast
[Diagram: VM1 and VM2 on logical switch VNI 5001 bridged to VLAN 100 through a hardware VTEP]

Benefits
High density of physical ports to connect physical workloads
Broad ecosystem of NSX partners (initial design partners are Arista and Cumulus); other vendors supporting OVSDB to follow

Early access Tech Preview in 6.2.0, not GA
NSX Hardware VTEP OVSDB Integration: Logical and Physical

[Logical view: VM1 and VM2 on a logical switch extended to VLAN 100 through a hardware VTEP]
[Physical view: the hardware VTEP joins the VXLAN transport over the physical IP network and is managed over OVSDB; no multicast is required]
NSX Packaging & Scale
NSX can run on any vSphere edition; it is not tied to Enterprise Plus
NSX works with any vSphere edition, with no dependency on vSphere Enterprise Plus (as of vSphere 6.0 and vSphere 5.5 U3)
The NSX Manager license deployment includes VDS
VDS is available on all hosts managed by the vCenter where NSX Manager is installed
Deploying VDS continues to be a prerequisite for NSX
VDS needs to be configured on all hosts that require NSX services
NSX can be used on hosts that have different vSphere editions
NSX hosts can run any combination of vSphere editions
NSX services can span hosts that run different editions of vSphere
The VDS entitlement with NSX is only for use with NSX, not for standalone use
Customers who want to run VDS independently of NSX need to purchase vSphere Enterprise Plus entitlements for the relevant hosts
EULA-enforced
NSX Support for vSphere 6.0
NSX builds on top of industry-first hypervisor technologies

Overview
Builds upon the next generation of vMotion innovation:
Support for Cross-VC vMotion over VXLAN
Dedicated TCP/IP stack for vMotion
Network I/O Control v3 support for NSX Logical Switches
NSX plug-in to the vSphere Web Client, with improved browser support, responsiveness and performance gains

Benefits
Leverage existing investments and skillsets
Builds upon the foundation of the Software-Defined Data Center
NSX 6.2 Scalability

Overview
Scale updates:
The number of clusters increases from 12 to 16
Hypervisors per NSX increase from 384 to 512
Cross-VC NSX validated with up to 8 vCenter Servers

Hypervisors: 512
Clusters: 16
vCenter Servers: 8
Rules per VM: 3500
VIPs per LB: 1024 (X-Large)
Pools per LB: 1024 (X-Large)
Servers per Pool: 3072 (X-Large)

Benefits
Increased scale and performance

Numbers are accurate as of 10/18/2016
Summary & Q&A
NSX 6.2: Accelerating NSX adoption & driving new opportunities

Expand NSX control within and across data centers
Cross-vCenter Networking and Security
Disaster Recovery Solution with NSX; Multi-Site Data Center Solutions
Cross-VC vMotion over VXLAN, with routing and security

Operational excellence: accelerating the path to production
Troubleshooting: Central CLI, Traceflow
NSX API enhancements
Health check of NSX communication channels
Improved IP discovery mechanisms for virtual machines
Logical routing enhancements
Logical load-balancing enhancements

Ecosystem with NSX: delivering solutions to customers with the ecosystem
Bridging + routing enhancements
NSX/F5 integration enhancements
Splunk integration
vRealize Operations Suite
HyTrust integration
Intel-McAfee

Questions
