
Sample ESXi Networking Per Host vSphere Standard Switch

Nutanix Network Configuration - Example


[Diagram: sample per-host ESXi networking with a vSphere Standard Switch on a Nutanix cluster]

Physical network:
- Core Switches: Core SW1 and Core SW2 (Cisco Catalyst 3750 series).
- ToR Switches: SW1 and SW2 (HP ProCurve 1810G-24 Switch, J9450A).
- The hosts uplink to the ToR switches over 10Gbit Ethernet 802.1q trunks.
- Each host or node contains 2x10Gbit and 2x1Gbit NICs plus 1x10/100 IPMI for OOB mgmt. In this example only the 2x10Gbit links (vmnic0 and vmnic1) are used.

Per-host vSphere Standard Switch layout (HostA, HostB, HostC):
- vSwitch0 carries the VM Network, Mgmt, vMotion, PG_UserVMs and PG_NutanixCVM (NTNX CVM) portgroups, with vmnic0 and vmnic1 as uplinks.
- PG_NutanixCVM uses vmnic1 as active and vmnic0 as standby; all other portgroups and VMkernel ports use vmnic0 as active and vmnic1 as standby. Nutanix CVM traffic (IP storage) is therefore separated from other traffic in normal conditions.
- GuestVMs are typically separated by VLANs/Portgroups (PG_UserVMs).
- vSwitchNutanix, with svm-iscsi-pg and vmk-svm-iscsi-pg, is internal to each host (never touch!).

Optional CLI shortcuts that could help with deployment

The following commands are executed from any Nutanix CVM SSH shell after the cluster has been created and is operational:
# Create port groups "PG_NutanixCVM" and "PG_UserVMs" (modify accordingly for your naming standards)
for i in `hostips`; do echo $i; ssh root@$i 'esxcfg-vswitch -A "PG_NutanixCVM" vSwitch0; esxcfg-vswitch -A "PG_UserVMs" vSwitch0';done
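
A quick sanity check (not part of the original shortcuts), assuming the same hostips alias and root SSH access from the CVM, is to list the portgroups on each host and confirm the two new ones exist:
# Optional: confirm the new portgroups exist on every host
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup list | grep -E "PG_NutanixCVM|PG_UserVMs"';done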

# Set NIC teaming (vmnic1 as primary for PG_NutanixCVM; for the other portgroups use vmnic0 as primary)
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover set -a=vmnic1 -p="PG_NutanixCVM" -s=vmnic0,vmnic2,vmnic3';done
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover set -a=vmnic0 -p="PG_UserVMs" -s=vmnic1,vmnic2,vmnic3';done
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover set -a=vmnic0 -p="VM Network" -s=vmnic1,vmnic2,vmnic3';done
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover set -a=vmnic0 -p="Management Network" -s=vmnic1,vmnic2,vmnic3';done
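To confirm the teaming order actually applied, a small check (not in the original) using the matching esxcli get command can be run per portgroup:
# Optional: show the resulting active/standby order for the two custom portgroups
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover get -p "PG_NutanixCVM"; esxcli network vswitch standard portgroup policy failover get -p "PG_UserVMs"';done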

# Unlink vmnic1 and re-link it so the 2 x 10Gbit NICs become an active/active team
for i in `hostips`; do echo $i; ssh root@$i 'esxcfg-vswitch -U vmnic1 vSwitch0; esxcfg-vswitch -L vmnic1 vSwitch0';done
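
If you prefer to stay within esxcli rather than esxcfg-vswitch, an equivalent sketch of the same unlink/re-link of vmnic1 (assuming vSwitch0 and the default uplink names) is:
# Same effect with esxcli: remove and re-add vmnic1 as a vSwitch0 uplink
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard uplink remove -u vmnic1 -v vSwitch0; esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch0';done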

# Verify:
for i in `hostips`; do echo $i; ssh root@$i 'esxcfg-vswitch -l';done
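
It can also help to confirm which vmnics are the 10Gbit ports and to see the vSwitch-level teaming policy; an optional addition (not from the original) using standard esxcli commands:
# Optional: list physical NICs with their link speed, and show vSwitch0's failover policy
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network nic list; esxcli network vswitch standard policy failover get -v vSwitch0';done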

OPTIONAL
# Add VLAN id (e.g. 111) to the PG_UserVMs portgroup - again, modify as you see fit:
for i in `hostips`; do echo $i; ssh root@$i 'esxcfg-vswitch -v 111 -p "PG_UserVMs" vSwitch0';done
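
The same VLAN tag can also be set through esxcli if you prefer; a sketch assuming the example VLAN id 111:
# esxcli equivalent of the VLAN assignment above
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup set --portgroup-name="PG_UserVMs" --vlan-id=111';done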

# Separate the CVM and User VM traffic when everything is healthy:
# Edit each CVM's settings in vCenter and move it to the "PG_NutanixCVM" portgroup (CVMs will use vmnic1 as primary)
# Edit each User VM's settings in vCenter and move it to the "PG_UserVMs" portgroup (User VMs in this PG will use vmnic0 as primary)
# Unlink vmnic2 and vmnic3 - so no 1Gbit at all! (just using the 10Gbit links as active/active as an example)
for i in `hostips`; do echo $i; ssh root@$i 'esxcfg-vswitch -U vmnic2 vSwitch0; esxcfg-vswitch -U vmnic3 vSwitch0';done
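
A quick verification that only the two 10Gbit uplinks remain on vSwitch0 (an optional check, not from the original):
# Optional: vSwitch0 should now list only vmnic0 and vmnic1 as uplinks
for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard list -v vSwitch0';done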

# Re-link vmnic2 and vmnic3 (the 1Gbit NICs) if you want them back
for i in `hostips`; do echo $i; ssh root@$i 'esxcfg-vswitch -L vmnic2 vSwitch0; esxcfg-vswitch -L vmnic3 vSwitch0';done


Note that a 2U Nutanix block can contain from 1 to 4 hosts depending on the model chosen. Adding more hosts can be done with additional 2U chassis. The minimum cluster size is 3 hosts (shown here as HostA, HostB, HostC).
