
Nimble Array Self-Install Document

For 2.x Code Installations

Jason Johnson Sr. Systems Engineer


jason.johnson@nimblestorage.com

Aaron Thomas Account Executive


amthomas@nimblestorage.com

Table of Contents

Audience
Scope
First Steps
Understanding IPs
iSCSI Switch Selection Guidelines
Known Good Switches
Network Best Practices
Pre-Install Checklist
Terminology
Recommended Multi-Switch Connectivity
Windows Timeouts
VMware Settings
Nimble Array Configuration
Volume Creation
Volume Connection Verification
VMware vCenter Plug-In Registration
Port Use and Firewall Considerations
Support Service Levels
Popular CLI Commands
Shutdown Automation
10GbE Cable Detail

Audience:

This document is intended for new customers' storage system administrators, or for Nimble reseller systems engineers, who wish to perform a self-installation or who want an overview of the installation process before it begins.

Scope:

This document provides the basic information needed to self-install a Nimble Storage array. It is not intended to replace other Nimble documentation, but merely to augment it and enable a quick and easy self-startup. It is not all-inclusive and does not detail every aspect of operation. Use this document as a quick guide to get the system installed and perform basic verification, then refer to other related documentation, such as the Nimble User Guide, for more in-depth information on use and management.

First Steps

Before you receive your array:

• Call Nimble Support and request a customer login: (877) 364-6253 x2.
• Once you have a login, sign in to the support site (http://support.nimblestorage.com/download/download.html) and download the following:
  o Latest Release Notes
  o Latest User Guides
  o Latest CLI Reference Guide
  o Windows Integration Toolkit
  o VMware Integration Toolkit (if applicable)
  o Related Best Practice Guides
• Fill in the Pre-Install Checklist later in this document, one copy for each system you are installing, and send the completed form to your Nimble SE and/or your reseller SE.

Understanding IPs

The array management IP address: The management IP address is a "floating" address, assigned to a physical port by the system. It is the IP address used to access the array's GUI and CLI, and it is used by the controller taking the active role.
Best Practice: This IP address can be used for data, but this is not desirable; the specific target IP addresses of the interface pairs should be used instead.
The target discovery IP address: The discovery IP address is also a "floating" address, assigned to a physical port by the system. It is the IP address used to allow the iSCSI initiator to discover the array. It can be the same as the management IP address or a different IP address.
Best Practice: This IP address can be used for data, but this is not desirable; the specific target IP addresses of the interface pairs should be used instead.
The data IP addresses: Each interface pair in the array can be assigned an IP address and subnet: up to six pairs on the CS220/CS240/CS260/CS420/CS440/CS460, four on the CS210, or two on the CS220G/CS240G/CS260G/CS420G/CS440G/CS460G. Both members of an interface pair use the same address, but because of the active-standby nature of the controllers, the IP address is never used by both simultaneously. These IP addresses cannot be the same as the management or target discovery IP addresses.
These are the IP addresses to which iSCSI initiators should be given access. By specifying a physical target portal using your iSCSI initiator's advanced settings, you can ensure that different applications and volumes use different ports, avoiding a situation in which all data and management traffic goes through one physical port while the others sit unused. Instruct your iSCSI initiator to use a specific interface pair for data traffic to and from volumes.
The two controller diagnostic IP addresses: One static IP address is assigned to each controller to ensure that the controllers can be reached regardless of any network problem that might occur. This allows direct access to a controller at any time for diagnostic purposes: regardless of the state of the array or the controller, access to the controller is always available. The system determines the physical port to which these IP addresses are associated.

iSCSI Switch Selection Guidelines

These are some basic guidelines for selecting a good switch for the iSCSI LAN between your Nimble array and the servers. It is best to have a separate, physically redundant switch pair for your iSCSI traffic; if you do choose to run this traffic on your campus LAN, you should place it in its own VLAN to keep it separate. Switch selection basics:

• It should be a good-quality Layer 2 or Layer 3 managed switch; stacked is preferred, with ISLs at the least.
• It needs dedicated port buffer space of at least 512 KB per port.
• It needs hardware flow control (this is the most important requirement).
• It is good if the switch can support jumbo frames at the same time as flow control (many HP ProCurves can do one or the other, but not both).
• The backplane needs to handle the full bandwidth of the data flowing through the switch; i.e., if the Nimble has 6 data ports and can handle 720 MB/sec, the switch backplane should be able to do better.

Known Good Switches

These are some known-good switch options, in no particular order. This is not an exclusive list.

1GbE Switches:
• Juniper EX3300
• Juniper EX4200
• Brocade ICX-6610
• Brocade VDX-6710
• Cisco 3750X
• HP 2810-24G (you cannot have jumbo frames and flow control on simultaneously)
• HP 2910

10GbE Switches:
• Juniper EX4500
• Brocade VDX-6720
• Juniper EX2500
• Juniper EX4200

Network Best Practices

Best Practice: Do not use Spanning Tree Protocol (STP).
Details: Do not use STP on switch ports that connect to iSCSI initiators or the Nimble storage array network interfaces.

Best Practice: Configure flow control on each switch port.
Details: Configure flow control on each switch port that handles iSCSI connections. If your application server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable flow control on the NICs to obtain the performance benefit.

Best Practice: Disable unicast storm control.
Details: Disable unicast storm control on each switch that handles iSCSI traffic. However, the use of broadcast and multicast storm control is encouraged.

Best Practice: Use jumbo frames when applicable.
Details: Configure jumbo frames on each switch that handles iSCSI traffic. If your server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable jumbo frames on the NICs to obtain the performance benefit (or reduce CPU overhead) and ensure consistent behavior. Do not enable jumbo frames on switches unless jumbo frames are also configured on the NICs.

Best Practice: Turn multicast filtering off.

Best Practice: Turn IP precedence off.
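As a rough illustration only, the settings above might look like the following on a Cisco Catalyst 3750X (one of the known-good switches listed in the previous section). Exact commands vary by platform and IOS version, and the VLAN number and interface name here are hypothetical, so treat this as a sketch to verify against your switch documentation rather than a configuration to paste in:

    ! enable jumbo frames globally (on the 3750X this requires a reload)
    system mtu jumbo 9000
    !
    interface GigabitEthernet1/0/1
     description iSCSI port to Nimble eth1
     switchport access vlan 100            ! dedicated iSCSI VLAN (hypothetical)
     flowcontrol receive desired           ! hardware flow control
     spanning-tree portfast                ! keep STP from delaying/blocking this edge port
     storm-control broadcast level 10.00   ! broadcast storm control is encouraged
     ! note: no unicast storm-control statement, so unicast storm control stays disabled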

Pre-Install Checklist

Fill in this pre-install checklist and return it to your installing Nimble SE or reseller SE. The Nimble Array Setup Manager wizard will ask you to input the Phase 1 information; you will then be redirected to a browser session to log in, where you will complete Phases 2 through 5. Having this information already determined and readily at hand will speed up and simplify the process. It is much easier to get the install right the first time than it is to change configuration items later. The whole process should take less than 15 minutes.
Array names are case sensitive; the system will not accept spaces or underscores, but dashes are OK.
Phase 1 (entered in the Nimble Array Setup Manager wizard)
• Array Name — Example: Nimble-01
• Management IP — Example: 192.168.10.50 (on the management subnet)
• Subnet — Example: 255.255.255.0
• Default Gateway(s) — Example: 192.168.10.1
• Password — Example: P@ssw0rd (8 characters minimum)

Phase 2 (for each interface, record the required info: its type — Data, Management, or Both — its MTU — Standard or Jumbo — and its IP address)
• Discovery IP — Example: 192.168.1.100 (iSCSI network)
• Eth 1 — Example: 192.168.10.51 (public/campus network)
• Eth 2 ** — Example: 192.168.10.52 (public/campus network)
• Eth 3 or TG1 — Example: 192.168.1.101 (iSCSI network)
• Eth 4 or TG2 — Example: 192.168.1.102 (iSCSI network)
• Eth 5 (not on the CS210 or G models) — Example: 192.168.1.103 (iSCSI network)
• Eth 6 (not on the CS210 or G models) — Example: 192.168.1.104 (iSCSI network)
• Controller A diagnostic IP — Example: 192.168.10.53 (for support use)
• Controller B diagnostic IP — Example: 192.168.10.54 (for support use)

Phase 3
• Domain Name — Example: bs.com (fully qualified)
• DNS Server — Example: 4.2.2.2 (on the management subnet)

Phase 4
• Time Zone — Example: Europe/London
• NTP Server — Example: 192.168.10.5

Phase 5
• SNMP Server — Example: snmp.bs.com
• Email From — Example: Nimble-01@bs.com
• Email To — Example: IT-Admin@bs.com (make sure relay is set)

** The install wizard assumes two management ports, so if you are only configuring one management port, skip the eth2 port and change it back to a data port in the GUI after the wizard installation.

Terminology

• Sibling interfaces
  o Interfaces on the same controller
  o E.g., ControllerA.eth1 and ControllerA.eth2
• Mirrored interfaces
  o A pair of corresponding interfaces across controllers
  o E.g., ControllerA.eth1 and ControllerB.eth1
• Cross link
  o A link between switches
• Connection from host to array
  o Direct: does not go through the cross link
  o Cross: goes through the cross link
• Recommended connections (direct under recommended network connectivity)
  o Conn1: Host.eth1 to Array.eth1
  o Conn2: Host.eth2 to Array.eth2
• Non-recommended connections (cross under recommended network connectivity)
  o Conn3: Host.eth1 to Array.eth2
  o Conn4: Host.eth2 to Array.eth1

Recommended Multi-Switch Connectivity

[Diagram: the host's eth1 connects to switch S1 and its eth2 to switch S2; S1 and S2 are joined by a cross link; each switch then connects to eth1/eth2 on Controller A and Controller B, with active and standby links shown.]

The easiest way to get this right is to connect the even ports to the even switch number and the odd ports to the odd switch number: i.e., eth1 to switch 1, eth2 to switch 2, eth3 to switch 1, eth4 to switch 2, and so on for each controller.


Windows Timeouts (set to 45 seconds)

1) Changing iSCSI timeouts on Windows
o Update the registry settings on your Windows server. For more information, see the Microsoft iSCSI guide at http://technet.microsoft.com/en-us/library/dd904411(WS.10).aspx
o Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<instance-number>\Parameters:
   - MaxRequestHoldTime set to 60 seconds (0x3C)
   - LinkDownTime set to 45 seconds (0x2D)
o Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk:
   - TimeOutValue set to 60 seconds (0x3C)
o Set up MPIO on your server and make sure it is active.
o A reboot of the server is required for both the registry edits and MPIO to take effect, so do this ahead of time so as not to delay the installation.
o Set up your email gateway/relay prior to the installation.
o See page 24 of the Nimble Windows Integration Toolkit documentation.
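As a hedged sketch, the registry values above could be set from an elevated command prompt as shown below. The <instance-number> subkey differs per system (and there may be more than one), so check the path in regedit first rather than running this blindly:

    rem iSCSI initiator timeouts (substitute the real instance number, e.g. 0001)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<instance-number>\Parameters" /v MaxRequestHoldTime /t REG_DWORD /d 0x3C /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<instance-number>\Parameters" /v LinkDownTime /t REG_DWORD /d 0x2D /f
    rem disk timeout
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 0x3C /f
    rem reboot afterwards so the registry edits and MPIO take effect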

VMware Settings

2) Changing iSCSI timeouts on VMware
o None needed.

3) Changing iSCSI timeouts on Linux (iscsid.conf)
o For Linux guests attaching iSCSI volumes, set:
o node.conn[0].timeo.noop_out_interval = 5
o node.conn[0].timeo.noop_out_timeout = 12

4) Configure Round Robin, ESX 4.1 only (4.0 will be different). To set the default to Round Robin for all new Nimble volumes, type the following, all on one line:
o esxcli nmp satp addrule --psp VMW_PSP_RR --satp VMW_SATP_ALUA --vendor Nimble

5) Configure Round Robin, ESXi 5 only. To set the default to Round Robin for all new Nimble volumes, type the following, all on one line:
o esxcli storage nmp satp rule add --psp=VMW_PSP_RR --satp=VMW_SATP_ALUA --vendor=Nimble

6) Configure the vCenter Plug-In:
o For pre-1.3.x firmware, use the CLI command:
   vmwplugin --register --username <arg> --password <arg> --server <arg>
   *See the Nimble VMware Integration Guide.
   *To get a command line, enable support mode (turn on: Configuration > Security Profile > Properties > Remote Tech Support (SSH)).
o For 1.3.x code or newer, configure via the GUI under the Administration tab.
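For example, on ESXi 5 you can confirm the Round Robin rule took effect, and then register the plug-in; the credentials and vCenter hostname below are hypothetical placeholders reusing the example domain from the checklist, so substitute your own:

    esxcli storage nmp satp rule list | grep Nimble
    vmwplugin --register --username administrator --password P@ssw0rd --server vcenter.bs.com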

Nimble Array Configuration via the Nimble Array Setup Manager

Prerequisites
o Static IP: set your IP address to the same subnet that your array management will be on.
o Have your array controllers A & B correctly cabled to your switch fabric per the previous drawings.
o Complete all your switch configuration for flow control, jumbo frames, spanning tree, unicast storm control, etc.
o Install the Windows Integration Toolkit (WIT) on the laptop or server you're installing with.


1) Start the Nimble Array Setup Manager.

2) Select the array to install and click Next.

3) Enter the array name (no spaces or underscores; case sensitive).

4) Set your management IP address, subnet mask, and default gateway.

5) Enter and confirm your array password.

6) Click Finish.


7) You should get a confirmation screen after a few seconds.

8) Click Close and follow the instructions by opening a browser and pointing it to your management IP address.

9) If a browser connection does not open automatically, open a browser and enter the management IP address as the URL. If you get a certificate warning screen, click the option to continue to the site.

10) Log in to the GUI using the password you just created. Note: for CLI login via an SSH connection, the username is admin and the password is the same as for the GUI.

11) Select the type of network topology you intend to use. The typical and most desirable choice is separate management and data subnets, but you can also set up a flat network where data and management share the same subnet.

12) Set your IP addresses per the pre-install checklist you filled out.

13) Set your controller support (diagnostic) IP addresses. These should be on the management subnet and are used by Nimble Support.

14) Enter your domain and DNS servers.

15) Select your time zone and enter your NTP server.

16) Set up your email alerts. Aliases are recommended. Enter your email alert addresses and your email server, and make sure the following addresses are permitted unauthenticated relay on your Exchange server:
   - Management IP: X.X.X.X
   - Ctrlr-A IP: X.X.X.X
   - Ctrlr-B IP: X.X.X.X

17) Click Finish and make note of the information regarding firewall ports on the next screen.

18) Your system is now functional. Let's set up the AutoSupport call-home capability. Go to the Administration tab and select the AutoSupport/HTTP Proxy line.

19) Check the Send AutoSupport box.

20) Check the Enable Secure Tunnel box (this is optional but a good idea).

21) Select Test AutoSupport Settings. The test will run, and you should get a passing indicator for each line. If you do not, check your firewall settings (see the Port Use and Firewall Considerations section below) to make sure the proper ports are open for the management IP and the controller support IPs. Re-test until these all pass. If you have trouble, contact Nimble Support so they can help you: (877) 364-6253 x2.

22) Once the tests have passed, select Send AutoSupport and verify that you received an AutoSupport test email. If you did not, check your email alert settings and relays.

Volume Creation

Your system is now ready for you to create volumes and map them to servers. Please refer to the documents listed below for details on these steps:

• Nimble User Guide — this is embedded in the GUI firmware and can be found under the Help button
• VMware Integration Guide

If you are going to be connecting to a VMware environment, refer now to the VMware Integration Guide and complete the steps in it.

1) Select the Management tab, then the Initiator Group line:
   a. Create a new initiator group. Give the initiator group a name that makes sense to you based on what servers it includes.
   b. Add the IQNs (initiators) to the group for each of the servers that will mount the volume you create. Note: this should be one server to one volume unless it is a clustered volume such as a VMware or Hyper-V datastore. With clustered volumes you will have multiple hosts per given volume.
2) Select the Management tab, then the Volume line:
   a. Select the New Volume button and create a new volume by following the GUI wizard steps.
   b. In the process, select or create a performance profile that matches your volume type (e.g., VMware ESX) or matches the block size of the application that will be using the volume.
   c. Allow access to the volume via an initiator group, and select the initiator group you created in step 1.
   d. Do not set a protection scheme at this time; choose none.
3) Now refer to the appropriate documentation on how to have your operating system attach, mount, and format this volume. (A hedged Windows example follows.)
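As a hedged example of the host-side attach on Windows, the built-in iscsicli tool can discover the array and log in to the target. The discovery IP below is the placeholder from the checklist examples, and the target IQN is whatever ListTargets reports for your volume; most administrators will use the iSCSI Initiator control panel instead:

    iscsicli QAddTargetPortal 192.168.1.100
    iscsicli ListTargets
    iscsicli QLoginTarget <target-iqn-from-ListTargets>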


Volume Connection Verification

After connecting a volume to a server, verify that you are making the proper number of connections. This will help ensure that your networking and multipathing are set up properly.

In the array GUI, navigate to Monitor >> Connections. You should see one row for each volume/initiator combination.

Verification formula (expected connection count):

    (Physical hosts × Physical ports per host × Array data ports) / (Count of subnets × Switches per subnet)

Note:
• Two switches with the same VLAN/subnet trunked together count as 1 switch.
• Two switches with the same VLAN/subnet NOT trunked count as 2 switches.
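A worked example with hypothetical numbers: two hosts, each with 2 iSCSI NICs, connected to an array with 6 data ports over a single iSCSI subnet spanning two trunked switches (which count as 1 switch):

    (2 hosts × 2 ports per host × 6 array data ports) / (1 subnet × 1 switch) = 24 connections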

VMware vCenter Plug-In Registration

In 1.3.x firmware you can register the vCenter plug-in via the GUI. Navigate to Administration >> Plugins and fill in the vCenter host information and credentials.

Port Use and Firewall Considerations

(See the User Guide for more details.)

[The port/firewall tables from the original document are not reproduced here; refer to the User Guide for the full list of required ports.]

**Also allow ICMP echo requests to nsdiag.nimblestorage.com.

Support Service Levels


Service Levels
• P1: telephone response in less than 30 minutes (current average is <5 min); immediate escalation to Engineering.
• P2: response in less than 4 business hours.
• P3: response in less than 8 business hours.

Severity Definitions
• P1: not serving data; severe performance degradation; single controller not operational.
• P2: performance degradation; intermittent SW faults; network degradation.
• P3: problem or defect causing minimal business impact; request for information.

Popular CLI Commands

(The user name for CLI login is admin.)

• halt --array — shuts down the array in a safe and orderly manner and powers it off
• stats --perf sys --latency diff — plots out latency
• array --info — lists array configuration information
• vmwplugin --unregister --username <adminname> --password <adminpassword> --server <vcenterhost> — unregisters the VMware plug-in
• vol --list — lists volumes
• vol --info <volname> — shows info on the volume named

Shutdown Automation

To automate a Nimble array shutdown, you can create a simple script using private SSH keys, PuTTY, and plink.exe on a Windows host. The script just calls:

    plink.exe <session name> halt --array

where <session name> is a PuTTY session configured with the Nimble array hostname and the private SSH key information.

10GbE Cable Detail

The short name is SFP+. The long version is below (from the hardware user guide).

SFP specifications

The following applies to the "G" version of the CS220 and CS240. These models include 10 GigE ports.

About the SR optical transceiver

10GBASE-SR ("short range") uses the IEEE 802.3 Clause 49 64B/66B Physical Coding Sublayer (PCS) and 850 nm lasers. It delivers serialized data over multi-mode fiber at a line rate of 10.3125 Gbit/s. Over deployed multi-mode fiber cabling, it has a maximum range of between 26 metres (85 ft) and 82 metres (269 ft), depending on cable type. Over new 50 µm 2000 MHz·km OM3 multi-mode fiber (MMF), it has a maximum range of 300 metres (980 ft).

The transmitter can be implemented with a vertical-cavity surface-emitting laser (VCSEL), which is low cost and low power. MMF has the advantage of having lower-cost connectors than SMF because of its wider core. OM3 is now the preferred choice for structured optical cabling within buildings. 10GBASE-SR delivers the lowest-cost, lowest-power, and smallest-form-factor optical modules.
Specifications

As shipped, the CS220G/CS240G include two hot-pluggable Finisar SFPs per controller with the following specifications:

Part: FTLX8571D3BCL
Family: Fiber Optics - Transceivers
Data rate: 10 Gbps (9.95 to 10.5)
Wavelength: 850 nm VCSEL laser
Applications: 10GBASE-SR/SW 10G
Voltage, per power supply: 3.3 V
Connector type: LC Duplex
Mounting type: SFP+
Power dissipation: < 1 W
RoHS: RoHS-6 compliant
Operating temperature: 0°C to 70°C
Maximum link length: 300 m on 2000 MHz·km MMF


Alternatively, customers can unplug the Finisar SFP+ module from the 10 GigE port and use copper 10 Gigabit Twinax.

Also known as 10GSFP+Cu, 10GBase-CR, or 10GBase-CX1, SFP+ Direct Attach uses a passive twinax cable assembly and connects directly into an SFP+ housing. SFP+ Direct Attach has a fixed-length cable, typically 3 or 5 m long, and like 10GBASE-CX4 it is low power, low cost, and low latency, with the added advantages of using less bulky cables and having the small form factor of SFP+. SFP+ Direct Attach is tremendously popular today, with more ports installed than 10GBASE-SR.

A Twinax cable has SFP+ modules permanently attached at both ends. One end plugs into the switch; the other end plugs into the Nimble array. The distance you can cover with Twinax is only 5 m with a passive Twinax cable, or 10 m with an active Twinax cable.

