Table of Contents
Audience
Scope
Popular CLI commands (user name for CLI login is admin)
Shutdown Automation
10GbE Cable Detail
Short name is SFP+. Long version is below (from the hardware user guide).
Audience:
This document is intended for new customers' storage system administrators or Nimble reseller system engineers who wish to perform a self-installation, or who need advance information about the installation process.
Scope:
This document provides the basic information to self-install a Nimble Storage array. This document is not intended to replace other Nimble documentation but merely to augment it and facilitate a quick and easy self-startup. This document is not all-inclusive and does not detail every aspect of the operation. Use this document as a quick guide to get the system installed and perform basic verification, then refer to other related documentation such as the Nimble User Guide for more in-depth information on use and management.
Understanding IPs
The array management IP address: The array management IP address is a "floating" address, and is assigned to the physical port by the system. It is the IP address used to access the array's GUI and CLI, and it is used by whichever controller holds the active role.
Best Practice: This IP address can be used for data, but this is not desirable: specific target IP addresses of the
interface pairs should be used instead.
The target discovery IP address: The discovery IP address is also a "floating" address, and is assigned to the
physical port by the system. It is the IP address used to allow the iSCSI initiator to discover the array. This can be
the same as the management IP address or a different IP address.
Best Practice: This IP address can be used for data, but this is not desirable: specific target IP addresses of the
interface pairs should be used instead.
The data IP addresses: Each interface pair in the array (up to a total of six on the CS220/CS240/CS260/CS420/CS440/CS460, four on the CS210, or two on the CS220G/CS240G/CS260G/CS420G/CS440G/CS460G) can be assigned an IP address and subnet. Both members of the interface pair use this address, but because of the active-standby nature of the controllers, the IP address is never used simultaneously by both. These IP addresses cannot be the same as the Management/Target Discovery IP addresses.
These are the IP addresses to which iSCSI initiators should be assigned access. By specifying a physical target
portal using your Advanced iSCSI initiator feature, you can ensure that different applications and volumes use
different ports, avoiding a situation in which all data and management traffic goes through one physical port
while leaving the others unused. Instruct your iSCSI initiator to use a specific interface pair for data traffic to and
from volumes.
The two controller diagnostic IP addresses: One static IP address is assigned to each controller to ensure that
controllers can be accessed regardless of any network problem that might occur. This allows direct access to the
controller at any time for diagnostic purposes: regardless of the state of the array or the controller, access to the
controller is always available. The system determines the physical port to which these IP addresses are
associated.
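To summarize the address roles above, here is a minimal sketch in Python that checks the stated constraints on an example IP plan. The specific addresses and the two-subnet layout are assumptions for illustration, not requirements:

```python
import ipaddress

# Example IP plan (hypothetical addresses; roles as described above).
mgmt_ip = ipaddress.ip_address("192.168.10.50")       # floating, GUI/CLI access
discovery_ip = ipaddress.ip_address("192.168.1.100")  # floating, iSCSI discovery
data_ips = [ipaddress.ip_address(a) for a in          # one per interface pair
            ("192.168.1.101", "192.168.1.102")]
diag_ips = [ipaddress.ip_address(a) for a in          # one static per controller
            ("192.168.10.53", "192.168.10.54")]

# Data IPs must not collide with the management/discovery addresses.
for ip in data_ips:
    assert ip not in (mgmt_ip, discovery_ip), f"{ip} clashes with a floating IP"

# Exactly one diagnostic address per controller (two controllers).
assert len(diag_ips) == 2

print("IP plan OK")
```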
These are some basic guidelines for selecting a good switch for the iSCSI LAN between your Nimble array and the servers. It's best to have a separate physical redundant switch pair for your iSCSI traffic, but if you do choose to run this traffic on your campus LAN you should put that traffic in its own VLAN to keep it separate. Switch selection basics:
It should be a good-quality layer 2 or layer 3 managed switch; stacked switches are preferred, or inter-switch links (ISLs) at least.
It needs to have dedicated port buffer space of at least 512K for each port.
It needs to have hardware flow control (this is the most important requirement).
It is good if the switch can support jumbo frames at the same time as flow control (many HP ProCurve models cannot do both; see the HP 2810-24G note below).
The backplane needs to be able to handle the full bandwidth of the data flowing through the switch, i.e., if the Nimble has 6 data ports and can handle 720MB/sec, the switch backplane should be able to do better.
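As a concrete check of the backplane rule above, a small sketch (the switch backplane figure is a hypothetical spec-sheet value, not from this document):

```python
# Backplane sizing check for the guideline above: the switch backplane
# should handle at least the array's aggregate data throughput.
data_ports = 6            # six data ports (e.g., a CS220-class array)
array_throughput = 720    # MB/sec total, figure from the text

switch_backplane = 960    # MB/sec -- hypothetical switch spec-sheet value
assert switch_backplane > array_throughput, "backplane undersized"
print("per-port average:", array_throughput / data_ports, "MB/sec")
```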
These are some known good switch options in no particular order. This is not an exclusive list.
1GbE Switches:
Juniper EX3300
Juniper EX4200
Brocade ICX-6610
Brocade VDX-6710
Cisco 3750X
HP 2810-24G - you cannot have Jumbo Frames and Flow Control on simultaneously.
HP 2910
10GbE Switches:
Juniper EX4500
Brocade VDX-6720
Juniper EX2500
Juniper EX4200
Pre-Install Checklist
Fill in this pre-install checklist and return it to your installing Nimble SE or reseller SE. The Nimble Array Setup Manager wizard will ask you to input the Phase 1 information; then you will be redirected to a browser session to log in, where you will complete phases 2 through 5. Having this information already determined and readily at hand will speed up and simplify the process. It is much easier to get the install right the first time than it is to change configuration items later. The whole process should take less than 15 minutes.
The system is case sensitive and will not accept spaces or underscores; dashes are OK.
Phase 1                   Example            Notes
Array Name                Nimble-01
Management IP             192.168.10.50
Subnet                    255.255.255.0
Default Gateway(s)        192.168.10.1
Password                  P@ssw0rd

Phase 2 (record the MTU, Standard or Jumbo, for each interface)
                          Example            Notes
Discovery IP              192.168.1.100
Eth 1                     192.168.10.51
Eth 2 **                  192.168.10.52
Eth 3 or TG1              192.168.1.101
Eth 4 or TG2              192.168.1.102
Eth 5 (not CS210 or Gs)   192.168.1.103
Eth 6 (not CS210 or Gs)   192.168.1.104
Controller A IP           192.168.10.53      For support use
Controller B IP           192.168.10.54      For support use

Phase 3                   Example            Notes
Domain Name               bs.com             Fully Qualified
DNS Server                4.2.2.2

Phase 4                   Example            Notes
Time Zone                 Europe/London
NTP Server                192.168.10.5       On management subnet

Phase 5                   Example            Notes
SNMP Server               snmp.bs.com
Email From                Nimble-01@bs.com
Email To                  IT-Admin@bs.com

Subnets used in the examples: 192.168.10.x is the Public/Campus (management) network; 192.168.1.x is the iSCSI (data) network.
**The install wizard assumes two management ports, so if you are configuring only one management port, skip the eth2 port and change it back to a data port in the GUI after the wizard completes.
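Since the system is case sensitive and rejects spaces and underscores in names (dashes are OK), you can pre-validate your checklist entries with a sketch like this. The field names and examples come from the table above; the regex is an assumption based on the stated rules, not the array's exact validation logic:

```python
import ipaddress
import re

# Allowed: letters, digits, dashes -- no spaces or underscores (per the text).
NAME_RE = re.compile(r"^[A-Za-z0-9-]+$")

checklist = {
    "Array Name": "Nimble-01",
    "Management IP": "192.168.10.50",
    "Subnet": "255.255.255.0",
    "Default Gateway(s)": "192.168.10.1",
}

assert NAME_RE.match(checklist["Array Name"]), "invalid array name"
for field in ("Management IP", "Subnet", "Default Gateway(s)"):
    ipaddress.ip_address(checklist[field])  # raises ValueError if malformed

print("checklist entries look valid")
```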
Terminology
Sibling interfaces
o Interfaces on same controller
o E.g., ControllerA.eth1 and ControllerA.eth2
Mirrored interfaces
o Pair of corresponding interfaces across controllers
o E.g., ControllerA.eth1 and ControllerB.eth1
Cross link
o Link between switches
Connection from host to array
o Direct: does not go through the cross link
o Cross: goes through the cross link
Recommended connections
o Are direct under recommended network connectivity
o Conn1: Host.eth1 to Array.eth1
o Conn2: Host.eth2 to Array.eth2
Non-recommended connections
o Are cross under recommended network connectivity
o Conn3: Host.eth1 to Array.eth2
o Conn4: Host.eth2 to Array.eth1
[Diagram: switches S1 and S2 joined by a cross link. Controller A and Controller B each connect eth1 to S1 and eth2 to S2, with active and standby links indicated.]
The easiest way to get this right is to connect the even ports to the even switch number and the odd ports to the
odd switch number. i.e., eth1 to switch 1, eth2 to switch 2, eth3 to switch 1, eth4 to switch 2, for each
controller.
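The even/odd wiring rule above, together with the direct/cross classification from the terminology list, can be sketched as a tiny lookup (port and switch numbering follow the convention in the text):

```python
def switch_for(eth: int) -> int:
    """Odd eth ports go to switch 1, even eth ports to switch 2."""
    return 1 if eth % 2 == 1 else 2

def connection_kind(host_eth: int, array_eth: int) -> str:
    """Direct connections stay on one switch; cross ones traverse the cross link."""
    return "direct" if switch_for(host_eth) == switch_for(array_eth) else "cross"

assert switch_for(1) == 1 and switch_for(2) == 2 and switch_for(3) == 1
assert connection_kind(1, 1) == "direct"   # Conn1: Host.eth1 to Array.eth1
assert connection_kind(1, 2) == "cross"    # Conn3: Host.eth1 to Array.eth2
print("wiring rule verified")
```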
VMware Settings
5) Configure Round Robin (ESXi 5 only): To set the default path policy to Round Robin for all new Nimble volumes, type the following, all on one line:
o esxcli nmp satp addrule --psp VMW_PSP_RR --satp VMW_SATP_ALUA --vendor Nimble
For 1.3.x code or newer, configure this via the GUI under the Administration tab instead.
Prerequisites
o Static IP. Set your laptop's IP address to an address on the same subnet your array management interfaces will be on.
o Have your array controllers A and B correctly cabled to your switch fabric per the previous drawings.
o Complete all your switch configuration for flow control, jumbo frames, spanning tree, unicast, etc.
o Install the Windows Integration Tools (WIT) on the laptop or server you're installing with.
[Setup wizard screens: Enter Management IP]
10) Log into the GUI using the password you just created. Note: for CLI login via an SSH connection, the username is admin and the password is the same as for the GUI.
11) Select the type of network topology you intend to use. The typical and most desirable is a separate management and data subnet, but you can also set up a flat network where data and management are on the same subnet.
12) Set your IP addresses per the pre-installation checklist you filled out.
13) Set your controller support IP addresses. These should be on the management subnet and are used by Nimble support.
15) Select your time zone and enter your NTP server.
16) Set up your email alerts. Aliases are recommended. Make sure you have the following addresses in your Exchange server for unauthenticated relay:
Management IP: X.X.X.X
Ctrlr-A IP: X.X.X.X
Ctrlr-B IP: X.X.X.X
17) Click Finish and make note of the information regarding firewall ports on the next screen.
18) Your system is now functional. Let's set up the AutoSupport call home capabilities. Go to the Administration tab and select the AutoSupport/HTTP Proxy line.
21) Select Test AutoSupport Settings. The test will run, and you should get a passing indicator for each line. If you do not, check your firewall settings (see tables below) to make sure the proper ports are open for the management IP and the controller support IPs. Re-test until these all pass. If you have trouble, contact Nimble support so they can help you: (877) 364-6253 x2.
22) Once the tests have passed, select Send AutoSupport and verify that you received an AutoSupport test email. If you did not, check your email alert settings and relays.
Volume Creation
Your system is now ready to create volumes and map to servers. Please refer to other documents listed below
for details on these steps:
Nimble User Guide: this is embedded in the GUI firmware and can be located under the Help button
VMware Integration Guide
If you are going to be connecting to a VMware environment refer now and complete the steps in the VMware
integration guide.
1) Select the Management tab, then the Initiator Group line:
a. Create a new initiator group. Give the initiator group a name that makes sense to you based on what servers it includes.
b. Add the IQN numbers (initiators) to the group for each of the servers that will mount the volume you create. Note: this should be one server to one volume unless it's a clustered volume like a VMware or Hyper-V datastore. With clustered volumes you will have multiple hosts per given volume.
2) Select the Management tab, then the Volume line:
a. Select the New Volume button and create a new volume by following the GUI wizard steps.
b. In the process select or create a performance profile that matches your volume type, i.e.,
VMware ESX, or matches the block size of the application that will be using the volume.
c. Allow access to the volume via an Initiator group and select the initiator group you created in
step #1.
d. Do not set a protection scheme at this time; choose none.
3) Now refer to the appropriate documentation on how to have your Operating system attach, mount, and
format this volume.
After connecting a volume to a server, verify that you are making the proper number of connections. This will help ensure that you have your networking and multipathing set up properly.
In the array navigate to: Monitor >> Connections
There is one row for each volume/initiator combination.
Verification Formula:
Note:
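Since the Connections page shows one row per volume/initiator combination, a rough expected-row-count check might look like the sketch below. This is illustration only (the volume names and IQNs are hypothetical, and the original document's exact verification formula is not reproduced here):

```python
# One row per (volume, initiator) combination on Monitor >> Connections.
volumes = {
    "datastore1": ["iqn.1998-01.com.vmware:esx1",    # hypothetical IQNs
                   "iqn.1998-01.com.vmware:esx2"],
    "sqldata":    ["iqn.1991-05.com.microsoft:sql1"],
}

expected_rows = sum(len(initiators) for initiators in volumes.values())
print("expected rows:", expected_rows)
```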
Popular CLI commands (user name for CLI login is admin)
halt --array (shuts down the array in a safe and orderly manner and powers it off)
stats --perf sys --latency diff (plots out latency)
array info (lists array configuration information)
vmwplugin --unregister --username adminname --password adminpassword --server vcenterhost (unregisters the VMware plug-in)
vol --list (lists volumes)
vol --info volname (info on the volume name given)
Shutdown Automation
To automate a Nimble array shutdown, you can create a simple script using private SSH keys, PuTTY, and plink.exe on a Windows host. The script just calls this:
plink.exe <session name> halt --array
where <session name> is a PuTTY session configured with the Nimble array hostname and private SSH key information.
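A minimal sketch of the script described above, written in Python for illustration. "nimble01" is a hypothetical saved PuTTY session name, and the command is only printed here, not executed:

```python
import subprocess  # only needed if you actually run the command

SESSION = "nimble01"  # hypothetical PuTTY session (array hostname + private key)

# plink.exe's -batch flag disables interactive prompts; "halt --array" is the
# Nimble CLI command that shuts the array down cleanly and powers it off.
cmd = ["plink.exe", "-batch", SESSION, "halt --array"]
print(" ".join(cmd))

# To actually run it on the Windows host (requires plink.exe on PATH):
# subprocess.run(cmd, check=True)
```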
SFP specifications
The following applies to the "G" version of the CS220 and CS240. These models include 10 GigE ports.
About the SR optical transceiver: 10GBASE-SR ("short range") uses the IEEE 802.3 Clause 49 64B/66B Physical Coding Sublayer (PCS) and 850 nm lasers. It delivers serialized data over multi-mode fiber at a line rate of 10.3125 Gbit/s. Over deployed multi-mode fiber cabling, it has a maximum range of between 26 metres (85 ft) and 82 metres (269 ft) depending on cable type. Over new 50 μm, 2000 MHz·km OM3 multi-mode fiber (MMF), it has a maximum range of 300 metres (980 ft).
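The reach figures quoted above can be captured in a small lookup table (values taken directly from the paragraph; treat the cable-type labels as approximate):

```python
# Maximum 10GBASE-SR reach in metres, per the ranges quoted above.
reach_m = {
    "deployed MMF (worst case)": 26,
    "deployed MMF (best case)": 82,
    "OM3 (50 um, 2000 MHz*km)": 300,
}

for cable, metres in reach_m.items():
    print(f"{cable}: up to {metres} m")
```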
The transmitter can be implemented with a vertical-cavity surface-emitting laser (VCSEL) which is low cost and
low power. MMF has the advantage of having lower cost connectors than SMF because of its wider core. OM3 is
now the preferred choice for structured optical cabling within buildings. 10GBASE-SR delivers the lowest cost,
lowest power and smallest form factor optical modules.
Specifications
As shipped, the CS220G/CS240G include two hot-pluggable Finisar SFP+ modules per controller with the following specifications:
Part: FTLX8571D3BCL
Family: Fiber Optics - Transceivers
Data rate: 10 Gbps (9.95 to 10.5)
Wavelength: 850 nm VCSEL laser
Applications: 10GBASE-SR/SW 10G
Voltage, per power supply: 3.3V
Connector type: LC Duplex
Mounting type: SFP+
Power dissipation: < 1W
RoHS: RoHS-6 compliant
Operating temperature: 0°C to 70°C
Maximum link length: 300m on 2000 MHz·km MMF
Alternatively, customers can unplug the Finisar SFP+ module from the 10 GigE port and use copper 10 Gigabit Twinax instead.
Also known as 10GSFP+Cu, 10GBase-CR, or 10GBase-CX1, SFP+ Direct Attach uses a passive twinax cable assembly and connects directly into an SFP+ housing. SFP+ Direct Attach has a fixed-length cable, typically 3 or 5 m in length, and like 10GBASE-CX4 it is low power, low cost, and low latency, with the added advantages of using less bulky cables and of having the small form factor of SFP+. SFP+ Direct Attach is tremendously popular today, with more ports installed than 10GBASE-SR.
Below is a picture of a Twinax cable. It has SFP+ modules permanently attached on both ends of the cable. One end plugs into the switch; the other end plugs into the Nimble array. The distance you can cover with Twinax is only 5 m with a passive Twinax cable, or 10 m with an active Twinax cable.