TABLE OF CONTENTS
1 INTRODUCTION ............................................................................................................................... 4
1.1 Why clustered Data ONTAP? .................................................................................................................... 4
1.2 Lab Objectives.......................................................................................................................................... 5
1.3 Prerequisites ............................................................................................................................................ 6
1.4 How To Use This Lab Guide ..................................................................................................................... 6
1.4.1 The Callout Conventions Used In This Lab Guide..................................................................................... 6
1.5 Lab Architecture ....................................................................................................................................... 8
1.6 Accessing the Command Line ................................................................................................................... 9
Appendix 1 Using the clustered Data ONTAP Command Line ..................................................... 194
References ......................................................................................................................................... 196
Version History .................................................................................................................................. 197
Lab on Demand NetApp Introduction Lab for clustered Data ONTAP 8.2 v1.1
LIST OF TABLES
Table 1) Lab Host Credentials.................................................................................................................................. 9
Table 2) Lab Controller Credentials .......................................................................................................................... 9
Table 3) Preinstalled NetApp Software ..................................................................................................................... 9
LIST OF FIGURES
Figure 1) Intro Lab Architecture................................................................................................................................ 8
1 INTRODUCTION
This lab introduces the fundamentals of clustered Data ONTAP. In it we will create a 2-node cluster and
configure Windows 2008R2 and Red Hat Enterprise Linux 6.3 hosts to access storage on the cluster
using CIFS, NFS, and iSCSI.
This lab also includes additional storage nodes and hosts beyond those just mentioned. Those additional
components are described later in this guide and will be utilized in an upcoming version of this lab.
1.1 Why clustered Data ONTAP?
A helpful way to start understanding the benefits offered by clustered Data ONTAP is to consider server
virtualization. Before server virtualization, system administrators frequently deployed applications on
dedicated servers in order to maximize application performance and to avoid the instabilities often
encountered when combining multiple applications on the same operating system instance. While this
design approach was effective, it also had the following drawbacks:
- It does not scale well: adding new servers for every new application is extremely expensive.
- It is inefficient: most servers are significantly underutilized, meaning that businesses are not
extracting the full benefit of their hardware investment.
Server virtualization directly addresses these limitations by decoupling the application instance
from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware,
meaning that businesses can now consolidate their server workloads to a smaller set of more effectively
utilized physical servers. In addition, the ability to transparently migrate running virtual machines across a
pool of physical servers enables businesses to reduce the impact of downtime due to scheduled
maintenance activities.
Clustered Data ONTAP brings these same benefits and many others to storage systems. As with server
virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a
single logical cluster that can non-disruptively service multiple storage workload needs. With clustered
Data ONTAP you can:
- Combine different types and models of NetApp storage controllers (known as nodes) into a
shared physical storage resource pool (referred to as a cluster).
- Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on
the same storage cluster.
- Consolidate various storage workloads to the cluster. Each workload can be assigned its own
Storage Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its
own data volumes, LUNs, CIFS shares, and NFS exports.
- Use Quality of Service (QoS) capabilities to manage resource utilization between storage
workloads.
- Non-disruptively migrate live data volumes and client connections from one cluster node to
another.
- Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively
removed from the cluster, meaning that you can non-disruptively scale a cluster up and down
during hardware refresh cycles.
- Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage
workloads. This means that businesses can scale out their SVMs beyond the bounds of a single
physical node in response to growing storage and performance requirements, all non-disruptively.
- Apply software and firmware updates and configuration changes without cluster, SVM, and volume
downtime.
1.2 Lab Objectives
This lab is designed to explore fundamental concepts of clustered Data ONTAP, and utilizes a modular
design to allow you to zero in on the specific topics that are of interest to you. Section 2 is required for all
invocations of the lab because it is a prerequisite for both Section 3 and Section 4. If you are interested in
NAS functionality then complete Section 3 in which you will provision both NFS and CIFS storage. If you
are interested in SAN functionality then complete Section 4 to create and mount an iSCSI LUN for
Windows, an iSCSI LUN for Linux, or both if you so choose.
Here is a more detailed summary of the tasks that you will perform in this lab.

Section 2 (Required):
- Create a cluster.
- Create an aggregate.

Section 3 (Optional, NAS):
- Configure the Storage Virtual Machine for CIFS and NFS access.
- Mount a CIFS share from the Storage Virtual Machine on a Windows client.
- Mount a NFS volume from the Storage Virtual Machine on a Linux client.

Section 4 (Optional, SAN - Estimated Completion Time including all optional subsections = 90 minutes):
- Create a Windows LUN on the volume and map the LUN to an igroup.
- Configure a Windows client for iSCSI and MPIO and mount the LUN.
- Create a Linux LUN on the volume and map the LUN to an igroup.
- Configure a Linux client for iSCSI and multipath and mount the LUN.
This lab includes instructions for completing each of these tasks using either System Manager, NetApp's
graphical administration interface, or the Data ONTAP command line. The end state of the lab produced
by either method is exactly the same, so use whichever method you are most comfortable with.
Note that while switching back and forth between the graphical and command line methods from one
section of the lab guide to another is supported, this guide was not designed to support switching back
and forth between these methods within a single section. For the best experience we recommend sticking
with a single method for the duration of a lab section.
1.3 Prerequisites
This lab introduces clustered Data ONTAP, so this guide assumes no previous experience with
Data ONTAP. The lab does assume some basic familiarity with storage-related
concepts such as RAID, CIFS, NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps
assume that the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed
from the Linux command line and assume a basic working knowledge of the Linux command line. A basic
working knowledge of a text editor such as vi may be useful but is not required.
1.4 How To Use This Lab Guide

This lab guide uses a combination of screenshots and command line examples to present the configuration
steps for this lab. Where possible, both a graphical and a command line procedure are shown for each
section's task, with the graphical option presented first and the command line option
presented second. Each such section is preceded by a gray box with orange lettering to help you identify
the start of the available completion options.
If a section can only be completed using a single method, the orange wording in the gray box will
reflect that fact.
1) Expand the More section and then complete the fields as shown in the screenshot. Note that the
Password you specify here is Netapp1!.
2) Click Add to add the cluster to System Manager.
In many instances we use partial screenshots to focus attention on just the part of a window that is of
interest. In these cases torn edges indicate the parts of the window that have been omitted:
Command line instructions are all delimited by a text box. The actual command you will be entering will
be highlighted in blue, with the command output displayed in black. Many of the commands you will be
entering are long and span more than one line in the lab guide; in these cases the entire command is
actually entered as a single line with a space character separating the text from successive lines as
shown in the lab guide.
cluster1::> volume create -vserver svm1 -volume vol1 -aggregate aggr1
-size 1GB -junction-path /vol1
[Job 34] Job is queued: Create vol1.
[Job 34] Job succeeded: Successful
cluster1::>
So, in the preceding example you would actually enter the command as:
volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 1GB -junction-path /vol1
1.5 Lab Architecture
All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to just your lab session. While we encourage you to follow the
demonstration steps outlined in this lab guide, you are free to deviate from this guide and experiment with
other Data ONTAP features that may interest you. The virtual storage controllers (vsims) used in this lab
offer nearly all the same functionality as physical storage controllers (the main exception right now
being that these vsims don't offer HA support) but at a reduced performance profile, which is why Lab on
Demand labs are not suitable for performance testing. If you need to conduct performance testing, we
recommend that you contact NetApp's Customer Proof of Concept (CPOC) team for assistance.
Table 1 provides a listing of the servers and storage controller nodes in the lab along with their IP
addresses and login credentials.
Table 1) Lab Host Credentials

Hostname    Description                        IP Address(es)  Username            Password
JUMPHOST    Windows 2008R2 Remote Access host  192.168.0.5     Demo\Administrator  Netapp1!
OCUM        OnCommand Unified Manager Server   192.168.0.71    Administrator       Netapp1!
WFA                                            192.168.0.72    admin               Netapp1!
RHEL1                                          192.168.0.12    root                Netapp1!
RHEL2                                          192.168.0.13    root                Netapp1!
DC                                             192.168.0.253   Demo\Administrator  Netapp1!
unjoined1                                      192.168.0.111   admin               Netapp1!
unjoined2                                      192.168.0.112   admin               Netapp1!
unjoined3                                      192.168.0.121   admin               Netapp1!
unjoined4                                      192.168.0.122   admin               Netapp1!
The vsims for this lab are initially delivered unjoined to any cluster, as indicated by the fact that the nodes'
hostnames are all of the form unjoinedN. If you follow the flow outlined in this lab guide, the nodes will be
renamed during the course of the lab as shown in Table 2.
Table 2) Lab Controller Credentials
Hostname     Description           IP Address(es)  Username  Password
cluster1                           192.168.0.101   Admin     Netapp1!
cluster1-01  Previously UNJOINED1  192.168.0.111   admin     Netapp1!
cluster1-02  Previously UNJOINED2  192.168.0.112   admin     Netapp1!
cluster2                           192.168.0.102   Admin     Netapp1!
cluster2-01  Previously UNJOINED3  192.168.0.121   admin     Netapp1!
cluster2-02  Previously UNJOINED4  192.168.0.122   admin     Netapp1!
The NetApp software pre-installed on the various hosts in this lab is listed in Table 3.
Table 3) Preinstalled NetApp Software
Hostname      Description
JUMPHOST      System Manager 3.0RC1, Data ONTAP DSM v4.0 for Windows MPIO, Windows Host Utility Kit v6.0.2
OC-CORE
RHEL1, RHEL2
1.6 Accessing the Command Line

PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in
order to run command line commands. The launch icon for the PuTTY application is pinned to the taskbar
on the Windows host jumphost as shown in the following screenshot; just double-click on the icon to
launch it.
Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. In this
example we are connecting to the unconfigured vsim named unjoined1.
1) By default PuTTY should launch into the Basic options for your PuTTY session display as
shown in the screenshot. If you accidentally navigate away from this view just click on the
Session category item to return to this view.
2) Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to populate the Host Name and Saved Sessions fields for the session you plan to open.
3) Click the Open button to initiate the ssh connection to the selected host. A terminal window will
open and you will be prompted to log into the host. You can find the correct username and
password for the host in the Lab Host and Lab Controller tables in section 1.5.
The clustered Data ONTAP command line supports a number of usability features that make the
command line much easier to use. If you are unfamiliar with those features, you might want to review
Appendix 1 of this lab guide, which contains a brief overview of them.
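As a brief example of the kind of usability feature covered in Appendix 1, the clustered Data ONTAP shell accepts unambiguous abbreviations of command names, and typing ? after a command displays inline help. The commands below are a sketch of that behavior; the cluster1::> prompt assumes the cluster you will create later in this guide, and the command output is omitted here:

cluster1::> volume show
cluster1::> vol show
cluster1::> volume show ?

The first two commands are equivalent because vol is an unambiguous abbreviation of volume; the third lists the parameters that the volume show command accepts.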
2 Cluster Setup
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that have been joined together for the
purpose of serving data to end users. The nodes in a cluster can pool their resources together and can
distribute their work across the member nodes. Communication and data transfer between member
nodes (such as when a client accesses data on a node other than the one actually hosting the data) takes
place over a 10Gb cluster-interconnect network to which all the nodes are connected, while management
and client data traffic passes over separate management and data networks configured on the member
nodes.
Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both
controllers in an HA pair actively host and serve data, but they are also capable of taking over their
partner's responsibilities in the event of a service disruption by virtue of their redundant cable paths to
each other's disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle
greater workloads and to support non-disruptive migrations of volumes and client connections to other
nodes in the cluster resource pool, which means that cluster expansion and technology refreshes can
take place while the cluster remains fully online and serving data.
Data ONTAP 8.2 clusters that will only be serving NFS and CIFS can scale up to a maximum of 24
nodes, although the node limit may be lower depending on the model of FAS controller in use. Data
ONTAP 8.2 clusters that will also host iSCSI and FC can scale up to a maximum of 8 nodes.
At a high level the procedure for creating a cluster with NetApp physical controllers usually involves steps
similar to the following:
1) Cable up all the components (heads, disks, SAN/Ethernet NICs, power, serial console, etc.),
including redundant cable paths for each controller in an HA pair.
2) Connect a PC to the controller's serial console port using a terminal emulation program.
3) Power on the controller hardware and use the terminal connection to initiate a controller boot.
4) If the disks are not already assigned to the controller head, boot the head into maintenance
mode, assign the disks to the controller, then reboot the head to normal mode. If the disks are
already assigned then the maintenance mode boot is skipped and the controller is instead booted
straight into normal mode.
5) At the end of the boot process the cluster setup wizard automatically launches on the serial
console connection and prompts the administrator for the information necessary to create the
cluster.
The controllers used in this lab are vsims (i.e. virtual NetApp storage controllers), meaning that some of
the physical controller capabilities and creation steps we just listed do not apply for this lab. For example,
vsims do not support Fibre Channel and so we will be using iSCSI to demonstrate block storage
functionality. The vsims provided in this lab also do not support HA so we will not be demonstrating HA
failover in this lab.
Lab on Demand has already handled the vsim equivalents of physical controller setup steps 1-4 as a
part of provisioning this lab environment. Step 5 is an activity covered by this lab guide, but since the
vsims used in this lab don't provide a user-accessible serial console port, we instead emulate that console
connection by establishing an ssh connection to the controller node. This workaround required that we
preconfigure the vsims' IP network settings to support ssh connections over the lab network. One
consequence of that preconfiguration is that some of the default values offered during cluster setup
wizard prompting will be different from what you might see when using real hardware. However, the
overall flow of the cluster setup wizard is still the same, and so the vsims in this lab still provide a good
example of how the wizard behaves during cluster setup.
2.1
The cluster setup wizard gathers the data necessary to create a brand new cluster or to add a new node
to a pre-existing cluster. In this exercise we will be creating a brand new cluster named cluster1 using
the vsim named unjoined1. There are two methods available for accomplishing this task in this lab.
Manual (section 2.1.1): Using this method you will manually run the Data ONTAP setup wizard to
create the cluster. The setup wizard is a text driven tool that will prompt you for information such
as the name of the cluster you want to create, your Data ONTAP license keys, the TCP/IP
address information for the cluster and the node, and so on. If you have never run through this
procedure before then we recommend you use this method to complete this lab section. It takes
approximately 10-15 minutes to create a cluster in this manner.
Automatic (section 2.1.2): Using this method you will run a custom script included in this lab that
will automatically run through the setup wizard on your behalf. The script takes 1-2 minutes to
complete.
This section's tasks can only be performed from the command line.
Launch PuTTY as described in section 1.6, and connect to the host unjoined1 using the username
admin and the password Netapp1!. Once you are logged in, run the cluster setup wizard and supply it
the inputs shown in blue in the following example. In places where the bracketed default value provided
by the prompt contains the actual value we desire, we have displayed ACCEPT DEFAULT DO NOT ENTER
THIS TEXT. In places where you see this string, just accept the default value by hitting the Enter key.
As a part of creating a new cluster you are prompted to input the required Data ONTAP license keys. For
your convenience, the license keys shown in the following example command text are the minimum set of
keys needed to complete the scope of this lab guide.
If you want to enter additional keys beyond those listed here, you can find the full set of license keys in
the README.txt file stored on the desktop of the Windows system named jumphost; you can easily copy
& paste these keys one at a time into your PuTTY terminal session when prompted by the cluster setup
wizard. To copy & paste in this manner, open the README.txt file on the desktop using notepad.exe,
highlight a desired license key, enter Ctrl-c to copy the text, then right-click inside the PuTTY window,
which will paste the copied text into the wizard.
The cluster creation script found in section 2.1.2 will populate the full set of license keys.
Port    MTU    IP                 Netmask
e0a     1500   169.254.207.173    255.255.0.0
e0b     1500   169.254.250.79     255.255.0.0
NetApp offers a graphical tool named System Manager that you can use to configure and manage
clusters and storage controllers once you've completed the initial cluster creation. System Manager can
use SNMP to discover new clusters and controllers, but you must first enable SNMP on the cluster. The
following command grants the public SNMP community read-only access on our newly created cluster.
cluster1::> system snmp community add -community-name public -type ro
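If you would like to confirm that the community was created, the system snmp command family also includes a show operation; the following is a sketch, and the exact output format may vary by Data ONTAP release:

cluster1::> system snmp community show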
Ordinarily that completes the steps required to create a cluster, but in our case there is an additional step
needed because we pre-configured the vsim to support ssh console access. That pre-configuration
resulted in the node's name being set to unjoined1, and now we need to manually change the node's
name to the value the cluster setup wizard would have otherwise assigned to it; in this case the
otherwise-assigned name would have been cluster1-01.
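The rename is performed with the node rename command. The following is a sketch of that step, assuming the same -node and -newname parameters used later in this guide when renaming the second node:

cluster1::> node rename -node unjoined1 -newname cluster1-01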
Close your PuTTY connection to the node by entering the command exit at the cluster1::> prompt,
and then proceed directly to section 2.2 to continue the lab.
1) This window shows the output generated by the lab's custom cluster create script. This script
takes 1-2 minutes to complete under normal circumstances, and you will not be prompted for any
inputs to the script other than to accept its completion. When you see the "Press any key to
continue" prompt, simply hit any key to exit the script.
At this point cluster1 has been created and exists in exactly the same state as if you had manually run
the cluster setup wizard as described in section 2.1.1. Continue on to section 2.2.
2.2
Clusters almost always contain an even number of controller nodes, since clusters are usually created
using HA controller pairs. As mentioned previously, this lab uses non-HA vsims for its storage controllers,
a configuration that NetApp does not recommend or support for customers, but one that
is acceptable for the purpose of demonstrating the clustered Data ONTAP capabilities that fall within the
scope of this lab.
There is one exception to the rule that a cluster must always contain an even number of nodes, and that is
the single node cluster, which is a special cluster configuration intended to support small storage
deployments that only need a single physical controller head. The primary noticeable difference between
single node and standard clusters is that a single node cluster does not have a cluster network. Single
node clusters can later be converted into traditional multi-node clusters, and at that point they become subject
to all the standard cluster requirements, like the need to utilize an even number of nodes consisting of HA
pairs. Since we will not be using a single node cluster in this lab, we will not discuss them any further here.
In this section we are going to add a 2nd node to the new cluster we created in section 2.1. As was the
case in that section, there are two methods available in this section for accomplishing this task.
case in that section, there are two methods available in this section for accomplishing this task.
Manual (section 2.2.1): Using this method you will directly run the Data ONTAP setup wizard
to add the node named unjoined2 to the cluster cluster1. Adding a node to an existing cluster
involves much less text entry than does creating a brand new cluster. If you have never run
through this procedure before, then we recommend that you use this method to perform this task,
which takes approximately 5 minutes to complete.
Automatic (section 2.2.2): Using this method you will run a custom script included in this lab that
will automatically run through the setup wizard on your behalf. The script takes approximately 2
minutes to complete.
This section's tasks can only be performed from the command line:
Launch PuTTY as described in section 1.6, and connect to the host unjoined2 using the username
admin and the password Netapp1!. Once you are logged in run the cluster setup wizard and feed it the
input shown in blue. In places where the default value provided by the prompt (in brackets) contains the
value we desire we have instead displayed ACCEPT DEFAULT DO NOT ENTER THIS TEXT. In places
where you see this string just accept the default value by hitting the Enter key.
unjoined2::> cluster setup
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster? {create, join}: join
Existing cluster interface configuration found:
(Note: The Existing cluster interface IP addresses shown here are autogenerated and
may vary in your instance of the lab.)
Port    MTU    IP                 Netmask
e0a     1500   169.254.254.105    255.255.0.0
e0b     1500   169.254.111.119    255.255.0.0
Do you want to use this configuration? {yes, no} [yes]: ACCEPT DEFAULT DO NOT ENTER
THIS TEXT
Step 1 of 3: Join an Existing Cluster
You can type "back", "exit", or "help" at any question.
Enter the name of the cluster you would like to join [cluster1]: ACCEPT DEFAULT DO
NOT ENTER THIS TEXT
Joining cluster cluster1
Network set up ..........
Node check ...
Joining cluster ...
System start up .....................................
Updating volume location database
Starting cluster support services ..
This node has joined the cluster cluster1.
Step 2 of 3: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.
SFO will not be enabled on a non-HA system.
Step 3 of 3: Set Up the Node
You can type "back", "exit", or "help" at any question.
Enter the node management interface port [e0c]: ACCEPT DEFAULT DO NOT ENTER THIS
TEXT
Enter the node management interface IP address [192.168.0.112]: ACCEPT DEFAULT DO
NOT ENTER THIS TEXT
Enter the node management interface netmask [255.255.255.0]: ACCEPT DEFAULT DO NOT
ENTER THIS TEXT
Enter the node management interface default gateway [192.168.0.1]: ACCEPT DEFAULT DO
NOT ENTER THIS TEXT
Cluster setup is now complete.
To begin storing and serving data on this cluster, log in to the command-line
interface (for example, ssh admin@192.168.0.101) and complete the following
additional tasks if they have not already been completed:
- Join additional nodes to the cluster by running "cluster setup" on
those nodes.
- For HA configurations, verify that storage failover is enabled by
running the "storage failover show" command.
- Create a Vserver by running the "vserver setup" command.
In addition to using the CLI to perform cluster management tasks, you can manage
your cluster using OnCommand System Manager, which features a graphical user
interface that simplifies many cluster management tasks. This software is
available from the NetApp Support Site.
Exiting the cluster setup wizard.
cluster1::>
Ordinarily that completes the steps required to join a node to a cluster, but in our case there are a couple
of additional steps needed because we pre-configured the vsim for ssh console access. That
pre-configuration resulted in the node being named unjoined2, and now we need to manually change the
node's name to the value the cluster setup wizard would otherwise have assigned to it; in this case that
otherwise-assigned name would have been cluster1-02.
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
unjoined2             true    true
2 entries were displayed.
cluster1::> node rename -node unjoined2 -newname cluster1-02
[Job 14] Job is queued: Renaming node unjoined2 to cluster1-02.
[Job 14] Job is running.
[Job 14] Job succeeded: Rename of the node "unjoined2" to "cluster1-02" is successful.
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
cluster1-02           true    true
2 entries were displayed.

cluster1::>
We also need to rename the newly joined node's root aggregate to match the value that Data ONTAP
would have otherwise assigned it. We'll discuss root aggregates in section 2.4, so for now let's just enter
the following commands.
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0       7.98GB   381.1MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_unjoined2_0
            7.98GB   382.8MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::> aggr rename
[Job 15] Job is queued:
[Job 15] Job is queued:
[Job 15] Job succeeded:
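For reference, a complete aggr rename invocation takes -aggregate and -newname parameters. The sketch below assumes a target name of aggr0_cluster1_02_0 following Data ONTAP's usual root-aggregate naming; the exact target name is an assumption here, so verify the name Data ONTAP expects in your own lab session before renaming:

cluster1::> aggr rename -aggregate aggr0_unjoined2_0 -newname aggr0_cluster1_02_0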
Close your PuTTY connection to the node by entering the command exit at the cluster1::> prompt,
and then proceed directly to section 2.3 to continue the lab.
1) This window shows the output generated by the lab's custom cluster add node script. You will not
be prompted for any inputs to the script, and under normal circumstances this script takes
approximately 2 minutes to complete. When you see the "Press any key to continue" prompt,
simply hit any key to exit the script.
At this point the new node cluster1-02 has been added to the cluster cluster1, and exists in exactly
the same state as if you had run all the manual configuration steps listed in section 2.2.1.
2.3
OnCommand System Manager is NetApp's browser-based management tool for configuring and
managing NetApp storage systems and clusters. Now that we have a working cluster, created in
sections 2.1 and 2.2, we can connect to the cluster using System Manager.
1) Double-click to launch System Manager. Be patient; it may not appear to start right away, but it will
open after a delay of 10 seconds or so.
1) Notice that initially no storage systems are shown because we have not yet added any to System
Manager.
2) Click Add to add a controller.
The Add a System window now opens.
1) Enter the hostname as shown in the screenshot, then click Add to add the cluster to System
Manager. This will cause System Manager to discover the cluster using SNMP. (You can click the
More down arrow to see the SNMP credential details.)
1) Populate the credentials fields as shown (using Netapp1! as the password) and then click Sign
In.
System Manager is now logged in to cluster1.
The tabs on the left side of the window are used to manage various aspects of the cluster. The Cluster
tab (1) accesses configuration settings that apply to the cluster as a whole. The Storage Virtual
Machines tab (2) is used to manage individual Storage Virtual Machines (SVMs, also known as
Vservers). The Nodes tab (3) contains configuration settings that are specific to individual controller
nodes. Please take a few moments to expand and browse these tabs to familiarize yourself with their
contents.
NOTE: As you use System Manager in this lab you may encounter situations where buttons at the bottom
of a System Manager pane lie beyond the viewing area of the window and no scroll bar is provided
to reach them. If this happens you have two options: either increase the screen resolution of the
desktop on jumphost (right-click the jumphost desktop background and select Screen Resolution from the
pop-up menu), or use the Tab key in the System Manager window to cycle through the various
fields and buttons, which will eventually force the window to scroll down to the non-visible items.
2.4 Rename the Root Aggregates
Disks are the fundamental unit of physical storage in clustered Data ONTAP and are tied to a specific
cluster node by virtue of their physical connectivity (i.e., cabling) to a given controller head.
By default each node has one aggregate known as the root aggregate, a group of the node's
local disks that host the node's Data ONTAP operating system. A node's root aggregate is created during
Data ONTAP installation in a minimal RAID-DP configuration, meaning it initially comprises 3 disks
(1 data, 2 parity), and is assigned the name aggr0. Aggregate names must be unique within a cluster, so
when the cluster setup wizard joins a node it must rename that node's root aggregate if there is a conflict
with the name of any aggregate that already exists in the cluster. If aggr0 is already in use elsewhere in
the cluster then it renames the new node's aggregate according to the convention aggr0_<nodename>_0.
For the sake of clarity and consistency we will rename the root aggregates of all our nodes in this lab to
follow our own convention of aggr0_<clustername>_<nodenumber>, which in the case of our newly
created cluster means the root aggregate for the node cluster1-01 will be named aggr0_cluster1_01
and the root aggregate for the node cluster1-02 will be named aggr0_cluster1_02.
*** NOTE *** : If you used the scripts in sections 2.1.2 and 2.2.2 to automatically create the cluster
and add the 2nd node to that cluster then you can skip straight to section 2.5, as those scripts
have automatically renamed each node's root aggregate as described in the preceding paragraph. Only if
you manually created the cluster or manually added the 2nd node to the cluster do you need to complete
the configuration steps in section 2.4 of this lab guide.
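If you prefer the command line over System Manager for this step, the renames described above can be performed with commands along the following lines. The source aggregate names here are assumptions based on the aggr show output earlier in this guide; substitute the auto-generated names your nodes actually received:

cluster1::> aggr rename -aggregate aggr0 -newname aggr0_cluster1_01
cluster1::> aggr rename -aggregate aggr0_unjoined2_0 -newname aggr0_cluster1_02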
1) Populate the Aggregate Name field as shown and then click the Save & Close button.
Back in System Manager, repeat the process for the node cluster1-02's root aggregate.
1) Populate the Aggregate Name field as shown and then click the Save & Close button.
2.5 Create a New Aggregate on Each Cluster Node
Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a
group of disks that are all physically attached to the same node. A given disk can only be a member of a
single aggregate.
As we discussed in section 2.4, the only aggregate that is automatically created on a cluster node is the
root aggregate, which hosts the Data ONTAP operating system for that node. The root aggregate should
not be used to host user data, so in this section we will be creating a new aggregate on each of the nodes
in cluster1 so they can later host the storage virtual machines, volumes, and LUNs that we will be
creating in this lab.
A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of
the storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you
assign it one or more specific aggregates to host the SVM's volumes. Multiple SVMs can be
assigned to use the same aggregate, which offers greater flexibility in managing storage space, whereas
dedicating an aggregate to just a single SVM provides greater workload isolation.
For this lab we will be creating a single user data aggregate on each node in the cluster.
You can view the list of disks connected to a node by using System Manager and looking under the
Nodes tab:
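The same information is available from the command line with a command similar to the following; option names vary slightly between Data ONTAP releases, so treat this as a sketch:

cluster1::> storage disk show -owner cluster1-01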
1) Select the Cluster tab. Double-check to make sure that you've done this to avoid problems later!
2) Go to cluster1->Storage->Aggregates.
3) Click on the Create button to launch the Create Aggregate Wizard.
1) Click Next to continue the wizard. If you can't see the buttons at the bottom of the window, try
resizing the whole System Manager window.
1) Click the Select Disks button so we can specify how many disks to include in the aggregate.
1) Select the line for cluster1-01, then set the Number of capacity disks to use: to 6 as shown.
2) Click Save and Close.
1) We've finished specifying the configuration for the new aggregate, so click Create to create the
aggregate and close the wizard.
The newly created aggregate should now be visible in the list of aggregates. Notice aggr1_cluster1_01
in the following screenshot.
Now repeat the same process to create a new aggregate on the node cluster1-02.
1) Click the Select Disks button so we can specify how many disks to include in the aggregate.
1) Select the line for cluster1-02, then set the Number of capacity disks to use: to 6 as shown.
2) Click Save and Close.
1) We've finished specifying the configuration for the new aggregate, so click Create to create the
aggregate and close the wizard.
Our complete list of aggregates is now displayed in the System Manager Aggregates pane.
Create the aggregate named aggr1_cluster1_01 on the node cluster1-01 and the aggregate named
aggr1_cluster1_02 on the node cluster1-02.
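From the command line, the equivalent aggregate creation would look roughly like the following. The -diskcount value of 6 mirrors the number of capacity disks selected in the System Manager wizard above; since disk-count semantics differ between releases, verify the result with aggr show afterward:

cluster1::> aggr create -aggregate aggr1_cluster1_01 -nodes cluster1-01 -diskcount 6
cluster1::> aggr create -aggregate aggr1_cluster1_02 -nodes cluster1-02 -diskcount 6
cluster1::> aggr show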
and later clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS
LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and
continues servicing network requests from that new node/port. Throughout this operation the NAS LIF
maintains its IP address; clients connected to the LIF may notice a brief delay while the failover is in
progress, but as soon as it completes the clients resume any in-process NAS operations without any loss
of data.
The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each
storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster's effective
SVM limit by multiplying the number of nodes by 125; a 2-node cluster like the one in this lab can
therefore host up to 250 SVMs. There is no limit on the number of LIFs that an SVM can host, but there
is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per node, but if the
node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs per node, so
that a node can also accommodate its HA partner's LIFs after a failover.
Each SVM has its own NAS namespace, a logical grouping of the SVM's CIFS and NFS volumes into a
single logical filesystem view. Clients can access the entire namespace by mounting a single share or
export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and
present a consistent view of the SVM's data to all clients rather than having to reproduce that view
structure on each individual client. As an administrator maps and unmaps volumes in the namespace,
those volumes instantly appear in or disappear from the view of clients that have mounted CIFS and NFS
volumes higher in the SVM's namespace. Administrators can also create NFS exports at individual
junction points within the namespace and can create CIFS shares at any directory path in the
namespace.
3.1 Create a Storage Virtual Machine
In this section we will create a new SVM named svm1 on our cluster and will configure it to serve out a
volume over NFS and CIFS. We will be configuring two NAS data LIFs on the SVM, one per node in the
cluster.
In System Manager navigate to the Storage Virtual Machines tab so that we can launch the Storage
Virtual Machine Setup wizard.
Proceed to fill out the Storage Virtual Machine details in the setup wizard.
1) Populate the indicated fields as shown, then click Submit & Continue.
Note that the list of available Data Protocols in your lab may differ somewhat from what is shown in the
preceding screenshot; the contents of that list depend upon what protocol licenses you entered when
setting up your cluster. If you used the cluster setup script from section 2.1.2 to create your cluster then
all protocols (including FC) will be available, even though this lab does not include Fibre Channel
connectivity.
The next window in the wizard, the Configure CIFS/NFS protocol window, is rather large and you may
not be able to view its whole contents without scrolling so we will present it here as two partial
screenshots:
1) Specify the password for an SVM-specific administrator account, which can then be
used to delegate admin access for just this SVM. Enter Netapp1! in the password field, then
click Submit & Continue.
The New Storage Virtual Machine Summary window opens displaying the details of the newly created
SVM.
The new SVM now also shows up in the list of available Storage Virtual Machines.
1) The SVM svm1 is now listed under cluster1 on the Storage Virtual Machines tab.
2) The NFS and CIFS protocols are shown encapsulated in green boxes, which indicates that those
protocols are enabled for the selected SVM svm1.
The Storage Virtual Machine Setup Wizard only provisions a single LIF when it creates a new SVM. We
want to have a LIF available on both cluster nodes so that a client can access the SVM's shares through
either node. To do that we will now create a 2nd LIF hosted on the other node in the cluster.
1) Under the Storage Virtual Machines tab navigate to cluster1->svm1->Configuration->Network
Interfaces. Notice that in the main pane of the window there is only a single LIF
named svm1_cifs_nfs_lif1 specified for the SVM svm1.
2) Click on the Create button to launch the Network Interface Create Wizard.
1) Populate the fields as shown. Note that we are setting the Role to Both; the existing LIF was
configured for both data and management when we created the SVM because we did not create a
dedicated management LIF, and we want this new LIF to have a matching configuration. Click Next to
continue.
1) We want to use the new LIF for both CIFS & NFS, so accept the default selections and advance
the wizard by clicking Next.
1) In the Network Properties step click on the Browse button to open the port selection window.
1) Expand the Ports/Adapters list entry for cluster1-02 and select port e0c.
2) Click OK to accept the selection and return to the Network Properties step in the wizard.
1) Complete the remainder of the fields in the Network Properties window and click Next to continue
the wizard.
1) Review the summary of the settings to make sure everything is set correctly as shown. This lab
only uses a single subnet so the fact that the new interface will be assigned to the default failover
group is perfectly acceptable. If everything is correct then click Next to continue.
1) Notice that our new LIF named svm1_cifs_nfs_lif2 is now displayed in the list of the SVM's
network interfaces.
2) Notice how various properties for the selected LIF are listed in the details pane at the bottom of
the window.
Lastly, we need to configure DNS delegation for the SVM so that Linux and Windows clients can
intelligently utilize all of the svm1 SVM's configured NAS LIFs. To achieve this the DNS server
must delegate to the cluster the responsibility for the DNS zone corresponding to the SVM's hostname,
which in our case will be svm1.demo.netapp.com. We have preconfigured the lab's DNS server to
delegate this responsibility, but the cluster must also be configured to accept it. You will complete
that acceptance task now; since it cannot be accomplished through System Manager you must
instead use the Data ONTAP command line.
Open a PuTTY connection to cluster1 following the instructions from section 1.6. Log in using the
username admin and the password Netapp1!, then enter the following commands.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>
Validate that delegation is working correctly by opening a command prompt on jumphost (go to
Start->All Programs->Accessories->Command Prompt) and using the nslookup command as shown in
the following screenshot. If the nslookup command returns IP addresses, as identified by the yellow
highlighted text, then delegation is working correctly. If nslookup returns a Non-existent domain error
then delegation is not working and you will need to review the Data ONTAP commands you just entered,
as they most likely contained an error. Also notice from the following
screenshot that different executions of the nslookup command return different addresses, demonstrating
that DNS load balancing is working correctly.
Configure the NIS domain to match how System Manager configured the SVM in the GUI lab workflow.
cluster1::> vserver services nis-domain create -vserver svm1 -domain demo.netapp.com
-active true -servers 192.168.0.253
cluster1::>
-allowed-protocols nfs,cifs
[The output of the vserver create and vserver show commands is not fully legible here: the new SVM
is hosted on the aggregate aggr1_cluster1_01, with a Name Service switch of file,nis and a Name
Mapping switch of file. A port listing follows, showing ports e0a, e0b, and e0c at their home node.]
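For reference, the command that begins the CLI-based SVM creation (whose tail, -allowed-protocols nfs,cifs, appears above) typically takes a form similar to the following; the root volume name, security style, and switch values here are assumptions based on this lab's naming conventions:

cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01
-ns-switch file,nis -nm-switch file -rootvolume-security-style unix
-allowed-protocols nfs,cifs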
Notice that there are not yet any LIFs defined for the SVM svm1. Create the svm1_cifs_nfs_lif1 data
LIF for svm1:
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data
-data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -address 192.168.0.131
-netmask 255.255.255.0 -firewall-policy mgmt
Info: Your interface was created successfully; the routing group d192.168.0.0/24 was
created
cluster1::>
[Output of network interface show for svm1: both data LIFs are at their home ports, one on
cluster1-01 port e0c and one on cluster1-02 port e0c.]
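The command that creates the second LIF did not reproduce legibly, but by symmetry with the svm1_cifs_nfs_lif1 command above it would be approximately as follows (the address 192.168.0.132 matches the DNS delegation output later in this section):

cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data
-data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -address 192.168.0.132
-netmask 255.255.255.0 -firewall-policy mgmt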
Configure the DNS domain and nameservers for the svm1 SVM:
cluster1::> vserver services dns show
                                                            Name
Vserver         State     Domains                           Servers
--------------- --------- --------------------------------- ---------------
cluster1        enabled   demo.netapp.com                   192.168.0.253

cluster1::> vserver services dns create -vserver svm1 -name-servers 192.168.0.253
-domains demo.netapp.com

cluster1::> vserver services dns show
                                                            Name
Vserver         State     Domains                           Servers
--------------- --------- --------------------------------- ---------------
cluster1        enabled   demo.netapp.com                   192.168.0.253
svm1            enabled   demo.netapp.com                   192.168.0.253
2 entries were displayed.

cluster1::>
Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that
we can advertise addresses for both of the NAS data LIFs that belong to svm1. We could have done this
as part of the network interface create commands but we opted to do it separately here to show you how
you can modify an existing LIF.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
  (network interface modify)
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
  (network interface modify)
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>
Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1
(username root and password Netapp1!) and executing the following commands. If the delegation is
working correctly then you should see IP addresses returned for the host svm1.demo.netapp.com, and if
you run the command several times you will see the responses alternate between the SVM's two LIF
addresses.
This completes the planned LIF configuration for svm1, so now display a detailed configuration report for
the LIF svm1_cifs_nfs_lif1:
cluster1::> network interface show -lif svm1_cifs_nfs_lif1 -instance
                  Vserver Name: svm1
        Logical Interface Name: svm1_cifs_nfs_lif1
                          Role: data
                 Data Protocol: nfs, cifs
                     Home Node: cluster1-01
                     Home Port: e0c
                  Current Node: cluster1-01
                  Current Port: e0c
            Operational Status: up
               Extended Status: -
                       Is Home: true
               Network Address: 192.168.0.131
                       Netmask: 255.255.255.0
           Bits in the Netmask: 24
               IPv4 Link Local: -
            Routing Group Name: d192.168.0.0/24
         Administrative Status: up
               Failover Policy: nextavail
               Firewall Policy: mgmt
                   Auto Revert: false
 Fully Qualified DNS Zone Name: svm1.demo.netapp.com
       DNS Query Listen Enable: true
           Failover Group Name: system-defined
                      FCP WWPN: -
                Address family: ipv4
                       Comment: -

cluster1::>
When we issued the vserver create command to create svm1 we included an option to enable CIFS for
it, but that command did not actually create a CIFS server for the SVM. Now let's create that CIFS server.
cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain
demo.netapp.com
In order to create an Active Directory machine account for the CIFS server, you must
supply the name and
password of a Windows account with sufficient privileges to add computers to the
"CN=Computers" container within the "DEMO.NETAPP.COM" domain.
Enter the user name: Administrator
Enter the password: Netapp1!
cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain

cluster1::>
As with CIFS, we enabled the SVM svm1 to support NFS at SVM creation time, but that action did not
actually start up an NFS server for the SVM. We'll do that now.
cluster1::> vserver nfs status -vserver svm1
The NFS server is not running.
cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true
cluster1::> vserver nfs status -vserver svm1
The NFS server is running.
cluster1::> vserver nfs show
              Vserver: svm1
       General Access: true
                   v3: enabled
                 v4.0: disabled
                 v4.1: disabled
                  UDP: enabled
                  TCP: enabled
 Default Windows User: -
Default Windows Group: -

cluster1::>
3.2 Configure CIFS and NFS
Clustered Data ONTAP configures CIFS and NFS on a per-SVM basis. When we created the svm1 SVM
in the previous section we set up and enabled CIFS and NFS for it. However, it is important to understand
that clients cannot yet access the SVM using CIFS or NFS. That is partially because we have not yet
created any volumes on the SVM, but also because we have not told the SVM what we want to share and
who we want to share it with.
Each SVM has its own namespace. A namespace is a logical grouping of a single SVM's volumes into a
directory hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM's root
volume (svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares
data to CIFS and NFS clients. The SVM's other volumes are junctioned (i.e., mounted) within that root
volume or within other volumes that are already junctioned into the namespace. This hierarchy presents
NAS clients with a unified, centrally maintained view of the storage encompassed by the namespace,
regardless of where the junctioned volumes physically reside in the cluster. CIFS and NFS clients cannot
access a volume that has not been junctioned into the namespace.
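Junctioning is performed with the volume mount command. As a sketch, mounting a hypothetical volume named engineering into svm1's namespace (the volume and junction names here are illustrative, not part of this lab) would look like:

cluster1::> volume mount -vserver svm1 -volume engineering -junction-path /engineering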
CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share
declared at the top of the namespace. While this is a very powerful capability, there is no requirement to
make the whole namespace accessible. You can create CIFS shares at any directory level in the
namespace, and you can create different NFS export rules at junction boundaries for individual volumes
and for individual qtrees within a junctioned volume.
Clustered Data ONTAP does not utilize an /etc/exports file for exporting NFS volumes; instead it uses
a policy model that dictates the NFS client access rules for the associated volumes. An NFS-enabled
SVM implicitly exports the root of its namespace and automatically associates that export with the SVM's
default export policy, but that default policy is initially empty, and until it is populated with access rules no
NFS clients will be able to access the namespace. The SVM's default export policy applies to the root
volume and also to any volumes that an administrator junctions into the namespace, but an administrator
can optionally create additional export policies in order to implement different access rules within the
namespace. You can apply export policies to a volume as a whole and to individual qtrees within a
volume, but a given volume or qtree can only have one associated export policy. While you can't create
NFS exports at any other directory level in the namespace, NFS clients can mount from any level in the
namespace by leveraging the namespace's root export.
In this section of the lab we are going to configure a default export policy for our SVM so that any
volumes we junction into its namespace will automatically pick up the same NFS export rules. We will
also create a single CIFS share at the top of our namespace so that all the volumes we junction into our
namespace can be accessed via that one share. Finally, since our SVM will be sharing the same data
over NFS and CIFS, we will be setting up name mapping between UNIX and Windows user accounts to
facilitate smooth multiprotocol access to the volumes and files in the namespace.
2) Note the existence of the svm1_root volume, which hosts the namespace for the SVM svm1.
The root volume is not large; only 20 MB in this example. Root volumes are small because they
are only intended to house the junctions that organize the SVM's volumes; all of the files hosted on
the SVM should reside inside the volumes that are junctioned into the namespace rather than
directly in the SVM's root volume.
Let's confirm that CIFS and NFS are running for our SVM using System Manager. We'll check CIFS first.
If the Service Status property shows Started, as it does in the screenshot, then CIFS is running for
this SVM.
If you were dealing with an SVM on which CIFS had not previously been set up then you could use the
Setup button in this window to accomplish that task.
Now check that NFS is enabled for our SVM.
1) In System Manager select the Storage Virtual Machines tab and then go to cluster1->svm1->Policies->Export Policies.
2) In the Export Polices window select the default policy.
3) Click on Add Rule.
The Create Export Rule window opens. Using this dialog you can create any number of rules that provide
fine-grained access control for clients and also specify their order of application. For this lab we are going
to create a single rule that grants unfettered access to any host on the lab's private network.
1) Configure the fields as shown in the screenshot. This will create a single access rule that grants
read-write and root access to any host on the network regardless of which NAS protocol it is
using. Click OK to create the rule.
Returning to the Export Policies window in System Manager we now see our newly added rule under the
default policy.
With this updated default export policy in place, NFS clients will now be able to mount the root of the svm1
SVM's namespace and use that mount to access any volumes that we junction into the namespace.
We next need to configure a CIFS share for our SVM. We are going to create a single share named
nsroot at the root of our SVM's namespace.
1) Populate the fields as shown to make the root folder of the namespace available as a CIFS share
named nsroot, then click the Create button.
The new nsroot share now shows up in the System Manager Shares window.
1) Select the Permissions tab. Make sure that the group Everyone is granted the Full Control
permission. You can set more fine-grained permissions on the share from this tab, but this
configuration is sufficient for the purpose of this lab.
1) Select the Options tab at the top of the window and make sure the settings are as shown in the
screenshot.
2) If any of the settings differ from those shown, correct them and hit the Save and Close button. If
everything matches, hit the Cancel button instead.
Setup of the \\svm1\nsroot CIFS share is now complete.
For this lab we have created just one share at the root of our namespace, which allows users to access
any volume mounted in the namespace via that share. The advantage of this approach is that it reduces
the number of mapped drives you have to manage on your clients; any changes you make to the
namespace become instantly visible and accessible to your clients. If you prefer to use multiple shares,
then clustered Data ONTAP allows you to create additional shares rooted at any directory level within the
namespace.
Since we have configured our SVM to support both NFS and CIFS, we next want to set up username
mapping so that our UNIX root user and the DEMO\Administrator account will have synonymous
access to each other's files. Setting up such a mapping may not be desirable in all environments, but it
will simplify data sharing for us since these are the two primary accounts we are using in this lab.
1) In System Manager open the Storage Virtual Machines tab and navigate to cluster1->svm1->Configuration->Local Users and Groups->Name Mapping.
2) In the Name Mapping pane click the Add button.
The Add Name Mapping Entry window opens.
1) Create a Windows-to-UNIX mapping by completing all of the fields as shown (the two
backslashes in the Pattern field are not a typo, and administrator should not be capitalized),
and then click the Add button.
Repeat the process to create another mapping.
1) Create a UNIX-to-Windows mapping by completing all of the fields as shown and then click the
Add button.
You should now see two mappings in the Name Mappings window that together make the root and
DEMO\Administrator accounts equivalent to each other for the purpose of file access within the SVM.
[Partial vserver cifs show output: the Domain/Workgroup Name is DEMO and the Authentication
Style is domain.]
Create an export policy for the SVM svm1 and configure the policy's rules.
cluster1::> vserver export-policy show
Vserver         Policy Name
--------------- -------------------
svm1            default
cluster1::> vserver export-policy rule show
This table is currently empty.
cluster1::> vserver export-policy rule create -vserver svm1 -policyname default
-clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::> vserver export-policy rule show
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
svm1         default         1      any      0.0.0.0/0             any
cluster1::> vserver export-policy rule show -policyname default -instance
Vserver: svm1
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
cluster1::>
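The effect of that -clientmatch value is easy to see in a few lines: 0.0.0.0/0 is a CIDR block containing every IPv4 address, which is why this single rule opens the export to all clients. The following is only an illustrative sketch of the matching logic in Python, not ONTAP's implementation:

```python
import ipaddress

def client_matches(client_ip: str, clientmatch: str) -> bool:
    """Rough sketch of CIDR-style -clientmatch evaluation."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(clientmatch)

# 0.0.0.0/0 contains every IPv4 address, so any client matches this rule.
print(client_matches("192.168.0.12", "0.0.0.0/0"))   # True
print(client_matches("10.1.2.3", "192.168.0.0/24"))  # False
```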
Create a share at the root of the namespace for the SVM svm1:
           Properties    Comment   ACL
           ------------  --------  --------------
           browsable               BUILTIN\Admin
           oplocks
           browsable               Everyone / Fu
           oplocks
           browsable
           changenotify
4 entries were displayed.
cluster1::>
Set up CIFS <-> NFS user name mapping for the SVM svm1:
cluster1::> vserver name-mapping show
This table is currently empty.
cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1
-pattern demo\\administrator -replacement root
cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1
-pattern root -replacement demo\\administrator
cluster1::> vserver name-mapping show
Vserver        Direction Position
-------------- --------- --------
svm1           win-unix  1
               Pattern: demo\\administrator
               Replacement: root
svm1           unix-win  1
               Pattern: root
               Replacement: demo\\administrator
2 entries were displayed.
cluster1::>
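The two directional rules above behave like ordered pattern substitutions. The Python sketch below models the idea only (it is not ONTAP code, and the case-insensitive match of the Windows name is an assumption made for illustration); note how the doubled backslash in demo\\administrator escapes the single backslash separating the domain from the user name:

```python
import re

# Illustrative model of the two directional name-mapping rules created above.
win_unix_rules = [(r"demo\\administrator", "root")]
unix_win_rules = [(r"root", r"demo\\administrator")]

def map_name(name, rules):
    """Return the replacement of the first rule whose pattern matches `name`."""
    for pattern, replacement in rules:
        # Assumption for illustration: Windows names match case-insensitively.
        if re.fullmatch(pattern, name, flags=re.IGNORECASE):
            return re.sub(pattern, replacement, name, flags=re.IGNORECASE)
    return name  # no rule matched; the name passes through unchanged

print(map_name(r"DEMO\Administrator", win_unix_rules))  # root
print(map_name("root", unix_win_rules))                 # demo\administrator
```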
3.3
Volumes, or FlexVols, are the logical containers used to store data. Each volume is hosted in a single
aggregate, but any given aggregate can host multiple volumes. Unlike an aggregate, each volume can be
associated with no more than a single SVM. The maximum size of a volume is dictated by the model of
the storage controller hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be
configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000
FlexVols (depending on controller model), which means that there is an effective limit on the total number
of volumes that a cluster can host.
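The arithmetic behind that effective cluster-wide cap is simple: the ceiling is the per-node FlexVol limit multiplied by the number of nodes. A tiny sketch using the limits quoted above (the two-node count matches this lab's cluster; the 500-volume figure is one of the model-dependent values):

```python
# Effective cluster-wide FlexVol ceiling: per-node limit times node count.
nodes = 2             # e.g. cluster1-01 and cluster1-02
per_node_limit = 500  # model-dependent: 500 or 1000 FlexVols per node
print(nodes * per_node_limit)  # 1000
```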
Each storage controller node has a root aggregate (e.g. aggr0_<nodename>) that contains the node's
Data ONTAP operating system. Do not use the node's root aggregate to host any other volumes or user
data; always create additional aggregates and volumes for that purpose.
Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin
provisioning, deduplication, and compression. One storage efficiency feature we will be showing
in this section of the lab is thin provisioning, which dictates how space for a FlexVol is allocated in its
containing aggregate. When you create a FlexVol with a volume guarantee of type volume you are
thickly provisioning the volume, pre-allocating all of its space on the containing aggregate,
which ensures that the volume will never run out of space until it reaches 100% capacity.
When you create a FlexVol with a volume guarantee of none you are thinly provisioning the volume,
allocating space for it on the containing aggregate only at the time, and in the quantity, that the
volume actually needs to store data. This latter configuration allows you to increase your overall
space utilization and even oversubscribe an aggregate by allocating more volumes on it than the
aggregate could actually accommodate if all the subscribed volumes reached their full size. However,
if an oversubscribed aggregate does fill up then all of its volumes will run out of space before they
reach their maximum volume size, so oversubscription deployments generally require a greater degree
of administrative vigilance around space utilization.
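The oversubscription risk described above is just arithmetic: the sum of the sizes promised to thin volumes can exceed what the aggregate can physically hold. A back-of-the-envelope sketch with made-up sizes (these are not the lab's actual aggregate or volume sizes):

```python
# Made-up sizes in GB: three thin volumes promised against one aggregate.
aggregate_size_gb = 10.0
volume_sizes_gb = {"engineering": 1.0, "eng_users": 1.0, "archive": 12.0}

committed_gb = sum(volume_sizes_gb.values())      # space promised to volumes
oversubscribed = committed_gb > aggregate_size_gb
print(committed_gb, oversubscribed)  # 14.0 True
```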
In section 2.5 we created a new aggregate named aggr1_cluster1_01; we will now use that aggregate
to host a new thinly provisioned volume named engineering for the SVM named svm1.
1) Populate the data fields as shown to specify a new 1 GB thin provisioned volume named
engineering in the aggregate aggr1_cluster1_01. Click the Create button to complete the
volume creation process.
The newly created thin provisioned volume should now display in the Volumes list.
1) If you are not already there then navigate to Storage Virtual Machines->cluster1->svm1->Storage->Volumes.
2) Notice that engineering is now listed as a volume for the SVM.
System Manager has also automatically mapped engineering into the SVM's NAS namespace.
1) Populate the data fields as shown to specify a new 1 GB thin provisioned volume named
eng_users in the aggregate aggr1_cluster1_01. Click the Create button to complete the
volume creation process.
Now look at how System Manager junctioned in the new volume by default:
1) As you can see, eng_users has disappeared from the namespace. Since it is no longer
junctioned in the namespace, clients can no longer access or even see it. Click
Mount so we can junction the volume in at a different location.
1) Fill out the name fields as shown, noting that we will be junctioning this volume in as users
rather than as eng_users. Click the Browse button so we can choose where in the namespace
to create the junction.
1) The fields should now all be populated as shown. Click Mount to mount the volume in the
namespace.
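Conceptually, a junction table maps client-visible paths onto volumes, which is why re-junctioning eng_users as /engineering/users changes what clients see without moving any data. A hypothetical sketch of longest-prefix resolution (illustrative only, not ONTAP code):

```python
# Hypothetical junction table for svm1 after the remount above.
junctions = {
    "/": "svm1_root",
    "/engineering": "engineering",
    "/engineering/users": "eng_users",
}

def resolve(path: str) -> str:
    """Return the volume owning the longest junction prefix of `path`."""
    matches = [j for j in junctions
               if path == j or path.startswith(j.rstrip("/") + "/")]
    return junctions[max(matches, key=len)]

print(resolve("/engineering/users/bob"))  # eng_users
print(resolve("/engineering/cifs.txt"))   # engineering
```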
1) Select the Details tab and then populate the fields as shown in the screenshot.
2) Click on the Quota tab.
1) The Quota tab is where you define the space usage limits you want to apply to the qtree. We will
not be implementing any quota limits in this lab, so just click the Create button.
Now create a second qtree for the user account susan.
1) Select the Details tab and then populate the fields as shown in the screenshot.
2) Click the Create button.
At this point you should see both of our newly created user qtrees in System Manager.
Show the volumes for the SVM svm1 and list its junction points:
cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      engineering  aggr1_cluster1_01
                                    online     RW          1GB    972.7MB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.88MB    5%
2 entries were displayed.

cluster1::> volume show -vserver svm1 -junction
                                Junction                  Junction
Vserver   Volume       Language Active   Junction Path    Path Source
--------- ------------ -------- -------- ---------------- -----------
svm1      engineering  C.UTF-8  true     /engineering     RW_volume
svm1      svm1_root    C.UTF-8           /
2 entries were displayed.

cluster1::>
Display detailed information about the volume engineering. Notice here that the volume is reporting as
thin provisioned (Space Guarantee Style is set to none) and that the Export Policy is set to default.
cluster1::> volume show -vserver svm1 -volume engineering -instance
Vserver Name: svm1
Volume Name: engineering
Aggregate Name: aggr1_cluster1_01
Volume Size: 1GB
Volume Data Set ID: 1026
Volume Master Data Set ID: 2147484674
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: 
Group ID: 
Security Style: ntfs
UNIX Permissions: -----------
Junction Path: /engineering
Junction Path Source: RW_volume
Junction Active: true
Junction Parent Volume: svm1_root
Comment:
Available Size: 972.6MB
Filesystem Size: 1GB
Total User-Visible Size: 972.8MB
Used Size: 180KB
Used Percentage: 5%
Volume Nearly Full Threshold Percent: 95%
Volume Full Threshold Percent: 98%
Maximum Autosize (for flexvols only): 1.20GB
Autosize Increment (for flexvols only): 51.20MB
Minimum Autosize: 1GB
Autosize Grow Threshold Percentage: 85%
Autosize Shrink Threshold Percentage: 50%
Autosize Mode: off
Autosize Enabled (for flexvols only): false
Total Files (for user-visible data): 31122
Files Used (for user-visible data): 97
Space Guarantee Style: none
Space Guarantee in Effect: true
Snapshot Directory Access Enabled: true
Space Reserved for Snapshots: 5%
View how much disk space this volume is actually consuming in its containing aggregate; the Total
Footprint value represents the volume's total consumption. The value here is so small because this
volume is thin provisioned and we have not yet added any data to it. If we had thick provisioned the
volume then the footprint here would have been 1 GB, the full size of the volume.
cluster1::> volume show-footprint -volume engineering

      Vserver : svm1
      Volume  : engineering

      Feature                          Used       Used%
      -------------------------------- ---------- -----
      Volume Data Footprint            256KB      0%
      Volume Guarantee                 0B         0%
      Flexible Volume Metadata         5.78MB     0%
      Delayed Frees                    672KB      0%

      Total Footprint                  6.68MB     0%

cluster1::>
Create qtrees in the eng_users volume for the users bob and susan, then generate a list of all the qtrees
that belong to svm1, and finally produce a detailed report of the configuration for the qtree bob.
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan
cluster1::> volume qtree show -vserver svm1
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.
cluster1::> volume qtree show -qtree bob -instance

                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: bob
                        Qtree Path: /vol/eng_users/bob
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions: 
                          Qtree Id: 1
                      Qtree Status: normal
                     Export Policy: default
        Is Export Policy Inherited: true

cluster1::>
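The "Is Export Policy Inherited: true" field reflects a simple lookup rule: a qtree with no export policy of its own inherits the policy of its containing volume. A sketch of that lookup (illustrative logic, not ONTAP code):

```python
# Policies assigned to volumes, and (optionally) directly to qtrees.
volume_policy = {"eng_users": "default"}
qtree_policy = {}  # neither bob nor susan has its own policy yet

def effective_policy(volume: str, qtree: str):
    """A qtree's own policy wins; otherwise it inherits the volume's."""
    own = qtree_policy.get((volume, qtree))
    return (own, False) if own else (volume_policy[volume], True)

print(effective_policy("eng_users", "bob"))  # ('default', True) -> inherited
```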
3.4
The SVM svm1 is up and running and is configured for NFS and CIFS access, so it's time to validate that
everything is working properly by mounting the NFS export on a Linux host and the CIFS share on a
Windows host. You will want to complete both parts of this section so you can see that both hosts are
able to seamlessly access the volume and its files.
In this part of the lab section we will demonstrate connecting the Windows client jumphost to the CIFS
share \\svm1\nsroot using the Windows GUI.
1) On the Windows host jumphost open Windows Explorer by clicking on the folder icon on the
taskbar.
1) Set the Drive and Folder fields as shown, then click the Finish button.
1) Note that the engineering volume we created in section 3.3 is visible at the top of the nsroot
share, which points to the root of the namespace. If we created another volume on svm1 right now
and mounted it under the root of the namespace then that new volume would instantly become
visible in this share and to jumphost. Double-click on the engineering folder to open it.
1) Notice that engineering contains the users folder we earlier junctioned into our namespace to
represent the volume eng_users.
2) Inside engineering create a text file named cifs.txt.
3) Edit cifs.txt, enter some text (make sure you put a carriage return at the end of the line, or else
when we later view the contents of this file on Linux the command shell prompt will appear on the
same line as the file contents), and save the file to verify that write access is working.
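The newline caveat in step 3 is easy to demonstrate: cat prints a file verbatim, so without a trailing newline the next shell prompt lands on the same line as the file's last characters. A small Python sketch of the well-formed file (the file name and contents match the lab exercise):

```python
# Write the test file the way step 3 asks: with a trailing newline.
with open("cifs.txt", "w") as f:
    f.write("write test from jumphost\n")  # the "\n" keeps `cat` output tidy

# Reading it back confirms the newline is present.
print(open("cifs.txt").read().endswith("\n"))  # True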
In this part of the lab section we will demonstrate connecting a Linux client to the NFS volume svm1:/
using the Linux command line. Follow the instructions in section 1.6 to open PuTTY and connect to the
system rhel1.
Log in as the user root with the password Netapp1!, then issue the following command to see that we
currently have no NFS volumes mounted on this Linux host.
[root@rhel1 /]# df
Filesystem                    1K-blocks  Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root   11877388    6638636  42% /
tmpfs                            510320     510208   1% /dev/shm
/dev/sda1                        495844     432505   9% /boot
[root@rhel1 /]#
Create a mountpoint and mount the NFS export corresponding to the root of our SVM's namespace on
that mountpoint. When you run the df command again after this you'll see that the NFS export svm1:/ is
mounted on our Linux host as /svm1.
Navigate into the /svm1 directory and notice that you can see the engineering volume that we previously
junctioned into the SVM's namespace. Navigate into engineering and verify that you can access and
create files.
NOTE: The output shown here assumes that you have already performed the Windows client connection
steps found earlier in this section. When you cat the cifs.txt file, if the shell prompt winds up on the
same line as the file output, it means that when you created the file on Windows you forgot to include a
newline at the end of the file.
[root@rhel1 /]# cd /svm1
[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]# cd engineering
[root@rhel1 engineering]# ls
cifs.txt users
[root@rhel1 engineering]# cat cifs.txt
write test from jumphost
[root@rhel1 engineering]# echo "write test from rhel1" > nfs.txt
[root@rhel1 engineering]# cat nfs.txt
write test from rhel1
[root@rhel1 engineering]# ll
total 3
-rwxrwxrwx 1 root bin    24 Jul 25 16:20 cifs.txt
-rwxrwxrwx 1 root root   22 Jul 25 16:27 nfs.txt
drwxrwxrwx 1 root root 4096 Jul 25 16:10 users
[root@rhel1 engineering]#
You may be wondering why the cifs.txt file shows a group membership of bin rather than root like the
nfs.txt file. This is the result of a bug in RHEL and/or Data ONTAP. For more information see BURT
723323.
3.5
New in clustered Data ONTAP 8.2.1 is the ability to NFS export qtrees. This optional section explains how
to configure qtree exports and will demonstrate how to set different export rules for a given qtree. For this
exercise we will be working with the qtrees we created in section 3.3.
Qtrees had many capabilities in Data ONTAP 7-mode that have been significantly pared back in cluster
mode. Qtrees still exist in cluster mode, but their purpose is essentially limited to quota
management, with most other 7-mode qtree features, including NFS exports, now the exclusive purview
of volumes. This functionality change created challenges for 7-mode customers with large numbers of
NFS qtree exports who were trying to transition to cluster mode and could not convert those qtrees to
volumes because they would exceed clustered Data ONTAP's maximum number of volumes limit.
The introduction of qtree NFS exports to clustered Data ONTAP 8.2.1 resolves this problem. NetApp
continues to recommend that customers favor volumes over qtrees in cluster mode whenever practical,
but customers requiring large numbers of qtree NFS exports now have a supported solution under
clustered Data ONTAP.
While this section provides both graphical and command line methods for configuring qtree NFS exports,
some configuration steps can only be accomplished via the command line.
We will begin by creating a new export policy that we will configure with rules that allow NFS access
only from the Linux host rhel1.
1) In System Manager select the Storage Virtual Machines tab and then go to cluster1->svm1->Policies->Export Policies.
2) Click the Create Policy button.
1) Complete the Policy Name field as shown and click the Add button.
1) Set the Client Specification to 192.168.0.12 and then click the OK button.
1) The fields in the Create Export Policy window should now be populated as in the screenshot.
Click the Create button.
Now we need to apply this new export policy to the qtree. System Manager 3.1 does not support this
capability so we will have to use the clustered Data ONTAP command line. Open a PuTTY connection to
cluster1 following the instructions from section 1.6. Log in using the username admin and the password
Netapp1!, then enter the following commands.
Produce a list of svm1's export policies and then a list of its qtrees:
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.

cluster1::>
Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is
using the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan

                      Vserver Name: svm1
                       Volume Name: eng_users
                        Qtree Name: susan
                        Qtree Path: /vol/eng_users/susan
                    Security Style: ntfs
                       Oplock Mode: enable
                  Unix Permissions: 
                          Qtree Id: 2
                      Qtree Status: normal
                     Export Policy: rhel1-only
        Is Export Policy Inherited: false

cluster1::>
Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.
Now we need to validate that the more restrictive export policy that we've applied to the qtree susan is
working as expected. If you still have an active PuTTY session open to the Linux host rhel1 then
bring that window up now, otherwise open a new PuTTY session to that host (username = root,
password = Netapp1!). Run the following commands to verify that you can still access the susan qtree
from rhel1.
[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]# ls
bob susan
[root@rhel1 users]# cd susan
[root@rhel1 susan]# echo "hello" > rhel1.txt
[root@rhel1 susan]# cat rhel1.txt
hello
[root@rhel1 susan]#
Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password =
Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace
*except* susan, which should give a permission denied error because that qtree's associated export
policy only grants access to the host rhel1.
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]# ls
bob susan
[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]# cd bob
[root@rhel2 bob]#
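The asymmetry between rhel1 and rhel2 comes straight from the rhel1-only policy's single rule: rules are evaluated in -ruleindex order and a client that matches no rule is denied. A sketch of that evaluation (illustrative logic only; rhel2's address of 192.168.0.13 is an assumption for the example, not stated in the lab):

```python
import ipaddress

# The rhel1-only policy holds a single rule matching only rhel1's address.
rules = [(1, "192.168.0.12")]  # (-ruleindex, -clientmatch)

def access_allowed(client_ip: str) -> bool:
    ip = ipaddress.ip_address(client_ip)
    for _, clientmatch in sorted(rules):          # lowest ruleindex first
        if ip in ipaddress.ip_network(clientmatch):
            return True
    return False                                  # no rule matched: denied

print(access_allowed("192.168.0.12"))  # True  -> rhel1 gets in
print(access_allowed("192.168.0.13"))  # False -> permission denied
```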
We need to first create a new export policy and configure it with rules so that only the Linux host rhel1 will
be granted access to the associated volume and/or qtree. First create the export policy.
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default

cluster1::> vserver export-policy create -vserver svm1 -policyname rhel1-only

cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only

cluster1::>
Next add a rule to the policy so that only the Linux host rhel1 will be granted access.
cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only
There are no entries matching your query.

cluster1::> vserver export-policy rule create -vserver svm1 -policyname rhel1-only
-clientmatch 192.168.0.12 -rorule any -rwrule any -superuser any -anon 65534
-ruleindex 1

cluster1::> vserver export-policy rule show
             Policy          Rule   Access   Client                 RO
Vserver      Name            Index  Protocol Match                  Rule
------------ --------------- ------ -------- ---------------------- ---------
svm1         default         1      any      0.0.0.0/0              any
svm1         rhel1-only      1      any      192.168.0.12           any

cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only
-instance
Vserver: svm1
Policy Name: rhel1-only
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 192.168.0.12
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
cluster1::>
The remaining steps for applying and testing the rhel1-only export policy against the exported susan
qtree are exactly the same as the command line steps shown under the "To perform this section's tasks
from the GUI" heading found earlier in this section of the lab guide (section 3.5). Please complete those
command line instructions now.
Select cluster1.
3) Click the Create button to launch the Storage Virtual Machine Setup wizard.
Fill out the Storage Virtual Machine details in the setup wizard. Note that the wizard window doesn't
include scrollbars, so you may need to expand the System Manager window in order to see all the fields.
1) Complete the fields as shown. Note that we are using the same aggregate here that is hosting
the SVM svm1 that we created in section 3. Multiple SVMs can share the same aggregate.
2) Click Submit & Continue to move to the next step in the wizard.
Note that in your lab the list of available Data Protocols may differ somewhat from what is shown in the
preceding screenshot, depending on what protocol licenses you entered when setting up your cluster. If
you used the cluster setup script from section 2.1.2 to create your cluster then all protocols will be
available.
1) Configure the LIFs Per Node and IP address fields as shown, then click the Review or Modify
LIF configuration (Advanced Settings) checkbox. Note that the checkbox will not be selectable
until after you finish filling in all the IP address related fields as shown.
The bottom half of the wizard window now displays configuration details for the 4 LIFs it plans to
configure (2 LIFs per cluster node).
1) Review the settings for the LIFs for your cluster and make sure that they match those shown in
the screenshot. If any of the settings aren't correct you can double-click on the line in question
and change its settings. There should be a LIF assigned to ports e0d and e0e on each node.
2) Click Submit & Continue to advance the wizard.
1) Complete the fields as shown, using Netapp1! as the SVM Administrator password. Then click
the Submit & Continue button.
1) The new svmluns SVM now shows up under the Storage Virtual Machines tab.
2) The green box around the iSCSI protocol indicates that iSCSI is enabled on the SVM.
Create the SVM svmluns on aggregate aggr1_cluster1_01. Note that the clustered Data ONTAP
command line syntax still refers to storage virtual machines as vservers.
Create 4 SAN LIFs for the SVM svmluns, 2 per node. Don't forget you can save some typing here by
using the up arrow to recall previous commands that you can edit and then execute.
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0d -address
192.168.0.133 -netmask 255.255.255.0 -failover-policy disabled -firewall-policy data
Info: Your interface was created successfully; the routing group
d192.168.0.0/24 was created
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0e -address
192.168.0.134 -netmask 255.255.255.0 -failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0d -address
192.168.0.135 -netmask 255.255.255.0 -failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0e -address
192.168.0.136 -netmask 255.255.255.0 -failover-policy disabled -firewall-policy data
cluster1::>
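The four LIF-creation commands above differ only in node, home port, and address, so the pattern is easy to see (and to script) as a loop. This sketch just regenerates the command strings shown above from the lab's node/port/address values; it does not talk to a cluster:

```python
# Node/port/address triples for the four iSCSI LIFs created above.
lifs = [
    ("cluster1-01", "e0d", "192.168.0.133"),
    ("cluster1-01", "e0e", "192.168.0.134"),
    ("cluster1-02", "e0d", "192.168.0.135"),
    ("cluster1-02", "e0e", "192.168.0.136"),
]

commands = []
for i, (node, port, addr) in enumerate(lifs):
    name = f"{node}_iscsi_lif_{i % 2 + 1}"  # two LIFs per node: _1 and _2
    commands.append(
        f"network interface create -vserver svmluns -lif {name} "
        f"-role data -data-protocol iscsi -home-node {node} -home-port {port} "
        f"-address {addr} -netmask 255.255.255.0 "
        f"-failover-policy disabled -firewall-policy data")

for c in commands:
    print(c)
```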
cluster1::> network interface show
(output abridged: each LIF is listed on its current node and port with Is
Home true, including the four new iSCSI LIFs on their home ports
cluster1-01:e0d, cluster1-01:e0e, cluster1-02:e0d, and cluster1-02:e0e in
the routing group d192.168.0.0/24)
Create a portset named iscsi_pset_1 for the svmluns SVM and add the newly created SAN LIFs to the
portset. Note that you can save yourself some typing by taking advantage of command line completion
when entering the port-name list.
cluster1::> portset create -vserver svmluns -portset iscsi_pset_1 -protocol iscsi -port-name
cluster1-01_iscsi_lif_1,cluster1-01_iscsi_lif_2,cluster1-02_iscsi_lif_1,cluster1-02_iscsi_lif_2
Display a list of all the volumes on the cluster to see the root volume for the svmluns SVM.
cluster1::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster1-01
          vol0         aggr0_cluster1_01
                                    online     RW       7.56GB     5.47GB   27%
cluster1-02
          vol0         aggr0_cluster1_02
                                    online     RW       7.56GB     5.50GB   27%
svm1      engineering  aggr1_cluster1_01
                                    online     RW          1GB    972.6MB    5%
svm1      eng_users    aggr1_cluster1_01
                                    online     RW          1GB    972.6MB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.88MB    5%
svmluns   svmluns_root aggr1_cluster1_01
                                    online     RW         20MB    18.89MB    5%
6 entries were displayed.
cluster1::>

4.1
In section 4.1 we created a new SVM and configured it for iSCSI. In the following sub-sections we will
perform the remaining steps needed to configure and use a LUN under Windows:
1) Gather the iSCSI Initiator Name of the Windows client.
2) Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that
volume, and map the LUN so it can be accessed by the Windows client.
3) Mount the LUN on a Windows client leveraging multi-pathing.
You must complete all of the subsections of this section in order to use the LUN from the Windows client.
On the desktop of the Windows client named jumphost (the main Windows host you use in the lab) click
the Start button and navigate to Administrative Tools->iSCSI Initiator to open the iSCSI Initiator
Properties window.
1) Click the Configuration tab and note the value in the Initiator Name box (highlighted in the
screenshot). The value is:
iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
You can highlight the value in your iSCSI Initiator Properties window and use Ctrl-c to copy it for
later use.
2) Click the Cancel button to close the window.
1) Set the fields as shown in the screenshot and then click the Next button to advance the wizard.
Note that we are creating a thin provisioned LUN here; if we created the LUN without setting this
Thin Provisioned checkbox then the total size of the LUN would get pre-allocated in the volume.
By setting thin provisioning here the LUN will only allocate space as it actually needs to consume
it.
1) Choose Create a new flexible volume in, populate the fields as shown, and then click the Next
button.
1) In the Initiators Mapping Window click the Add Initiator Group button.
1) Fill out the fields as shown in the screenshot and then click the Choose button to select the
Portset.
1) At this point all of the fields in the Create Initiator Group window should appear as shown. Click
the Initiators tab.
1) Enter the iSCSI initiator name for the Windows host jumphost that we gathered in section 4.2.1;
that initiator name was iqn.1991-05.com.microsoft:jumphost.demo.netapp.com. If it is still in
your copy/paste buffer you can paste the value in here by using Ctrl-v. Afterwards click the OK
button.
1) You should see a message stating that the winigrp initiator group was created successfully.
Click OK to acknowledge the message.
1) Make sure you select the checkbox so that the LUN will be mapped to this igroup.
2) Click the Next button.
1) We are not going to set any Quality of Service properties for this LUN, so just click the Next
button.
1) Review the settings and if everything is correct click the Next button.
Display a list of the defined portsets and igroups, then create a new portset named iscsi_pset_1 and a new igroup named winigrp that we will use to manage access to the new LUN. Finally, add the Windows client's initiator name to the igroup.
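The CLI equivalent of these steps is sketched below. The igroup create syntax mirrors the command used later in this guide for the Linux igroup, while the portset create and portset add invocations are an approximation of the Data ONTAP 8.2 syntax; the portset, igroup, and LIF names are the ones used in this lab.

cluster1::> portset show
cluster1::> igroup show
cluster1::> portset create -vserver svmluns -portset iscsi_pset_1 -protocol iscsi
cluster1::> portset add -vserver svmluns -portset iscsi_pset_1 -port-name
cluster1-01_iscsi_lif_1,cluster1-01_iscsi_lif_2,cluster1-02_iscsi_lif_1,cluster1-02_iscsi_lif_2
cluster1::> igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype
windows -portset iscsi_pset_1 -initiator iqn.1991-05.com.microsoft:jumphost.demo.netapp.com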
Map the LUN windows.lun to the igroup winigrp, then display a list of all the LUNs, all the mapped LUNs, and finally a detailed report on the configuration of the LUN windows.lun.
cluster1::> lun map -vserver svmluns -volume winluns -lun windows.lun -igroup winigrp
cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008 204.0MB
cluster1::> lun mapped show
Vserver    Path                                     Igroup   LUN ID  Protocol
---------- ---------------------------------------- -------- ------- --------
svmluns    /vol/winluns/windows.lun                 winigrp       0  iscsi
cluster1::> lun show -lun windows.lun -instance
Vserver Name: svmluns
LUN Path: /vol/winluns/windows.lun
Volume Name: winluns
Qtree Name: ""
LUN Name: windows.lun
LUN Size: 204.0MB
OS Type: windows_2008
Space Reservation: disabled
Serial Number: BLH0T?DDsJWb
Comment: Windows LUN
Space Reservations Honored: false
Space Allocation: disabled
State: online
LUN UUID: e8a93e14-4730-49e0-bd3f-5c4d7fbabb6a
Mapped: mapped
Block Size: 512
Device Legacy ID: -
Device Binary ID: -
Device Text ID: -
Read Only: false
Inaccessible Due to Restore: false
Used Size: 0
Maximum Resize Size: 502.0GB
Creation Time: 2/18/2014 16:47:49
Class: regular
Clone: false
Clone Autodelete Enabled: false
QoS Policy Group: -
cluster1::>
On the desktop of the Windows client named jumphost, click the Start button and navigate to
Administrative Tools->MPIO to open the MPIO Properties window. We are going to validate that the
Multi-Path I/O (MPIO) software is working properly before we attempt to mount the LUN.
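As an optional aside (not part of the lab's click path), Windows also ships a command-line MPIO utility, mpclaim, that can be used for a quick check from an administrative command prompt; its output details vary by Windows version.

C:\> mpclaim -s -d

This lists the disks currently claimed by MPIO, which will be empty until the LUN is connected over multiple paths later in this section.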
1) If the Add support for iSCSI devices checkbox is NOT greyed out then MPIO is NOT configured
properly. If this is the case then check that checkbox, click the Add button, and then click Yes on
the reboot dialog to reboot your Windows host. After the reboot completes return here to verify
that the Add support for iSCSI devices checkbox is now greyed out. Once again, under a
proper MPIO configuration this checkbox should be greyed out.
2) Click OK to close the window.
Now we will begin the process of connecting to the LUNs. On the desktop of jumphost click the Start
button again and navigate to Administrative Tools->iSCSI Initiator to open the iSCSI Initiator Properties
window.
1) Note that there are currently no targets listed in the Discovered targets text window as we have
not yet mapped any iSCSI targets to this host. Click the Discovery tab.
1) We are now going to manually add a target portal to jumphost. Click the Discover Portal
button to open the Discover Target Portal dialog window.
1) Note that the Target portals box under the Discovery tab now shows an entry for the IP address
we specified in the preceding step.
1) Check the Enable multi-path checkbox, then click the Advanced button.
1) Set the Target portal IP dropdown to the value of the IP Address/Port we specified for the LIF
(should be 192.168.0.133/3260 as shown in the screenshot). Click the OK button to close the
Advanced Settings dialog box.
Back in the iSCSI Initiator Properties window note that the status of the Discovered target has now
changed to Connected.
Thus far we have added a single path to our iSCSI LUN using the cluster1-01_iscsi_lif_1 LIF on
the node cluster1-01. We are now going to add in additional paths using each of the other SAN LIFs
we created for the SVM svmluns.
1) In the iSCSI Initiator Properties window select the target in the Discovered targets list.
2) Click the Properties button to open the Properties dialog.
This starts the sequence for adding a path for the cluster1-01_iscsi_lif_2 LIF.
1) Click the Add session button to open the Connect To Target dialog.
2) Check the Enable multi-path checkbox, then click the Advanced button to open the Advanced Settings dialog box.
1) Set the Target portal IP dropdown to the value of the IP Address/Port we specified for the 2nd LIF
(should be 192.168.0.134/3260 as shown in the screenshot). Click the OK button to close the
Advanced Settings dialog box.
1) Click the Add session button to open the Connect To Target dialog.
1) Check the Enable multi-path checkbox, then click the Advanced button to open the Advanced Settings dialog box.
1) Set the Target portal IP dropdown to the value of the IP Address/Port we specified for the 3rd LIF
(should be 192.168.0.135/3260 as shown in the screenshot). Click the OK button to close the
Advanced Settings dialog box.
1) Click the Add session button to open the Connect To Target dialog.
1) Check the Enable multi-path checkbox, then click the Advanced button to open the Advanced Settings dialog box.
1) Set the Target portal IP dropdown to the value of the IP Address/Port we specified for the 4th LIF (should be 192.168.0.136/3260 as shown in the screenshot). Click the OK button to close the Advanced Settings dialog box.
At this point we have finished adding all four paths so we can move on with the rest of the
configuration process.
1) The Identifiers list in the Properties dialog box now shows entries for four sessions, one for each
path we just configured. Click OK to close the Properties dialog.
The host should now be properly connected to the LUN using multi-pathing, so it is time to format the LUN and build a filesystem on it. Launch Windows Server Manager by clicking the Start button on the desktop of jumphost, and then go to Administrative Tools->Server Manager. It may take 10-20 seconds for Server Manager to open so please be patient.
When you launch Disk Management an Initialize Disk dialog will open informing you that you must
initialize a new disk before Logical Disk Manager can access it. If you see more than one disk listed then
MPIO has not correctly recognized that the multiple paths we set up are all for the same LUN, so please
review your steps to find and correct any configuration errors.
1) This screenshot correctly shows only a single disk. Click OK to initialize the disk.
The Disk Management window is now visible and shows a new unallocated disk as shown in the following
screenshot excerpt:
1) Right-click inside the Unallocated disk and select New Simple Volume.
The new LUN is now ready as shown in the following screenshot. Before we complete this section of the lab, let's take a look at the MPIO configuration for the new LUN we just mounted.
1) Back in the WINLUN (E:) Properties dialog click OK to close the window.
4.3
In section 4.1 we created a new SVM and configured it for iSCSI. In the following sub-sections we will
perform the remaining steps needed to configure and use a LUN under Linux:
1) Gather the iSCSI Initiator Name of the Linux client.
2) Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun
within that volume, and map the LUN to the Linux client.
3) Mount the LUN on the Linux client.
You must complete all of the following subsections in order to use the LUN from the Linux client. Note
that there is no requirement to complete section 4.2 (the Windows LUN section) before starting this
section of the lab guide but the screenshots and command line output shown here assume that you have;
if you did not complete section 4.2 then the differences will not affect your ability to create and mount the
Linux LUN.
Run the following command on rhel1 to find the name of its iSCSI initiator.
[root@rhel1 ~]# cd /etc/iscsi
[root@rhel1 iscsi]# ls
initiatorname.iscsi iscsid.conf
[root@rhel1 iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 iscsi]#
1) Set the fields as shown in the screenshot and then click the Next button to advance the wizard.
2) Choose Create a new flexible volume in, populate the fields as shown, and then click the Next
button.
1) In the Initiators Mapping Window click the Add Initiator Group button.
1) Complete the fields in the Create Initiator Group window as shown. The name of the group
should be linigrp. When selecting the portset you can click the Choose button to select
iscsi_pset_1 from the list of existing portsets. IMPORTANT!!! Do not click the Create button
yet! Instead click the Initiators tab to continue.
2) Enter the iSCSI initiator name for the rhel1 host that we gathered in section 4.3.1. That initiator
name was iqn.1994-05.com.redhat:rhel1.demo.netapp.com, then click the OK button.
1) You should see a message stating that the linigrp initiator group was created successfully. Click
OK to acknowledge the message.
1) Make sure you check the checkbox so that the LUN will be mapped to this igroup.
2) Click Next.
1) We are not going to set any Quality of Service properties for this LUN, so just click the Next
button.
1) Review the settings and if everything is correct click the Next button.
Our new Linux LUN exists and is mapped so that our rhel1 client can see it, but we still have one more
configuration step remaining for this LUN as follows:
New in Data ONTAP 8.2 is a space reclamation feature that allows Data ONTAP to reclaim space from a
thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify the client
when the LUN cannot accept writes due to lack of space on the volume. This feature is supported by
VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The
RHEL clients used in this lab are running version 6.3 and so we will enable the space reclamation feature
for our Linux LUN. Space reclamation can only be enabled through the Data ONTAP command line, so if
you do not already have a PuTTY session open to cluster1 then open one now following the directions
shown in section 1.6. The username will be admin and the password will be Netapp1!.
Enable space reclamation for the LUN.
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled

cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled

cluster1::>
Create the thin provisioned Linux LUN linux.lun on the volume linluns:
cluster1::> lun show
Vserver   Path                            State   Mapped    Type         Size
--------- ------------------------------- ------- --------- ------------ --------
svmluns   /vol/winluns/windows.lun        online  mapped    windows_2008 204.0MB
cluster1::> lun create -vserver svmluns -volume linluns -lun linux.lun -size 200MB
-ostype linux -space-reserve disabled
Created a LUN of size 200m (209715200)
cluster1::> lun modify -vserver svmluns -volume linluns -lun linux.lun -comment
"Linux LUN"
cluster1::> lun show
Vserver   Path                            State   Mapped    Type         Size
--------- ------------------------------- ------- --------- ------------ --------
svmluns   /vol/linluns/linux.lun          online  unmapped  linux        200MB
svmluns   /vol/winluns/windows.lun        online  mapped    windows_2008 204.0MB
2 entries were displayed.
cluster1::>
Display a list of the cluster's igroups and portsets, then create a new igroup named linigrp that we will use to manage access to the LUN linux.lun. Add the iSCSI initiator name for the Linux host rhel1 to the new igroup.
cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
cluster1::> portset show
Vserver   Portset      Protocol Port Names              Igroups
--------- ------------ -------- ----------------------- ------------
svmluns   iscsi_pset_1 iscsi    cluster1-01_iscsi_lif_1, winigrp
                                cluster1-01_iscsi_lif_2,
                                cluster1-02_iscsi_lif_1,
                                cluster1-02_iscsi_lif_2
cluster1::> igroup create -vserver svmluns -igroup linigrp -protocol iscsi -ostype
linux -portset iscsi_pset_1 -initiator iqn.1994-05.com.redhat:rhel1.demo.netapp.com
cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   linigrp      iscsi    linux    iqn.1994-05.com.redhat:rhel1.demo.
                                         netapp.com
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
2 entries were displayed.
cluster1::>
Space Reservations Honored: false
Space Allocation: disabled
State: online
LUN UUID: b6cd6dc9-b021-4155-af42-ad6b9a1571c7
Mapped: mapped
Block Size: 512
Read Only: false
Inaccessible Due to Restore: false
Used Size: 0
Maximum Resize Size: 64.00GB
Creation Time: 2/18/2014 22:35:21
Class: regular
Clone: false
Clone Autodelete Enabled: false
QoS Policy Group: -
cluster1::>
New in Data ONTAP 8.2 is a space reclamation feature that allows Data ONTAP to reclaim space from a
thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to notify the client
when the LUN cannot accept writes due to lack of space on the volume. This feature is supported by
VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft Windows 2012. The
RHEL clients used in this lab are running version 6.3 and so we will enable the space reclamation feature
for our Linux LUN.
Configure the LUN to support space reclamation:
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled

cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled

cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled

cluster1::>
The steps in this section assume some familiarity with how to use the Linux command line. If you are not
familiar with those concepts then we recommend that you skip this section of the lab.
If you do not currently have a PuTTY session open to rhel1, open one now and log in as user root with
the password Netapp1!.
The NetApp Linux Host Utilities kit has been pre-installed on both Red Hat Linux hosts in this lab, and the iSCSI initiator name has already been configured for each host. Confirm that this is the case:
[root@rhel1 ~]# rpm -qa | grep netapp
netapp_linux_host_utilities-6-1.x86_64
[root@rhel1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 ~]#
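If you later want to script the igroup setup, the bare IQN can be extracted from the initiatorname.iscsi file with standard shell tools. The sketch below is a hypothetical helper, not part of the lab steps; it uses a temporary copy of the file so it is self-contained, whereas on a real host you would point it at /etc/iscsi/initiatorname.iscsi.

```shell
# Extract the bare IQN from an initiatorname.iscsi-style file.
# A temporary file stands in for /etc/iscsi/initiatorname.iscsi here.
f=$(mktemp)
echo 'InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com' > "$f"
# The file holds Name=Value lines; take the value after the '='.
iqn=$(grep '^InitiatorName=' "$f" | cut -d= -f2)
echo "$iqn"    # prints the IQN by itself
rm -f "$f"
```

The extracted value can then be pasted into (or substituted directly into) the igroup create command shown earlier.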
The relevant portions of the pre-configured /etc/multipath.conf file are shown below: the patterns exclude local and non-SAN devices from multipath handling, and the device section carries the NetApp-specific path settings (the values of the device attributes are not reproduced here):
"^sda"
"^hd[a-z]"
"^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
"^ccis.*"
devices {
    # NetApp iSCSI LUNs
    device {
        vendor                 "NETAPP"
        product                "LUN"
        path_grouping_policy   group_by_prio
        features
        prio
        path_checker
        failback
        path_selector
        hardware_handler
        rr_weight
        rr_min_io
        getuid_callout
    }
}
We now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot time. Note that a force-start is only necessary the very first time you start the iscsid service on a host.
[root@rhel1 ~]# service iscsid status
iscsid is stopped
[root@rhel1 ~]# service iscsid force-start
Starting iscsid:                                           [  OK  ]
[root@rhel1 ~]# service iscsi status
No active sessions
[root@rhel1 ~]# chkconfig iscsi on
[root@rhel1 ~]# chkconfig --list iscsi
iscsi           0:off  1:off  2:on  3:on  4:on  5:on  6:off
[root@rhel1 ~]#
Next discover the available targets using the iscsiadm command. Note that the exact values used for the node paths may differ in your lab from what is shown in this example, and that after running this command there will not yet be any active iSCSI sessions because we have not yet created the necessary device files.
[root@rhel1 ~]# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.0.133
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4
[root@rhel1 ~]# iscsiadm --mode session
iscsiadm: No active sessions.
[root@rhel1 ~]#
Create the devices necessary to support the discovered nodes, after which the sessions become active.
[root@rhel1 ~]# iscsiadm --mode node -l all
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.133,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.135,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.136,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.134,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.133,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.135,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.136,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.04c4e3d102ff11e39fdb123478563412:vs.4, portal: 192.168.0.134,3260] successful.
At this point the Linux client sees the LUN over all four paths but it does not yet understand that all four
paths represent the same LUN.
[root@rhel1 ~]# sanlun lun show
controller(7mode)/                          device    host     lun
vserver(Cmode)   lun-pathname               filename  adapter  protocol  size  mode
----------------------------------------------------------------------------------
svmluns          /vol/linluns/linux.lun     /dev/sde  host3    iSCSI     200m  C
svmluns          /vol/linluns/linux.lun     /dev/sdd  host4    iSCSI     200m  C
svmluns          /vol/linluns/linux.lun     /dev/sdc  host5    iSCSI     200m  C
svmluns          /vol/linluns/linux.lun     /dev/sdb  host6    iSCSI     200m  C
[root@rhel1 ~]#
Since the lab includes a pre-configured /etc/multipath.conf file, we just need to start the multipathd service to handle the multiple path management and configure it to start automatically at boot time.
[root@rhel1 ~]# service multipathd status
multipathd is stopped
[root@rhel1 ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@rhel1 ~]# service multipathd status
multipathd (pid 10408) is running...
[root@rhel1 ~]# chkconfig multipathd on
[root@rhel1 ~]# chkconfig --list multipathd
multipathd      0:off  1:off  2:on  3:on  4:on  5:on  6:off
[root@rhel1 ~]#
The multipath command displays the configuration of DM-Multipath, and the multipath -ll command displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/mapper that you use to access the multipathed LUN (in order to create a filesystem on it and to mount it); the first line of output from the multipath -ll command lists the name of that device file (in this example 3600a0980424c4830543f443061704343). The autogenerated name for this device file will likely differ in your copy of the lab. Also pay attention to the output of the sanlun lun show -p command, which shows information about the Data ONTAP path of the LUN, the LUN's size, its device file name under /dev/mapper, the multipath policy, and also information about the various device paths themselves.
You can see even more detail about the configuration of multipath and the LUN as a whole by running the commands multipath -v3 -d -ll or iscsiadm -m session -P 3. As the output of these commands is rather lengthy we have omitted it here.
The LUN is now fully configured for multipath access, so the only steps remaining before you can use the LUN on the Linux host are to create a filesystem and mount it. When you run the following commands in your lab you will need to substitute in the /dev/mapper/ string that identifies your LUN (get that string from the output of ls -l /dev/mapper):
[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980424c4830543f444472796366
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks:
done
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=4 blocks, Stripe width=64 blocks
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
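The mount step itself would look something like the following sketch. The /linuxlun mount point matches the fstab entry added later in this section, and the /dev/mapper device name shown here is the example value from this lab; yours will differ.

[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980424c4830543f444472796366 /linuxlun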
The discard option shown in the mount command allows the Red Hat host to take advantage of space reclamation for the LUN as we discussed in section 4.3.3.
To have the LUN's filesystem automatically mounted at boot time, run the following command (modified to reflect the multipath device path being used in your instance of the lab) to add the mount information to the /etc/fstab file. The command should be entered as a single line.
[root@rhel1 ~]# echo '/dev/mapper/3600a0980424c4830543f444472796366 /linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab
[root@rhel1 ~]#
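When scripting this step, building the fstab entry from a variable avoids mistyping the long device name. This is a minimal sketch: MPDEV's value is the example device name from this lab, and on your host you would take it from the output of ls -l /dev/mapper.

```shell
# Build the /etc/fstab entry for the multipathed LUN from its device name.
# MPDEV is an example value; substitute the name from your own lab.
MPDEV=3600a0980424c4830543f444472796366
ENTRY="/dev/mapper/${MPDEV} /linuxlun ext4 _netdev,discard,defaults 0 0"
echo "$ENTRY"
# On the lab host you would then append it:  echo "$ENTRY" >> /etc/fstab
```

The _netdev option defers the mount until networking is up, which matters because the device is reached over iSCSI.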
When using tab completion, if the Data ONTAP command interpreter is unable to identify a unique expansion it will display a list of potential matches, similar to what using the ? character does.
cluster1::> cluster s
Error: Ambiguous command.
cluster setup
cluster show
cluster statistics
cluster1::>
The Data ONTAP commands are structured hierarchically. When you log in you are placed at the root of
that command hierarchy, but you can step into a lower branch of the hierarchy by entering one of the
base commands. For example, when you first log in to the cluster enter the ? command to see the list of
available base commands, as follows:
cluster1::> ?
  up                  Go up one directory
  cluster>            Manage clusters
  dashboard>          Display dashboards
  event>              Manage system events
  exit                Quit the CLI session
  history             Show the history of commands for this CLI session
  job>                Manage jobs and job schedules
  lun>                Manage LUNs
  man                 Display the on-line manual pages
  network>            Manage physical and virtual network connections
  qos>                QoS settings
  redo                Execute a previous command
  rows                Show/Set the rows for this CLI session
  run                 Run interactive or non-interactive commands in the node shell
  security>           The security directory
  set                 Display/Set CLI session settings
  sis                 Manage volume efficiency
  snapmirror>         Manage SnapMirror
  statistics>         Display operational statistics
  storage>            Manage physical storage, including disks, aggregates, and failover
  system>             The system directory
  top                 Go to the top-level directory
  volume>             Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>            Manage Vservers
cluster1::>
The > character at the end of a command signifies that it has a sub-hierarchy; enter the vserver
command to enter the vserver sub-hierarchy.
cluster1::> vserver
cluster1::vserver> ?
audit>
cifs>
context
create
dashboard>
data-policy>
delete
export-policy>
fcp>
fpolicy>
group-mapping>
iscsi>
locks>
modify
name-mapping>
nfs>
peer>
rename
security>
services>
setup
show
smtape>
start
stop
cluster1::vserver>
Notice how the prompt changed to reflect that you are now in the vserver sub-hierarchy, and that some
of the subcommands here have sub-hierarchies of their own. To return to the root of the hierarchy enter
the top command; you can also navigate upwards one level at a time by using the up or .. commands.
cluster1::vserver> top
cluster1::>
The Data ONTAP command interpreter supports command history. By repeatedly hitting the up arrow key
you can step through the series of commands you ran earlier and you can selectively execute a given
command again when you find it by hitting the Enter key. You can also use the left and right arrow keys to
edit the command before you run it again.
References
The following references were used in writing this lab guide.
Version History
Version       Date
Version 1.1   August 2013       Initial Release
              September 2013
Version 1.2   February 2014
              November 2014
              November 2015
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.
© 2013 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, xxx, and xxx are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. <<Insert third-party trademark notices here.>> All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.