
Basic Concepts for Clustered Data ONTAP 8.3.1

December 2015 | SL10237 Version 1.2

Before You Begin

You must choose whether you want to complete this lab using OnCommand System Manager, NetApp's
GUI management tool, or the Command Line Interface (CLI) for configuring the clustered Data ONTAP
system in this lab.
This document contains two complete versions of the lab guide, one which utilizes System Manager for the lab's
clustered Data ONTAP configuration activities, and another that utilizes the CLI. Both versions walk you through
the same set of management tasks.

If you want to use System Manager, begin here.


If you want to use the CLI, begin here.

Basic Concepts for Clustered Data ONTAP 8.3.1

2015 NetApp, Inc. All rights reserved. NetApp Proprietary

TABLE OF CONTENTS

1 GUI Introduction.............................................................................................................................. 5
2 Introduction...................................................................................................................................... 6
2.1 Why clustered Data ONTAP?................................................................................................... 6
2.2 Lab Objectives........................................................................................................................... 7
2.3 Prerequisites.............................................................................................................................. 7
2.4 Accessing the Command Line.................................................................................................8
3 Lab Environment........................................................................................................................... 10
4 Lab Activities................................................................................................................................. 12
4.1 Clusters.....................................................................................................................................12
4.1.1 Connect to the Cluster with OnCommand System Manager............................................................................. 12
4.1.2 Advanced Drive Partitioning............................................................................................................................... 14
4.1.3 Create a New Aggregate on Each Cluster Node...............................................................................................18
4.1.4 Networks............................................................................................................................................................. 25

4.2 Create Storage for NFS and CIFS..........................................................................................31


4.2.1 Create a Storage Virtual Machine for NAS........................................................................................................ 33
4.2.2 Configure CIFS and NFS................................................................................................................................... 45
4.2.3 Create a Volume and Map It to the Namespace............................................................................................... 58
4.2.4 Connect to the SVM From a Windows Client.................................................................................................... 76
4.2.5 Connect to the SVM From a Linux Client.......................................................................................................... 81
4.2.6 NFS Exporting Qtrees (Optional)....................................................................................................................... 83

4.3 Create Storage for iSCSI........................................................................................................ 88


4.3.1 Create a Storage Virtual Machine for iSCSI...................................................................................................... 89
4.3.2 Create, Map, and Mount a Windows LUN......................................................................................................... 96
4.3.3 Create, Map, and Mount a Linux LUN............................................................................................................. 141

5 References....................................................................................................................................159
6 Version History............................................................................................................................ 160


7 CLI Introduction........................................................................................................................... 161


8 Introduction.................................................................................................................................. 162
8.1 Why clustered Data ONTAP?............................................................................................... 162
8.2 Lab Objectives....................................................................................................................... 163
8.3 Prerequisites.......................................................................................................................... 163
8.4 Accessing the Command Line............................................................................................. 164
9 Lab Environment......................................................................................................................... 166
10 Using the clustered Data ONTAP Command Line..................................................................168
11 Lab Activities............................................................................................................................. 170
11.1 Clusters.................................................................................................................................170
11.1.1 Advanced Drive Partitioning........................................................................................................................... 170
11.1.2 Create a New Aggregate on Each Cluster Node........................................................................................... 173
11.1.3 Networks......................................................................................................................................................... 175

11.2 Create Storage for NFS and CIFS......................................................................................177


11.2.1 Create a Storage Virtual Machine for NAS.................................................................................................... 178
11.2.2 Configure CIFS and NFS............................................................................................................................... 182
11.2.3 Create a Volume and Map It to the Namespace Using the CLI.....................................................................185
11.2.4 Connect to the SVM From a Windows Client................................................................................................ 189
11.2.5 Connect to the SVM From a Linux Client...................................................................................................... 194
11.2.6 NFS Exporting Qtrees (Optional)................................................................................................................... 196

11.3 Create Storage for iSCSI.................................................................................................... 199


11.3.1 Create a Storage Virtual Machine for iSCSI.................................................................................................. 200
11.3.2 Create, Map, and Mount a Windows LUN..................................................................................................... 202
11.3.3 Create, Map, and Mount a Linux LUN........................................................................................................... 236

12 References..................................................................................................................................244
13 Version History.......................................................................................................................... 245


1 GUI Introduction
This begins the GUI version of the Basic Concepts for Clustered Data ONTAP 8.3.1 lab guide.


2 Introduction

This lab introduces the fundamentals of clustered Data ONTAP. In it you will start with a pre-created 2-node
cluster, and configure Windows 2012R2 and Red Hat Enterprise Linux 6.6 hosts to access storage on the cluster
using CIFS, NFS, and iSCSI.

2.1 Why clustered Data ONTAP?


One of the key ways to understand the benefits of clustered Data ONTAP is to consider server virtualization.
Before server virtualization, system administrators frequently deployed applications on dedicated servers in order
to maximize application performance, and to avoid the instabilities often encountered when combining multiple
applications on the same operating system instance. While this design approach was effective, it also had the
following drawbacks:

It did not scale well: adding new servers for every new application was expensive.
It was inefficient: most servers were significantly under-utilized, and businesses were not extracting the
full benefit of their hardware investment.
It was inflexible: re-allocating standalone server resources for other purposes was time consuming, staff
intensive, and highly disruptive.

Server virtualization directly addresses all three of these limitations by decoupling the application instance
from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware, allowing
businesses to consolidate their server workloads to a smaller set of more effectively utilized physical servers.
Additionally, the ability to transparently migrate running virtual machines across a pool of physical servers
reduces the impact of downtime due to scheduled maintenance activities.
Clustered Data ONTAP brings these same benefits, and many others, to storage systems. As with server
virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a single
logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data ONTAP you
can:

Combine different types and models of NetApp storage controllers (known as nodes) into a shared
physical storage resource pool (referred to as a cluster).
Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the
same storage cluster.
Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage
Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data
volumes, LUNs, CIFS shares, and NFS exports.
Support multi-tenancy with delegated administration of SVMs. Tenants can be different companies,
business units, or even individual application owners, each with their own distinct administrators whose
admin rights are limited to just the assigned SVM.
Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.
Non-disruptively migrate live data volumes and client connections from one cluster node to another.
Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively
removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during
hardware refresh cycles.
Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage workloads.
This means that businesses can scale out their SVMs beyond the bounds of a single physical node in
response to growing storage and performance requirements, all non-disruptively.
Apply software and firmware updates, and configuration changes without downtime.


2.2 Lab Objectives


This lab explores fundamental concepts of clustered Data ONTAP, and utilizes a modular design to allow you to
focus on the topics that specifically interest you. The "Clusters" section is a prerequisite for the other sections. If you
are interested in NAS functionality, then complete the Storage Virtual Machines for NFS and CIFS section. If you
are interested in SAN functionality, then complete the Storage Virtual Machines for iSCSI section, and at least
one of its Windows or Linux subsections (you may do both if you so choose).
Here is a summary of the exercises in this lab, along with their Estimated Completion Times (ECT):

Clusters (Required, ECT = 20 minutes).

Explore a cluster

View Advanced Drive Partitioning.

Create a data aggregate.

Create a Subnet.
Storage Virtual Machines for NFS and CIFS (Optional, ECT = 40 minutes)

Create a Storage Virtual Machine.

Create a volume on the Storage Virtual Machine.

Configure the Storage Virtual Machine for CIFS and NFS access.

Mount a CIFS share from the Storage Virtual Machine on a Windows client.

Mount a NFS volume from the Storage Virtual Machine on a Linux client.
Storage Virtual Machines for iSCSI (Optional, ECT = 90 minutes including all optional subsections)

Create a Storage Virtual Machine.

Create a volume on the Storage Virtual Machine.


For Windows (Optional, ECT = 40 minutes)

Create a Windows LUN on the volume and map the LUN to an igroup.

Configure a Windows client for iSCSI and MPIO and mount the LUN.
For Linux (Optional, ECT = 40 minutes)

Create a Linux LUN on the volume and map the LUN to an igroup.
Configure a Linux client for iSCSI and multipath and mount the LUN.
This lab includes instructions for completing each of these tasks using either System
Manager, NetApp's graphical administration interface, or the Data ONTAP command line.
The end state of the lab produced by either method is exactly the same, so use whichever
method you are most comfortable with.

2.3 Prerequisites
This lab introduces clustered Data ONTAP, and assumes no previous experience
with Data ONTAP. The lab does assume some basic familiarity with storage system-related concepts such as
RAID, CIFS, NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume that
the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed from
the Linux command line, and assume a basic working knowledge of the Linux command line. A basic working
knowledge of a text editor such as vi may be useful, but is not required.


2.4 Accessing the Command Line


PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in order to
run command line commands.
1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host JUMPHOST as
shown in the following screenshot; just double-click on the icon to launch it.
Tip: If you already have a PuTTY session open and you want to start another (even to a different
host), you will instead need to right-click the PuTTY icon and select PuTTY from the context
menu.

Figure 2-1:

Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This
example shows a user connecting to the Data ONTAP cluster named cluster1.
2. By default PuTTY should launch into the Basic options for your PuTTY session display as shown in the
screenshot. If you accidentally navigate away from this view just click on the Session category item to
return to this view.
3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to
open the connection. A terminal window will open and you will be prompted to log into the host. You can
find the correct username and password for the host in the Lab Host Credentials table found in the Lab
Environment section of this guide.


Figure 2-2:

If you are new to the clustered Data ONTAP CLI, the length of the commands can seem a little
intimidating. However, the commands are actually quite easy to use if you remember the following three tips:

Make liberal use of the Tab key while entering commands, as the clustered Data ONTAP
command shell supports tab completion. If you hit the Tab key while entering a portion of a
command word, the command shell will examine the context and try to complete the rest of
the word for you. If there is insufficient context to make a single match, it will display a list of all
the potential matches. Tab completion also usually works with command argument values, but
there are some cases where there is simply not enough context for it to know what you want,
in which case you will just need to type in the argument value.
You can recall your previously entered commands by repeatedly pressing the up-arrow
key, and you can then navigate up and down the list using the up-arrow and down-arrow
keys. When you find a command you want to modify, you can use the left-arrow, right-arrow, and Delete keys to navigate within a selected command to edit it.
Entering a question mark character (?) causes the CLI to print contextual help information. You
can use this character on a line by itself or while entering a command.

The clustered Data ONTAP command line supports a number of additional usability features that make
the command line much easier to use. If you are interested in learning more about this topic then please
refer to the "Hands-On Lab for Advanced Features of Clustered Data ONTAP 8.3.1" lab, which contains
an entire section dedicated to this subject.
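
As a quick illustration of the contextual help feature described above, typing a question mark after a partial command prints the options available at that point in the command hierarchy. The commands below are only a sketch; the exact listing you see depends on your Data ONTAP version and privilege level.

cluster1::> storage aggregate ?
cluster1::> volume create ?

The first command lists the subcommands available in the storage aggregate directory, and the second lists the parameters accepted by the volume create command.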


3 Lab Environment
The following figure contains a diagram of the environment for this lab.

Figure 3-1:
All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration
steps outlined in this lab guide, you are free to deviate from this guide and experiment with other Data ONTAP
features that interest you. While the virtual storage controllers (vsims) used in this lab offer nearly all of the
same functionality as physical storage controllers, they are not capable of providing the same performance as a
physical controller, which is why these labs are not suitable for performance testing.
Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses.
Table 1: Lab Host Credentials

Hostname     Description                          IP Address(es)   Username             Password
JUMPHOST     Windows 2012R2 Remote Access host    192.168.0.5      Demo\Administrator   Netapp1!
RHEL1        Red Hat 6.6 x64 Linux host           192.168.0.61     root                 Netapp1!
RHEL2        Red Hat 6.6 x64 Linux host           192.168.0.62     root                 Netapp1!
DC1          Active Directory Server              192.168.0.253    Demo\Administrator   Netapp1!
cluster1     Data ONTAP cluster                   192.168.0.101    admin                Netapp1!
cluster1-01  Data ONTAP cluster node              192.168.0.111    admin                Netapp1!
cluster1-02  Data ONTAP cluster node              192.168.0.112    admin                Netapp1!

Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.


Table 2: Preinstalled NetApp Software

Hostname      Description
JUMPHOST      Data ONTAP DSM v4.1 for Windows MPIO, Windows Unified Host Utility Kit v7.0.0, NetApp PowerShell Toolkit v3.2.1.68
RHEL1, RHEL2  Linux Unified Host Utilities Kit v7.0


4 Lab Activities
4.1 Clusters
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that are joined together for the purpose of serving
data to end users. The nodes in a cluster can pool their resources together so that the cluster can distribute its
work across the member nodes. Communication and data transfer between member nodes (such as when a
client accesses data on a node other than the one actually hosting the data) takes place over a 10Gb cluster interconnect network to which all the nodes are connected, while management and client data traffic passes over
separate management and data networks configured on the member nodes.
Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both controllers
in an HA pair actively host and serve data, but they are also capable of taking over their partner's responsibilities
in the event of a service disruption by virtue of their redundant cable paths to each other's disk storage. Having
multiple HA pairs in a cluster allows the cluster to scale out to handle greater workloads, and to support non-disruptive migrations of volumes and client connections to other nodes in the cluster resource pool. This means
that cluster expansion and technology refreshes can take place while the cluster remains fully online, and serving
data.
Since clusters are almost always composed of one or more HA pairs, a cluster almost always contains an even
number of controller nodes. There is one exception to this rule: the single node cluster, which is a special
cluster configuration that supports small storage deployments using a single physical controller head. The primary
difference between single node and standard clusters, besides the number of nodes, is that a single node cluster
does not have a cluster network. Single node clusters can be converted into traditional multi-node clusters, and
at that point become subject to all the standard cluster requirements like the need to utilize an even number of
nodes consisting of HA pairs. This lab does not contain a single node cluster, and so this lab guide does not
discuss them further.
Data ONTAP 8.3 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although the
node limit can be lower depending on the model of FAS controller in use. Data ONTAP 8.3 clusters that also host
iSCSI and FC can scale up to a maximum of 8 nodes.
This lab utilizes simulated NetApp storage controllers rather than physical FAS controllers. The simulated
controller, also known as a VSIM, is a virtual machine that simulates the functionality of a physical controller
without the need for dedicated controller hardware. The vsim is not designed for performance testing, but does
offer much of the same functionality as a physical FAS controller, including the ability to generate I/O to disks.
This makes the vsim a powerful tool to explore and experiment with Data ONTAP product features. The vsim
is limited when a feature requires a specific physical capability that the vsim does not support. For example,
vsims do not support Fibre Channel connections, which is why this lab uses iSCSI to demonstrate block storage
functionality.
This lab starts with a pre-created, minimally configured cluster. The pre-created cluster already includes Data
ONTAP licenses, the cluster's basic network configuration, and a pair of pre-configured HA controllers. In this
next section you will create the aggregates that are used by the SVMs that you will create in later sections of the
lab. You will also take a look at the new Advanced Drive Partitioning feature introduced in clustered Data ONTAP
8.3.
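
If you would like to verify this starting state from the CLI at any point, the following read-only commands are a reasonable sketch (open a PuTTY session to cluster1 as described in the Accessing the Command Line section; the exact output columns vary by release):

cluster1::> cluster show
cluster1::> storage failover show

The first command lists the member nodes along with their health and eligibility, and the second confirms that storage failover is configured between the two nodes of the HA pair.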

4.1.1 Connect to the Cluster with OnCommand System Manager


OnCommand System Manager is NetApp's browser-based management tool for configuring and managing
NetApp storage systems and clusters. Prior to 8.3, System Manager was a separate application that you had
to download and install on your client OS. In 8.3, System Manager has moved on board the cluster, so you
just point your web browser to the cluster management address. The on-board System Manager interface is
essentially the same as the one NetApp offered in System Manager 3.1, the version you install on a client.


On the Jumphost, the Windows 2012R2 Server desktop you see when you first connect to the lab, open the web
browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if you prefer one
of those. All three browsers already have System Manager set as the browser home page.
1. Launch Chrome to open System Manager.

Figure 4-1:
The OnCommand System Manager Login window opens.
2. Enter the User Name as admin, and the Password as Netapp1!, and then click Sign In.
System Manager is now logged in to cluster1, and displays a summary page for the cluster. If you are
unfamiliar with System Manager, here is a quick introduction to its layout.

Figure 4-2:
Use the tabs on the left side of the window to manage various aspects of the cluster.
3. The Cluster tab accesses configuration settings that apply to the cluster as a whole.
4. The Storage Virtual Machines tab allows you to manage individual Storage Virtual Machines (SVMs,
also known as Vservers).


5. The Nodes tab contains configuration settings that are specific to individual controller nodes.
Please take a few moments to expand and browse these tabs to familiarize yourself with their contents.

Figure 4-3:

Note: As you use System Manager in this lab, you may encounter situations where buttons at
the bottom of a System Manager pane are beyond the viewing size of the window, and no scroll
bar exists to allow you to scroll down to see them. If this happens, then you have two options:
either increase the size of the browser window (you might need to increase the resolution of
your jumphost desktop to accommodate the larger browser window), or in the System Manager
window, use the tab key to cycle through all the various fields and buttons, which eventually
forces the window to scroll down to the non-visible items.

4.1.2 Advanced Drive Partitioning


Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical storage
in clustered Data ONTAP, and are tied to a specific cluster node by virtue of their physical connectivity (i.e.,
cabling) to a given controller head.
Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a
group of disks that are all physically attached to the same node. A given disk can only be a member of a single
aggregate.
By default each cluster node has one aggregate known as the root aggregate, which is a group of the node's
local disks that host the node's Data ONTAP operating system. A node's root aggregate is automatically created
during Data ONTAP installation in a minimal RAID-DP configuration. This means it is initially composed of 3 disks


(1 data, 2 parity), and has a name that begins with the string aggr0. For example, in this lab the root aggregate of
the node cluster1-01 is named aggr0_cluster1_01, and the root aggregate of the node cluster1-02 is named
aggr0_cluster1_02.
On higher end FAS systems that have many disks, the requirement to dedicate 3 disks for each controller's root
aggregate is not a burden, but for entry level FAS systems that only have 24 or 12 disks this root aggregate disk
overhead requirement significantly reduces the disks available for storing user data. To improve usable capacity,
NetApp introduced Advanced Drive Partitioning in 8.3, which divides the Hard Disk Drives (HDDs) on nodes
that have this feature enabled into two partitions: a small root partition, and a much larger data partition. Data
ONTAP allocates the root partitions to the node root aggregate, and the data partitions to data aggregates. Each
partition behaves like a virtual disk, so in terms of RAID, Data ONTAP treats these partitions just like physical
disks when creating aggregates. The key benefit is that a much higher percentage of the node's overall disk
capacity is now available to host user data.
Data ONTAP only supports HDD partitioning for FAS 22xx and FAS25xx controllers, and only for HDDs installed
in their internal shelf on those models. Advanced Drive Partitioning can only be enabled at system installation
time, and there is no way to convert an existing system to use Advanced Drive Partitioning other than to
completely evacuate the affected HDDs, and re-install Data ONTAP.
All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of HDDs. The
capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3 also introduces
SSD partitioning for use with Flash Pools, but the details of that feature lie outside the scope of this lab.
In this section, you will use the GUI to determine if a cluster node is utilizing Advanced Drive Partitioning. System
Manager provides a basic view into this information, but if you want to see more detail then you will want to use
the CLI.
1. In System Manager's left pane, navigate to the Cluster tab.
2. Expand cluster1.
3. Expand Storage.
4. Click Disks.
5. In the main window, click on the Summary tab.
6. Scroll the main window down to the Spare Disks section, where you will see that each cluster node
has 12 spare disks with a per-disk size of 26.88 GB. These spares represent the data partitions of the
physical disks that belong to each node.



Figure 4-4:

If you scroll back up to look at the Assigned HDDs section of the window, you will see that there are
no entries listed for the root partitions of the disks. Under daily operation, you will be primarily concerned
with data partitions rather than root partitions, and so this view focuses on just showing information about
the data partitions. To see information about the physical disks attached to your system you will need to
select the Inventory tab.
7. Click on the Inventory tab at the top of the Disks window.


Figure 4-5:
System Manager's main window now shows a list of the physical disks available across all the nodes
in the cluster, which nodes own those disks, and so on. If you look at the Container Type column you
see that the disks in your lab all show a value of shared; this value indicates that the physical disk is
partitioned. For disks that are not partitioned you would typically see values like spare, data, parity,
and dparity.
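If you want the more detailed CLI view mentioned earlier, commands along the following lines show the partitioned (shared) disks and the spare partitions; this is a sketch, and the available fields can differ slightly between Data ONTAP releases:

cluster1::> storage disk show -container-type shared
cluster1::> storage aggregate show-spare-disks

The first command lists only the disks whose container type is shared (that is, partitioned disks), and the second shows the spare root and data partitions available to each node.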
For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically
determines the size of the root and data disk partitions at system installation time based on the quantity
and size of the available disks assigned to each node. In this lab each cluster node has twelve 32 GB
hard disks, and you can see how your nodes' root aggregates are consuming the root partitions on those
disks by going to the Aggregates page in System Manager.
8. On the Cluster tab, navigate to cluster1 > Storage > Aggregates.
9. In the Aggregates list, select aggr0_cluster1_01, which is the root aggregate for cluster node
cluster1-01. Notice that the total size of this aggregate is a little over 10 GB. The Available and Used
space shown for this aggregate in your lab may vary from what is shown in this screenshot, depending
on the quantity and size of the snapshots that exist on your node's root volume.
10. Click the Disk Layout tab at the bottom of the window. The lower pane of System Manager now
displays a list of the disks that are members of this aggregate. Notice that the usable space is 1.52 GB,
which is the size of the root partition on the disk. The Physical Space column displays the total capacity
of the whole disk that is available to clustered Data ONTAP, including the space allocated to both the
disk's root and data partitions.



Figure 4-6:

4.1.3 Create a New Aggregate on Each Cluster Node


The only aggregates that exist on a newly created cluster are the node root aggregates. The root aggregate
should not be used to host user data, so in this section you will be creating a new aggregate on each of the nodes
in cluster1 so they can host the storage virtual machines, volumes, and LUNs that you will be creating later in this
lab.
A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the
storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to use
one or more specific aggregates to host the SVM's volumes. Multiple SVMs can be assigned to use the same
aggregate, which offers greater flexibility in managing storage space, whereas dedicating an aggregate to just a
single SVM provides greater workload isolation.
In this lab activity, you create a single user data aggregate on each node in the cluster.
You can create aggregates from either the Cluster tab, or the Nodes tab. For this exercise use the Cluster tab
as follows:
1. Select the Cluster tab.
Tip: To avoid confusion, always double-check to make sure that you are working in the correct
left pane tab context when performing activities in System Manager!
2. Go to cluster1 > Storage > Aggregates.
3. Click on the Create button to launch the Create Aggregate Wizard.



Figure 4-7:
The Create Aggregate wizard window opens.
4. Specify the Name of the aggregate as aggr1_cluster1_01
5. Click Browse.


Figure 4-8:


The Select Disk Type window opens.


6. Select the Disk Type entry for the node cluster1-01.
7. Click OK.

Figure 4-9:
The Select Disk Type window closes, and focus returns to the Create Aggregate window.
8. The Disk Type should now show as VMDISK.
9. Set the Number of Disks to 5.
10. Click Create to create the new aggregate and to close the wizard.

Figure 4-10:
The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager.
The newly created aggregate should now be visible in the list of aggregates.
11. Select the entry for the aggregate aggr1_cluster1_01 if it is not already selected.
12. Click the Details tab to view more detailed information about this aggregates configuration.


13. Notice that aggr1_cluster1_01 is a 64-bit aggregate. In earlier versions of clustered Data ONTAP
8, an aggregate could be either 32-bit or 64-bit, but Data ONTAP 8.3 and later only supports 64-bit
aggregates. If you have an existing clustered Data ONTAP 8.x system that has 32-bit aggregates and
you plan to upgrade that cluster to 8.3+, you must convert those 32-bit aggregates to 64-bit aggregates
prior to the upgrade. The procedure for that migration is not covered in this lab, so if you need further
details then please refer to the clustered Data ONTAP documentation.


Figure 4-11:
Now repeat the process to create a new aggregate on the node "cluster1-02".
14. Click the Create button again.



Figure 4-12:
The Create Aggregate window opens.
15. Specify the Name of the aggregate as aggr1_cluster1_02
16. Click Browse.



Figure 4-13:
The Select Disk Type window opens.
17. Select the Disk Type entry for the node cluster1-02.
18. Click OK.


Figure 4-14:
The Select Disk Type window closes, and focus returns to the Create Aggregate window.
19. The Disk Type should now show as VMDISK.
20. Set the Number of Disks to 5.
21. Click Create to create the new aggregate.


Figure 4-15:
The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager.
22. The new aggregate, aggr1_cluster1_02, now appears in the cluster's aggregate list.


Figure 4-16:
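
For reference, the CLI equivalent of the aggregate creation steps above would look something like the following sketch; the System Manager wizard may apply slightly different defaults (for example, RAID group sizing), so treat these commands as illustrative rather than an exact transcript:

cluster1::> storage aggregate create -aggregate aggr1_cluster1_01 -node cluster1-01 -diskcount 5
cluster1::> storage aggregate create -aggregate aggr1_cluster1_02 -node cluster1-02 -diskcount 5
cluster1::> storage aggregate show

The final command lists all aggregates so you can confirm that the two new data aggregates are online.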


4.1.4 Networks
This section discusses the network components that clustered Data ONTAP provides to manage your cluster:
the ports that are the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps) you
can create to aggregate those connections, and the VLANs you can use to subdivide them.
A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of associated
characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail
over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only be assigned to a single SVM,
and since LIFs are mapped to physical network ports on cluster nodes this means that an SVM runs, in part, on
all nodes that are hosting its LIFs.
Routing tables in clustered Data ONTAP are defined for each Storage Virtual Machine. Since each SVM has its
own routing table, changes to one SVM's routing table do not impact any other SVM's routing table.
IPspaces are new in Data ONTAP 8.3, and allow you to configure a Data ONTAP cluster to logically separate
one IP network from another, even if those two networks are using the same IP address range. IPspaces are a
multi-tenancy feature that allows storage service providers to share a cluster between different companies while still
separating storage traffic for privacy and security. Every cluster includes a default IPspace to which Data ONTAP
automatically assigns new SVMs, and that default IPspace is probably sufficient for most NetApp customers who
deploy a cluster within a single company or organization that uses a non-conflicting IP address range.
Broadcast Domains are also new in Data ONTAP 8.3, and are collections of ports that all have access to the
same layer 2 networks, both physical and virtual (i.e., VLANs). Every IPspace has its own set of Broadcast
Domains, and Data ONTAP provides a default broadcast domain to go along with the default IPspace. Broadcast
domains are used by Data ONTAP to determine what ports an SVM can use for its LIFs.
Subnets in Data ONTAP 8.3 are a convenience feature intended to make LIF creation and management easier
for Data ONTAP administrators. A subnet is a pool of IP addresses that you can specify by name when creating
a LIF. Data ONTAP will automatically assign an available IP address from the pool to the LIF, along with a subnet
mask and a gateway. A subnet is scoped to a specific broadcast domain, so all the subnet's addresses belong
to the same layer 3 network. Data ONTAP manages the pool automatically as you create or delete LIFs, and if
you manually configure a LIF with an address from the pool, it will detect that the address is in use and mark it as
such in the pool.
DNS Zones allow an SVM to manage DNS name resolution for its own LIFs, and since multiple LIFs can share
the same DNS name, this allows the SVM to load balance traffic by IP address across the LIFs. To use DNS
Zones you must configure your DNS server to delegate DNS authority for the subdomain to the SVM.
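
If you would like to see how these objects look from the command line, the following read-only commands provide a quick sketch of the default IPspace, its broadcast domains, and any defined subnets (output formatting varies by release):

cluster1::> network ipspace show
cluster1::> network port broadcast-domain show
cluster1::> network subnet show

Because you have not yet created a subnet in this lab, the last command will initially show no entries.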

4.1.4.1 Create Subnets


In this section of the lab, you will create a subnet that you will leverage in later sections to provision SVMs and
LIFs. You will not create IPspaces or Broadcast Domains, as the system defaults are sufficient for this lab.
1. In the left pane of System Manager, select the Cluster tab.
2. In the left pane, navigate to cluster1 > Configuration > Network.
3. In the right pane, select the Broadcast Domains tab.
4. Select the Default broadcast domain.



Figure 4-17:
Review the Port Details section at the bottom of the Network pane and note that the e0c through e0g ports on
both cluster nodes are all part of this broadcast domain. These are the network ports that you will use in
this lab.
Now create a new Subnet for this lab.
5. Select the Subnets tab, and notice that there are no subnets listed in the pane. Unlike Broadcast
Domains and IPSpaces, Data ONTAP does not provide a default Subnet.
6. Click the Create button.


Figure 4-18:
The Create Subnet window opens.
Set the fields in the window as follows.
7. Subnet Name: Demo.
8. Subnet IP/Subnet mask: 192.168.0.0/24.
9. The values you enter in the IP address field depend on what sections of the lab guide you intend to
complete.
Attention: It is important that you choose the right values here so that the values in your lab will
correctly match up with the values used in this lab guide.

If you plan to complete just the NAS section or both the NAS and SAN sections then enter
192.168.0.131-192.168.0.139.

If you plan to complete just the SAN section then enter 192.168.0.133-192.168.0.139.
10. Gateway: 192.168.0.1.
11. Click the Browse button.



Figure 4-19:
The Select Broadcast Domain window opens.
12. Select the Default entry from the list.
13. Click OK.



Figure 4-20:
The Select Broadcast Domain window closes, and focus returns to the Create Subnet window.
14. The values in your Create Subnet window should now match those shown in the following screenshot,
the only possible exception being for the IP Addresses field, whose value may differ depending on what
value range you chose to enter to match your plans for the lab.
15. If it's not already displayed, click the Show ports on this domain link under the Broadcast
Domain textbox to see the list of ports that this broadcast domain includes.
16. Click Create.


Figure 4-21:
The Create Subnet window closes, and focus returns to the Subnets tab in System Manager.
17. Notice that the main pane of the Subnets tab now includes an entry for your newly created
subnet, and that the lower portion of the pane includes metrics tracking the consumption of the IP
addresses that belong to this subnet.



Figure 4-22:
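
If you had chosen the CLI path, an equivalent subnet could be created with a single command along these lines (a sketch; adjust the -ip-ranges value to match the address range you chose above):

cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -subnet 192.168.0.0/24 -ip-ranges 192.168.0.131-192.168.0.139 -gateway 192.168.0.1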

Feel free to explore the contents of the other available tabs on the Network page. Here is a brief
summary of the information available on those tabs.

The Ethernet Ports tab displays the physical NICs on your controller, which will be a superset
of the NICs that you saw previously listed as belonging to the default broadcast domain. The
other NICs you will see listed on the Ethernet Ports tab include the nodes' cluster network
NICs.
The Network Interfaces tab displays a list of all of the LIFs on your cluster.
The FC/FCoE Adapters tab lists all the WWPNs for all the controllers' NICs in the event they
will be used for iSCSI or FCoE connections. The simulated NetApp controllers you are using
in this lab do not include FC adapters, and this lab does not make use of FCoE.

4.2 Create Storage for NFS and CIFS


Expected Completion Time: 40 Minutes
If you are only interested in SAN protocols then you do not need to complete this section. However, we
recommend that you review the conceptual information found here, and at the beginning of each of this section's
subsections, before you advance to the SAN section, as most of this conceptual material will not be repeated
there.
Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that operate
within a cluster and serve data out to storage clients. A single cluster can host hundreds of SVMs, with each SVM
managing its own set of volumes (FlexVols), Logical Network Interfaces (LIFs), storage access protocols (e.g.,
NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients, its own namespace.


The ability to support many SVMs in a single cluster is a key feature in clustered Data ONTAP, and customers
are encouraged to actively embrace this feature in order to take full advantage of a cluster's capabilities. We
recommend against starting a deployment that is intended to scale out with only a single SVM.
You explicitly configure which storage protocols you want a given SVM to support at the time you create that
SVM. You can later add or remove protocols as desired. A single SVM can host any combination of the supported
protocols.
An SVM's assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM. As
you saw earlier, an aggregate is directly connected to the specific node hosting its disks, which means that an
SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also has a direct
relationship to any nodes that are hosting its LIFs. A LIF is essentially an IP address with a number of associated
characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail
over to, an assigned SVM, a role, a routing group, and so on. You can only assign a given LIF to a single SVM,
and since LIFs map to physical network ports on cluster nodes, this means that an SVM runs in part on all nodes
that are hosting its LIFs.
When you configure an SVM with multiple data LIFs, clients can use any of those LIFs to access volumes hosted
by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension which LIF, is a
function of name resolution, the mapping of a hostname to an IP address. CIFS Servers have responsibility under
NetBIOS for resolving requests for their hostnames received from clients, and in so doing can perform some load
balancing by responding to different clients with different LIF addresses, but this distribution is not sophisticated
and requires external NetBIOS name servers in order to deal with clients that are not on the local network. NFS
Servers do not handle name resolution on their own.
DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same hostname.
DNS is supported by both NFS and CIFS clients, and works equally well with clients on local area and wide area
networks. Since DNS is an external service that resides outside of Data ONTAP, this architecture creates the
potential for service disruptions if the DNS server is advertising IP addresses for LIFs that are temporarily offline.
To compensate for this condition you can configure DNS servers to delegate the name resolution responsibility
for the SVM's hostname records to the SVM itself, so that it can directly respond to name resolution requests
involving its LIFs. This allows the SVM to consider LIF availability and LIF utilization levels when deciding what
LIF address to return in response to a DNS name resolution request.
LIFs that map to physical network ports that reside on the same node as a volume's containing aggregate offer
the most efficient client access path to the volume's data. However, clients can also access volume data through
LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered Data ONTAP uses
the high speed cluster network to bridge communication between the node hosting the LIF and the node hosting
the volume. NetApp best practice is to create at least one NAS LIF for a given SVM on each cluster node that has
an aggregate that is hosting volumes for that SVM. If you desire additional resiliency then you can also create a
NAS LIF on nodes not hosting aggregates for the SVM.
A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically failover from one cluster node to another
in the event of a component failure. Any existing connections to that LIF from NFS and SMB 2.0 (and later)
clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS LIF migrates to
a different physical NIC, potentially to a NIC on a different node in the cluster, and continues servicing network
requests from that new node/port. Throughout this operation the NAS LIF maintains its IP address. Clients
connected to the LIF may notice a brief delay while the failover is in progress, but as soon as it completes the
clients resume any in-process NAS operations without any loss of data.
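To make this concrete, once you have created a NAS LIF later in this section you can list its failover targets from the CLI with a command of this general form (a sketch; substitute the SVM and LIF names you actually create):

cluster1::> network interface show -vserver svm1 -lif svm1_cifs_nfs_lif1 -failover

The output lists the home node and port for the LIF along with the other node and port combinations it is eligible to fail over to.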
The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each storage
controller node can host a maximum of 125 SVMs, so you can calculate the cluster's effective SVM limit by
multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM can host, but there
is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per node, but if the node is
part of an HA pair configured for failover then the limit is half that value, 128 LIFs per node (so that a node can
also accommodate its HA partner's LIFs in the event of a failover).
Each SVM has its own NAS namespace, a logical grouping of the SVM's CIFS and NFS volumes into a single
logical filesystem view. Clients can access the entire namespace by mounting a single share or export at the
top of the namespace tree, meaning that SVM administrators can centrally maintain and present a consistent
view of the SVM's data to all clients rather than having to reproduce that view structure on each individual


client. As an administrator maps and unmaps volumes from the namespace, those volumes instantly become
visible or disappear from clients that have mounted CIFS and NFS volumes higher in the SVM's namespace.
Administrators can also create NFS exports at individual junction points within the namespace, and can create
CIFS shares at any directory path in the namespace.
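
As a hedged example of how this namespace looks from the CLI, after you have created an SVM and mapped volumes into its namespace (as you will do later in this section), a command like the following lists each volume's junction path, which is its mount point within the namespace:

cluster1::> volume show -vserver svm1 -fields junction-path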

4.2.1 Create a Storage Virtual Machine for NAS


In this section you will create a new SVM named svm1 on the cluster and will configure it to serve out a volume
over NFS and CIFS. You will be configuring two NAS data LIFs on the SVM, one per node in the cluster.
Start by creating the storage virtual machine.
1. In System Manager, open the Storage Virtual Machines tab.
2. Select cluster1.
3. Click Create to launch the Storage Virtual Machine Setup wizard.

Figure 4-23:
The Storage Virtual Machine (SVM) Setup window opens.
4. Set the SVM Name: value to svm1.
5. In the Data Protocols: area, check the CIFS and NFS checkboxes.
Tip: The list of available Data Protocols is dependent upon what protocols are licensed on your
cluster; if a given protocol isn't listed, it is because you are not licensed for it. (In this lab all the
protocols are licensed.)
6. Set the Security Style: value to NTFS.
7. Set the Root Aggregate: listbox to aggr1_cluster1_01.
8. Click Submit & Continue.



Figure 4-24:
The "Storage Virtual Machine (SVM) Setup" window opens.
9. The Subnet setting defaults to Demo, since this is the only subnet definition that exists in your lab.
10. Click Browse next to the Port textbox.



Figure 4-25:
The Select Network Port or Adapter window opens.
11. Expand the list of ports for the node cluster1-01, and select port e0c.
12. Click OK.


Figure 4-26:
The Select Network Port or Adapter window closes, and focus returns to the protocols portion of the
Storage Virtual Machine (SVM) Setup wizard.
13. The Port textbox should have been populated with the cluster and port value you just selected.
14. Set the CIFS Server Name: value to svm1.
15. Set the Active Directory: value to demo.netapp.com.
16. Set the Administrator Name: value to Administrator.
17. Set the Password: value to Netapp1!.
18. The optional Provision a volume for CIFS storage textboxes offer a quick way to provision a simple
volume and CIFS share at SVM creation time, with the caveat that this share will not be multi-protocol.
Since in most cases you will create a share for an existing SVM, rather than create a share here, this lab
guide shows the more full-featured procedure in the following sections.


Figure 4-27:

Scroll down in the window to see the NIS Configuration section.


19. In the NIS section the Domain Name and IP Addresses fields are blank. In an NFS environment
where you are running NIS you would want to configure these values, but this lab environment does not
utilize NIS, and populating these fields will create a name resolution problem later in the lab.
20. As was the case with CIFS, the Provision a volume for NFS storage textboxes offer a quick way
to provision a volume and create an NFS export for that volume. Once again, the volume will not be
inherently multi-protocol, and will in fact be a completely separate volume from the CIFS share volume
that you could have selected to create in the CIFS section. This lab will illustrate the more full-featured
volume creation process later in the guide.
21. Click Submit & Continue to advance the wizard to the next screen.



Figure 4-28:
The SVM Administration section of the Storage Virtual Machine (SVM) Setup wizard opens. This
window allows you to set up an administrative account for this specific SVM so you can delegate
administrative tasks to an SVM-specific administrator without giving that administrator cluster-wide
privileges. As the comments in this wizard window indicate, this account must also exist for use with
SnapDrive. Although you will not be using SnapDrive in this lab, it is a good idea to create this account,
and you will do so here.
22. The User Name is pre-populated with the value vsadmin.
23. Set the Password and Confirm Password textboxes to netapp123.
24. When finished, click Submit & Continue.


Figure 4-29:
The New Storage Virtual Machine (SVM) Summary window opens.
25. Review the settings for the new SVM, taking special note of the IP Address listed in the CIFS/NFS
Configuration section. Data ONTAP drew this address from the subnet's address pool that you created earlier in
the lab. Make sure you use the scrollbar on the right to see all the available information.
26. When finished, click OK.


Figure 4-30:
The window closes, and focus returns to the System Manager window, which now displays a summary
page for your newly created svm1 SVM.
27. Notice that in the main pane of the window the CIFS protocol is listed with a green background. This
indicates that a CIFS server is running for this SVM.
28. Notice too, that the NFS protocol is listed with a green background, which indicates that there is a
running NFS server for this SVM.

Figure 4-31:
The New Storage Virtual Machine Setup Wizard only provisions a single LIF when creating a new SVM.
NetApp best practice is to configure a LIF on both nodes in an HA pair so that a client can access the
SVM's shares through either node. To comply with that best practice you will now create a second LIF
hosted on the other node in the cluster.
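For reference, the same LIF can also be created from the clustered Data ONTAP command line. The following is only a rough sketch using this lab's names (node cluster1-02, port e0c, and the Demo subnet); the wizard steps below accomplish the same thing, so use one method or the other. Note that this sketch does not include the management-access setting you will enable in the wizard.

cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data
            -data-protocol cifs,nfs -home-node cluster1-02 -home-port e0c -subnet-name Demo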
System Manager for clustered Data ONTAP 8.2 (and earlier) presented LIF management under the
Storage Virtual Machines tab, only offering visibility to LIFs for a single SVM at a time. In clustered Data
ONTAP 8.3, that functionality has moved to the Cluster tab, where you now have a single view for
managing all the LIFs in your cluster.
29. Select the Cluster tab in the left navigation pane of System Manager.
30. Navigate to cluster1 > Configuration > Network.
31. Select the Network Interfaces tab in the main Network pane.
32. Select the only LIF listed for the svm1 SVM. Notice that this LIF is named svm1_cifs_nfs_lif1; follow
that same naming convention for the new LIF.
33. Click Create to launch the Network Interface Create Wizard.

Figure 4-32:
The Create Network Interface window opens.
34. Set the Name: value to svm1_cifs_nfs_lif2.
35. Set the Interface Role: radio button to Serves Data.
36. Set the SVM: dropdown to svm1.
37. In the Protocol Access: area, check the CIFS and NFS checkboxes.
38. In the Management Access: area, check the Enable Management Access checkbox.
39. Set the Subnet: dropdown to Demo.
40. Check the Auto-select the IP address from this subnet checkbox.
41. Also expand the Port Selection listbox, and select the entry for cluster1-02 port e0c.
42. Click Create to continue.

Figure 4-33:
The Create Network Interface window closes, and focus returns to the Network pane in System
Manager.
43. Notice that a new entry for the svm1_cifs_nfs_lif2 LIF is now present under the Network Interfaces
tab. Select this entry and review the LIF's properties in the lower pane.

Figure 4-34:

Lastly, you need to configure DNS delegation for the SVM so that Linux and Windows clients can
intelligently utilize all of svm1's configured NAS LIFs. To achieve this objective, the DNS server must
delegate to the cluster the responsibility for the DNS zone corresponding to the SVM's hostname,
which in this case will be svm1.demo.netapp.com. The lab's DNS server is already configured to
delegate this responsibility, but you must also configure the SVM to accept it. System Manager does
not currently include the capability to configure DNS delegation, so you will need to use the CLI for this
purpose.
44. Open a PuTTY connection to cluster1 following the instructions in the Accessing the Command Line
section at the beginning of this guide. Log in using the username "admin" and the password "Netapp1!",
then enter the following commands.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>

45. Validate that delegation is working correctly by opening PowerShell on the jumphost and using the
nslookup command as shown in the following CLI output. If the nslookup command returns different IP
addresses on different lookup attempts then delegation is working correctly. If the nslookup command
returns a Non-existent domain error, then delegation is not working correctly, and you will need to
review the Data ONTAP commands you entered for any errors. Also notice in the following CLI output

that different executions of the nslookup command return different addresses, demonstrating that DNS
load balancing is working correctly.
Tip: You may need to run the nslookup command more than 2 times before you see it report
different addresses for the hostname.
Windows PowerShell
Copyright (C) 2013 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server: dc1.demo.netapp.com
Address: 192.168.0.253
Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address: 192.168.0.132
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address: 192.168.0.253
Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address: 192.168.0.131
PS C:\Users\Administrator.DEMO>

4.2.2 Configure CIFS and NFS


Clustered Data ONTAP configures CIFS and NFS on a per SVM basis. When you created the svm1 SVM in the
previous section, you set up and enabled CIFS and NFS for that SVM. However, it is important to understand that
clients cannot yet access the SVM using CIFS and NFS. That is partially because you have not yet created any
volumes on the SVM, but also because you have not told the SVM what you want to share, and who you want to
share it with.
Each SVM has its own namespace. A namespace is a logical grouping of a single SVM's volumes into a directory
hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM's root volume
(svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares data to CIFS
and NFS clients. The SVM's other volumes are junctioned (i.e., mounted) within that root volume or within other
volumes that are already junctioned into the namespace. This hierarchy presents NAS clients with a unified,
centrally maintained view of the storage encompassed by the namespace, regardless of where those junctioned
volumes physically reside in the cluster. CIFS and NFS clients cannot access a volume that has not been
junctioned into the namespace.
CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share declared
at the top of the namespace. While this is a very powerful capability, there is no requirement to make the whole
namespace accessible. You can create CIFS shares at any directory level in the namespace, and you can
create different NFS export rules at junction boundaries for individual volumes and for individual qtrees within a
junctioned volume.
Clustered Data ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy model
that dictates the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly exports
the root of its namespace and automatically associates that export with the SVM's default export policy. But that
default policy is initially empty, and until it is populated with access rules no NFS clients will be able to access
the namespace. The SVM's default export policy applies to the root volume and also to any volumes that an
administrator junctions into the namespace, but an administrator can optionally create additional export policies
in order to implement different access rules within the namespace. You can apply export policies to a volume
as a whole and to individual qtrees within a volume, but a given volume or qtree can only have one associated
export policy. While you cannot create NFS exports at any other directory level in the namespace, NFS clients
can mount from any level in the namespace by leveraging the namespace's root export.
In this section of the lab, you are going to configure a default export policy for your SVM so that any volumes you
junction into its namespace will automatically pick up the same NFS export rules. You will also create a single
CIFS share at the top of the namespace so that all the volumes you junction into that namespace are accessible
through that one share. Finally, since your SVM will be sharing the same data over NFS and CIFS, you will be
setting up name mapping between UNIX and Windows user accounts to facilitate smooth multiprotocol access to
the volumes and files in the namespace.
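If you ever want to check how the namespace is laid out while you work through this section, the CLI can list each of the SVM's volumes along with its junction path and export policy. A minimal sketch, assuming you are logged in to cluster1 as admin:

cluster1::> volume show -vserver svm1 -fields junction-path,policy

Volumes that have not been junctioned show no junction path in this output, which is exactly why NAS clients cannot reach them.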

When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM's namespace. An
SVM always has a root volume, whether or not it is configured to support NAS protocols. Before you configure
NFS and CIFS for your newly created SVM, take a quick look at the SVM's root volume:
1. Select the Storage Virtual Machines tab.
2. Navigate to cluster1 > svm1 > Storage > Volumes.
3. Note the existence of the svm1_root volume, which hosts the namespace for the svm1 SVM. The root
volume is not large; only 20 MB in this example. Root volumes are small because they are only intended to
house the junctions that organize the SVM's volumes; all of the files hosted on the SVM should reside
inside the volumes that are junctioned into the namespace, rather than directly in the SVM's root volume.

Figure 4-35:
Confirm that CIFS and NFS are running for your SVM using System Manager. Check CIFS first.
4. Under the Storage Virtual Machines tab, navigate to cluster1 > svm1 > Configuration > Protocols >
CIFS.
5. In the CIFS pane, select the Configuration tab.
6. Note that the Service Status field is listed as Started, which indicates that there is a running CIFS
server for this SVM. If CIFS was not already running for this SVM, then you could configure and start it
using the Setup button found under the Configuration tab.

Figure 4-36:
Now check that NFS is enabled for your SVM.
7. Select NFS under the Protocols section.
8. Notice that the NFS Server Status field shows as Enabled. The Enable and Disable buttons on the
menu bar can be used to place the NFS server online and offline if needed. Please leave NFS enabled
for this lab.
9. NFS version 3 is enabled, but versions 4 and 4.1 are not. If you wanted to change this you could use the
Edit button to do so, but for this lab NFS version 3 is sufficient.

Figure 4-37:
At this point, you have confirmed that your SVM has a running CIFS server and a running NFS server.
However, you have not yet configured those two servers to actually serve any data. The first step in that
process is to configure the SVM's default NFS export policy.
When you create an SVM with NFS, clustered Data ONTAP automatically creates a default NFS export
policy for the SVM that contains an empty list of access rules. Without any access rules that policy
will not allow clients to access any exports, so you need to add a rule to the default policy so that the
volumes you will create on this SVM later in this lab will be automatically accessible to NFS clients. If any
of this seems a bit confusing, do not worry; the concept should become clearer as you work through this
section and the next one.
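For reference, the rule you are about to add through System Manager has a CLI equivalent along these lines (a sketch only, using the lab's default policy name and the wide-open rule described below; do not run it in addition to the GUI steps or you will end up with duplicate rules):

cluster1::> vserver export-policy rule create -vserver svm1 -policyname default
            -ruleindex 1 -clientmatch 0.0.0.0/0 -protocol cifs,nfs -rorule any -rwrule any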
10. In System Manager, select the Storage Virtual Machines tab and navigate to cluster1 > svm1 >
Policies > Export Policies.
11. In the Export Polices window, select the default policy.
12. Click the Add button in the bottom portion of the Export Policies pane.

Figure 4-38:

The Create Export Rule window opens. Using this dialog you can create any number of rules that
provide fine-grained access control for clients and specify their application order. For this lab, you are
going to create a single rule that grants unfettered access to any host on the lab's private network.
13. Set the Client Specification: value to 0.0.0.0/0.
14. Set the Rule Index: number to 1.
15. In the Access Protocols: area, check the CIFS and NFS checkboxes. The default values in the other
fields in the window are acceptable.
16. When you finish entering these values, click OK.

Figure 4-39:
The Create Export Rule window closes and focus returns to the Export Policies pane in System
Manager.
17. The new access rule you created now shows up in the bottom portion of the pane.

Figure 4-40:
With this updated default export policy in place, NFS clients will now be able to mount the root of
the svm1 SVM's namespace, and use that mount to access any volumes that you junction into the
namespace.
Now create a CIFS share for the svm1 SVM. You are going to create a single share named nsroot at
the root of the SVM's namespace.
18. Select the Storage Virtual Machines tab and navigate to cluster1 > svm1 > Storage > Shares.
19. In the Shares pane, select Create Share.

Figure 4-41:
The Create Share dialog box opens.
20. Set the Folder to Share: value to / (If you alternately opt to use the Browse button, make sure you
select the root folder).
21. Set the Share Name: value to nsroot
22. Click the Create button.

Figure 4-42:
The Create Share window closes, and focus returns to the Shares pane in System Manager. The new
nsroot share now shows up in the Shares pane, but you are not yet finished.

23. Select nsroot from the list of shares.
24. Click the Edit button to edit the share's settings.

Figure 4-43:
The Edit nsroot Settings window opens.
25. Select the Permissions tab. When you create a share, permissions are set by default to grant
Everyone Full Control. You can set more detailed permissions on the share from this tab, but this
configuration is sufficient for the exercises in this lab.

Figure 4-44:
There are other settings to check in this window, so do not close it yet.
26. Select the Options tab at the top of the window and make sure that the Enable as read-only, Enable
Oplocks, Browsable, and Notify Change checkboxes are all checked. All other checkboxes should be
cleared.
27. If you had to change any of the settings listed on the previous screen then the Save and Close button
will become active, and you should click it. Otherwise, click the Cancel button.

Figure 4-45:
The Edit nsroot Settings window closes, and focus returns to the Shares pane in System Manager.
Setup of the \\svm1\nsroot CIFS share is now complete.
For this lab you have created just one share at the root of your namespace, which allows users to
access any volume mounted in the namespace through that share. The advantage of this approach is
that it reduces the number of mapped drives that you have to manage on your clients; any changes you
make to the namespace, such as adding/removing volumes or changing junction locations, become
instantly visible to your clients. If you prefer to use multiple shares, then clustered Data ONTAP allows
you to create additional shares rooted at any directory level within the namespace.
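For reference, shares can also be created from the CLI, and the -path argument can point at any directory level within the namespace, not just the root. A minimal sketch of the share you just created (shown for illustration only; it already exists at this point in the lab):

cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /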

4.2.2 Setting Up Username Mapping


Since you have configured your SVM to support both NFS and CIFS, you next need to set up username mapping
so that the UNIX root account and the DEMO\Administrator account will have synonymous access to each
other's files. Setting up such a mapping may not be desirable in all environments, but it will simplify data sharing
for this lab since these are the two primary accounts you are using in this lab.
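For reference, the two mapping rules you are about to create in System Manager can also be created from the CLI, roughly as sketched below (create them with one method or the other, not both). The doubled backslash is intentional and matches what you will type in the GUI:

cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1
            -pattern demo\\administrator -replacement root
cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1
            -pattern root -replacement demo\\administrator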
1. In System Manager, open the Storage Virtual Machines tab and navigate to cluster1 > svm1 >
Configuration > Users and Groups > Name Mapping.
2. In the Name Mapping pane, click Add.

Figure 4-46:
The Add Name Mapping Entry window opens.
Create a Windows to UNIX mapping by completing all of the fields as follows:
3. Set the Direction: value to Windows to UNIX.
4. Set the Position: number to 1.
5. Set the Pattern: value to demo\\administrator (the two backslashes listed here are not a typo, and
administrator should not be capitalized).
6. Set the Replacement: value to root.
7. When you have finished populating these fields, click Add.

Figure 4-47:

The window closes and focus returns to the Name Mapping pane in System Manager. Click the Add
button again to create another mapping rule.
The Add Name Mapping Entry window opens.
Create a UNIX to Windows mapping by completing all of the fields as follows:
8. Set the Direction: value to UNIX to Windows.
9. Set the Position: value to 1.
10. Set the Pattern: value to root
11. Set the Replacement: value to demo\\administrator (the two backslashes listed here are not a typo,
and administrator should not be capitalized).
12. When you have finished populating these fields, click Add.

Figure 4-48:
The second Add Name Mapping window closes, and focus again returns to the Name Mapping pane
in System Manager.
13. You should now see two mappings listed in this pane that together make the root and
DEMO\Administrator accounts equivalent to each other for the purpose of file access within the SVM.

Figure 4-49:

4.2.3 Create a Volume and Map It to the Namespace


Volumes, or FlexVols, are the dynamically sized containers used by Data ONTAP to store data. A volume only
resides in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an aggregate,
which can associate with multiple SVMs, a volume can only associate with a single SVM. The maximum size of a
volume can vary depending on what storage controller model is hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be
configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000 FlexVols
(varies based on controller model), which means that there is an effective limit on the total number of volumes
that a cluster can host, depending on how many nodes there are in your cluster.
Each storage controller node has a root aggregate (e.g. aggr0_<nodename>) that contains the node's Data
ONTAP operating system. Do not use the node's root aggregate to host any other volumes or user data; always
create additional aggregates and volumes for that purpose.
Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin provisioning,
deduplication, and compression. One specific storage efficiency feature you will be seeing in this section of the lab
is thin provisioning, which dictates how space for a FlexVol is allocated in its containing aggregate.
When you create a FlexVol with a volume guarantee of type volume you are thickly provisioning the volume,
pre-allocating all of its space on the containing aggregate, which ensures that the volume will not run out of
space until it reaches 100% capacity. When you create a FlexVol with a volume guarantee of none you are
thinly provisioning the volume; space is allocated from the containing aggregate only at the time, and in the
quantity, that the volume actually needs it to store data.
This latter configuration allows you to increase your overall space utilization and even oversubscribe an
aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the subscribed
volumes reached their full size. However, if an oversubscribed aggregate does fill up then all its volumes will run

out of space before they reach their maximum volume size; oversubscribed deployments therefore generally
require a greater degree of administrative vigilance around space utilization.
In the Clusters section, you created a new aggregate named aggr1_cluster1_01; you will now use that
aggregate to host a new thinly provisioned volume named engineering for the SVM named svm1.
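For reference, thin provisioning corresponds to the volume's space guarantee setting on the CLI. A rough sketch of creating the same volume from the command line, using this lab's values (use either this or the System Manager wizard below, not both):

cluster1::> volume create -vserver svm1 -volume engineering -aggregate aggr1_cluster1_01
            -size 10GB -space-guarantee none -junction-path /engineering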
1. In System Manager, open the Storage Virtual Machines tab.
2. Navigate to cluster1 > svm1 > Storage > Volumes.
3. Click Create to launch the Create Volume wizard.

Figure 4-50:
The Create Volume window opens.
4. Populate the following values into the data fields in the window.

Name: engineering
Aggregate: aggr1_cluster1_01
Total Size: 10 GB
Check the Thin Provisioned checkbox.

Leave the other values at their defaults.


5. Click Create.

Figure 4-51:
The Create Volume window closes, and focus returns to the Volumes pane in System Manager.
6. The newly created engineering volume now appears in the Volumes list. Notice that the volume is 10 GB
in size, and is thin provisioned.

Figure 4-52:
System Manager has also automatically mapped the engineering volume into the SVM's NAS
namespace.
7. Navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Namespace.
8. Notice that the engineering volume is now junctioned in under the root of the SVM's namespace, and
has also inherited the default NFS export policy.

Figure 4-53:
Since you have already configured the access rules for the default policy, the volume is instantly
accessible to NFS clients. As you can see in the preceding screenshot, the engineering volume was
junctioned as /engineering, meaning that any client that had mapped a share to \\svm1\nsroot or NFS
mounted svm1:/ would now instantly see the engineering directory in the share, and in the NFS mount.
Now create a second volume.
9. Navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Volumes.
10. Click Create to launch the Create Volume wizard.

Figure 4-54:
The Create Volume window opens.
11. Populate the following values into the data fields in the window:

Name: eng_users
Aggregate: aggr1_cluster1_01
Total Size: 10 GB
Check the Thin Provisioned checkbox.

Leave the other values at their defaults.


12. Click the Create button.

Figure 4-55:
The Create Volume window closes, and focus returns again to the Volumes pane in System
Manager. The newly created eng_users volume should now appear in the Volumes list.
13. Select the eng_users volume in the volumes list, and examine the details for this volume in the
General box at the bottom of the pane. Specifically, note that this volume has a Junction Path value of
/eng_users.

Figure 4-56:
You do have more options for junctioning than just placing your volumes into the root of your
namespace. In the case of the eng_users volume, you will re-junction that volume underneath the
engineering volume, and shorten the junction name to take advantage of an already intuitive context.
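The CLI equivalents of the unmount and re-mount operations you are about to perform in System Manager look roughly like the following sketch (shown for reference only; the volume name and junction path match this lab):

cluster1::> volume unmount -vserver svm1 -volume eng_users
cluster1::> volume mount -vserver svm1 -volume eng_users -junction-path /engineering/users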
14. Navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Namespace.
15. In the Namespace pane, select the eng_users junction point.
16. Click Unmount.

Figure 4-57:
The Unmount Volume window opens asking for confirmation that you really want to unmount the
volume from the namespace.
17. Click Unmount.

Figure 4-58:
The Unmount Volume window closes, and focus returns to the Namespace pane in System
Manager. The eng_users volume no longer appears in the junction list for the namespace, and since
it is no longer junctioned in the namespace, clients can no longer access it or even see it.
Now you will junction the volume in at another location in the namespace.
18. Click Mount.

Figure 4-59:
The Mount Volume window opens.
19. Set the fields in the window as follows.

Volume Name: eng_users.

Junction Name: users.


20. Click Browse.

Figure 4-60:
The Browse For Junction Path window opens.
21. Select engineering, which will populate /engineering into the textbox above the list.
22. Click Select to accept the selection.

Figure 4-61:
The Browse For Junction Path window closes, and focus returns to the Mount Volume window.
23. The fields in the Mount Volume window should now all contain values as follows:

Volume Name: eng_users.

Junction Name: users.

Junction Path: /engineering.


24. When ready, click Mount.

Figure 4-62:
The Mount Volume window closes, and focus returns to the Namespace pane in System Manager.
25. The eng_users volume is now mounted in the namespace as /engineering/users.

Figure 4-63:

You can also create a junction within user-created directories. For example, from a CIFS or NFS client
you could create a folder named projects inside the engineering volume, and then create a widgets
volume that junctions in under the projects folder. In that scenario, the namespace path to the widgets
volume's contents would be /engineering/projects/widgets.
Now you will create a couple of qtrees within the eng_users volume, one for each of the users bob
and susan.
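As with the other objects in this section, qtrees can also be created from the CLI. The following sketch mirrors the two qtrees you are about to create in System Manager (again, create them with one method or the other, not both):

cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob
cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan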
26. Navigate to Storage Virtual Machines > cluster1 > svm1 > Storage > Qtrees.
27. Click Create to launch the Create Qtree wizard.

Figure 4-64:
The Create Qtree window opens.
28. Set the Name: value to bob
29. Click the Browse button next to the Volume: property.

Figure 4-65:
The "Select a Volume" window opens.
30. Expand the svm1 list and select the eng_users volume. Remember, here you are selecting the name of
the volume that will host the qtree, not the path where that qtree will reside in the namespace.
31. Click the OK button.

Figure 4-66:
The "Select a Volume" window closes, and focus returns to the "Create Qtree" window.
32. The Volume field is now populated with eng_users.
33. Select the Quota tab.

Figure 4-67:
The Quota tab is where you define the space usage limits you want to apply to the qtree. You will not
actually be implementing any quota limits in this lab.
34. Click the Create button to finish creating the qtree.

Figure 4-68:

The Create Qtree window closes, and focus returns to the Qtrees pane in System Manager.
35. The new bob qtree is now present in the qtrees list.
36. Now create a qtree for the user account "susan" by clicking the Create button.

Figure 4-69:
The Create Qtree window opens.
37. Select the Details tab and then populate the fields as follows.

Name: susan

Volume: eng_users
38. Click Create.

Figure 4-70:
The Create Qtree window closes, and focus returns to the Qtrees pane in System Manager.
39. At this point you should see both the bob and susan qtrees in System Manager.

Figure 4-71:

4.2.4 Connect to the SVM From a Windows Client


The svm1 SVM is up and running and is configured for NFS and CIFS access, so it's time to validate that
everything is working properly by mounting the NFS export on a Linux host, and the CIFS share on a Windows
host. You should complete both parts of this section so you can see that both hosts are able to seamlessly access
the volume and its files.
This part of the lab demonstrates connecting the Windows client jumphost to the CIFS share \\svm1\nsroot using
the Windows GUI.
1. On the Windows host jumphost open Windows Explorer by clicking on the folder icon on the taskbar.

Figure 4-72:
A Windows Explorer window opens.
2. In Windows Explorer click on Computer.

3. Click on Map network drive to launch the Map Network Drive wizard.

Figure 4-73:
The Map Network Drive wizard opens.
4. Set the fields in the window to the following values.

Drive: S:

Folder: \\svm1\nsroot

Check the Reconnect at sign-in checkbox.


5. When finished click Finish.

Figure 4-74:
A new Windows Explorer window opens.
6. The engineering volume you earlier junctioned into svm1's namespace is visible at the top of the
nsroot share, which points to the root of the namespace. If you created another volume on svm1 right
now and mounted it under the root of the namespace, that new volume would instantly become visible
in this share, and to clients like jumphost that have already mounted the share. Double-click on the
engineering folder to open it.

Figure 4-75:
File Explorer displays the contents of the engineering folder. Next you will create a file in this folder to
confirm that you can write to it.
7. Notice that the eng_users volume that you junctioned in as users is visible inside this folder.
8. Right-click in the empty space in the right pane of File Explorer.
9. In the context menu, select New > Text Document, and name the resulting file cifs.txt.

Figure 4-76:
10. Double-click the cifs.txt file you just created to open it with Notepad.
Tip: If you aren't seeing file extensions in your lab, you can enable that by going to the View
menu at the top of Windows Explorer and checking the File Name Extensions checkbox.
11. In Notepad, enter some text (make sure you put a carriage return at the end of the line, or else when
you later view the contents of this file on Linux the command shell prompt will appear on the same line
as the file contents).
12. Use the File > Save menu in Notepad to save the file's updated contents to the share. If write access is
working properly you will not receive an error message.

Figure 4-77:
Close Notepad and File Explorer to finish this exercise.

4.2.5 Connect to the SVM From a Linux Client


This section demonstrates how to connect a Linux client to the NFS volume svm1:/ using the Linux command line.
1. Follow the instructions in the Accessing the Command Line section at the beginning of this lab guide to
open PuTTY and connect to the system rhel1. Log in as the user root with the password Netapp1!.
2. Verify that there are no NFS volumes currently mounted on rhel1.
[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962504   6311544  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
[root@rhel1 ~]#

3. Create the /svm1 directory to serve as a mount point for the NFS volume you will be shortly mounting.
[root@rhel1 ~]# mkdir /svm1
[root@rhel1 ~]#

4. Add an entry for the NFS mount to the fstab file.


[root@rhel1 ~]# echo "svm1:/ /svm1 nfs rw,defaults 0 0" >> /etc/fstab
[root@rhel1 ~]#

5. Verify the fstab file contains the new entry you just created.
[root@rhel1 ~]# grep svm1 /etc/fstab
svm1:/ /svm1 nfs rw,defaults 0 0
[root@rhel1 ~]#

6. Mount all the file systems listed in the fstab file.


[root@rhel1 ~]# mount -a
[root@rhel1 ~]#

7. View a list of the mounted file systems.


[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962508   6311540  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
svm1:/                           19456     128     19328   1% /svm1
[root@rhel1 ~]#

The NFS file system svm1:/ now shows as mounted on /svm1.


8. Navigate into the /svm1 directory.
[root@rhel1 ~]# cd /svm1
[root@rhel1 svm1]#

9. Notice that you can see the engineering volume that you previously junctioned into the SVM's
namespace.
[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]#

10. Navigate into engineering and list its contents.


Attention: The following command output assumes that you have already performed the
Windows client connection steps found earlier in this lab guide, including creating the cifs.txt file.
[root@rhel1 svm1]# cd engineering
[root@rhel1 engineering]# ls
cifs.txt users
[root@rhel1 engineering]#

11. Display the contents of the cifs.txt file you created earlier.
Tip: When you cat the cifs.txt file, if the shell prompt winds up on the same line as the file
output then that indicates that you forgot to include a newline at the end of the file when you
created the file on Windows.
[root@rhel1 engineering]# cat cifs.txt
write test from jumphost
[root@rhel1 engineering]#

12. Verify that you can create a file in this directory.


[root@rhel1 engineering]# echo "write test from rhel1" > nfs.txt
[root@rhel1 engineering]# cat nfs.txt
write test from rhel1
[root@rhel1 engineering]# ll
total 4
-rwxrwxrwx 1 root bin    26 Oct 20 03:05 cifs.txt
-rwxrwxrwx 1 root root   22 Oct 20 03:06 nfs.txt
drwxrwxrwx 4 root root 4096 Oct 20 02:37 users
[root@rhel1 engineering]#

4.2.6 NFS Exporting Qtrees (Optional)


Clustered Data ONTAP 8.2.1 introduced the ability to NFS export qtrees. This optional section explains how to
configure qtree exports, and demonstrates how to set different export rules for a given qtree. For this exercise you
will work with the qtrees you created in the previous section.
Qtrees had many capabilities in Data ONTAP 7-mode that are no longer present in cluster mode. Qtrees do still
exist in cluster mode, but their purpose is essentially now limited to just quota management, with most other
7-mode qtree features, including NFS exports, now the exclusive purview of volumes. This functionality change
created challenges for 7-mode customers with large numbers of NFS qtree exports who were trying to transition
to cluster mode and could not convert those qtrees to volumes because they would exceed clustered Data
ONTAP's maximum number of volumes limit.
To solve this problem, clustered Data ONTAP 8.2.1 introduced qtree NFS. NetApp continues to recommend that
customers favor volumes over qtrees in cluster mode whenever practical, but customers requiring large numbers
of qtree NFS exports now have a supported solution under clustered Data ONTAP.
While this section provides a graphical method to configure qtree NFS exports, you must still use the command
line to accomplish some configuration tasks.
Begin by creating a new export policy with a rule that only permits NFS access from the Linux host rhel1.
1. In System Manager, select the Storage Virtual Machines tab.
2. Navigate to cluster1 > svm1 > Policies > Export Policies.
3. Click the Create button.

Figure 4-78:
The Create Export Policy window opens.
4. Set the Policy Name to rhel1-only.
5. Click the Add button.

Figure 4-79:
The Create Export Rule window opens.
6. Set Client Specification to 192.168.0.61, and notice that you are leaving all of the Access Protocol
checkboxes unchecked.
7. Click OK.

Figure 4-80:
The Create Export Rule window closes, and focus returns to the Create Export Policy window.
8. The new access rule is now present in the rules window, and the rule's Access Protocols entry
indicates that there are no protocol restrictions. If you had selected all the available protocol checkboxes
when creating this rule, then each of those selected protocols would have been explicitly listed here.
9. Click Create.

Figure 4-81:
The Create Export Policy window closes, and focus returns to the Export Policies pane in System
Manager.
10. The rhel1-only policy now shows up in the export policy list.

Figure 4-82:
Now you need to apply this new export policy to the qtree. System Manager does not support this
capability so you will have to use the clustered Data ONTAP command line. Open a PuTTY connection
to cluster1, and log in using the username admin and the password Netapp1!, then enter the following
commands.
11. Produce a list of svm1's export policies, and then a list of its qtrees:
cluster1::> vserver export-policy show
Vserver         Policy Name
--------------- -------------------
svm1            default
svm1            rhel1-only
2 entries were displayed.

cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.
cluster1::>

12. Apply the rhel1-only export policy to the susan qtree.


cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only
cluster1::>

13. Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is
using the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan
Vserver Name: svm1
Volume Name: eng_users
Qtree Name: susan
Qtree Path: /vol/eng_users/susan
Security Style: ntfs
Oplock Mode: enable
Unix Permissions:
Qtree Id: 2
Qtree Status: normal
Export Policy: rhel1-only
Is Export Policy Inherited: false
cluster1::>

14. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.
cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.
cluster1::>

15. Now you need to validate that the more restrictive export policy that you've applied to the qtree susan
is working as expected. If you still have an active PuTTY session open to the Linux host rhel1
then bring that window up now, otherwise open a new PuTTY session to that host (username = root,
password = Netapp1!). Run the following commands to verify that you can still access the susan qtree
from rhel1.
[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]# ls
bob susan
[root@rhel1 users]# cd susan
[root@rhel1 susan]# echo "hello from rhel1" > rhel1.txt
[root@rhel1 susan]# cat rhel1.txt
hello from rhel1
[root@rhel1 susan]#

16. Now open a PuTTY connection to the Linux host rhel2 (again, username = root and password =
Netapp1!). This host should be able to access all the volumes and qtrees in the svm1 namespace
*except* susan, which should give a permission denied error because that qtree's associated export
policy only grants access to the host rhel1.
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]# ls
bob susan
[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]# cd bob
[root@rhel2 bob]#

4.3 Create Storage for iSCSI


Expected Completion Time: 50 Minutes

This section of the lab is optional, and includes instructions for mounting a LUN on Windows and Linux. If you
choose to complete this section you must first complete the Create a Storage Virtual Machine for iSCSI section,
and then complete either the Create, Map, and Mount a Windows LUN section, or the Create, Map, and Mount
a Linux LUN section as appropriate based on your platform of interest.
The 50 minute time estimate assumes you complete only one of the Windows or Linux LUN sections. You are
welcome to complete both of those sections if you choose, but you should plan on needing approximately 90
minutes to complete the entire Create and Mount a LUN section.
If you completed the Create a Storage Virtual Machine for NFS and CIFS section of this lab then you explored
the concept of a Storage Virtual Machine (SVM), created an SVM, and configured it to serve data over NFS and
CIFS. If you skipped that section of the lab guide, consider reviewing the introductory text found at the beginning
of that section, and each of its subsections, before you proceed further because this section builds on concepts
described there.
In this section you are going to create another SVM and configure it for SAN protocols, which means you are
going to configure the SVM for iSCSI since this virtualized lab does not support FC. The configuration steps for
iSCSI and FC are similar, so the information provided here is also useful for FC deployment. After you create a
new SVM and configure it for iSCSI, you will create a LUN for Windows and/or a LUN for Linux, and then mount
the LUN(s) on their respective hosts.
NetApp supports configuring an SVM to serve data over both SAN and NAS protocols, but it is common to see
customers use separate SVMs for each in order to separate administrative responsibilities, or for architectural
and operational clarity. For example, SAN protocols do not support LIF failover, so you cannot use NAS LIFs to
support SAN protocols. You must instead create dedicated LIFs just for SAN. Implementing separate SVMs for
SAN and NAS can in this example simplify the operational complexity of each SVM's configuration, making each
easier to understand and manage, but ultimately whether to mix or separate is a customer decision, and not a
NetApp recommendation.
Since SAN LIFs do not support migration to different nodes, an SVM must have dedicated SAN LIFs on every
node that you want to service SAN requests, and you must utilize MPIO and ALUA to manage the controller's
available paths to the LUNs. In the event of a path disruption, MPIO and ALUA compensate by re-routing the
LUN communication over an alternate controller path (i.e., over a different SAN LIF).
NetApp best practice is to configure at least one SAN LIF per storage fabric/network on each node in the cluster
so that all nodes can provide a path to the LUNs. In large clusters where this would result in the presentation
of a large number of paths for a given LUN, NetApp recommends that you use portsets to limit the LUN to seeing no
more than 8 LIFs. Data ONTAP 8.3 introduces a new Selective LUN Mapping (SLM) feature to provide further
assistance in managing fabric paths. SLM limits LUN path access to just the node that owns the LUN and its HA
partner, and Data ONTAP automatically applies SLM to all new LUN map operations. For further information on
Selective LUN Mapping, please see the Hands-On Lab for SAN Features in clustered Data ONTAP 8.3.
In this lab the cluster contains two nodes connected to a single storage network. You will still configure a total of 4
SAN LIFs, because it is common to see implementations with 2 paths per node for redundancy.
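For reference, iSCSI data LIFs can also be created from the CLI. A rough sketch of one such command using this lab's node, port, and subnet names is shown below; the LIF name is hypothetical, since the SVM setup wizard you run next chooses its own names and creates all four LIFs for you:

cluster1::> network interface create -vserver svmluns -lif svmluns_iscsi_lif_1 -role data
            -data-protocol iscsi -home-node cluster1-01 -home-port e0c -subnet-name Demo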
This section of the lab allows you to create and mount a LUN for only Windows, only Linux, or both if you desire.
Both the Windows and Linux LUN creation steps require that you complete the Create a Storage Virtual Machine
for iSCSI section that comes next. If you want to create a Windows LUN, you need to complete the Create, Map,
and Mount a Windows LUN section that follows. Additionally, if you want to create a Linux LUN, you need to
complete the Create, Map, and Mount a Linux LUN section that follows after that. You can safely complete both
of those last two sections in the same lab.

4.3.1 Create a Storage Virtual Machine for iSCSI


In this section you will create a new SVM named svmluns on the cluster. You will create the SVM, configure it for
iSCSI, and create four data LIFs to support LUN access to the SVM (two on each cluster node).
Return to the System Manager window and start the procedure to create a new storage virtual machine.
1. Open the Storage Virtual Machines tab.
2. Select cluster1.

3. Click Create to launch the Storage Virtual Machine Setup wizard.

Figure 4-83:
The Storage Virtual Machine (SVM) Setup window opens.
4. Set the fields as follows:

SVM Name: svmluns


Data Protocols: check the iSCSI checkbox.
Tip: The list of available Data Protocols is dependent upon what protocols are licensed
on your cluster; if a given protocol is not listed it is because you are not licensed for it.
(In this lab the cluster is fully licensed for all features.)
Root Aggregate: aggr1_cluster1_01. If you completed the NAS section of this lab, you will
note that this is the same aggregate you used to hold the volumes for svm1. Multiple SVMs
can share the same aggregate.

The default values for IPspace, Volume Type, Default Language, and Security Style are already
populated for you by the wizard, as is the DNS configuration. When ready, click Submit & Continue.

Figure 4-84:
The Configure iSCSI Protocol step of the wizard opens.
5. Set the fields in the window as follows.

LIFs Per Node: 2

Subnet: Demo

Select the Auto-select the IP address from this subnet radio button.
6. The Provision a LUN for iSCSI Storage (Optional) section allows you to quickly create a LUN when first
creating an SVM. This lab guide does not use that option, in order to show you the much more common activity
of adding a new volume and LUN to an existing SVM in a later step.
7. Check the Review or modify LIF configuration (Advanced Settings) checkbox. Checking this
checkbox changes the window layout and makes some fields uneditable, so the screenshot shows this
checkbox before it has been checked.

Figure 4-85:
Once you check the Review or modify LIF configuration checkbox, the Configure iSCSI Protocol
window changes to include a list of the LIFs that the wizard plans to create.
8. Take note of the LIF names and ports that the wizard has chosen for the LIFs you have asked it to
create.
9. Since this lab utilizes a cluster that only has two nodes, and those nodes are configured as an HA pair,
Data ONTAP's automatically configured Selective LUN Mapping is more than sufficient for this lab, so
there is no need to create a portset.
10. Click Submit & Continue.

Figure 4-86:
The wizard advances to the SVM Administration step. Unlike data LIFs for NAS protocols, which
automatically support both data and management functionality, iSCSI LIFs only support data protocols,
so you must create a dedicated management LIF for this new SVM.
11. Set the fields in the window as follows:

Password: netapp123

Confirm Password: netapp123

Subnet: Demo

Port: cluster1-01:e0c
12. Click Submit & Continue.

Figure 4-87:
The New Storage Virtual Machine (SVM) Summary window opens. Review the contents of this
window, taking note of the names, IP addresses, and port assignments for the 4 iSCSI LIFs, and the
management LIF that the wizard created for you.
13. Click OK to close the window.

Figure 4-88:
The New Storage Virtual Machine (SVM) Summary window closes, and focus returns to System
Manager.
14. System Manager now shows a summary view for the new svmluns SVM.

Figure 4-89:

4.3.2 Create, Map, and Mount a Windows LUN


In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will
perform the remaining steps needed to configure and use a LUN under Windows:

Gather the iSCSI Initiator Name of the Windows client.


Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that volume,
and map the LUN so it can be accessed by the Windows client.
Mount the LUN on a Windows client leveraging multi-pathing.

You must complete all of the subsections of this section in order to use the LUN from the Windows client.

4.3.2.1 Gather the Windows Client iSCSI Initiator Name


You need to determine the Windows client's iSCSI initiator name so that when you create the LUN you can set up
an appropriate initiator group to control access to the LUN.
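If you prefer the command line, you can usually retrieve the initiator name with PowerShell rather than the GUI steps that follow. A minimal sketch, assuming the Windows Storage module cmdlets are available on jumphost (they ship with Windows Server 2012 and later):

PS C:\> Get-InitiatorPort | Select-Object NodeAddress, ConnectionType

For an iSCSI initiator port, the NodeAddress value is the IQN you are looking for.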
On the desktop of the Windows client named "jumphost" (the main Windows host you use in the lab), perform the
following tasks:
1. Click on the Windows button on the far left side of the task bar.

Figure 4-90:
The Start screen opens.
2. Click on Administrative Tools.

Figure 4-91:
Windows Explorer opens to the List of Administrative Tools.
3. Double-click the entry for the iSCSI Initiator tool.

Figure 4-92:
The iSCSI Initiator Properties window opens.
4. Select the Configuration tab.
5. Take note of the value in the Initiator Name field, which contains the initiator name for jumphost.
Attention: The initiator name is iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
You will need this value later, so you might want to copy this value from the properties window and paste
it into a text file on your lab's desktop so you have it readily available when that time comes.
6. Click OK.

Figure 4-93:
The iSCSI Properties window closes, and focus returns to the Windows Explorer Administrator Tools
window. Leave this window open because you will need to access other tools later in the lab.

4.3.2.2 Create and Map a Windows LUN


You will now create a new thin provisioned Windows LUN named windows.lun in the volume winluns on the
SVM svmluns. You will also create an initiator igroup for the LUN and populate it with the Windows host jumphost.
An initiator group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names of the hosts that
are permitted to see and access the associated LUNs.
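For reference, igroup creation and LUN mapping can also be performed from the clustered Data ONTAP CLI. The sketch below mirrors what the Create LUN wizard does for you in the steps that follow, using this lab's names (use the wizard or the CLI, not both):

cluster1::> lun igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype windows
            -initiator iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
cluster1::> lun map -vserver svmluns -path /vol/winluns/windows.lun -igroup winigrp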

Return to the System Manager window.


1. Open the Storage Virtual Machines tab.
2. Navigate to cluster1 > svmluns > Storage > LUNs.
3. Click Create to launch the Create LUN wizard.

Figure 4-94:
The Create LUN Wizard opens.
4. Click Next to advance to the next step in the wizard.

Figure 4-95:
The wizard advances to the General Properties step.
5. Set the fields in the window as follows.

Name: windows.lun.

Description: Windows LUN.

Type: Windows 2008 or later.

Size: 10 GB.

Check the Disable Space Reservation check box.


6. Click Next to continue.

Figure 4-96:
The wizard advances to the LUN Container step.
7. Select the radio button to Create a new flexible volume, and set the fields under that heading as
follows.

Aggregate Name: aggr1_cluster1_01.

Volume Name: winluns.


8. When finished click Next.

Figure 4-97:
The wizard advances to the Initiator Mappings step.
9. Click the Add Initiator Group button.

Figure 4-98:
The Create Initiator Group window opens.
10. Set the fields in the window as follows.

Name: winigrp

Operating System: Windows

Type: Select the iSCSI radio button.


11. Click the Initiators tab.

Figure 4-99:
The Initiators tab displays.
12. Click the Add button to add a new initiator.

Figure 4-100:
A new empty entry appears in the list of initiators.
13. Populate the Name entry with the value of the iSCSI Initiator name for jumphost that you saved earlier.
In case you misplaced that value, it was:
Attention: iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
14. When you finish entering the value, click the OK button underneath the entry. Finally, click Create.

Figure 4-101:
An Initiator-Group Summary window opens confirming that the winigrp igroup was created successfully.
15. Click OK to acknowledge the confirmation.

Figure 4-102:
The Initiator-Group Summary window closes, and focus returns to the Initiator Mapping step of the
Create LUN wizard.
16. Click the checkbox under the map column next to the winigrp initiator group.
Caution: This is a critical step because this is where you actually map the new LUN to the new
igroup.
17. Click Next to continue.

Figure 4-103:
The wizard advances to the Storage Quality of Service Properties step. You will not be creating any
QoS policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for
Advanced Concepts for clustered Data ONTAP 8.3.
18. Click Next to continue.


Figure 4-104:
The wizard advances to the LUN Summary step, where you can review your selections before proceeding with creating the LUN.
19. If everything looks correct, click Next.


Figure 4-105:
The wizard begins the task of creating the volume that contains the LUN, creating the LUN, and
mapping the LUN to the new igroup. As it finishes each step, the wizard displays a green checkmark in
the window next to that step.
20. Click the Finish button to terminate the wizard.


Figure 4-106:
The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager.
21. The new LUN windows.lun now shows up in the LUNs view, and if you select it you can review its
details in the bottom pane.


Figure 4-107:

4.3.2.3 Mount the LUN on a Windows Client


The final step is to mount the LUN on the Windows client. You will be using MPIO/ALUA to support multiple
paths to the LUN using both of the SAN LIFs you configured earlier on the svmluns SVM. Data ONTAP DSM for
Windows MPIO is the multi-pathing software you will be using for this lab, and that software is already installed on
jumphost.
You should begin by validating that the Multi-Path I/O (MPIO) software is working properly on this Windows host. The Administrative Tools window should still be open on jumphost; if you already closed it, you will need to re-open it now so you can access the MPIO tool.
1. On the desktop of JUMPHOST, in the Administrative Tools window which you should still have open,
double-click the MPIO tool.


Figure 4-108:
The MPIO Properties window opens.
2. Select the Discover Multi-Paths tab.
3. Examine the Add Support for iSCSI devices checkbox. If this checkbox is NOT greyed out, then MPIO is improperly configured. The checkbox should be greyed out for this lab; in the event it is not, place a check in the checkbox, click the Add button, and then click Yes in the reboot dialog to reboot your Windows host. Once the system finishes rebooting, return to this window to verify that the checkbox is now greyed out, indicating that MPIO is properly configured.
4. Click Cancel.


Figure 4-109:
The MPIO Properties window closes and focus returns to the Administrative Tools window for
jumphost. Now you need to begin the process of connecting jumphost to the LUN.
5. In Administrative Tools, double-click the iSCSI Initiator tool.


Figure 4-110:
The iSCSI Initiator Properties window opens.
6. Select the Targets tab.
7. Notice that there are no targets listed in the Discovered Targets list box, indicating that there are currently no iSCSI targets mapped to this host.
8. Click the Discovery tab.


Figure 4-111:
The Discovery tab is where you begin the process of discovering LUNs, and to do that you must define
a target portal to scan. You are going to manually add a target portal to jumphost.
9. Click the Discover Portal button.


Figure 4-112:
The Discover Target Portal window opens. Here you will specify the first of the IP addresses that were assigned to your iSCSI LIFs when you created the svmluns SVM. Recall that those LIFs were assigned IP addresses in the range 192.168.0.133-192.168.0.136.
10. Set the IP Address or DNS name textbox to 192.168.0.133, the first address in the range for your
LIFs.
11. Click OK.
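Note: If you want to double-check those LIF addresses from the cluster shell before typing one in, the following command lists the LIFs owned by the svmluns SVM along with their IP addresses. Any one of the four iSCSI LIF addresses will work as the starting portal.

cluster1::> network interface show -vserver svmluns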

Figure 4-113:
The Discover Target Portal window closes, and focus returns to the iSCSI Initiator Properties
window.
12. The Target Portals list now contains an entry for the IP address you entered in the previous step.


13. Click on the Targets tab.


Figure 4-114:
The Targets tab opens to show you the list of discovered targets.
14. In the Discovered targets list select the only listed target. Observe that the target's status is Inactive, because although you have discovered it you have not yet connected to it. Also note that the Name of the discovered target in your lab will have a different value than what you see in this guide; that name string is uniquely generated for each instance of the lab. (Make a mental note of that string value, as you will see it a lot as you continue to configure iSCSI in later steps of this process.)
15. Click the Connect button.
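Note: The long iqn.1992-08.com.netapp:... string shown here is the iSCSI target node name of the svmluns SVM. If you would like to confirm it from the cluster shell, the following command displays the same target name, along with the target alias and the status of the iSCSI service; this check is optional.

cluster1::> vserver iscsi show -vserver svmluns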


Figure 4-115:
The Connect to Target dialog box opens.
16. Click the Enable multi-path checkbox.
17. Click the Advanced button.

Figure 4-116:


The Advanced Settings window opens.


18. In the Target portal IP dropdown menu select the entry containing the IP address you specified when
you discovered the target portal, which should be 192.168.0.133. The listed values are IP Address and
Port number combinations, and the specific value you want to select here is 192.168.0.133 / 3260.
19. When finished, click OK.


Figure 4-117:
The Advanced Settings window closes, and focus returns to the Connect to Target window.
20. Click OK.

Figure 4-118:
The Connect to Target window closes, and focus returns to the iSCSI Initiator Properties window.
21. Notice that the status of the listed discovered target has changed from Inactive to Connected.


Figure 4-119:
Thus far you have added a single path to your iSCSI LUN, using the address of the cluster1-01_iscsi_lif_1 LIF that was created on the node cluster1-01 for the svmluns SVM. You are now going to add each of the other SAN LIFs present on the svmluns SVM. To begin this procedure you must first edit the properties of your existing connection.
22. Still on the Targets tab, select the discovered target entry for your existing connection.
23. Click Properties.


Figure 4-120:
The Properties window opens. From this window you will be starting the procedure of connecting
alternate paths for your newly connected LUN. You will be repeating this procedure 3 times, once for
each of the remaining LIFs that are present on the svmluns SVM.
LIF IP Address     Done
192.168.0.134      [ ]
192.168.0.135      [ ]
192.168.0.136      [ ]
24. The Identifier list will contain an entry for every path you have specified so far, so it can serve as a visual indicator of your progress in defining all of your paths. The first time you enter this window you will see one entry, for the LIF you used to first connect to this LUN.
25. Click Add Session.


Figure 4-121:
The Connect to Target window opens.
26. Check the Enable multi-path checkbox.
27. Click Advanced.

Figure 4-122:
The Advanced Settings window opens.
28. Select the Target portal IP entry that contains the IP address of the LIF whose path you are adding in this iteration of the procedure to add an alternate path. The following screenshot shows the 192.168.0.134 address, but the value you specify will depend on which specific path you are configuring.
29. When finished, click OK.


Figure 4-123:
The Advanced Settings window closes, and focus returns to the Connect to Target window.
30. Click OK.

Figure 4-124:
The Connect to Target window closes, and focus returns to the Properties window, where the Identifier list now shows an additional entry. Repeat the procedure from the last 4 screenshots for each of the two remaining LIF IP addresses.
When you have finished adding all 3 additional paths, the Identifier list in the Properties window should contain 4 entries.
31. There are 4 entries in the Identifier list when you are finished, indicating that there are 4 sessions,
one for each path. Note that it is normal for the identifier values in your lab to differ from those in the
screenshot.
32. Click OK.
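Note: If you would like to verify from the storage side that all four paths are logged in, the following cluster shell command should list one iSCSI session per LIF for the jumphost initiator (the exact session identifiers will differ in your lab):

cluster1::> vserver iscsi session show -vserver svmluns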


Figure 4-125:
The Properties window closes, and focus returns to the iSCSI Properties window.
33. Click OK.


Figure 4-126:
The iSCSI Properties window closes, and focus returns to the desktop of jumphost. If the Administrative
Tools window is not still open on your desktop, open it again now.
If all went well, the jumphost is now connected to the LUN using multi-pathing, so it is time to format
your LUN and build a filesystem on it.
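Note: At this point it can also be reassuring to confirm from the cluster shell that the LUN itself is online and mapped before you format it. A command like the following (output not shown here) reports the LUN's state and mapped status:

cluster1::> lun show -vserver svmluns -path /vol/winluns/windows.lun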
34. In Administrative Tools, double-click the Computer Management tool.


Figure 4-127:
The Computer Management window opens.
35. In the left pane of the Computer Management window, navigate to Computer Management (Local) >
Storage > Disk Management.


Figure 4-128:
36. When you launch Disk Management an Initialize Disk dialog will open informing you that you must
initialize a new disk before Logical Disk Manager can access it.


Note: If you see more than one disk listed, then MPIO has not correctly recognized that the multiple paths you set up are all for the same LUN. In that case, cancel the Initialize Disk dialog, quit Computer Management, and go back to the iSCSI Initiator tool to review your path configuration steps and correct any configuration errors, after which you can return to the Computer Management tool and try again.
Click OK to initialize the disk.


Figure 4-129:
The Initialize Disk window closes, and focus returns to the Disk Management view in the Computer
Management window.
37. The new disk shows up in the disk list at the bottom of the window, and has a status of Unallocated.
38. Right-click inside the Unallocated box for the disk (if you right-click outside this box you will get the
incorrect context menu), and select New Simple Volume from the context menu.


Figure 4-130:
The New Simple Volume Wizard window opens.
39. Click the Next button to advance the wizard.


Figure 4-131:
The wizard advances to the Specify Volume Size step.
40. The wizard defaults to allocating all of the space in the volume, so click the Next button.


Figure 4-132:
The wizard advances to the Assign Drive Letter or Path step.
41. The wizard automatically selects the next available drive letter, which should be E. Click Next.


Figure 4-133:
The wizard advances to the Format Partition step.
42. Set the Volume Label field to WINLUN.
43. Click Next.


Figure 4-134:
The wizard advances to the Completing the New Simple Volume Wizard step.
44. Click Finish.


Figure 4-135:
The New Simple Volume Wizard window closes, and focus returns to the Disk Management view of
the Computer Management window.
45. The new WINLUN volume now shows as Healthy in the disk list at the bottom of the window,
indicating that the new LUN is mounted and ready to use. Before you complete this section of the lab,
take a look at the MPIO configuration for this LUN by right-clicking inside the box for the WINLUN
volume.
46. From the context menu select Properties.

Figure 4-136:
The WINLUN (E:) Properties window opens.
47. Click the Hardware tab.
48. In the All disk drives list select the NETAPP LUN C-Mode Multi-Path Disk entry.
49. Click Properties.


Figure 4-137:
The NETAPP LUN C-Mode Multi-Path Disk Device Properties window opens.
50. Click the MPIO tab.
51. Notice that you are using the Data ONTAP DSM for multi-path access rather than the Microsoft DSM.
We recommend using the Data ONTAP DSM software, as it is the most full-featured option available,
although the Microsoft DSM is also supported.
52. The MPIO policy is set to Least Queue Depth. A number of different multi-pathing policies are
available, but the configuration shown here sends LUN I/O down the path that has the fewest
outstanding I/O requests. You can click the More information about MPIO policies link at the bottom
of the dialog window for details about all the available policies.
53. The top two paths show both a Path State and TPG State as Active/Optimized. These paths are
connected to the node cluster1-01 and the Least Queue Depth policy makes active use of both paths to
this node. Conversely, the bottom two paths show a Path State of Unavailable, and a TPG State of
Active/Unoptimized. These paths are connected to the node cluster1-02, and only enter a Path State
of Active/Optimized if the node cluster1-01 becomes unavailable, or if the volume hosting the LUN
migrates over to the node cluster1-02.
54. When you finish reviewing the information in this dialog click OK to exit. If you changed any of the
values in this dialog you should consider using the Cancel button to discard those changes.
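Note: The optimized paths are the ones that terminate on the node that currently hosts the winluns volume. If you are curious, you can confirm which node that is from the cluster shell with a command along these lines:

cluster1::> volume show -vserver svmluns -volume winluns -fields node

If the volume later moves to cluster1-02, the path states you see here would flip accordingly.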

Figure 4-138:

The NETAPP LUN C-Mode Multi-Path Disk Device Properties window closes, and focus returns to the
WINLUN (E:) Properties window.
55. Click OK.


Figure 4-139:
The WINLUN (E:) Properties window closes.
56. Close the Computer Management window.


Figure 4-140:
57. Close the Administrative Tools window.


Figure 4-141:


58. You may see a message from Microsoft Windows stating that you must format the disk in drive E: before you can use it. As you may recall, you already formatted the LUN during the New Simple Volume Wizard, so this is an erroneous message from Windows. Click Cancel to ignore it.


Figure 4-142:

Feel free to open Windows Explorer and verify that you can create a file on the E: drive.
This completes this exercise.

4.3.3 Create, Map, and Mount a Linux LUN


In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will
perform the remaining steps needed to configure and use a LUN under Linux:

Gather the iSCSI Initiator Name of the Linux client.


Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun within that
volume, and map the LUN to the Linux client.
Mount the LUN on the Linux client.

You must complete all of the following subsections in order to use the LUN from the Linux client. Note that you are not required to complete the Windows LUN section before starting this section of the lab guide, but the screenshots and command line output shown here assume that you have. If you did not complete the Windows LUN section, the differences will not affect your ability to create and mount the Linux LUN.

4.3.3.1 Gather the Linux Client iSCSI Initiator Name


You need to determine the Linux client's iSCSI initiator name so that you can set up an appropriate initiator group to control access to the LUN.
You should already have a PuTTY connection open to the Linux host rhel1. If you do not, then open one now
using the instructions found in the Accessing the Command Line section at the beginning of this lab guide. The
username will be root and the password will be Netapp1!.
1. Change to the directory that hosts the iscsi configuration files.
[root@rhel1 ~]# cd /etc/iscsi
[root@rhel1 iscsi]# ls
initiatorname.iscsi iscsid.conf
[root@rhel1 iscsi]#

2. Display the name of the iscsi initiator.


[root@rhel1 iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 iscsi]#

Important: The initiator name for rhel1 is iqn.1994-05.com.redhat:rhel1.demo.netapp.com.


4.3.3.2 Create and Map a Linux LUN


In this activity, you create a new thin provisioned Linux LUN on the SVM svmluns under the volume linluns,
and also create an initiator igroup for the LUN so that only the Linux host rhel1 can access it. An initiator group,
or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names for the hosts that are permitted to see
the associated LUNs.
Attention: Switch back to the System Manager window so that you can create the LUN.
1. In System Manager open the Storage Virtual Machines tab.
2. In the left pane, navigate to cluster1 > svmluns > Storage > LUNs.
3. You may or may not see a listing presented for the LUN windows.lun, depending on whether or not you
completed the lab sections for creating a Windows LUN.
4. Click Create.


Figure 4-143:
The Create LUN Wizard opens.
5. Click Next to advance to the next step in the wizard.


Figure 4-144:
The wizard advances to the General Properties step.
6. Set the fields in the window as follows.

Name: linux.lun

Description: Linux LUN

Type: Linux

Size: 10 GB

Check the Disable Space Reservation check box.


7. Click Next to continue.


Figure 4-145:
The wizard advances to the LUN Container step.
8. Select the radio button to Create a new flexible volume, and set the fields under that heading as
follows.

Aggregate Name: aggr1_cluster1_01

Volume Name: linluns


9. When finished click Next.


Figure 4-146:
The wizard advances to the Initiator Mapping step.
10. Click Add Initiator Group.


Figure 4-147:
The Create Initiator Group window opens.
11. Set the fields in the window as follows.

Name: linigrp

Operating System: Linux

Type: Select the iSCSI radio button.


12. Click the Initiators tab.


Figure 4-148:
The Initiators tab displays.
13. Click the Add button to add a new initiator.


Figure 4-149:
A new empty entry appears in the list of initiators.
14. Populate the Name entry with the value of the iSCSI Initiator name for rhel1.
Note: The initiator name is iqn.1994-05.com.redhat:rhel1.demo.netapp.com
15. When you finish entering the value, click OK underneath the entry. Finally, click Create.


Figure 4-150:
An Initiator-Group Summary window opens confirming that the linigrp igroup was created successfully.
16. Click OK to acknowledge the confirmation.

Figure 4-151:
The Initiator-Group Summary window closes, and focus returns to the Initiator Mapping step of the
Create LUN wizard.
17. Click the checkbox under the Map column next to the linigrp initiator group. This is a critical step
because this is where you actually map the new LUN to the new igroup.
18. Click Next to continue.


Figure 4-152:
The wizard advances to the Storage Quality of Service Properties step. You will not create any QoS policies in this lab. If you are interested in learning about QoS, please see the Hands-on Lab for Advanced Concepts for clustered Data ONTAP 8.3.1.
19. Click Next to continue.


Figure 4-153:
The wizard advances to the LUN Summary step, where you can review your selections before proceeding to create the LUN.
20. If everything looks correct, click Next.


Figure 4-154:
The wizard begins the task of creating the volume that will contain the LUN, creating the LUN, and
mapping the LUN to the new igroup. As it finishes each step the wizard displays a green checkmark in
the window next to that step.
21. Click Finish to terminate the wizard.


Figure 4-155:
The Create LUN wizard window closes, and focus returns to the LUNs view in System Manager.
22. The new LUN linux.lun now shows up in the LUNs view, and if you select it you can review its details
in the bottom pane.


Figure 4-156:
The new Linux LUN now exists, and is mapped to your rhel1 client.
Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space
from a thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to
notify the client when the LUN cannot accept writes due to lack of space on the volume. This feature
is supported by VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft
Windows 2012. The RHEL clients used in this lab are running version 6.6 and so you will enable the
space reclamation feature for your Linux LUN. You can only enable space reclamation through the Data
ONTAP command line.
23. In the cluster1 CLI, view whether space reclamation is enabled for the LUN.
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled

cluster1::>

24. Enable space reclamation for the LUN linux.lun.


cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled
cluster1::>

25. View the LUN's space reclamation setting again.


cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled


cluster1::>

4.3.3.3 Mount the LUN on a Linux Client


In this section you will use the Linux command line to configure the host rhel1 to connect to the Linux LUN /vol/linluns/linux.lun that you created in the preceding section.
This section assumes that you know how to use the Linux command line. If you are not familiar with these concepts, we recommend that you skip this section of the lab.
1. If you do not currently have a PuTTY session open to rhel1, open one now and log in as user root with
the password "Netapp1!".
2. The NetApp Linux Host Utilities kit has been pre-installed on both Red Hat Linux hosts in this lab, and
the iSCSI initiator name has already been configured for each host. Confirm that is the case:
[root@rhel1 ~]# rpm -qa | grep netapp
netapp_linux_unified_host_utilities-7-0.x86_64
[root@rhel1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 ~]#

3. In the /etc/iscsi/iscsid.conf file the node.session.timeo.replacement_timeout value is set to 5 to better support timely path failover, and the node.startup value is set to automatic so that the system will automatically log in to the iSCSI node at startup.
[root@rhel1 ~]# grep replacement_time /etc/iscsi/iscsid.conf
#node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 5
[root@rhel1 ~]# grep node.startup /etc/iscsi/iscsid.conf
# node.startup = automatic
node.startup = automatic
[root@rhel1 ~]#
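Note: Both of these values were pre-configured for you in this lab. On a host where they were not, one minimal way to make the same two edits (a sketch that assumes the settings already exist uncommented in the stock RHEL 6 iscsid.conf, as they do by default) would be:

[root@rhel1 ~]# sed -i 's/^node.session.timeo.replacement_timeout = .*/node.session.timeo.replacement_timeout = 5/' /etc/iscsi/iscsid.conf
[root@rhel1 ~]# sed -i 's/^node.startup = .*/node.startup = automatic/' /etc/iscsi/iscsid.conf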

4. The Red Hat Linux hosts in the lab have the DM-Multipath packages pre-installed, along with a /etc/multipath.conf file pre-configured to support multi-pathing, so that the RHEL host can access the LUN using all of the SAN LIFs you created for the svmluns SVM.
[root@rhel1 ~]# rpm -q device-mapper
device-mapper-1.02.79-8.el6.x86_64
[root@rhel1 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-72.el6.x86_64
[root@rhel1 ~]# cat /etc/multipath.conf
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd
# NetApp recommended defaults
defaults {
    flush_on_last_del       yes
    max_fds                 max
    queue_without_daemon    no
    user_friendly_names     no
    dev_loss_tmo            infinity
    fast_io_fail_tmo        5
}
blacklist {
    devnode "^sda"
    devnode "^hd[a-z]"
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^ccis.*"
}
devices {
    # NetApp iSCSI LUNs
    device {
        vendor                  "NETAPP"
        product                 "LUN"
        path_grouping_policy    group_by_prio
        features                "3 queue_if_no_path pg_init_retries 50"
        prio                    "alua"
        path_checker            tur
        failback                immediate
        path_selector           "round-robin 0"
        hardware_handler        "1 alua"
        rr_weight               uniform
        rr_min_io               128
        getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
    }
}
[root@rhel1 ~]#

5. You now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot time. Note that a force-start is only necessary the very first time you start the iscsid service on the host.
[root@rhel1 ~]# service iscsid status
iscsid is stopped
[root@rhel1 ~]# service iscsid force-start
Starting iscsid: OK
[root@rhel1 ~]# service iscsi status
No active sessions
[root@rhel1 ~]# chkconfig iscsi on
[root@rhel1 ~]# chkconfig --list iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#

6. Next, discover the available targets using the iscsiadm command. Note that the exact values used for the node paths may differ in your lab from what is shown in this example, and that after running this command there will not yet be any active iSCSI sessions, because you have not yet created the necessary device files.
[root@rhel1 ~]# iscsiadm --mode discovery --op update --type sendtargets
--portal 192.168.0.133
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]# iscsiadm --mode session
iscsiadm: No active sessions.
[root@rhel1 ~]#

7. Create the devices necessary to support the discovered nodes, after which the sessions become active.
[root@rhel1 ~]# iscsiadm --mode node -l all
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] successful.
[root@rhel1 ~]# iscsiadm --mode session
tcp: [1] 192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [2] 192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [3] 192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [4] 192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]#

8. At this point the Linux client sees the LUN over all four paths but it does not yet understand that all four
paths represent the same LUN.
[root@rhel1 ~]# sanlun lun show
controller(7mode)/                              device          host     lun
vserver(Cmode)      lun-pathname                filename        adapter  protocol  size   product
---------------------------------------------------------------------------------------------------
svmluns             /vol/linluns/linux.lun      /dev/sde        host3    iSCSI     10g    cDOT
svmluns             /vol/linluns/linux.lun      /dev/sdd        host4    iSCSI     10g    cDOT
svmluns             /vol/linluns/linux.lun      /dev/sdc        host5    iSCSI     10g    cDOT
svmluns             /vol/linluns/linux.lun      /dev/sdb        host6    iSCSI     10g    cDOT
[root@rhel1 ~]#

9. Since the lab includes a pre-configured /etc/multipath.conf file you just need to start the multipathd
service to handle the multiple path management and configure it to start automatically at boot time.
[root@rhel1 ~]# service multipathd status
multipathd is stopped
[root@rhel1 ~]# service multipathd start
Starting multipathd daemon: OK
[root@rhel1 ~]# service multipathd status
multipathd (pid 8656) is running...
[root@rhel1 ~]# chkconfig multipathd on
[root@rhel1 ~]# chkconfig --list multipathd
multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel1 ~]#

10. The multipath command displays the configuration of DM-Multipath, and the multipath -ll
command displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/
mapper that you use to access the multipathed LUN (in order to create a filesystem on it and to
mount it); the first line of output from the multipath -ll command lists the name of that device file (in
this example 3600a0980774f6a34515d464d486c7137). The autogenerated name for this device
file will likely differ in your copy of the lab. Also pay attention to the output of the sanlun lun show -p command, which shows information about the Data ONTAP path of the LUN, the LUN's size, its device file name under /dev/mapper, the multipath policy, and also information about the various device paths themselves.
[root@rhel1 ~]# multipath -ll
3600a0980774f6a34515d464d486c7137 dm-2 NETAPP,LUN C-Mode
size=10G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:0 sdb 8:16 active ready running
| `- 3:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 5:0:0:0 sdc 8:32 active ready running
  `- 4:0:0:0 sdd 8:48 active ready running
[root@rhel1 ~]# ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root      7 Oct 20 06:50 3600a0980774f6a34515d464d486c7137 -> ../dm-2
crw-rw---- 1 root root 10, 58 Oct 19 18:57 control
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_swap -> ../dm-1
[root@rhel1 ~]# sanlun lun show -p

                    ONTAP Path: svmluns:/vol/linluns/linux.lun
                           LUN: 0
                      LUN Size: 10g
                       Product: cDOT
                   Host Device: 3600a0980774f6a34515d464d486c7137
              Multipath Policy: round-robin 0
            Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdb     host6        cluster1-01_iscsi_lif_1
up        primary    sde     host3        cluster1-01_iscsi_lif_2
up        secondary  sdc     host5        cluster1-02_iscsi_lif_1
up        secondary  sdd     host4        cluster1-02_iscsi_lif_2
[root@rhel1 ~]#

You can see even more detail about the configuration of multipath and the LUN as a whole by running
the commands multipath -v3 -d -ll or iscsiadm -m session -P 3. As the output of these
commands is rather lengthy, it is omitted here.
11. The LUN is now fully configured for multipath access, so the only steps remaining before you can use the LUN on the Linux host are to create a filesystem and mount it. When you run the following commands in your lab you will need to substitute in the /dev/mapper/ string that identifies your LUN (get that string from the output of ls -l /dev/mapper):
[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980774f6a34515d464d486c7137
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks:
0/204800 done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=16 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun
[root@rhel1 ~]# df
Filesystem                                    1K-blocks    Used  Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root                   11877388 4962816    6311232  45% /
tmpfs                                            444612      76     444536   1% /dev/shm
/dev/sda1                                        495844   40084     430160   9% /boot
svm1:/                                            19456     128      19328   1% /svm1
/dev/mapper/3600a0980774f6a34515d464d486c7137  10321208  154100    9642820   2% /linuxlun
[root@rhel1 ~]# ls /linuxlun
lost+found
[root@rhel1 ~]# echo "hello from rhel1" > /linuxlun/test.txt
[root@rhel1 ~]# cat /linuxlun/test.txt
hello from rhel1
[root@rhel1 ~]# ls -l /linuxlun/test.txt
-rw-r--r-- 1 root root 6 Oct 20 06:54 /linuxlun/test.txt
[root@rhel1 ~]#

The discard option for mount allows the Red Hat host to utilize space reclamation for the LUN.
12. To have RHEL automatically mount the LUN's filesystem at boot time, run the following command (modified to reflect the multipath device path being used in your instance of the lab) to add the mount information to the /etc/fstab file. The following command should be entered as a single line:
[root@rhel1 ~]# echo '/dev/mapper/3600a0980774f6a34515d464d486c7137
/linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab
[root@rhel1 ~]#
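Note: If you want to verify the new /etc/fstab entry without rebooting, a quick sanity check (using the mount point so that mount reads the entry back from fstab) is:

[root@rhel1 ~]# umount /linuxlun
[root@rhel1 ~]# mount /linuxlun
[root@rhel1 ~]# df -h /linuxlun

If the filesystem comes back mounted on /linuxlun, the fstab entry is good.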


5 References
The following references were used in writing this lab guide.


TR-3982: NetApp Clustered Data ONTAP 8.2.x: An Introduction, July 2014
TR-4100: Nondisruptive Operations and SMB File Shares for Clustered Data ONTAP, April 2013
TR-4129: Namespaces in Clustered Data ONTAP, July 2014


6 Version History


Version        Date            Document Version History
Version 1.0    October 2014    Initial Release for Hands On Labs
Version 1.0.1  December 2014   Updates for Lab on Demand
Version 1.1    April 2015      Updated for Data ONTAP 8.3GA and other application software.
                               NDO section spun out into a separate lab guide.
Version 1.2    October 2015    Updated for Data ONTAP 8.3.1GA and other application software.


7 CLI Introduction
This begins the CLI version of the Basic Concepts for Clustered Data ONTAP 8.3.1 lab guide.


8 Introduction

This lab introduces the fundamentals of clustered Data ONTAP. In it you will start with a pre-created 2-node cluster, and configure Windows 2012R2 and Red Hat Enterprise Linux 6.6 hosts to access storage on the cluster using CIFS, NFS, and iSCSI.

8.1 Why clustered Data ONTAP?


One of the key ways to understand the benefits of clustered Data ONTAP is to consider server virtualization.
Before server virtualization, system administrators frequently deployed applications on dedicated servers in order
to maximize application performance, and to avoid the instabilities often encountered when combining multiple
applications on the same operating system instance. While this design approach was effective, it also had the
following drawbacks:

It did not scale well: adding new servers for every new application was expensive.
It was inefficient: most servers are significantly under-utilized, and businesses are not extracting the full benefit of their hardware investment.
It was inflexible: re-allocating standalone server resources for other purposes is time consuming, staff intensive, and highly disruptive.

Server virtualization directly addresses all three of these limitations by decoupling the application instance
from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware, allowing
businesses to consolidate their server workloads to a smaller set of more effectively utilized physical servers.
Additionally, the ability to transparently migrate running virtual machines across a pool of physical servers
reduces the impact of downtime due to scheduled maintenance activities.
Clustered Data ONTAP brings these same benefits, and many others, to storage systems. As with server
virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a single
logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data ONTAP you
can:


Combine different types and models of NetApp storage controllers (known as nodes) into a shared
physical storage resource pool (referred to as a cluster).
Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the
same storage cluster.
Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage
Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data
volumes, LUNs, CIFS shares, and NFS exports.
Support multi-tenancy with delegated administration of SVMs. Tenants can be different companies,
business units, or even individual application owners, each with their own distinct administrators whose
admin rights are limited to just the assigned SVM.
Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.
Non-disruptively migrate live data volumes and client connections from one cluster node to another.
Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively
removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during
hardware refresh cycles.
Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage workloads.
This means that businesses can scale out their SVMs beyond the bounds of a single physical node in
response to growing storage and performance requirements, all non-disruptively.
Apply software and firmware updates, and configuration changes without downtime.


8.2 Lab Objectives


This lab explores fundamental concepts of clustered Data ONTAP, and utilizes a modular design to allow you to focus on the topics that specifically interest you. The "Clusters" section is a prerequisite for the other sections. If you are interested in NAS functionality then complete the Storage Virtual Machines for NFS and CIFS section. If you are interested in SAN functionality, then complete the Storage Virtual Machines for iSCSI section, and at least one of its Windows or Linux subsections (you may do both if you so choose).
Here is a summary of the exercises in this lab, along with their Estimated Completion Times (ECT):

Clusters (Required, ECT = 20 minutes).

Explore a cluster

View Advanced Drive Partitioning.

Create a data aggregate.

Create a Subnet.
Storage Virtual Machines for NFS and CIFS (Optional, ECT = 40 minutes)

Create a Storage Virtual Machine.

Create a volume on the Storage Virtual Machine.

Configure the Storage Virtual Machine for CIFS and NFS access.

Mount a CIFS share from the Storage Virtual Machine on a Windows client.

Mount a NFS volume from the Storage Virtual Machine on a Linux client.
Storage Virtual Machines for iSCSI (Optional, ECT = 90 minutes including all optional subsections)

Create a Storage Virtual Machine.

Create a volume on the Storage Virtual Machine.


For Windows (Optional, ECT = 40 minutes)

Create a Windows LUN on the volume and map the LUN to an igroup.

Configure a Windows client for iSCSI and MPIO and mount the LUN.
For Linux (Optional, ECT = 40 minutes)

Create a Linux LUN on the volume and map the LUN to an igroup.
Configure a Linux client for iSCSI and multipath and mount the LUN.
This lab includes instructions for completing each of these tasks using either System Manager, NetApp's graphical administration interface, or the Data ONTAP command line. The end state of the lab produced by either method is exactly the same, so use whichever method you are the most comfortable with.

8.3 Prerequisites
This lab introduces clustered Data ONTAP, and makes no assumptions that the user has previous experience
with Data ONTAP. The lab does assume some basic familiarity with storage system related concepts such as
RAID, CIFS, NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume that
the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed from the Linux command line, and assume a basic working knowledge of the Linux command line. A basic working knowledge of a text editor such as vi may be useful, but is not required.


8.4 Accessing the Command Line


PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in order to
run command line commands.
1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host JUMPHOST as
shown in the following screenshot; just double-click on the icon to launch it.
Tip: If you already have a PuTTY session open and you want to start another (even to a different
host), you will instead need to right-click the PuTTY icon and select PuTTY from the context
menu.

Figure 8-1:

Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This
example shows a user connecting to the Data ONTAP cluster named cluster1.
2. By default PuTTY should launch into the Basic options for your PuTTY session display as shown in the
screenshot. If you accidentally navigate away from this view just click on the Session category item to
return to this view.
3. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to
open the connection. A terminal window will open and you will be prompted to log into the host. You can
find the correct username and password for the host in the Lab Host Credentials table found in the Lab
Environment section of this guide.


Figure 8-2:

If you are new to the clustered Data ONTAP CLI, the length of the commands can seem a little intimidating. However, the commands are actually quite easy to use if you remember the following 3 tips:

Make liberal use of the Tab key while entering commands, as the clustered Data ONTAP
command shell supports tab completion. If you hit the Tab key while entering a portion of a
command word, the command shell will examine the context and try to complete the rest of
the word for you. If there is insufficient context to make a single match, it will display a list of all
the potential matches. Tab completion also usually works with command argument values, but
there are some cases where there is simply not enough context for it to know what you want,
in which case you will just need to type in the argument value.
You can recall your previously entered commands by repeatedly pressing the up-arrow key, and you can then navigate up and down the list using the up-arrow and down-arrow keys. When you find a command you want to modify, you can use the left-arrow, right-arrow, and Delete keys to navigate around in a selected command to edit it.
Entering a question mark character (?) causes the CLI to print contextual help information. You
can use this character on a line by itself or while entering a command.

The clustered Data ONTAP command lines supports a number of additional usability features that make
the command line much easier to use. If you are interested in learning more about this topic then please
refer to the "Hands-On Lab for Advanced Features of Clustered Data ONTAP 8.3.1" lab, which contains
an entire section dedicated to this subject.


9 Lab Environment
The following figure contains a diagram of the environment for this lab.

Figure 9-1:
All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration
steps outlined in this lab guide, you are free to deviate from this guide and experiment with other Data ONTAP
features that interest you. While the virtual storage controllers (vsims) used in this lab offer nearly all of the
same functionality as physical storage controllers, they are not capable of providing the same performance as a
physical controller, which is why these labs are not suitable for performance testing.
Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses.
Table 1: Lab Host Credentials

Hostname     Description                         IP Address(es)   Username            Password
JUMPHOST     Windows 2012R2 Remote Access host   192.168.0.5      Demo\Administrator  Netapp1!
RHEL1        Red Hat 6.6 x64 Linux host          192.168.0.61     root                Netapp1!
RHEL2        Red Hat 6.6 x64 Linux host          192.168.0.62     root                Netapp1!
DC1          Active Directory Server             192.168.0.253    Demo\Administrator  Netapp1!
cluster1     Data ONTAP cluster                  192.168.0.101    admin               Netapp1!
cluster1-01  Data ONTAP cluster node             192.168.0.111    admin               Netapp1!
cluster1-02  Data ONTAP cluster node             192.168.0.112    admin               Netapp1!

Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.


Table 2: Preinstalled NetApp Software

Hostname       Description
JUMPHOST       Data ONTAP DSM v4.1 for Windows MPIO, Windows Unified Host Utility Kit v7.0.0,
               NetApp PowerShell Toolkit v3.2.1.68
RHEL1, RHEL2   Linux Unified Host Utilities Kit v7.0


10 Using the clustered Data ONTAP Command Line


If you choose to utilize the clustered Data ONTAP command line to complete portions of this lab then you should
be aware that clustered Data ONTAP supports command line completion. When entering a command at the Data
ONTAP command line you can at any time mid-typing hit the Tab key and if you have entered enough unique text
for the command interpreter to determine what the rest of the argument would be it will automatically fill in that
text for you. For example, entering the text cluster sh and then hitting the tab key will automatically expand the
entered command text to cluster show.
At any point mid-typing you can also enter the ? character and the command interpreter will list any potential
matches for the command string. This is a particularly useful feature if you cannot remember all of the various
command line options for a given clustered Data ONTAP command; for example, to see the list of options
available for the cluster show command you can enter:
cluster1::> cluster show ?
  [ -instance | -fields <fieldname>, ... ]
  [[-node] <nodename>]                        Node
  [ -eligibility {true|false} ]               Eligibility
  [ -health {true|false} ]                    Health
cluster1::>

When using tab completion, if the Data ONTAP command interpreter is unable to identify a unique expansion it
will display a list of potential matches similar to what using the ? character does.
cluster1::> cluster s
Error: Ambiguous command. Possible matches include:
  cluster show
  cluster statistics
cluster1::>

The Data ONTAP commands are structured hierarchically. When you log in you are placed at the root of
that command hierarchy, but you can step into a lower branch of the hierarchy by entering one of the base
commands. For example, when you first log in to the cluster enter the ? command to see the list of available base
commands, as follows:
cluster1::> ?
  up                          Go up one directory
  cluster>                    Manage clusters
  dashboard>                  (DEPRECATED)-Display dashboards
  event>                      Manage system events
  exit                        Quit the CLI session
  export-policy               Manage export policies and rules
  history                     Show the history of commands for this CLI session
  job>                        Manage jobs and job schedules
  lun>                        Manage LUNs
  man                         Display the on-line manual pages
  metrocluster>               Manage MetroCluster
  network>                    Manage physical and virtual network connections
  qos>                        QoS settings
  redo                        Execute a previous command
  rows                        Show/Set the rows for this CLI session
  run                         Run interactive or non-interactive commands in the nodeshell
  security>                   The security directory
  set                         Display/Set CLI session settings
  snapmirror>                 Manage SnapMirror
  statistics>                 Display operational statistics
  storage>                    Manage physical storage, including disks, aggregates, and failover
  system>                     The system directory
  top                         Go to the top-level directory
  volume>                     Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>                    Manage Vservers
cluster1::>


The > character at the end of a command signifies that it has a sub-hierarchy; enter the vserver command to
enter the vserver sub-hierarchy.
cluster1::> vserver
cluster1::vserver> ?
  active-directory>           Manage Active Directory
  add-aggregates              Add aggregates to the Vserver
  add-protocols               Add protocols to the Vserver
  audit>                      Manage auditing of protocol requests that the Vserver services
  check>                      The check directory
  cifs>                       Manage the CIFS configuration of a Vserver
  context                     Set Vserver context
  create                      Create a Vserver
  dashboard>                  The dashboard directory
  data-policy>                Manage data policy
  delete                      Delete a Vserver
  export-policy>              Manage export policies and rules
  fcp>                        Manage the FCP service on a Vserver
  fpolicy>                    Manage FPolicy
  group-mapping>              The group-mapping directory
  iscsi>                      Manage the iSCSI services on a Vserver
  locks>                      Manage Client Locks
  modify                      Modify a Vserver
  name-mapping>               The name-mapping directory
  nfs>                        Manage the NFS configuration of a Vserver
  peer>                       Create and manage Vserver peer relationships
  remove-aggregates           Remove aggregates from the Vserver
  remove-protocols            Remove protocols from the Vserver
  rename                      Rename a Vserver
  security>                   Manage ontap security
  services>                   The services directory
  show                        Display Vservers
  show-protocols              Show protocols for Vserver
  smtape>                     The smtape directory
  start                       Start a Vserver
  stop                        Stop a Vserver
  vscan>                      Manage Vscan
cluster1::vserver>

Notice how the prompt changes to reflect that you are now in the vserver sub-hierarchy, and that some of the
subcommands have sub-hierarchies of their own. To return to the root of the hierarchy enter the top command;
you can also navigate upwards one level at a time by using the up or .. commands.
cluster1::vserver> top
cluster1::>

The Data ONTAP command interpreter supports command history. By repeatedly hitting the up arrow key you
can step through the series of commands you ran earlier, and you can selectively execute a given command
again when you find it by hitting the Enter key. You can also use the left and right arrow keys to edit the
command before you run it again.


11 Lab Activities
11.1 Clusters
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that are joined together for the purpose of serving
data to end users. The nodes in a cluster can pool their resources together so that the cluster can distribute its
work across the member nodes. Communication and data transfer between member nodes (such as when a
client accesses data on a node other than the one actually hosting the data) takes place over a 10Gb cluster-interconnect network to which all the nodes are connected, while management and client data traffic passes over separate management and data networks configured on the member nodes.
Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both controllers in an HA pair actively host and serve data, but they are also capable of taking over their partner's responsibilities in the event of a service disruption by virtue of their redundant cable paths to each other's disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle greater workloads, and to support non-disruptive migrations of volumes and client connections to other nodes in the cluster resource pool. This means that cluster expansion and technology refreshes can take place while the cluster remains fully online, and serving data.
Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an even
number of controller nodes. There is one exception to this rule, the single node cluster, which is a special
cluster configuration that supports small storage deployments using a single physical controller head. The primary
difference between single node and standard clusters, besides the number of nodes, is that a single node cluster
does not have a cluster network. Single node clusters can be converted into traditional multi-node clusters, and
at that point become subject to all the standard cluster requirements like the need to utilize an even number of
nodes consisting of HA pairs. This lab does not contain a single node cluster, and so this lab guide does not
discuss them further.
Data ONTAP 8.3 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although the
node limit can be lower depending on the model of FAS controller in use. Data ONTAP 8.3 clusters that also host
iSCSI and FC can scale up to a maximum of 8 nodes.
This lab utilizes simulated NetApp storage controllers rather than physical FAS controllers. The simulated
controller, also known as a VSIM, is a virtual machine that simulates the functionality of a physical controller
without the need for dedicated controller hardware. The vsim is not designed for performance testing, but does
offer much of the same functionality as a physical FAS controller, including the ability to generate I/O to disks.
This makes the vsim a powerful tool for exploring and experimenting with Data ONTAP product features. The vsim
is limited when a feature requires a specific physical capability that the vsim does not support. For example,
vsims do not support Fibre Channel connections, which is why this lab uses iSCSI to demonstrate block storage
functionality.
This lab starts with a pre-created, minimally configured cluster. The pre-created cluster already includes Data
ONTAP licenses, the cluster's basic network configuration, and a pair of pre-configured HA controllers. In this
next section you will create the aggregates that are used by the SVMs that you will create in later sections of the
lab. You will also take a look at the new Advanced Drive Partitioning feature introduced in clustered Data ONTAP
8.3.

11.1.1 Advanced Drive Partitioning


Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical storage
in clustered Data ONTAP, and are tied to a specific cluster node by virtue of their physical connectivity (i.e.,
cabling) to a given controller head.


Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a
group of disks that are all physically attached to the same node. A given disk can only be a member of a single
aggregate.
By default each cluster node has one aggregate known as the root aggregate, which is a group of the node's
local disks that host the node's Data ONTAP operating system. A node's root aggregate is automatically created
during Data ONTAP installation in a minimal RAID-DP configuration. This means it is initially comprised of 3 disks
(1 data, 2 parity), and has a name that begins with the string aggr0. For example, in this lab the root aggregate of
the node cluster1-01 is named aggr0_cluster1_01, and the root aggregate of the node cluster1-02 is named
aggr0_cluster1_02.
On higher end FAS systems that have many disks, the requirement to dedicate 3 disks to each controller's root
aggregate is not a burden, but for entry level FAS systems that only have 24 or 12 disks this root aggregate disk
overhead requirement significantly reduces the disks available for storing user data. To improve usable capacity,
NetApp introduced Advanced Drive Partitioning in 8.3, which divides the Hard Disk Drives (HDDs) on nodes
that have this feature enabled into two partitions: a small root partition, and a much larger data partition. Data
ONTAP allocates the root partitions to the node's root aggregate, and the data partitions to data aggregates. Each
partition behaves like a virtual disk, so in terms of RAID, Data ONTAP treats these partitions just like physical
disks when creating aggregates. The key benefit is that a much higher percentage of the node's overall disk
capacity is now available to host user data.
Data ONTAP only supports HDD partitioning for FAS 22xx and FAS25xx controllers, and only for HDDs installed
in their internal shelf on those models. Advanced Drive Partitioning can only be enabled at system installation
time, and there is no way to convert an existing system to use Advanced Drive Partitioning other than to
completely evacuate the affected HDDs, and re-install Data ONTAP.
All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of HDDs. The
capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3 also introduces
SSD partitioning for use with Flash Pools, but the details of that feature lie outside the scope of this lab.
In this section, you use the CLI to determine if a cluster node is utilizing Advanced Drive Partitioning.
If you do not already have a PuTTY session established to cluster1, launch PuTTY as described in the Accessing
the Command Line section at the beginning of this guide, and connect to the host cluster1 using the username
admin and the password Netapp1!.
1. List all of the physical disks attached to the cluster:
cluster1::> storage disk show
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name              Owner
---------------- ---------- ----- --- ------- ----------- ----------------- -----------
VMw-1.1             28.44GB     -   0 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.2             28.44GB     -   1 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.3             28.44GB     -   2 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.4             28.44GB     -   3 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.5             28.44GB     -   4 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.6             28.44GB     -   5 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.7             28.44GB     -   6 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.8             28.44GB     -   8 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.9             28.44GB     -   9 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.10            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.11            28.44GB     -  11 VMDISK  shared                        cluster1-01
VMw-1.12            28.44GB     -  12 VMDISK  shared                        cluster1-01
VMw-1.13            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.14            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.15            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.16            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.17            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.18            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.19            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.20            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.21            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.22            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.23            28.44GB     -  11 VMDISK  shared                        cluster1-02
VMw-1.24            28.44GB     -  12 VMDISK  shared                        cluster1-02
24 entries were displayed.
cluster1::>
The preceding command listed a total of 24 disks, 12 for each of the nodes in this two-node cluster.
The container type for all the disks is shared, which indicates that the disks are partitioned. For disks
that are not partitioned, you would typically see values like spare, data, parity, and dparity. The
Owner field indicates which node the disk is assigned to, and the Container Name field indicates which
aggregate the disk is assigned to. Notice that two disks for each node do not have a Container Name
listed; these are spare disks that Data ONTAP can use as replacements in the event of a disk failure.
2. At this point, the only aggregates that exist on this new cluster are the root aggregates. List the
aggregates that exist on the cluster:
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::>

3. Now list the disks that are members of the root aggregate for the node cluster1-01. Here is the command
that you would ordinarily use to display that information for an aggregate that is not using partitioned
disks.
cluster1::> storage disk show -aggregate aggr0_cluster1_01
There are no entries matching your query.
Info: One or more aggregates queried for use shared disks. Use "storage aggregate show-status"
to get correct set of disks associated with these aggregates.
cluster1::>

4. As you can see, in this instance the preceding command is not able to produce a list of disks because
this aggregate is using shared disks. Instead it refers you to the storage aggregate show-status command
to query the aggregate for a list of its assigned disk partitions.
cluster1::> storage aggregate show-status -aggregate aggr0_cluster1_01
Owner Node: cluster1-01
 Aggregate: aggr0_cluster1_01 (online, raid_dp) (block checksums)
  Plex: /aggr0_cluster1_01/plex0 (online, normal, active, pool0)
   RAID Group /aggr0_cluster1_01/plex0/rg0 (normal, block checksums)
                                              Usable Physical
     Position Disk        Pool Type     RPM     Size     Size Status
     -------- ----------- ---- ------ ------ ------- -------- --------
     shared   VMw-1.1      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.2      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.3      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.4      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.5      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.6      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.7      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.8      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.9      0   VMDISK      -  1.52GB  28.44GB (normal)
     shared   VMw-1.10     0   VMDISK      -  1.52GB  28.44GB (normal)
10 entries were displayed.
cluster1::>

The output shows that aggr0_cluster1_01 is comprised of 10 disks, each with a usable size of 1.52 GB,
and you know that the aggregate is using the listed disks' root partitions because aggr0_cluster1_01 is a
root aggregate.
For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically
determines the size of the root and data disk partitions at system installation time. That determination
is based on the quantity and size of the available disks assigned to each node. As you saw earlier, this
particular cluster node has 12 disks, so during installation Data ONTAP partitioned all 12 disks but only
assigned 10 of those root partitions to the root aggregate, leaving the node 2 spare disks
available to protect against disk failures.
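If you would like to see those spare partitions summarized per node without the full disk listing, the storage aggregate show-spare-disks command provides that view. This is offered only as an optional sketch; the exact output layout varies by Data ONTAP release and is not reproduced here.
cluster1::> storage aggregate show-spare-disks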
5. The Data ONTAP CLI includes a diagnostic level command that provides a more comprehensive single
view of a system's partitioned disks. The following command shows the partitioned disks that belong to
the node cluster1-01.
cluster1::> set -priv diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster1::*> disk partition show -owner-node-name cluster1-01
                            Usable  Container                                  Container
Partition                     Size  Type       Name                           Owner
------------------------- -------- ---------- ------------------------------ -----------
VMw-1.1.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.1.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.2.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.2.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.3.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.3.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.4.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.4.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.5.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.5.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.6.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.6.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.7.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.7.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.8.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.8.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.9.P1                 26.88GB  spare      Pool0                          cluster1-01
VMw-1.9.P2                  1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.10.P1                26.88GB  spare      Pool0                          cluster1-01
VMw-1.10.P2                 1.52GB  aggregate  /aggr0_cluster1_01/plex0/rg0   cluster1-01
VMw-1.11.P1                26.88GB  spare      Pool0                          cluster1-01
VMw-1.11.P2                 1.52GB  spare      Pool0                          cluster1-01
VMw-1.12.P1                26.88GB  spare      Pool0                          cluster1-01
VMw-1.12.P2                 1.52GB  spare      Pool0                          cluster1-01
24 entries were displayed.
cluster1::*> set -priv admin
cluster1::>

11.1.2 Create a New Aggregate on Each Cluster Node


The only aggregates that exist on a newly created cluster are the node root aggregates. The root aggregate
should not be used to host user data, so in this section you will be creating a new aggregate on each of the nodes
in cluster1 so they can host the storage virtual machines, volumes, and LUNs that you will be creating later in this
lab.


A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the
storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to use
one or more specific aggregates to host the SVM's volumes. Multiple SVMs can be assigned to use the same
aggregate, which offers greater flexibility in managing storage space, whereas dedicating an aggregate to just a
single SVM provides greater workload isolation.
For this lab, you will be creating a single user data aggregate on each node in the cluster.
1. Display a list of the disks attached to the node cluster1-01. (Note that you can omit the -nodelist option
to display a list of the disks in the entire cluster.)
Note: By default the PuTTY window may wrap output lines because the window is too small;
if this is the case for you then simply expand the window by selecting its edge and dragging it
wider, after which any subsequent output will utilize the visible width of the window.
cluster1::> disk show -nodelist cluster1-01
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name              Owner
---------------- ---------- ----- --- ------- ----------- ----------------- -----------
VMw-1.25            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.26            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.27            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.28            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.29            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.30            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.31            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.32            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.33            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.34            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_01 cluster1-01
VMw-1.35            28.44GB     -  11 VMDISK  shared                        cluster1-01
VMw-1.36            28.44GB     -  12 VMDISK  shared                        cluster1-01
VMw-1.37            28.44GB     -   0 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.38            28.44GB     -   1 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.39            28.44GB     -   2 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.40            28.44GB     -   3 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.41            28.44GB     -   4 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.42            28.44GB     -   5 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.43            28.44GB     -   6 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.44            28.44GB     -   8 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.45            28.44GB     -   9 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.46            28.44GB     -  10 VMDISK  shared      aggr0_cluster1_02 cluster1-02
VMw-1.47            28.44GB     -  11 VMDISK  shared                        cluster1-02
VMw-1.48            28.44GB     -  12 VMDISK  shared                        cluster1-02
24 entries were displayed.
cluster1::>

2. Display a list of the aggregates on the cluster.


cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
2 entries were displayed.
cluster1::>

3. Create the aggregate named aggr1_cluster1_01 on the node cluster1-01.


cluster1::> aggr create -aggregate aggr1_cluster1_01 -node cluster1-01 -diskcount 5
[Job 257] Job is queued: Create aggr1_cluster1_01.
[Job 257] Job succeeded: DONE
cluster1::>

4. Create the aggregate named aggr1_cluster1_02 on the node cluster1-02.


cluster1::> aggr create -aggregate aggr1_cluster1_02 -node cluster1-02 -diskcount 5
[Job 258] Job is queued: Create aggr1_cluster1_02.
[Job 258] Job succeeded: DONE
cluster1::>


5. Display the list of aggregates on the cluster again.


cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           72.53GB   72.53GB    0% online       0 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           72.53GB   72.53GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.
cluster1::>
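If you want to confirm that the new aggregates were built from data partitions rather than whole disks, you can optionally reuse the show-status command from the previous section. The output, not reproduced here, should list five shared partitions per aggregate, each with a usable size of roughly 26.88 GB.
cluster1::> storage aggregate show-status -aggregate aggr1_cluster1_01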

11.1.3 Networks
This section discusses the network components that clustered Data ONTAP provides to manage your cluster.
Ports include the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps) you
can create to aggregate those connections, and the VLANs you can use to subdivide them.
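This lab uses individual physical ports only, but if you are curious how aggregated or segmented ports are defined, the following commands sketch the general form. They are not part of this lab's configuration, and the ifgrp name a0a and VLAN ID 100 are arbitrary examples.
cluster1::> network port ifgrp create -node cluster1-01 -ifgrp a0a -distr-func ip -mode multimode
cluster1::> network port vlan create -node cluster1-01 -vlan-name a0a-100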
A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of associated
characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail
over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only be assigned to a single SVM,
and since LIFs are mapped to physical network ports on cluster nodes this means that an SVM runs, in part, on
all nodes that are hosting its LIFs.
Routing tables in clustered Data ONTAP are defined for each Storage Virtual Machine. Since each SVM has its
own routing table, changes to one SVM's routing table do not impact any other SVM's routing table.
IPspaces are new in Data ONTAP 8.3, and allow you to configure a Data ONTAP cluster to logically separate
one IP network from another, even if those two networks are using the same IP address range. IPspaces are a
multi-tenancy feature that allows storage service providers to share a cluster between different companies while still
separating storage traffic for privacy and security. Every cluster includes a default IPspace to which Data ONTAP
automatically assigns new SVMs, and that default IPspace is probably sufficient for most NetApp customers who
deploy a cluster within a single company or organization that uses a non-conflicting IP address range.
Broadcast Domains are also new in Data ONTAP 8.3, and are collections of ports that all have access to the
same layer 2 networks, both physical and virtual (i.e., VLANs). Every IPspace has its own set of Broadcast
Domains, and Data ONTAP provides a default broadcast domain to go along with the default IPspace. Broadcast
domains are used by Data ONTAP to determine what ports an SVM can use for its LIFs.
Subnets in Data ONTAP 8.3 are a convenience feature intended to make LIF creation and management easier
for Data ONTAP administrators. A subnet is a pool of IP addresses that you can specify by name when creating
a LIF. Data ONTAP will automatically assign an available IP address from the pool to the LIF, along with a subnet
mask and a gateway. A subnet is scoped to a specific broadcast domain, so all of the subnet's addresses belong
to the same layer 3 network. Data ONTAP manages the pool automatically as you create or delete LIFs, and if
you manually configure a LIF with an address from the pool, Data ONTAP will detect that the address is in use and
mark it as such in the pool.
DNS Zones allow an SVM to manage DNS name resolution for its own LIFs, and since multiple LIFs can share
the same DNS name, this allows the SVM to load balance traffic by IP address across the LIFs. To use DNS
Zones you must configure your DNS server to delegate DNS authority for the subdomain to the SVM.

11.1.3.1 Create Subnets


1. Display a list of the cluster's IPspaces. A cluster actually contains two IPspaces by default; the
Cluster IPspace, which correlates to the cluster network that Data ONTAP uses to have cluster nodes
communicate with each other, and the Default IPspace to which Data ONTAP automatically assigns all
new SVMs. You can create more IPspaces if necessary, but that activity will not be covered in this lab.
cluster1::> network ipspace show
IPspace             Vserver List                  Broadcast Domains
------------------- ----------------------------- ----------------------------
Cluster             Cluster                       Cluster
Default             cluster1                      Default
2 entries were displayed.
cluster1::>

2. Display a list of the cluster's broadcast domains. Remember that broadcast domains are scoped to
a single IPspace. The e0a and e0b ports on the cluster nodes are part of the Cluster broadcast domain in the
Cluster IPspace. The remaining ports are part of the Default broadcast domain in the Default IPspace.
cluster1::> network port broadcast-domain show
IPspace Broadcast                                         Update
Name    Domain Name    MTU  Port List                     Status Details
------- ----------- ------ ----------------------------- --------------
Cluster Cluster       1500  cluster1-01:e0a               complete
                            cluster1-01:e0b               complete
                            cluster1-02:e0a               complete
                            cluster1-02:e0b               complete
Default Default       1500  cluster1-01:e0c               complete
                            cluster1-01:e0d               complete
                            cluster1-01:e0e               complete
                            cluster1-01:e0f               complete
                            cluster1-01:e0g               complete
                            cluster1-02:e0c               complete
                            cluster1-02:e0d               complete
                            cluster1-02:e0e               complete
                            cluster1-02:e0f               complete
                            cluster1-02:e0g               complete
2 entries were displayed.
cluster1::>

3. Display a list of the cluster's subnets.


cluster1::> network subnet show
This table is currently empty.
cluster1::>

4. Data ONTAP does not include a default subnet, so you will need to create a subnet now. The specific
command you will use depends on what sections of this lab guide you plan to complete, as you want to
correctly align the IP address pool in your lab with the IP addresses used in the portions of this lab guide
that you want to complete.

If you plan to complete the NAS portion of this lab, enter the following command. Use this
command as well if you plan to complete both the NAS and SAN portions of this lab.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.131-192.168.0.139
cluster1::>

If you only plan to complete the SAN portion of this lab, then enter the following command
instead.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.133-192.168.0.139
cluster1::>


5. Re-display the list of the cluster's subnets. This example assumes you plan to complete the whole lab.
cluster1::> network subnet show
IPspace: Default
Subnet                    Broadcast                  Avail/
Name      Subnet          Domain    Gateway          Total  Ranges
--------- --------------- --------- --------------- ------ ---------------------------
Demo      192.168.0.0/24  Default   192.168.0.1        9/9  192.168.0.131-192.168.0.139
cluster1::>
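If you later find that a subnet's address pool is too small, you can expand it without recreating the subnet. The following command is only a sketch; the 192.168.0.140-192.168.0.149 range is a hypothetical example and is not used anywhere in this lab.
cluster1::> network subnet add-ranges -subnet-name Demo -ip-ranges 192.168.0.140-192.168.0.149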

6. If you are interested in seeing a list of all of the network ports on your cluster, you can use the following
command for that purpose.
cluster1::> network port show
                                                             Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU    Admin/Oper
------ --------- ------------ ---------------- ----- ------- ------------
cluster1-01
       e0a       Cluster      Cluster          up     1500   auto/1000
       e0b       Cluster      Cluster          up     1500   auto/1000
       e0c       Default      Default          up     1500   auto/1000
       e0d       Default      Default          up     1500   auto/1000
       e0e       Default      Default          up     1500   auto/1000
       e0f       Default      Default          up     1500   auto/1000
       e0g       Default      Default          up     1500   auto/1000
cluster1-02
       e0a       Cluster      Cluster          up     1500   auto/1000
       e0b       Cluster      Cluster          up     1500   auto/1000
       e0c       Default      Default          up     1500   auto/1000
       e0d       Default      Default          up     1500   auto/1000
       e0e       Default      Default          up     1500   auto/1000
       e0f       Default      Default          up     1500   auto/1000
       e0g       Default      Default          up     1500   auto/1000
14 entries were displayed.
cluster1::>

11.2 Create Storage for NFS and CIFS


Expected Completion Time: 40 Minutes
If you are only interested in SAN protocols then you do not need to complete this section. However, we
recommend that you review the conceptual information found here, and at the beginning of each of this section's
subsections, before you advance to the SAN section, as most of this conceptual material will not be repeated
there.
Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that operate
within a cluster that serve data out to storage clients. A single cluster can host hundreds of SVMs, with each SVM
managing its own set of volumes (FlexVols), Logical Network Interfaces (LIFs), storage access protocols (e.g.,
NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients, its own namespace.
The ability to support many SVMs in a single cluster is a key feature of clustered Data ONTAP, and customers
are encouraged to actively embrace this feature in order to take full advantage of a cluster's capabilities. We
recommend against starting out with a deployment that is intended to scale using only a single SVM.
You explicitly configure which storage protocols you want a given SVM to support at the time you create that
SVM. You can later add or remove protocols as desired. A single SVM can host any combination of the supported
protocols.
An SVM's assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM. As
you saw earlier, an aggregate is directly connected to the specific node hosting its disks, which means that an
SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also has a direct
relationship to any nodes that are hosting its LIFs. LIFs are essentially an IP address with a number of associated
characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail
over to, an assigned SVM, a role, a routing group, and so on. You can only assign a given LIF to a single SVM,
and since LIFs map to physical network ports on cluster nodes, this means that an SVM runs in part on all nodes
that are hosting its LIFs.
When you configure an SVM with multiple data LIFs, clients can use any of those LIFs to access volumes hosted
by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension which LIF, is a
function of name resolution, the mapping of a hostname to an IP address. CIFS Servers have responsibility under
NetBIOS for resolving requests for their hostnames received from clients, and in so doing can perform some load
balancing by responding to different clients with different LIF addresses, but this distribution is not sophisticated
and requires external NetBIOS name servers in order to deal with clients that are not on the local network. NFS
Servers do not handle name resolution on their own.
DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same hostname.
DNS is supported by both NFS and CIFS clients, and works equally well with clients on local area and wide area
networks. Since DNS is an external service that resides outside of Data ONTAP, this architecture creates the
potential for service disruptions if the DNS server is advertising IP addresses for LIFs that are temporarily offline.
To compensate for this condition you can configure DNS servers to delegate the name resolution responsibility
for the SVM's hostname records to the SVM itself, so that it can directly respond to name resolution requests
involving its LIFs. This allows the SVM to consider LIF availability and LIF utilization levels when deciding what
LIF address to return in response to a DNS name resolution request.
LIFs that map to physical network ports residing on the same node as a volume's containing aggregate offer
the most efficient client access path to the volume's data. However, clients can also access volume data through
LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered Data ONTAP uses
the high speed cluster network to bridge communication between the node hosting the LIF and the node hosting
the volume. NetApp best practice is to create at least one NAS LIF for a given SVM on each cluster node that has
an aggregate that is hosting volumes for that SVM. If you desire additional resiliency then you can also create a
NAS LIF on nodes not hosting aggregates for the SVM.
A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically failover from one cluster node to another
in the event of a component failure. Any existing connections to that LIF from NFS and SMB 2.0 (and later)
clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS LIF migrates to
a different physical NIC, potentially to a NIC on a different node in the cluster, and continues servicing network
requests from that new node/port. Throughout this operation the NAS LIF maintains its IP address. Clients
connected to the LIF may notice a brief delay while the failover is in progress, but as soon as it completes the
clients resume any in-process NAS operations without any loss of data.
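You can also trigger a LIF move manually, which is a convenient way to observe this behavior. The commands below are only a sketch, using the svm1 LIF names that you will create later in this section; they are not a required lab step.
cluster1::> network interface migrate -vserver svm1 -lif svm1_cifs_nfs_lif1 -destination-node cluster1-02 -destination-port e0c
cluster1::> network interface revert -vserver svm1 -lif svm1_cifs_nfs_lif1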
The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each storage
controller node can host a maximum of 125 SVMs, so you can calculate the cluster's effective SVM limit by
multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM can host, but there
is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per node, but if the node is
part of an HA pair configured for failover then the limit is half that value, 128 LIFs per node, so that a node can
also accommodate its HA partner's LIFs in the event of a failover.
Each SVM has its own NAS namespace, a logical grouping of the SVM's CIFS and NFS volumes into a single
logical filesystem view. Clients can access the entire namespace by mounting a single share or export at the
top of the namespace tree, meaning that SVM administrators can centrally maintain and present a consistent
view of the SVM's data to all clients rather than having to reproduce that view structure on each individual
client. As an administrator maps and unmaps volumes from the namespace, those volumes instantly become
visible to or disappear from clients that have mounted CIFS and NFS volumes higher in the SVM's namespace.
Administrators can also create NFS exports at individual junction points within the namespace, and can create
CIFS shares at any directory path in the namespace.

11.2.1 Create a Storage Virtual Machine for NAS


In this section you will create a new SVM named svm1 on the cluster and will configure it to serve out a volume
over NFS and CIFS. You will be configuring two NAS data LIFs on the SVM, one per node in the cluster.
Start by creating the storage virtual machine.


If you do not already have a PuTTY connection open to cluster1 then open one now following the directions in
the Accessing the Command Line section at the beginning of this lab guide. The username is admin and the
password is Netapp1!.
1. Create the SVM named svm1. Notice that the clustered Data ONTAP command line syntax still refers to
storage virtual machines as vservers.
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01
-language C.UTF-8 -rootvolume-security-style ntfs -snapshot-policy default
[Job 259] Job is queued: Create svm1.
[Job 259]
[Job 259] Job succeeded:
Vserver creation completed
cluster1::>

2. Configure the SVM svm1 to support only the CIFS and NFS protocols. Start by displaying the protocols
currently assigned to svm1; notice that a newly created SVM is assigned all of the available protocols by default:


cluster1::> vserver show-protocols -vserver svm1
Vserver: svm1
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::>

3. Remove the FCP, iSCSI, and NDMP protocols from the SVM svm1.
cluster1::> vserver remove-protocols -vserver svm1 -protocols fcp,iscsi,ndmp
cluster1::>

4. Display the list of protocols assigned to the SVM svm1.


cluster1::> vserver show-protocols -vserver svm1
Vserver: svm1
Protocols: nfs, cifs
cluster1::>

5. Display a list of the vservers in the cluster.


cluster1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1    admin   -          -          -           -          -
cluster1-01 node    -          -          -           -          -
cluster1-02 node    -          -          -           -          -
svm1        data    default    running    running     svm1_root  aggr1_
                                                                  cluster1_
                                                                  01
4 entries were displayed.
cluster1::>

6. Display a list of the cluster's network interfaces:


cluster1::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            cluster1-01_clus1
                         up/up    169.254.224.98/16  cluster1-01   e0a     true
            cluster1-02_clus1
                         up/up    169.254.129.177/16 cluster1-02   e0a     true
cluster1
            cluster1-01_mgmt1
                         up/up    192.168.0.111/24   cluster1-01   e0c     true
            cluster1-02_mgmt1
                         up/up    192.168.0.112/24   cluster1-02   e0c     true
            cluster_mgmt up/up    192.168.0.101/24   cluster1-01   e0c     true
5 entries were displayed.
cluster1::>

7. Notice that there are not any LIFs defined for the SVM svm1 yet. Create the svm1_cifs_nfs_lif1 data LIF
for svm1:
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data
-data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>

8. Create the svm1_cifs_nfs_lif2 data LIF for the SVM svm1:


cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data
-data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>

9. Display all of the LIFs owned by svm1:


cluster1::> network interface show -vserver svm1
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm1
            svm1_cifs_nfs_lif1
                         up/up    192.168.0.131/24   cluster1-01   e0c     true
            svm1_cifs_nfs_lif2
                         up/up    192.168.0.132/24   cluster1-02   e0c     true
2 entries were displayed.
cluster1::>

10. Display the existing DNS configurations. Notice that the cluster has one, but the SVM svm1 does not yet.


cluster1::> vserver services dns show
                                                          Name
Vserver         State     Domains                         Servers
--------------- --------- ------------------------------- ---------------
cluster1        enabled   demo.netapp.com                 192.168.0.253
cluster1::>

11. Configure the DNS domain and nameservers for the svm1 SVM:
cluster1::> vserver services dns create -vserver svm1 -name-servers 192.168.0.253
-domains demo.netapp.com
cluster1::>

12. Display the DNS configurations again; svm1 now has its own entry.


cluster1::> vserver services dns show
                                                          Name
Vserver         State     Domains                         Servers
--------------- --------- ------------------------------- ---------------
cluster1        enabled   demo.netapp.com                 192.168.0.253
svm1            enabled   demo.netapp.com                 192.168.0.253
2 entries were displayed.
cluster1::>

Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that
you can advertise addresses for both of the NAS data LIFs that belong to svm1. You could have done
this as part of the network interface create commands, but we opted to perform it separately here so
you could see how to modify an existing LIF.
13. Configure lif1 to accept DNS delegation responsibility for the svm1.demo.netapp.com zone.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1
-dns-zone svm1.demo.netapp.com
cluster1::>

14. Configure lif2 to accept DNS delegation responsibility for the svm1.demo.netapp.com zone.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2
-dns-zone svm1.demo.netapp.com
cluster1::>

15. Display the DNS delegation for svm1.


cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>

16. Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1
(username "root" and password "Netapp1!") and executing the following commands. If the delegation is
working correctly you should see IP addresses returned for the host svm1.demo.netapp.com, and if you
run the command several times you will eventually see that the responses alternate between the
addresses of the SVM's two LIFs.
[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server: 192.168.0.253
Address: 192.168.0.253#53
Non-authoritative answer:
Name: svm1.demo.netapp.com
Address: 192.168.0.132
[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server: 192.168.0.253
Address: 192.168.0.253#53
Non-authoritative answer:
Name: svm1.demo.netapp.com
Address: 192.168.0.131
[root@rhel1 ~]#

17. This completes the planned LIF configuration changes for svm1, so now display a detailed configuration
report for the LIF svm1_cifs_nfs_lif1:
cluster1::> network interface show -lif svm1_cifs_nfs_lif1 -instance
Vserver Name: svm1
Logical Interface Name: svm1_cifs_nfs_lif1
Role: data
Data Protocol: nfs, cifs
Home Node: cluster1-01
Home Port: e0c
Current Node: cluster1-01
Current Port: e0c
Operational Status: up
Extended Status: -
Is Home: true
Network Address: 192.168.0.131
Netmask: 255.255.255.0
Bits in the Netmask: 24
IPv4 Link Local: -
Subnet Name: Demo
Administrative Status: up
Failover Policy: system-defined
Firewall Policy: mgmt
Auto Revert: false
Fully Qualified DNS Zone Name: svm1.demo.netapp.com
DNS Query Listen Enable: true
Failover Group Name: Default
FCP WWPN: -
Address family: ipv4
Comment: -
IPspace of LIF: Default
cluster1::>

Although CIFS is one of the protocols assigned to svm1, assigning the protocol does not by itself create a
CIFS server for the SVM. Now it is time to create that CIFS server.


18. Display the status of the cluster's CIFS servers.


cluster1::> vserver cifs show
This table is currently empty.
cluster1::>

19. Create a CIFS server for svm1.


cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain demo.netapp.com
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"DEMO.NETAPP.COM" domain.
Enter the user name: Administrator
Enter the password:
cluster1::>

20. Display the status of the cluster's CIFS servers.


cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain
cluster1::>

As with CIFS, enabling the NFS protocol on svm1 did not actually create the NFS server. Now it is time to
create that NFS server.
21. Display the status of the NFS server for svm1.
cluster1::> vserver nfs status -vserver svm1
The NFS server is not running on Vserver "svm1".
cluster1::>

22. Create an NFSv3 server for svm1.


cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true
cluster1::>

23. Display the status of the NFS server for svm1 again.
cluster1::> vserver nfs status -vserver svm1
The NFS server is running on Vserver "svm1".
cluster1::>
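This lab only enables NFSv3, but the same NFS server can later be extended to newer protocol versions. As a sketch only (not a lab step), the following command would enable NFSv4.0 for svm1:
cluster1::> vserver nfs modify -vserver svm1 -v4.0 enabled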

11.2.2 Configure CIFS and NFS


Clustered Data ONTAP configures CIFS and NFS on a per SVM basis. When you created the "svm1" SVM in the
previous section, you set up and enabled CIFS and NFS for that SVM. However, it is important to understand that
clients cannot yet access the SVM using CIFS and NFS. That is partially because you have not yet created any
volumes on the SVM, but also because you have not told the SVM what you want to share, and who you want to
share it with.
Each SVM has its own namespace. A namespace is a logical grouping of a single SVM's volumes into a directory
hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM's root volume
(svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares data to CIFS
and NFS clients. The SVM's other volumes are junctioned (i.e., mounted) within that root volume or within other
volumes that are already junctioned into the namespace. This hierarchy presents NAS clients with a unified,
centrally maintained view of the storage encompassed by the namespace, regardless of where those junctioned
volumes physically reside in the cluster. CIFS and NFS clients cannot access a volume that has not been
junctioned into the namespace.


CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share declared
at the top of the namespace. While this is a very powerful capability, there is no requirement to make the whole
namespace accessible. You can create CIFS shares at any directory level in the namespace, and you can
create different NFS export rules at junction boundaries for individual volumes and for individual qtrees within a
junctioned volume.
Clustered Data ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy model
that dictates the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly exports
the root of its namespace and automatically associates that export with the SVM's default export policy. But that
default policy is initially empty, and until it is populated with access rules no NFS clients will be able to access
the namespace. The SVM's default export policy applies to the root volume and also to any volumes that an
administrator junctions into the namespace, but an administrator can optionally create additional export policies
in order to implement different access rules within the namespace. You can apply export policies to a volume
as a whole and to individual qtrees within a volume, but a given volume or qtree can only have one associated
export policy. While you cannot create NFS exports at any other directory level in the namespace, NFS clients
can mount from any level in the namespace by leveraging the namespace's root export.
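For example, if you later wanted to restrict a particular volume to clients on the lab's 192.168.0.0/24 network, you could create a second policy and assign it to just that volume. The sketch below uses a hypothetical policy name, engineering_only, and references the engineering volume you will create later in this lab; it is not one of this lab's steps.
cluster1::> vserver export-policy create -vserver svm1 -policyname engineering_only
cluster1::> vserver export-policy rule create -vserver svm1 -policyname engineering_only -clientmatch 192.168.0.0/24 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::> volume modify -vserver svm1 -volume engineering -policy engineering_only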
In this section of the lab, you are going to configure a default export policy for your SVM so that any volumes you
junction into its namespace will automatically pick up the same NFS export rules. You will also create a single
CIFS share at the top of the namespace so that all the volumes you junction into that namespace are accessible
through that one share. Finally, since your SVM will be sharing the same data over NFS and CIFS, you will be
setting up name mapping between UNIX and Windows user accounts to facilitate smooth multiprotocol access to
the volumes and files in the namespace.
When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM's namespace. An
SVM always has a root volume, whether or not it is configured to support NAS protocols.
1. Verify that CIFS is running by default for the SVM svm1:
cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain
cluster1::>

2. Display the status of the NFS server for svm1 again.


cluster1::> vserver nfs status -vserver svm1
The NFS server is running on Vserver "svm1".
cluster1::>

3. Display the NFS server's configuration.


cluster1::> vserver nfs show -vserver svm1
Vserver: svm1
General NFS Access: true
NFS v3: enabled
NFS v4.0: disabled
UDP Protocol: enabled
TCP Protocol: enabled
Default Windows User: -
NFSv4.0 ACL Support: disabled
NFSv4.0 Read Delegation Support: disabled
NFSv4.0 Write Delegation Support: disabled
NFSv4 ID Mapping Domain: defaultv4iddomain.com
NFSv4 Grace Timeout Value (in secs): 45
Preserves and Modifies NFSv4 ACL (and NTFS File Permissions in Unified Security Style):
enabled
NFSv4.1 Minor Version Support: disabled
Rquota Enable: disabled
NFSv4.1 Parallel NFS Support: enabled
NFSv4.1 ACL Support: disabled
NFS vStorage Support: disabled
NFSv4 Support for Numeric Owner IDs: enabled
Default Windows Group: -
NFSv4.1 Read Delegation Support: disabled
NFSv4.1 Write Delegation Support: disabled
NFS Mount Root Only: enabled
NFS Root Only: disabled
Permitted Kerberos Encryption Types: des, des3, aes-128, aes-256
Showmount Enabled: disabled
Set the Protocol Used for Name Services Lookups for Exports: udp
NFSv3 MS-DOS Client Support: disabled
cluster1::>

4. Display a list of all the export policies.


cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
cluster1::>

The only defined policy is "default".


5. Display a list of all the export policy rules.
cluster1::> vserver export-policy rule show
This table is currently empty.
cluster1::>

There are no rules defined for the "default" export policy.


6. Add a rule to the default export policy granting read-write access to all hosts.
cluster1::> vserver export-policy rule create -vserver svm1 -policyname default
-clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::>

7. Display a listing of all the export policy rules.


cluster1::> vserver export-policy rule show
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
svm1         default         1      any      0.0.0.0/0             any
cluster1::>

8. Display a detailed listing of all the export policy rules.


cluster1::> vserver export-policy rule show -policyname default -instance
Vserver: svm1
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
cluster1::>

9. Display a list of the shares in the cluster.


cluster1::> vserver cifs share show
Vserver        Share         Path              Properties   Comment  ACL
-------------- ------------- ----------------- ------------ -------- -----------
svm1           admin$        /                 browsable    -        -
svm1           c$            /                 oplocks      -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable    -        -
3 entries were displayed.
cluster1::>

10. Create a share at the root of the namespace for the SVM svm1:
cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::>

11. Display a list of the shares in the cluster again.


cluster1::> vserver cifs share show
Vserver        Share         Path              Properties   Comment  ACL
-------------- ------------- ----------------- ------------ -------- -----------
svm1           admin$        /                 browsable    -        -
svm1           c$            /                 oplocks      -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable    -        -
svm1           nsroot        /                 oplocks      -        Everyone / Full Control
                                               browsable
                                               changenotify
4 entries were displayed.
cluster1::>
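Because shares can be declared at any directory path in the namespace, you can also expose just a portion of it. For example, after you junction a volume at /engineering in the next section, a command like the following sketch (not a lab step) would publish only that subtree as its own share:
cluster1::> vserver cifs share create -vserver svm1 -share-name engineering -path /engineering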

Set up CIFS <-> NFS user name mapping for the SVM svm1.
12. Display a list of the current name mappings.
cluster1::> vserver name-mapping show
This table is currently empty.
cluster1::>

13. Create a name mapping of DEMO\Administrator (specified in the command as "demo\\administrator") to root.
cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1
-pattern demo\\administrator -replacement root
cluster1::>

14. Create a name mapping of root to DEMO\Administrator.


cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1
-pattern root -replacement demo\\administrator
cluster1::>

15. Display a list of the current name mappings.


cluster1::> vserver name-mapping show
Vserver        Direction Position
-------------- --------- --------
svm1           win-unix  1
               Pattern:     demo\\administrator
               Replacement: root
svm1           unix-win  1
               Pattern:     root
               Replacement: demo\\administrator
2 entries were displayed.
cluster1::>

11.2.3 Create a Volume and Map It to the Namespace Using the CLI
Volumes, or FlexVols, are the dynamically sized containers used by Data ONTAP to store data. A volume only
resides in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an aggregate,
which can be associated with multiple SVMs, a volume can only be associated with a single SVM. The maximum
size of a volume can vary depending on what storage controller model is hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be
configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000 FlexVols
(varies based on controller model), which means that there is an effective limit on the total number of volumes
that a cluster can host, depending on how many nodes there are in your cluster.
Each storage controller node has a root aggregate (e.g., aggr0_<nodename>) that contains the node's Data
ONTAP operating system. Do not use the node's root aggregate to host any other volumes or user data; always
create additional aggregates and volumes for that purpose.
Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin provisioning,
deduplication, and compression. One specific storage efficiency feature you will see in this section of the lab
is thin provisioning, which dictates how space for a FlexVol is allocated in its containing aggregate.
When you create a FlexVol with a volume guarantee of type volume you are thickly provisioning the volume,
pre-allocating all of the space for the volume on the containing aggregate, which ensures that the volume will
never run out of space unless it reaches 100% capacity. When you create a FlexVol with a volume
guarantee of none you are thinly provisioning the volume, allocating space on the containing
aggregate only at the time, and in the quantity, that the volume actually needs it to store data.
This latter configuration allows you to increase your overall space utilization and even oversubscribe an
aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the subscribed
volumes reached their full size. However, if an oversubscribed aggregate does fill up then all its volumes will run
out of space before they reach their maximum volume size, therefore oversubscription deployments generally
require a greater degree of administrative vigilance around space utilization.
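To make the difference concrete, the two commands below sketch the same size volume created thick and thin; the volume names are hypothetical examples and are not used in this lab.
cluster1::> volume create -vserver svm1 -volume demo_thick -aggregate aggr1_cluster1_01 -size 1GB -space-guarantee volume
cluster1::> volume create -vserver svm1 -volume demo_thin -aggregate aggr1_cluster1_01 -size 1GB -space-guarantee none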
In the Clusters section, you created a new aggregate named aggr1_cluster1_01; you will now use that
aggregate to host a new thinly provisioned volume named engineering for the SVM named svm1.
1. Display basic information about the SVM's current list of volumes:
cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.86MB    5%
cluster1::>

2. Display the junctions in the SVM's namespace:
cluster1::> volume show -vserver svm1 -junction
                                       Junction                  Junction
Vserver   Volume       Language Active Junction Path             Path Source
--------- ------------ -------- ------ ------------------------- -----------
svm1      svm1_root    C.UTF-8  true   /                         -
cluster1::>

3. Create the volume engineering, junctioning it into the namespace at /engineering:


cluster1::> volume create -vserver svm1 -volume engineering -aggregate aggr1_cluster1_01
-size 10GB -percent-snapshot-space 5 -space-guarantee none -policy default
-junction-path /engineering
[Job 267] Job is queued: Create engineering.
[Job 267] Job succeeded: Successful
cluster1::>

4. Display a list of svm1's volumes.


cluster1::> volume show -vserver svm1
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1      engineering  aggr1_cluster1_01
                                    online     RW         10GB     9.50GB    5%
svm1      svm1_root    aggr1_cluster1_01
                                    online     RW         20MB    18.86MB    5%
2 entries were displayed.
cluster1::>

5. Display a list of svm1's volume junction points.


cluster1::> volume show -vserver svm1 -junction
                                       Junction                  Junction
Vserver   Volume       Language Active Junction Path             Path Source
--------- ------------ -------- ------ ------------------------- -----------
svm1      engineering  C.UTF-8  true   /engineering              RW_volume
svm1      svm1_root    C.UTF-8  true   /                         -
2 entries were displayed.
cluster1::>

6. Create the volume eng_users, junctioning it into the namespace at /engineering/users.


cluster1::> volume create -vserver svm1 -volume eng_users -aggregate aggr1_cluster1_01
-size 10GB -percent-snapshot-space 5 -space-guarantee none -policy default
-junction-path /engineering/users
[Job 268] Job is queued: Create eng_users.
[Job 268] Job succeeded: Successful
cluster1::>

7. Display a list of svm1's volume junction points.


cluster1::> volume show -vserver svm1 -junction
                                       Junction                  Junction
Vserver   Volume       Language Active Junction Path             Path Source
--------- ------------ -------- ------ ------------------------- -----------
svm1      eng_users    C.UTF-8  true   /engineering/users        RW_volume
svm1      engineering  C.UTF-8  true   /engineering              RW_volume
svm1      svm1_root    C.UTF-8  true   /                         -
3 entries were displayed.
cluster1::>

8. Display detailed information about the volume engineering. Notice here that the volume is reporting as
thin provisioned (Space Guarantee Style is set to none) and that the Export Policy is set to default.
cluster1::> volume show -vserver svm1 -volume engineering -instance
Vserver Name: svm1
Volume Name: engineering
Aggregate Name: aggr1_cluster1_01
Volume Size: 10GB
Volume Data Set ID: 1026
Volume Master Data Set ID: 2147484674
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: -
Group ID: -
Security Style: ntfs
UNIX Permissions: ------------
Junction Path: /engineering
Junction Path Source: RW_volume
Junction Active: true
Junction Parent Volume: svm1_root
Comment:
Available Size: 9.50GB
Filesystem Size: 10GB
Total User-Visible Size: 9.50GB
Used Size: 152KB
Used Percentage: 5%
Volume Nearly Full Threshold Percent: 95%
Volume Full Threshold Percent: 98%
Maximum Autosize (for flexvols only): 12GB
(DEPRECATED)-Autosize Increment (for flexvols only): 512MB
Minimum Autosize: 10GB
Autosize Grow Threshold Percentage: 85%
Autosize Shrink Threshold Percentage: 50%
Autosize Mode: off
Autosize Enabled (for flexvols only): false
Total Files (for user-visible data): 311280
Files Used (for user-visible data): 98
Space Guarantee Style: none
Space Guarantee in Effect: true
Snapshot Directory Access Enabled: true
Space Reserved for Snapshot Copies: 5%
Snapshot Reserve Used: 0%
Snapshot Policy: default


Creation Time: Mon Oct 20 02:33:31 2014


Language: C.UTF-8
Clone Volume: false
Node name: cluster1-01
NVFAIL Option: off
Volume's NVFAIL State: false
Force NVFAIL on MetroCluster Switchover: off
Is File System Size Fixed: false
Extent Option: off
Reserved Space for Overwrites: 0B
Fractional Reserve: 0%
Primary Space Management Strategy: volume_grow
Read Reallocation Option: off
Inconsistency in the File System: false
Is Volume Quiesced (On-Disk): false
Is Volume Quiesced (In-Memory): false
Volume Contains Shared or Compressed Data: false
Space Saved by Storage Efficiency: 0B
Percentage Saved by Storage Efficiency: 0%
Space Saved by Deduplication: 0B
Percentage Saved by Deduplication: 0%
Space Shared by Deduplication: 0B
Space Saved by Compression: 0B
Percentage Space Saved by Compression: 0%
Volume Size Used by Snapshot Copies: 0B
Block Type: 64-bit
Is Volume Moving: false
Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason:
Managed By Storage Service:
Create Namespace Mirror Constituents For SnapDiff Use:
Constituent Volume Role:
QoS Policy Group Name:
Caching Policy Name:
Is Volume Move in Cutover Phase: false
Number of Snapshot Copies in the Volume: 0
VBN_BAD may be present in the active filesystem: false
Is Volume on a hybrid aggregate: false
Total Physical Used Size: 152KB
Physical Used Percentage: 0%
cluster1::>
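As an optional aside (not one of the numbered lab steps), you do not have to wade through the full -instance output when you only need a few attributes; the -fields parameter limits the display to the columns you name. For example, the following command reports just the space guarantee, export policy, and junction path of the engineering volume, which based on the output above should come back as none, default, and /engineering:

cluster1::> volume show -vserver svm1 -volume engineering -fields space-guarantee,policy,junction-path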

9. View how much disk space this volume is actually consuming in its containing aggregate; the Total
Footprint value represents the volume's total consumption. The value here is so small because this
volume is thin provisioned and you have not yet added any data to it. If you had thick provisioned the
volume, the footprint here would have been roughly 10 GB, the full size of the volume.
cluster1::> volume show-footprint -volume engineering

      Vserver : svm1
      Volume  : engineering

      Feature                              Used       Used%
      --------------------------------     ---------- -----
      Volume Data Footprint                152KB         0%
      Volume Guarantee                     0B            0%
      Flexible Volume Metadata             13.38MB       0%
      Delayed Frees                        352KB         0%

      Total Footprint                      13.88MB       0%

cluster1::>
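If you would like to see the thick-provisioned contrast for yourself, an optional experiment is to create a small throwaway volume with a volume space guarantee and compare its footprint; with the guarantee in place, the Volume Guarantee line should account for essentially the entire volume size even before any data is written. The sketch below is illustrative only and is not part of the numbered lab steps; the volume name scratch_thick is a made-up example, and Data ONTAP will prompt you to confirm the deletion at the end:

cluster1::> volume create -vserver svm1 -volume scratch_thick -aggregate aggr1_cluster1_01
-size 1GB -space-guarantee volume
cluster1::> volume show-footprint -volume scratch_thick
cluster1::> volume offline -vserver svm1 -volume scratch_thick
cluster1::> volume delete -vserver svm1 -volume scratch_thick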

10. Create a qtree in the eng_users volume named "bob".


cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree bob
cluster1::>

11. Create a qtree in the eng_users volume named "susan".


cluster1::> volume qtree create -vserver svm1 -volume eng_users -qtree susan
cluster1::>

12. Generate a list of all the qtrees that belong to svm1.


cluster1::> volume qtree show -vserver svm1
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.
cluster1::>

13. Produce a detailed report of the configuration for the qtree bob.
cluster1::> volume qtree show -qtree bob -instance
Vserver Name: svm1
Volume Name: eng_users
Qtree Name: bob
Actual (Non-Junction) Qtree Path: /vol/eng_users/bob
Security Style: ntfs
Oplock Mode: enable
Unix Permissions:
Qtree Id: 1
Qtree Status: normal
Export Policy: default
Is Export Policy Inherited: true
cluster1::>

11.2.4 Connect to the SVM From a Windows Client


The svm1 SVM is up and running and is configured for NFS and CIFS access, so it's time to validate that
everything is working properly by mounting the NFS export on a Linux host, and the CIFS share on a Windows
host. You should complete both parts of this section so you can see that both hosts are able to seamlessly access
the volume and its files.
This part of the lab demonstrates connecting the Windows client jumphost to the CIFS share \\svm1\nsroot using
the Windows GUI.
1. On the Windows host jumphost open Windows Explorer by clicking on the folder icon on the taskbar.

Figure 11-1:
A Windows Explorer window opens.
2. In Windows Explorer click on Computer.
3. Click on Map network drive to launch the Map Network Drive wizard.


2
3

Figure 11-2:
The Map Network Drive wizard opens.
4. Set the fields in the window to the following values.

Drive: S:

Folder: \\svm1\nsroot

Check the Reconnect at sign-in checkbox.


5. When finished click Finish.


5
Figure 11-3:
A new Windows Explorer window opens.
6. The engineering volume you earlier junctioned into svm1's namespace is visible at the top of the
nsroot share, which points to the root of the namespace. If you created another volume on svm1 right
now and mounted it under the root of the namespace, that new volume would instantly become visible
in this share, and to clients like jumphost that have already mounted the share. Double-click on the
engineering folder to open it.


Figure 11-4:
File Explorer displays the contents of the engineering folder. Next you will create a file in this folder to
confirm that you can write to it.
7. Notice that the eng_users volume that you junctioned in as users is visible inside this folder.
8. Right-click in the empty space in the right pane of File Explorer.
9. In the context menu, select New > Text Document, and name the resulting file cifs.txt.


9
Figure 11-5:
10. Double-click the cifs.txt file you just created to open it with Notepad.
Tip: If you aren't seeing file extensions in your lab, you can enable that by going to the View
menu at the top of Windows Explorer and checking the File Name Extensions checkbox.
11. In Notepad, enter some text (make sure you put a carriage return at the end of the line, or else when
you later view the contents of this file on Linux the command shell prompt will appear on the same line
as the file contents).
12. Use the File > Save menu in Notepad to save the file's updated contents to the share. If write access is
working properly you will not receive an error message.


10

12
11

Figure 11-6:
Close Notepad and File Explorer to finish this exercise.

11.2.5 Connect to the SVM From a Linux Client


This section demonstrates how to connect a Linux client to the NFS volume svm1:/ using the Linux command line.
1. Follow the instructions in the Accessing the Command Line section at the beginning of this lab guide to
open PuTTY and connect to the system rhel1. Log in as the user root with the password Netapp1!.
2. Verify that there are no NFS volumes currently mounted on rhel1.
[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962504   6311544  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
[root@rhel1 ~]#

3. Create the /svm1 directory to serve as a mount point for the NFS volume you will be shortly mounting.
[root@rhel1 ~]# mkdir /svm1
[root@rhel1 ~]#

4. Add an entry for the NFS mount to the fstab file.


[root@rhel1 ~]# echo "svm1:/ /svm1 nfs rw,defaults 0 0" >> /etc/fstab
[root@rhel1 ~]#


5. Verify the fstab file contains the new entry you just created.
[root@rhel1 ~]# grep svm1 /etc/fstab
svm1:/ /svm1 nfs rw,defaults 0 0
[root@rhel1 ~]#

6. Mount all the file systems listed in the fstab file.


[root@rhel1 ~]# mount -a
[root@rhel1 ~]#

7. View a list of the mounted file systems.


[root@rhel1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root  11877388 4962508   6311540  45% /
tmpfs                           444612      76    444536   1% /dev/shm
/dev/sda1                       495844   40084    430160   9% /boot
svm1:/                           19456     128     19328   1% /svm1
[root@rhel1 ~]#

The NFS file system svm1:/ now shows as mounted on /svm1.
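If you are curious about the NFS version and mount options the client actually negotiated, two optional checks (not part of the numbered lab steps) are to filter the mount table for the SVM, or to ask nfsstat, which is provided by the standard nfs-utils package on RHEL, to report on mounted NFS file systems. Both should show svm1:/ mounted on /svm1 along with the negotiated options:

[root@rhel1 ~]# mount | grep svm1
[root@rhel1 ~]# nfsstat -m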


8. Navigate into the /svm1 directory.
[root@rhel1 ~]# cd /svm1
[root@rhel1 svm1]#

9. Notice that you can see the engineering volume that you previously junctioned into the SVM's
namespace.
[root@rhel1 svm1]# ls
engineering
[root@rhel1 svm1]#

10. Navigate into engineering and list its contents.


Attention: The following command output assumes that you have already performed the
Windows client connection steps found earlier in this lab guide, including creating the cifs.txt file.
[root@rhel1 svm1]# cd engineering
[root@rhel1 engineering]# ls
cifs.txt users
[root@rhel1 engineering]#

11. Display the contents of the cifs.txt file you created earlier.
Tip: When you cat the cifs.txt file, if the shell prompt winds up on the same line as the file
output then that indicates that you forgot to include a newline at the end of the file when you
created the file on Windows.
[root@rhel1 engineering]# cat cifs.txt
write test from jumphost
[root@rhel1 engineering]#

12. Verify that you can create a file in this directory.


[root@rhel1 engineering]# echo "write test from rhel1" > nfs.txt
[root@rhel1 engineering]# cat nfs.txt
write test from rhel1
[root@rhel1 engineering]# ll
total 4
-rwxrwxrwx 1 root bin    26 Oct 20 03:05 cifs.txt
-rwxrwxrwx 1 root root   22 Oct 20 03:06 nfs.txt
drwxrwxrwx 4 root root 4096 Oct 20 02:37 users
[root@rhel1 engineering]#


11.2.6 NFS Exporting Qtrees (Optional)


Clustered Data ONTAP 8.2.1 introduced the ability to NFS export qtrees. This optional section explains how to
configure qtree exports and will demonstrate how to set different export rules for a given qtree. For this exercise
you will be working with the qtrees you created in the previous section.
Qtrees had many capabilities in Data ONTAP 7-mode that are no longer present in cluster mode. Qtrees do still
exist in cluster mode, but their purpose is now essentially limited to quota management, with most other 7-mode
qtree features, including NFS exports, now the exclusive purview of volumes. This functionality change created
challenges for 7-mode customers with large numbers of NFS qtree exports who were trying to transition
to cluster mode and could not convert those qtrees to volumes because they would exceed clustered Data
ONTAP's maximum number of volumes limit.
To solve this problem, clustered Data ONTAP 8.2.1 introduced qtree NFS. NetApp continues to recommend that
customers favor volumes over qtrees in cluster mode whenever practical, but customers requiring large numbers
of qtree NFS exports now have a supported solution under clustered Data ONTAP.
You need to create a new export policy and configure it with rules so that only the Linux host rhel1 will be granted
access to the associated volume and/or qtree.
1. Display a list of the export policies.
cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
cluster1::>

2. Create the export policy named rhel1-only.


cluster1::> vserver export-policy create -vserver svm1 -policyname rhel1-only
cluster1::>

3. Re-display the list of export policies.


cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.
cluster1::>

4. Display a list of the rules for the rhel1-only export policy.


cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only
There are no entries matching your query.
cluster1::>

5. Add a rule to the policy so that only the Linux host rhel1 will be granted access.
cluster1::> vserver export-policy rule create -vserver svm1 -policyname rhel1-only
-clientmatch 192.168.0.61 -rorule any -rwrule any -superuser any -anon 65534
-ruleindex 1
cluster1::>

6. Display a list of all the export policy rules.


cluster1::> vserver export-policy rule show
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
svm1         default         1       any      0.0.0.0/0             any
svm1         rhel1-only      1       any      192.168.0.61          any
2 entries were displayed.
cluster1::>

7. Display a detailed report of the rhel1-only export policy rules.


cluster1::> vserver export-policy rule show -vserver svm1 -policyname rhel1-only -instance
Vserver: svm1
Policy Name: rhel1-only
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 192.168.0.61
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
cluster1::>

8. Produce a list of svm1's export policies.


cluster1::> vserver export-policy show
Vserver          Policy Name
---------------  -------------------
svm1             default
svm1             rhel1-only
2 entries were displayed.
cluster1::>

9. List svm1's qtrees.


cluster1::> volume qtree show
Vserver    Volume        Qtree        Style        Oplocks   Status
---------- ------------- ------------ ------------ --------- --------
svm1       eng_users     ""           ntfs         enable    normal
svm1       eng_users     bob          ntfs         enable    normal
svm1       eng_users     susan        ntfs         enable    normal
svm1       engineering   ""           ntfs         enable    normal
svm1       svm1_root     ""           ntfs         enable    normal
5 entries were displayed.
cluster1::>

10. Apply the rhel1-only export policy to the susan qtree.


cluster1::> volume qtree modify -vserver svm1 -volume eng_users -qtree susan
-export-policy rhel1-only
cluster1::>

11. Display the configuration of the susan qtree. Notice the Export Policy field shows that this qtree is using
the rhel1-only export policy.
cluster1::> volume qtree show -vserver svm1 -volume eng_users -qtree susan
Vserver Name: svm1
Volume Name: eng_users
Qtree Name: susan
Qtree Path: /vol/eng_users/susan
Security Style: ntfs
Oplock Mode: enable
Unix Permissions:
Qtree Id: 2
Qtree Status: normal
Export Policy: rhel1-only
Is Export Policy Inherited: false
cluster1::>

12. Produce a report showing the export policy assignments for all the volumes and qtrees that belong to
svm1.
cluster1::> volume qtree show -vserver svm1 -fields export-policy
vserver volume      qtree export-policy
------- ----------- ----- -------------
svm1    eng_users   ""    default
svm1    eng_users   bob   default
svm1    eng_users   susan rhel1-only
svm1    engineering ""    default
svm1    svm1_root   ""    default
5 entries were displayed.
cluster1::>

Now you need to validate that the more restrictive export policy that you've applied to the qtree susan is
working as expected from rhel1.
Note: If you still have an active PuTTY session open to the Linux host rhel1 then bring that
window up now, otherwise open a new PuTTY session to that host (username = root, password
= Netapp1!).
13. Change directory to /svm1/engineering/users.
[root@rhel1 ~]# cd /svm1/engineering/users
[root@rhel1 users]#

14. List the directory contents.


[root@rhel1 users]# ls
bob susan
[root@rhel1 users]#

15. Enter the susan sub-directory.


[root@rhel1 users]# cd susan
[root@rhel1 susan]#

16. Create a file in this directory.


[root@rhel1 susan]# echo "hello from rhel1" > rhel1.txt
[root@rhel1 susan]#

17. Display the contents of the newly created file.


[root@rhel1 susan]# cat rhel1.txt
hello from rhel1
[root@rhel1 susan]#

Next validate that rhel2 has different access rights to the qtree. This host should be able to access all
the volumes and qtrees in the svm1 namespace *except* susan, which should give a permission denied
error because that qtree's associated export policy only grants access to the host rhel1.
Note: Open a PuTTY connection to the Linux host rhel2 (again, username = root and password
= Netapp1!).
18. Create a mount point for the svm1 NFS volume.
[root@rhel2 ~]# mkdir /svm1
[root@rhel2 ~]#

19. Mount the NFS volume svm1:/ on /svm1.


[root@rhel2 ~]# mount svm1:/ /svm1
[root@rhel2 ~]#

20. Change directory to /svm1/engineering/users.


[root@rhel2 ~]# cd /svm1/engineering/users
[root@rhel2 users]#

21. List the directory's contents.


[root@rhel2 users]# ls
bob susan


[root@rhel2 users]#

22. Attempt to enter the susan sub-directory.


[root@rhel2 users]# cd susan
bash: cd: susan: Permission denied
[root@rhel2 users]#

23. Attempt to enter the bob sub-directory.


[root@rhel2 users]# cd bob
[root@rhel2 bob]#

11.3 Create Storage for iSCSI


Expected Completion Time: 50 Minutes
This section of the lab is optional, and includes instructions for mounting a LUN on Windows and Linux. If you
choose to complete this section you must first complete the Create a Storage Virtual Machine for iSCSI section,
and then complete either the Create, Map, and Mount a Windows LUN section, or the Create, Map, and Mount
a Linux LUN section as appropriate based on your platform of interest.
The 50 minute time estimate assumes you complete only one of the Windows or Linux LUN sections. You are
welcome to complete both of those sections if you choose, but you should plan on needing approximately 90
minutes to complete the entire Create and Mount a LUN section.
If you completed the Create a Storage Virtual Machine for NFS and CIFS section of this lab then you explored
the concept of a Storage Virtual Machine (SVM), created an SVM, and configured it to serve data over NFS and
CIFS. If you skipped that section of the lab guide, consider reviewing the introductory text found at the beginning
of that section, and each of its subsections, before you proceed further because this section builds on concepts
described there.
In this section you are going to create another SVM and configure it for SAN protocols, which means you are
going to configure the SVM for iSCSI since this virtualized lab does not support FC. The configuration steps for
iSCSI and FC are similar, so the information provided here is also useful for FC deployment. After you create a
new SVM and configure it for iSCSI, you will create a LUN for Windows and/or a LUN for Linux, and then mount
the LUN(s) on their respective hosts.
NetApp supports configuring an SVM to serve data over both SAN and NAS protocols, but it is common to see
customers use separate SVMs for each in order to separate administrative responsibilities, or for architectural
and operational clarity. For example, SAN protocols do not support LIF failover, so you cannot use NAS LIFs to
support SAN protocols. You must instead create dedicated LIFs just for SAN. Implementing separate SVMs for
SAN and NAS can in this example simplify the operational complexity of each SVM's configuration, making each
easier to understand and manage, but ultimately whether to mix or separate is a customer decision, and not a
NetApp recommendation.
Since SAN LIFs do not support migration to different nodes, an SVM must have dedicated SAN LIFs on every
node that you want to service SAN requests, and you must utilize MPIO and ALUA to manage the controller's
available paths to the LUNs. In the event of a path disruption MPIO and ALUA will compensate by re-routing the
LUN communication over an alternate controller path (i.e., over a different SAN LIF).
NetApp best practice is to configure at least one SAN LIF per storage fabric/network on each node in the cluster
so that all nodes can provide a path to the LUNs. In large clusters where this would result in the presentation
of a large number of paths for a given LUN we recommend that you use portsets to limit the LUN to seeing no
more than 8 LIFs. Data ONTAP 8.3 introduces a new Selective LUN Mapping (SLM) feature to provide further
assistance in managing fabric paths. SLM limits LUN path access to just the node that owns the LUN and its HA
partner, and Data ONTAP automatically applies SLM to all new LUN map operations. For further information on
Selective LUN Mapping, please see the Hands-On Lab for SAN Features in clustered Data ONTAP 8.3.
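If you want to observe SLM for yourself later in this lab, once you have created and mapped a LUN you can list the reporting nodes recorded for its mapping; the nodes listed should be limited to the node that owns the LUN's volume and its HA partner. The command below is only a sketch: it assumes the /vol/winluns/windows.lun LUN on the svmluns SVM that you will create in a later section, and it relies on the lun mapping show command and its reporting-nodes field, which were introduced alongside SLM in Data ONTAP 8.3:

cluster1::> lun mapping show -vserver svmluns -path /vol/winluns/windows.lun -fields reporting-nodes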


In this lab the cluster contains two nodes connected to a single storage network. You will still configure a total of 4
SAN LIFs, because it is common to see implementations with 2 paths per node for redundancy.
This section of the lab allows you to create and mount a LUN for only Windows, only Linux, or both if you desire.
Both the Windows and Linux LUN creation steps require that you complete the Create a Storage Virtual Machine
for iSCSI section that comes next. If you want to create a Windows LUN, you need to complete the Create, Map,
and Mount a Windows LUN section that follows. Additionally, if you want to create a Linux LUN, you need to
complete the Create, Map, and Mount a Linux LUN section that follows after that. You can safely complete both
of those last two sections in the same lab.

11.3.1 Create a Storage Virtual Machine for iSCSI


If you do not already have a PuTTY session open to cluster1, open one now following the instructions in the
Accessing the Command Line section at the beginning of this lab guide and enter the following commands.
1. Display the available aggregates so you can decide which one you want to use to host the root volume
for the SVM you will be creating.
cluster1::> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
           10.26GB   510.6MB   95% online       1 cluster1-01      raid_dp,
                                                                   normal
aggr0_cluster1_02
           10.26GB   510.6MB   95% online       1 cluster1-02      raid_dp,
                                                                   normal
aggr1_cluster1_01
           72.53GB   72.49GB    0% online       3 cluster1-01      raid_dp,
                                                                   normal
aggr1_cluster1_02
           72.53GB   72.53GB    0% online       0 cluster1-02      raid_dp,
                                                                   normal
4 entries were displayed.
cluster1::>

2. Create the SVM svmluns on aggregate aggr1_cluster1_01. Note that the clustered Data ONTAP
command line syntax still refers to storage virtual machines as vservers.
cluster1::> vserver create -vserver svmluns -rootvolume svmluns_root
-aggregate aggr1_cluster1_01 -language C.UTF-8 -rootvolume-security-style unix
-snapshot-policy default
[Job 269] Job is queued: Create svmluns.
[Job 269]
[Job 269] Job succeeded:
Vserver creation completed
cluster1::>

3. Add the iSCSI protocol to the SVM svmluns:


cluster1::> vserver iscsi create -vserver svmluns
cluster1::>

4. Display the configured protocols for the svmluns SVM.


cluster1::> vserver show-protocols -vserver svmluns
Vserver: svmluns
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::>

5. Remove all the protocols other than iscsi.


cluster1::> vserver remove-protocols -vserver svmluns -protocols nfs,cifs,fcp,ndmp
cluster1::>


6. Display the configured protocols for svmluns.


cluster1::> vserver show-protocols -vserver svmluns
Vserver: svmluns
Protocols: iscsi
cluster1::>

7. Display the detailed configuration for the svmluns SVM.


cluster1::> vserver show -vserver svmluns
                                    Vserver: svmluns
                               Vserver Type: data
                            Vserver Subtype: default
                               Vserver UUID: beeb8ca5-580c-11e4-a807-0050569901b8
                                Root Volume: svmluns_root
                                  Aggregate: aggr1_cluster1_01
                                 NIS Domain:
                 Root Volume Security Style: unix
                                LDAP Client:
               Default Volume Language Code: C.UTF-8
                            Snapshot Policy: default
                                    Comment:
                               Quota Policy: default
                List of Aggregates Assigned:
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                  Vserver Operational State: running
   Vserver Operational State Stopped Reason:
                          Allowed Protocols: iscsi
                       Disallowed Protocols: nfs, cifs, fcp, ndmp
            Is Vserver with Infinite Volume: false
                           QoS Policy Group:
                                Config Lock: false
                               IPspace Name: Default
cluster1::>

8. Create 4 SAN LIFs for the SVM svmluns, 2 per node. Do not forget you can save some typing here by
using the up arrow to recall previous commands that you can edit and then execute.
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0d -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-01_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-01 -home-port e0e -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_1
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0d -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::> network interface create -vserver svmluns -lif cluster1-02_iscsi_lif_2
-role data -data-protocol iscsi -home-node cluster1-02 -home-port e0e -subnet-name Demo
-failover-policy disabled -firewall-policy data
cluster1::>

9. Now create a Management Interface LIF for the SVM.


cluster1::> network interface create -vserver svmluns -lif svmluns_admin_lif1 -role data
-data-protocol none -home-node cluster1-01 -home-port e0c -subnet-name Demo
-failover-policy nextavail -firewall-policy mgmt
cluster1::>

10. Display a list of the LIFs in the cluster.


cluster1::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster
            cluster1-01_clus1 up/up 169.254.224.98/16 cluster1-01  e0a     true
            cluster1-02_clus1 up/up 169.254.129.177/16 cluster1-02 e0a     true
cluster1
            cluster1-01_mgmt1 up/up 192.168.0.111/24  cluster1-01  e0c     true
            cluster1-02_mgmt1 up/up 192.168.0.112/24  cluster1-02  e0c     true
            cluster_mgmt up/up      192.168.0.101/24  cluster1-01  e0c     true
svm1
            svm1_cifs_nfs_lif1 up/up 192.168.0.131/24 cluster1-01  e0c     true
            svm1_cifs_nfs_lif2 up/up 192.168.0.132/24 cluster1-02  e0c     true
svmluns
            cluster1-01_iscsi_lif_1 up/up 192.168.0.133/24 cluster1-01 e0d true
            cluster1-01_iscsi_lif_2 up/up 192.168.0.134/24 cluster1-01 e0e true
            cluster1-02_iscsi_lif_1 up/up 192.168.0.135/24 cluster1-02 e0d true
            cluster1-02_iscsi_lif_2 up/up 192.168.0.136/24 cluster1-02 e0e true
            svmluns_admin_lif1 up/up 192.168.0.137/24 cluster1-01   e0c     true
12 entries were displayed.
cluster1::>

11. Display detailed information for the LIF cluster1-01_iscsi_lif_1.


cluster1::> network interface show -lif cluster1-01_iscsi_lif_1 -instance
Vserver Name: svmluns
Logical Interface Name: cluster1-01_iscsi_lif_1
Role: data
Data Protocol: iscsi
Home Node: cluster1-01
Home Port: e0d
Current Node: cluster1-01
Current Port: e0d
Operational Status: up
Extended Status:
Is Home: true
Network Address: 192.168.0.133
Netmask: 255.255.255.0
Bits in the Netmask: 24
IPv4 Link Local:
Subnet Name: Demo
Administrative Status: up
Failover Policy: disabled
Firewall Policy: data
Auto Revert: false
Fully Qualified DNS Zone Name: none
DNS Query Listen Enable: false
Failover Group Name:
FCP WWPN:
Address family: ipv4
Comment:
IPspace of LIF: Default
cluster1::>

12. Display a list of all the volumes on the cluster to see the root volume for the svmluns SVM.
cluster1::> volume show
Vserver     Volume       Aggregate         State   Type       Size  Available Used%
----------- ------------ ----------------- ------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01 online  RW       9.71GB     6.97GB   28%
cluster1-02 vol0         aggr0_cluster1_02 online  RW       9.71GB     6.36GB   34%
svm1        eng_users    aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
svmluns     svmluns_root aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
6 entries were displayed.
cluster1::>

11.3.2 Create, Map, and Mount a Windows LUN


In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will
perform the remaining steps needed to configure and use a LUN under Windows:

Gather the iSCSI Initiator Name of the Windows client.


Create a thin provisioned Windows volume, create a thin provisioned Windows LUN within that volume,
and map the LUN so it can be accessed by the Windows client.
Mount the LUN on a Windows client leveraging multi-pathing.

You must complete all of the subsections of this section in order to use the LUN from the Windows client.


11.3.2.1 Gather the Windows Client iSCSI Initiator Name


You need to determine the Windows client's iSCSI initiator name so that when you create the LUN you can set up
an appropriate initiator group to control access to the LUN.
On the desktop of the Windows client named "jumphost" (the main Windows host you use in the lab), perform the
following tasks:
1. Click on the Windows button on the far left side of the task bar.

Figure 11-7:
The Start screen opens.
2. Click on Administrative Tools.

Figure 11-8:


Windows Explorer opens to the List of Administrative Tools.


3. Double-click the entry for the iSCSI Initiator tool.

Figure 11-9:
The iSCSI Initiator Properties window opens.
4. Select the Configuration tab.
5. Take note of the value in the Initiator Name field, which contains the initiator name for jumphost.
Attention: The initiator name is iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
You will need this value later, so you might want to copy this value from the properties window and paste
it into a text file on your lab's desktop so you have it readily available when that time comes.
6. Click OK.


Figure 11-10:
The iSCSI Properties window closes, and focus returns to the Windows Explorer Administrator Tools
window. Leave this window open because you will need to access other tools later in the lab.

11.3.2.2 Create and Map a Windows LUN


You will now create a new thin provisioned Windows LUN named windows.lun in the volume winluns on
the SVM "svmluns". You will also create an initiator igroup for the LUN and populate it with the Windows host
jumphost. An initiator group, or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names of the
hosts that are permitted to see and access the associated LUNs.


1. If you do not already have a PuTTY connection open to cluster1 then please open one now following the
instructions in the Accessing the Command Line section at the beginning of this lab guide.
2. Create the volume winluns to host the Windows LUN you will be creating in a later step:
cluster1::> volume create -vserver svmluns -volume winluns -aggregate aggr1_cluster1_01
-size 10.31GB -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none
-autosize-mode grow -nvfail on
[Job 270] Job is queued: Create winluns.
[Job 270] Job succeeded: Successful
cluster1::>

3. Display a list of the volumes on the cluster.


cluster1::> volume show
Vserver     Volume       Aggregate         State   Type       Size  Available Used%
----------- ------------ ----------------- ------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01 online  RW       9.71GB     7.00GB   27%
cluster1-02 vol0         aggr0_cluster1_02 online  RW       9.71GB     6.34GB   34%
svm1        eng_users    aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
svmluns     svmluns_root aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
svmluns     winluns      aggr1_cluster1_01 online  RW      10.31GB    21.31GB    0%
7 entries were displayed.
cluster1::>

4. Create the Windows LUN named windows.lun:


cluster1::> lun create -vserver svmluns -volume winluns -lun windows.lun -size 10GB
-ostype windows_2008 -space-reserve disabled
Created a LUN of size 10g (10742215680)
cluster1::>

5. Add a comment to the LUN definition.


cluster1::> lun modify -vserver svmluns -volume winluns -lun windows.lun
-comment "Windows LUN"
cluster1::>

6. Display the LUNs on the cluster.


cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/winluns/windows.lun        online  unmapped windows_2008
                                                                      10.00GB
cluster1::>

7. Display a list of the defined igroups.


cluster1::> igroup show
This table is currently empty.
cluster1::>

8. Create a new igroup named winigrp that you will use to manage access to the new LUN.
cluster1::> igroup create -vserver svmluns -igroup winigrp -protocol iscsi -ostype windows
-initiator iqn.1991-05.com.microsoft:jumphost.demo.netapp.com
cluster1::>


9. Display the igroup to verify that the Windows client's initiator name has been added to it.


cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
cluster1::>

10. Map the LUN windows.lun to the igroup winigrp.


cluster1::> lun map -vserver svmluns -volume winluns -lun windows.lun -igroup winigrp
cluster1::>

11. Display a list of all the LUNs.


cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008
                                                                      10.00GB
cluster1::>

12. Display a list of all the mapped LUNs.


cluster1::> lun mapped show
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------- -------- ------- --------
svmluns    /vol/winluns/windows.lun                  winigrp        0 iscsi
cluster1::>

13. Display a detailed report on the configuration of the LUN windows.lun.


cluster1::> lun show -lun windows.lun -instance
Vserver Name: svmluns
LUN Path: /vol/winluns/windows.lun
Volume Name: winluns
Qtree Name: ""
LUN Name: windows.lun
LUN Size: 10.00GB
OS Type: windows_2008
Space Reservation: disabled
Serial Number: wOj4Q]FMHlq6
Comment: Windows LUN
Space Reservations Honored: false
Space Allocation: disabled
State: online
LUN UUID: 8e62421e-bff4-4ac7-85aa-2e6e3842ec8a
Mapped: mapped
Block Size: 512
Device Legacy ID:
Device Binary ID:
Device Text ID:
Read Only: false
Fenced Due to Restore: false
Used Size: 0
Maximum Resize Size: 502.0GB
Creation Time: 10/20/2014 04:36:41
Class: regular
Node Hosting the LUN: cluster1-01
QoS Policy Group:
Clone: false
Clone Autodelete Enabled: false
Inconsistent import: false
cluster1::>

11.3.2.3 Mount the LUN on a Windows Client


The final step is to mount the LUN on the Windows client. You will be using MPIO/ALUA to support multiple
paths to the LUN using both of the SAN LIFs you configured earlier on the svmluns SVM. Data ONTAP DSM for
Windows MPIO is the multi-pathing software you will be using for this lab, and that software is already installed on
jumphost.
You should begin by validating that the Multi-Path I/O (MPIO) software is working properly on this Windows host.
The Administrative Tools window should still be open on jumphost; if you already closed it then you will need to
re-open it now so you can access the MPIO tool.
1. On the desktop of JUMPHOST, in the Administrative Tools window which you should still have open,
double-click the MPIO tool.

Figure 11-11:
The MPIO Properties window opens.
2. Select the Discover Multi-Paths tab.
3. Examine the Add Support for iSCSI devices checkbox. If this checkbox is NOT greyed out then MPIO
is improperly configured. This checkbox should be greyed out for this lab, but in the event it is not then
place a check in that checkbox, click the Add button, and then click Yes in the reboot dialog to reboot
your windows host. Once the system finishes rebooting, return to this window to verify that the checkbox
is now greyed out, indicating that MPIO is properly configured.
4. Click Cancel.


Figure 11-12:
The MPIO Properties window closes and focus returns to the Administrative Tools window for
jumphost. Now you need to begin the process of connecting jumphost to the LUN.
5. In Administrative Tools, double-click the iSCSI Initiator tool.


Figure 11-13:
The iSCSI Initiator Properties window opens.
6. Select the Targets tab.
7. Notice that there are no targets listed in the Discovered Targets list box, indicating that there are
currently no iSCSI targets mapped to this host.
8. Click the Discovery tab.


Figure 11-14:
The Discovery tab is where you begin the process of discovering LUNs, and to do that you must define
a target portal to scan. You are going to manually add a target portal to jumphost.
9. Click the Discover Portal button.


Figure 11-15:
The Discover Target Portal window opens. Here you will specify the first of the IP addresses that were
assigned to your iSCSI LIFs when you created them on the svmluns SVM. Recall that those LIFs
received IP addresses in the range 192.168.0.133-192.168.0.136.
10. Set the IP Address or DNS name textbox to 192.168.0.133, the first address in the range for your
LIFs.
11. Click OK.

10

11
Figure 11-16:
The Discover Target Portal window closes, and focus returns to the iSCSI Initiator Properties
window.
12. The Target Portals list now contains an entry for the IP address you entered in the previous step.


13. Click on the Targets tab.

13

12

Figure 11-17:
The Targets tab opens to show you the list of discovered targets.
14. In the Discovered targets list select the only listed target. Observe that the target's status is Inactive,
because although you have discovered it you have not yet connected to it. Also note that the Name of
the discovered target in your lab will have a different value than what you see in this guide; that name
string is uniquely generated for each instance of the lab. (Make a mental note of that string value as you
will see it a lot as you continue to configure iSCSI in later steps of this process.)
15. Click the Connect button.


14

15

Figure 11-18:
The Connect to Target dialog box opens.
16. Click the Enable multi-path checkbox.
17. Click the Advanced button.

16
17
Figure 11-19:


The Advanced Settings window opens.


18. In the Target portal IP dropdown menu select the entry containing the IP address you specified when
you discovered the target portal, which should be 192.168.0.133. The listed values are IP Address and
Port number combinations, and the specific value you want to select here is 192.168.0.133 / 3260.
19. When finished, click OK.

18

19

Figure 11-20:
The Advanced Setting window closes, and focus returns to the Connect to Target window.
20. Click OK.


20
Figure 11-21:
The Connect to Target window closes, and focus returns to the iSCSI Initiator Properties window.
21. Notice that the status of the listed discovered target has changed from Inactive to Connected.

21

Figure 11-22:
Thus far you have added a single path to your iSCSI LUN, using the address for the
cluster1-01_iscsi_lif_1 LIF that you created on the node cluster1-01 for the svmluns
SVM. You are now going to add each of the other SAN LIFs present on the svmluns SVM. To begin this
procedure you must first edit the properties of your existing connection.
22. Still on the Targets tab, select the discovered target entry for your existing connection.
23. Click Properties.


22

23

Figure 11-23:
The Properties window opens. From this window you will be starting the procedure of connecting
alternate paths for your newly connected LUN. You will be repeating this procedure 3 times, once for
each of the remaining LIFs that are present on the svmluns SVM.
LIF IP Address      Done
--------------      ----
192.168.0.134
192.168.0.135
192.168.0.136
24. The Identifier list will contain an entry for every path you have specified so far, so it can serve as a
visual indicator of your progress in defining all of your paths. The first time you enter this window
you will see one entry, for the LIF you used to first connect to this LUN.
25. Click Add Session.


24
25

Figure 11-24:
The Connect to Target window opens.
26. Check the Enable multi-path checkbox.
27. Click Advanced.

26
27
Figure 11-25:
The Advanced Setting window opens.
28. Select the Target portal IP entry that contains the IP address of the LIF whose path you are adding
in this iteration of the procedure to add an alternate path. The following screenshot shows the
192.168.0.134 address, but the value you specify will depend on which specific path you are
configuring.
29. When finished, click OK.


28

29

Figure 11-26:
The Advanced Settings window closes, and focus returns to the Connect to Target window.
30. Click OK.


30
Figure 11-27:
The Connect to Target window closes, and focus returns to the Properties window, where a new
entry now appears in the Identifier list. Repeat the procedure from the last 4 screenshots for each of
the two remaining LIF IP addresses.
When you have finished adding all 3 paths, the Identifiers list in the Properties window should contain 4
entries.
31. There are 4 entries in the Identifier list when you are finished, indicating that there are 4 sessions,
one for each path. Note that it is normal for the identifier values in your lab to differ from those in the
screenshot.
32. Click OK.


31

32

Figure 11-28:
The Properties window closes, and focus returns to the iSCSI Properties window.
33. Click OK.


33

Figure 11-29:
The iSCSI Properties window closes, and focus returns to the desktop of jumphost. If the Administrative
Tools window is not still open on your desktop, open it again now.
If all went well, the jumphost is now connected to the LUN using multi-pathing, so it is time to format
your LUN and build a filesystem on it.
34. In Administrative Tools, double-click the Computer Management tool.


34

Figure 11-30:
The Computer Management window opens.
35. In the left pane of the Computer Management window, navigate to Computer Management (Local) >
Storage > Disk Management.

35

Figure 11-31:
36. When you launch Disk Management an Initialize Disk dialog will open informing you that you must
initialize a new disk before Logical Disk Manager can access it.


Note: If you see more than one disk listed then MPIO has not correctly recognized that the
multiple paths you set up are all for the same LUN, so you will need to cancel the Initialize
Disk dialog, quit Computer Manager, and go back to the iSCSI Initiator tool to review your path
configuration steps to find and correct any configuration errors, after which you can return to the
Computer Management tool and try again.
Click OK to initialize the disk.

36

Figure 11-32:
The Initialize Disk window closes, and focus returns to the Disk Management view in the Computer
Management window.
37. The new disk shows up in the disk list at the bottom of the window, and has a status of Unallocated.
38. Right-click inside the Unallocated box for the disk (if you right-click outside this box you will get the
incorrect context menu), and select New Simple Volume from the context menu.


37
38

Figure 11-33:
The New Simple Volume Wizard window opens.
39. Click the Next button to advance the wizard.


39

Figure 11-34:
The wizard advances to the Specify Volume Size step.
40. The wizard defaults to allocating all of the space in the volume, so click the Next button.


40

Figure 11-35:
The wizard advances to the Assign Drive Letter or Path step.
41. The wizard automatically selects the next available drive letter, which should be E. Click Next.


41

Figure 11-36:
The wizard advances to the Format Partition step.
42. Set the Volume Label field to WINLUN.
43. Click Next.


42
43

Figure 11-37:
The wizard advances to the Completing the New Simple Volume Wizard step.
44. Click Finish.


44

Figure 11-38:
The New Simple Volume Wizard window closes, and focus returns to the Disk Management view of
the Computer Management window.
45. The new WINLUN volume now shows as Healthy in the disk list at the bottom of the window,
indicating that the new LUN is mounted and ready to use. Before you complete this section of the lab,
take a look at the MPIO configuration for this LUN by right-clicking inside the box for the WINLUN
volume.
46. From the context menu select Properties.


45

46
Figure 11-39:
The WINLUN (E:) Properties window opens.
47. Click the Hardware tab.
48. In the All disk drives list select the NETAPP LUN C-Mode Multi-Path Disk entry.
49. Click Properties.


47

48

49

Figure 11-40:
The NETAPP LUN C-Mode Multi-Path Disk Device Properties window opens.
50. Click the MPIO tab.
51. Notice that you are using the Data ONTAP DSM for multi-path access rather than the Microsoft DSM.
We recommend using the Data ONTAP DSM software, as it is the most full-featured option available,
although the Microsoft DSM is also supported.
52. The MPIO policy is set to Least Queue Depth. A number of different multi-pathing policies are
available, but the configuration shown here sends LUN I/O down the path that has the fewest
outstanding I/O requests. You can click the More information about MPIO policies link at the bottom
of the dialog window for details about all the available policies.
53. The top two paths show both a Path State and TPG State as Active/Optimized. These paths are
connected to the node cluster1-01 and the Least Queue Depth policy makes active use of both paths to
this node. Conversely, the bottom two paths show a Path State of Unavailable, and a TPG State of
Active/Unoptimized. These paths are connected to the node cluster1-02, and only enter a Path State
of Active/Optimized if the node cluster1-01 becomes unavailable, or if the volume hosting the LUN
migrates over to the node cluster1-02.
54. When you finish reviewing the information in this dialog click OK to exit. If you changed any of the
values in this dialog you should consider using the Cancel button to discard those changes.


52

50
51

53

54
Figure 11-41:

The NETAPP LUN C-Mode Multi-Path Disk Device Properties window closes, and focus returns to the
WINLUN (E:) Properties window.
55. Click OK.


55

Figure 11-42:
The WINLUN (E:) Properties window closes.
56. Close the Computer Management window.


56

Figure 11-43:
57. Close the Administrative Tools window.

57

Figure 11-44:


58. You may see a message from Microsoft Windows stating that you must format the disk in drive E:
before you can use it. As you may recall, you did format the LUN during the New Simple Volume
Wizard, meaning this is an erroneous message from Windows. Click Cancel to ignore it.

58

Figure 11-45:

Feel free to open Windows Explorer and verify that you can create a file on the E: drive.
This completes this exercise.

11.3.3 Create, Map, and Mount a Linux LUN


In an earlier section you created a new SVM and configured it for iSCSI. In the following sub-sections you will
perform the remaining steps needed to configure and use a LUN under Linux:

Gather the iSCSI Initiator Name of the Linux client.


Create a thin provisioned Linux volume, create a thin provisioned Linux LUN named linux.lun within that
volume, and map the LUN to the Linux client.
Mount the LUN on the Linux client.

You must complete all of the following subsections in order to use the LUN from the Linux client. Note that you
are not required to complete the Windows LUN section before starting this section of the lab guide, but the
screenshots and command line output shown here assumes that you have. If you did not complete the Windows
LUN section, the differences will not affect your ability to create and mount the Linux LUN.

11.3.3.1 Gather the Linux Client iSCSI Initiator Name


You need to determine the Linux client's iSCSI initiator name so that you can set up an appropriate initiator group
to control access to the LUN.
You should already have a PuTTY connection open to the Linux host rhel1. If you do not, then open one now
using the instructions found in the Accessing the Command Line section at the beginning of this lab guide. The
username will be root and the password will be Netapp1!.
1. Change to the directory that hosts the iscsi configuration files.
[root@rhel1 ~]# cd /etc/iscsi
[root@rhel1 iscsi]# ls
initiatorname.iscsi iscsid.conf
[root@rhel1 iscsi]#

2. Display the name of the iscsi initiator.


[root@rhel1 iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 iscsi]#

Important: The initiator name for rhel1 is iqn.1994-05.com.redhat:rhel1.demo.netapp.com.


11.3.3.2 Create and Map a Linux LUN


In this activity, you create a new thin provisioned Linux LUN on the SVM svmluns under the volume linluns,
and also create an initiator igroup for the LUN so that only the Linux host rhel1 can access it. An initiator group,
or igroup, defines a list of the Fibre Channel WWPNs or iSCSI node names for the hosts that are permitted to see
the associated LUNs.
1. If you do not currently have a PuTTY session open to cluster1 then open one now following the
instructions from the Accessing the Command Line section at the beginning of this lab guide. The
username will be "admin" and the password will be "Netapp1!".
2. Create the thin provisioned volume linluns that will host the Linux LUN you will create in a later step:
cluster1::> volume create -vserver svmluns -volume linluns -aggregate aggr1_cluster1_01
-size 10.31GB -percent-snapshot-space 0 -snapshot-policy none -space-guarantee none
-autosize-mode grow -nvfail on
[Job 271] Job is queued: Create linluns.
[Job 271] Job succeeded: Successful
cluster1::>

3. Display the volume list.


cluster1::> volume show
Vserver     Volume       Aggregate         State   Type       Size  Available Used%
----------- ------------ ----------------- ------- ---- ---------- ---------- -----
cluster1-01 vol0         aggr0_cluster1_01 online  RW       9.71GB     6.92GB   28%
cluster1-02 vol0         aggr0_cluster1_02 online  RW       9.71GB     6.27GB   35%
svm1        eng_users    aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        engineering  aggr1_cluster1_01 online  RW         10GB     9.50GB    5%
svm1        svm1_root    aggr1_cluster1_01 online  RW         20MB    18.85MB    5%
svmluns     linluns      aggr1_cluster1_01 online  RW      10.31GB    10.31GB    0%
svmluns     svmluns_root aggr1_cluster1_01 online  RW         20MB    18.86MB    5%
svmluns     winluns      aggr1_cluster1_01 online  RW      10.31GB    10.28GB    0%
8 entries were displayed.
cluster1::>

4. Display a list of the LUNs on the cluster.


cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008
                                                                      10.00GB
cluster1::>

5. Create the thin provisioned Linux LUN linux.lun on the volume linluns:
cluster1::> lun create -vserver svmluns -volume linluns -lun linux.lun -size 10GB
-ostype linux -space-reserve disabled
Created a LUN of size 10g (10742215680)
cluster1::>

6. Add a comment to the LUN linux.lun.


cluster1::> lun modify -vserver svmluns -volume linluns -lun linux.lun
-comment "Linux LUN"
cluster1::>


7. Display the list of LUNs.


cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/linluns/linux.lun          online  unmapped linux        10GB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008
                                                                      10.00GB
2 entries were displayed.
cluster1::>

8. Display a list of the cluster's igroups.


cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
cluster1::>

9. Create a new igroup named linigrp that grants rhel1 access to the LUN linux.lun.
cluster1::> igroup create -vserver svmluns -igroup linigrp -protocol iscsi
-ostype linux -initiator iqn.1994-05.com.redhat:rhel1.demo.netapp.com
cluster1::>

10. Display a list of the igroups.


cluster1::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
svmluns   linigrp      iscsi    linux    iqn.1994-05.com.redhat:rhel1.demo.
                                         netapp.com
svmluns   winigrp      iscsi    windows  iqn.1991-05.com.microsoft:jumphost.
                                         demo.netapp.com
2 entries were displayed.
cluster1::>

11. Map the LUN linux.lun to the igroup linigrp.


cluster1::> lun map -vserver svmluns -volume linluns -lun linux.lun -igroup linigrp
cluster1::>

12. Display a list of the LUNs.


cluster1::> lun show
Vserver   Path                            State   Mapped   Type         Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux        10GB
svmluns   /vol/winluns/windows.lun        online  mapped   windows_2008
                                                                      10.00GB
2 entries were displayed.
cluster1::>

13. Display a list of the LUN mappings.


cluster1::> lun mapped show
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------- -------- ------- --------
svmluns    /vol/linluns/linux.lun                    linigrp        0 iscsi
svmluns    /vol/winluns/windows.lun                  winigrp        0 iscsi
2 entries were displayed.
cluster1::>

14. Display just the LUN linux.lun.


cluster1::> lun show -lun linux.lun
Vserver   Path                            State   Mapped   Type     Size
--------- ------------------------------- ------- -------- -------- --------
svmluns   /vol/linluns/linux.lun          online  mapped   linux        10GB


cluster1::>

15. Display LUN mappings for just linux.lun.


cluster1::> lun mapped show -lun linux.lun
Vserver    Path                                      Igroup   LUN ID  Protocol
---------- ----------------------------------------  -------  ------  --------
svmluns    /vol/linluns/linux.lun                    linigrp       0  iscsi
cluster1::>

16. Display detailed information for the LUN linux.lun.


cluster1::> lun show -lun linux.lun -instance
Vserver Name: svmluns
LUN Path: /vol/linluns/linux.lun
Volume Name: linluns
Qtree Name: ""
LUN Name: linux.lun
LUN Size: 10GB
OS Type: linux
Space Reservation: disabled
Serial Number: wOj4Q]FMHlq7
Comment: Linux LUN
Space Reservations Honored: false
Space Allocation: disabled
State: online
LUN UUID: 1b4912fb-b779-4811-b1ff-7bc3a615454c
Mapped: mapped
Block Size: 512
Device Legacy ID:
Device Binary ID:
Device Text ID:
Read Only: false
Fenced Due to Restore: false
Used Size: 0
Maximum Resize Size: 128.0GB
Creation Time: 10/20/2014 06:19:49
Class: regular
Node Hosting the LUN: cluster1-01
QoS Policy Group:
Clone: false
Clone Autodelete Enabled: false
Inconsistent import: false
cluster1::>

Data ONTAP 8.2 introduced a space reclamation feature that allows Data ONTAP to reclaim space
from a thin provisioned LUN when the client deletes data from it, and also allows Data ONTAP to
notify the client when the LUN cannot accept writes due to lack of space on the volume. This feature
is supported by VMware ESX 5.0 and later, Red Hat Enterprise Linux 6.2 and later, and Microsoft
Windows 2012. The RHEL clients used in this lab are running version 6.6 and so you will enable the
space reclamation feature for your Linux LUN.
17. Display the space reclamation setting for the LUN linux.lun.
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun disabled
cluster1::>

18. Configure the LUN linux.lun to support space reclamation.


cluster1::> lun modify -vserver svmluns -path /vol/linluns/linux.lun -space-allocation enabled
cluster1::>

19. Display the new space reclamation setting for the LUN linux.lun.
cluster1::> lun show -vserver svmluns -path /vol/linluns/linux.lun -fields space-allocation
vserver path                   space-allocation
------- ---------------------- ----------------
svmluns /vol/linluns/linux.lun enabled


cluster1::>

11.3.3.3 Mount the LUN on a Linux Client


In this section you will use the Linux command line to configure the host rhel1 to connect to the Linux LUN
/vol/linluns/linux.lun that you created in the preceding section.
This section assumes that you know how to use the Linux command line. If you are not familiar with these
concepts, we recommend that you skip this section of the lab.
1. If you do not currently have a PuTTY session open to rhel1, open one now and log in as user root with
the password "Netapp1!".
2. The NetApp Linux Host Utilities kit has been pre-installed on both Red Hat Linux hosts in this lab, and
the iSCSI initiator name has already been configured for each host. Confirm that this is the case:
[root@rhel1 ~]# rpm -qa | grep netapp
netapp_linux_unified_host_utilities-7-0.x86_64
[root@rhel1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:rhel1.demo.netapp.com
[root@rhel1 ~]#

3. In the /etc/iscsi/iscsid.conf file the node.session.timeo.replacement_timeout value is set to 5 to better
support timely path failover, and the node.startup value is set to automatic so that the system will
automatically log in to the iSCSI node at startup.
[root@rhel1 ~]# grep replacement_time /etc/iscsi/iscsid.conf
#node.session.timeo.replacement_timeout = 120
node.session.timeo.replacement_timeout = 5
[root@rhel1 ~]# grep node.startup /etc/iscsi/iscsid.conf
# node.startup = automatic
node.startup = automatic
[root@rhel1 ~]#

4. The Red Hat Linux hosts in this lab have the DM-Multipath packages pre-installed, along with a
/etc/multipath.conf file pre-configured to support multipathing, so that the RHEL host can access the
LUN using all of the SAN LIFs you created for the svmluns SVM.
[root@rhel1 ~]# rpm -q device-mapper
device-mapper-1.02.79-8.el6.x86_64
[root@rhel1 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-72.el6.x86_64
[root@rhel1 ~]# cat /etc/multipath.conf
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd
# NetApp recommended defaults
defaults {
flush_on_last_del yes
max_fds max
queue_without_daemon no
user_friendly_names no
dev_loss_tmo infinity
fast_io_fail_tmo 5
}
blacklist {
devnode "^sda"
devnode "^hd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^ccis.*"
}
devices {
# NetApp iSCSI LUNs
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_prio
features "3 queue_if_no_path pg_init_retries 50"
prio "alua"
path_checker tur
failback immediate
path_selector "round-robin 0"
hardware_handler "1 alua"
rr_weight uniform
rr_min_io 128
getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
}
}
[root@rhel1 ~]#

5. You now need to start the iSCSI software service on rhel1 and configure it to start automatically at boot
time. Note that a force-start is only necessary the very first time you start the iscsid service on a host.
[root@rhel1 ~]# service iscsid status
iscsid is stopped
[root@rhel1 ~]# service iscsid force-start
Starting iscsid: OK
[root@rhel1 ~]# service iscsi status
No active sessions
[root@rhel1 ~]# chkconfig iscsi on
[root@rhel1 ~]# chkconfig --list iscsi
iscsi          0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@rhel1 ~]#

6. Next, discover the available targets using the iscsiadm command. Note that the exact values used
for the node paths may differ in your lab from what is shown in this example, and that after running
this command there will not yet be any active iSCSI sessions because you have not yet created the
necessary device files.
[root@rhel1 ~]# iscsiadm --mode discovery --op update --type sendtargets
--portal 192.168.0.133
192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]# iscsiadm --mode session
iscsiadm: No active sessions.
[root@rhel1 ~]#

7. Create the devices necessary to support the discovered nodes, after which the sessions become active.
[root@rhel1 ~]# iscsiadm --mode node -l all
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] (multiple)
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.134,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.136,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.135,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4, portal: 192.168.0.133,3260] successful.
[root@rhel1 ~]# iscsiadm --mode session
tcp: [1] 192.168.0.134:3260,1029 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [2] 192.168.0.136:3260,1031 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [3] 192.168.0.135:3260,1030 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
tcp: [4] 192.168.0.133:3260,1028 iqn.1992-08.com.netapp:sn.beeb8ca5580c11e4a8070050569901b8:vs.4
[root@rhel1 ~]#

8. At this point the Linux client sees the LUN over all four paths but it does not yet understand that all four
paths represent the same LUN.
[root@rhel1 ~]# sanlun lun show
controller(7mode)/                                 device          host      lun
vserver(Cmode)     lun-pathname                    filename        adapter   protocol   size    product
--------------------------------------------------------------------------------------------------------
svmluns            /vol/linluns/linux.lun          /dev/sde        host3     iSCSI      10g     cDOT
svmluns            /vol/linluns/linux.lun          /dev/sdd        host4     iSCSI      10g     cDOT
svmluns            /vol/linluns/linux.lun          /dev/sdc        host5     iSCSI      10g     cDOT
svmluns            /vol/linluns/linux.lun          /dev/sdb        host6     iSCSI      10g     cDOT
[root@rhel1 ~]#

9. Since the lab includes a pre-configured /etc/multipath.conf file, you just need to start the multipathd
service to handle multipath management and configure it to start automatically at boot time.
[root@rhel1 ~]# service multipathd status
multipathd is stopped
[root@rhel1 ~]# service multipathd start
Starting multipathd daemon: OK
[root@rhel1 ~]# service multipathd status
multipathd (pid 8656) is running...
[root@rhel1 ~]# chkconfig multipathd on
[root@rhel1 ~]# chkconfig --list multipathd
multipathd     0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@rhel1 ~]#

10. The multipath command displays the configuration of DM-Multipath, and the multipath -ll
command displays a list of the multipath devices. DM-Multipath maintains a device file under /dev/
mapper that you use to access the multipathed LUN (in order to create a filesystem on it and to
mount it); the first line of output from the multipath -ll command lists the name of that device file (in
this example 3600a0980774f6a34515d464d486c7137). The autogenerated name for this device
file will likely differ in your copy of the lab. Also pay attention to the output of the sanlun lun show -p
command, which shows information about the Data ONTAP path of the LUN, the LUN's size, its device
file name under /dev/mapper, the multipath policy, and information about the various device paths
themselves.
[root@rhel1 ~]# multipath -ll
3600a0980774f6a34515d464d486c7137 dm-2 NETAPP,LUN C-Mode
size=10G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:0 sdb 8:16 active ready running
| `- 3:0:0:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 5:0:0:0 sdc 8:32 active ready running
`- 4:0:0:0 sdd 8:48 active ready running
[root@rhel1 ~]# ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root      7 Oct 20 06:50 3600a0980774f6a34515d464d486c7137 -> ../dm-2
crw-rw---- 1 root root 10, 58 Oct 19 18:57 control
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 19 18:57 vg_rhel1-lv_swap -> ../dm-1
[root@rhel1 ~]# sanlun lun show -p
ONTAP Path: svmluns:/vol/linluns/linux.lun
LUN: 0
LUN Size: 10g
Product: cDOT
Host Device: 3600a0980774f6a34515d464d486c7137
Multipath Policy: round-robin 0
Multipath Provider: Native

--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdb     host6        cluster1-01_iscsi_lif_1
up        primary    sde     host3        cluster1-01_iscsi_lif_2
up        secondary  sdc     host5        cluster1-02_iscsi_lif_1
up        secondary  sdd     host4        cluster1-02_iscsi_lif_2
[root@rhel1 ~]#

You can see even more detail about the configuration of multipath and the LUN as a whole by running
the commands multipath -v3 -d -ll or iscsiadm -m session -P 3. As the output of these
commands is rather lengthy, it is omitted here.
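If you only want a quick confirmation rather than the full diagnostic output, an optional shorter check (not part of the lab steps) is to lower the iscsiadm print level and to restrict multipath to the single device file; substitute the /dev/mapper device name from your own lab for the one shown here:
[root@rhel1 ~]# iscsiadm -m session -P 1
[root@rhel1 ~]# multipath -ll 3600a0980774f6a34515d464d486c7137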
11. The LUN is now fully configured for multipath access, so the only steps remaining before you can use
the LUN on the Linux host are to create a filesystem and mount it. When you run the following commands
in your lab, you will need to substitute the /dev/mapper/ string that identifies your LUN (get that
string from the output of ls -l /dev/mapper):
[root@rhel1 ~]# mkfs.ext4 /dev/mapper/3600a0980774f6a34515d464d486c7137
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks:
0/204800 done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=16 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@rhel1 ~]# mkdir /linuxlun
[root@rhel1 ~]# mount -t ext4 -o discard /dev/mapper/3600a0980774f6a34515d464d486c7137 /linuxlun
[root@rhel1 ~]# df
Filesystem                                    1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_rhel1-lv_root                   11877388  4962816   6311232  45% /
tmpfs                                            444612       76    444536   1% /dev/shm
/dev/sda1                                        495844    40084    430160   9% /boot
svm1:/                                            19456      128     19328   1% /svm1
/dev/mapper/3600a0980774f6a34515d464d486c7137  10321208   154100   9642820   2% /linuxlun
[root@rhel1 ~]# ls /linuxlun
lost+found
[root@rhel1 ~]# echo "hello from rhel1" > /linuxlun/test.txt
[root@rhel1 ~]# cat /linuxlun/test.txt
hello from rhel1
[root@rhel1 ~]# ls -l /linuxlun/test.txt
-rw-r--r-- 1 root root 6 Oct 20 06:54 /linuxlun/test.txt
[root@rhel1 ~]#

The discard option for mount allows the Red Hat host to utilize space reclamation for the LUN.
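If you prefer to trigger space reclamation manually rather than rely on the discard mount option, one optional approach (not part of the lab steps) is the fstrim utility, which ships with util-linux-ng on recent RHEL 6 releases; assuming it is present on your host, you could run it against the mounted filesystem:
[root@rhel1 ~]# fstrim -v /linuxlun
This is not required for the lab, since the discard option already reclaims space as files are deleted.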
12. To have RHEL automatically mount the LUN's filesystem at boot time, run the following command
(modified to reflect the multipath device path being used in your instance of the lab) to add the mount
information to the /etc/fstab file. The following command should be entered as a single line.
[root@rhel1 ~]# echo '/dev/mapper/3600a0980774f6a34515d464d486c7137
/linuxlun ext4 _netdev,discard,defaults 0 0' >> /etc/fstab
[root@rhel1 ~]#
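The lab does not require it, but if you want to confirm that the new /etc/fstab entry works without rebooting, one optional check is to unmount the filesystem and then remount it by mount point alone, which forces mount to look the device up in /etc/fstab:
[root@rhel1 ~]# umount /linuxlun
[root@rhel1 ~]# mount /linuxlun
[root@rhel1 ~]# df -h /linuxlun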


12 References
The following references were used in writing this lab guide.


TR-3982: NetApp Clustered Data ONTAP 8.2.x: An Introduction, July 2014
TR-4100: Nondisruptive Operations and SMB File Shares for Clustered Data ONTAP, April 2013
TR-4129: Namespaces in clustered Data ONTAP, July 2014


13 Version History

Version         Date            Document Version History
Version 1.0     October 2014    Initial Release for Hands On Labs
Version 1.0.1   December 2014   Updates for Lab on Demand
Version 1.1     April 2015      Updated for Data ONTAP 8.3GA and other application
                                software. NDO section spun out into a separate lab guide.
Version 1.2     October 2015    Updated for Data ONTAP 8.3.1GA and other application
                                software.


Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact
product and feature versions described in this document are supported for your specific environment.
The NetApp IMT defines product components and versions that can be used to construct configurations
that are supported by NetApp. Specific results depend on each customer's installation in accordance
with published specifications.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any
information or recommendations provided in this publication, or with respect to any results that may be obtained
by the use of the information or observance of any recommendations provided herein. The information in this
document is distributed AS IS, and the use of this information or the implementation of any recommendations or
techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate
them into the customer's operational environment. This document and the information contained herein may be
used solely in connection with the NetApp products discussed in this document.

Go further, faster

2015 NetApp, Inc. All rights reserved. No portions of this presentation may be reproduced without prior written
consent of NetApp, Inc. Specifications are subject to change without notice. NetApp and the NetApp logo are
registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are
trademarks or registered trademarks of their respective holders and should be treated as such.
