
LOAD-SHARING MIRROR VOLUMES

In addition to mirroring data in order to protect it, clustered Data ONTAP provides mirroring for load balancing. Copies of read/write volumes, which are called load-sharing (LS) mirrors, can be used to offload read requests from their read/write volumes. Also, when a number of LS mirrors are created for a single read/write volume, the likelihood of a read request being served locally, rather than traversing the cluster network, is greatly increased, resulting in better read performance.

An LS mirror is mounted to the SVM's NAS namespace at the same point as its read/write volume. So, if a volume has any LS mirrors, all client requests are sent, transparently to the clients, to an LS mirror rather than to the read/write volume. If the LS mirrors become out of sync with their read/write volumes, a client read request gets out-of-date information. LS mirrors are ideal for volumes that are read frequently and written infrequently.
To allow an NFS request to go to the read/write volume
after it has been replicated to an LS mirror, an additional
mount must be done to use the /.admin path (for
example, mount svm1:/.admin/vol_b
/mnt/vol_b_rw). For CIFS clients, an additional step is
needed within the cluster itself. You must create an
additional CIFS share that uses /.admin rather than / for
its path. The clients that require read/write
access must use that share.
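For example, a read/write CIFS share along these lines could be created for the clients that need write access (a minimal sketch; the SVM, share, and volume names are illustrative, not taken from the text above):

cluster1::> vserver cifs share create -vserver svm1 -share-name vol_b_rw -path /.admin/vol_b

Clients that map vol_b_rw then write directly to the read/write volume, while clients that map the share rooted at / continue to be served by the LS mirrors.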
When multiple LS mirrors exist for a volume, the node that
receives the request gives preference to a local LS mirror. If
there is no local LS mirror, Data ONTAP uses a round-robin
algorithm to choose which "remote" LS mirror receives the
request. For volumes with high read traffic, a good practice
is to have an LS mirror on every node so that all read
requests are served locally. Mirroring of the root volumes of
virtual servers is highly recommended and is considered a
best practice.
Create load-sharing mirrors to balance loads between nodes.

Using a load-sharing mirror to balance loads
A load-sharing mirror reduces the network traffic to a FlexVol volume by providing
additional read-only access to clients. You can create and manage load-sharing mirrors to
distribute read-only traffic away from a FlexVol volume. Load-sharing mirrors do not support
Infinite Volumes.

A set of load-sharing mirrors consists of a source volume that can fan out to one or more
destination volumes. Each load-sharing mirror in the set must belong to the same Storage
Virtual Machine (SVM) as the source volume of the set. The load-sharing mirrors should also
be created on different aggregates and accessed by different nodes in the cluster to achieve
proper load balancing of client requests.

Creating load-sharing mirror relationships

Before you can replicate data from the source FlexVol volume to the load-sharing mirror
destination volumes, you must create the mirror relationships by using the snapmirror create
command.

Steps

1. Use the snapmirror create command with the -type LS parameter to create
a load-sharing mirror relationship between the source volume and a
destination volume.

Example

The following command creates a load-sharing mirror relationship between the source root volume vs1_root for Vserver vs1 and the load-sharing destination volume vs1_m1, and specifies the default hourly update schedule, which is a NetApp best practice if you do not have an existing schedule. If you do have an existing schedule configured, then you can specify that instead.

cluster1::> snapmirror create -source-path //vs1/vs1_root -destination-path //vs1/vs1_m1 -type LS -schedule hourly
[Job 171] Job is queued: snapmirror create the relationship with destination //vs1/vs1_m1
[Job 171] Job succeeded: SnapMirror: done

When you create a relationship for a load-sharing mirror, the attributes for
that load-sharing mirror (throttles, update schedules, and so on) are
shared by all of the load-sharing mirrors that share the same source
volume.

2. Repeat Step 1 to add a load-sharing mirror relationship to the destination volume on each node in the cluster.

Example

The following command creates load-sharing mirror relationships between the Vserver root volume vs1_root and the destination volumes vs1_m2, vs1_m3, and vs1_m4. The -schedule parameter does not need to be used again, because Data ONTAP automatically applies the same schedule to the set of all load-sharing mirrors that share the same source volume.

cluster1::> snapmirror create -source-path //vs1/vs1_root -destination-path //vs1/vs1_m2 -type LS
[Job 172] Job is queued: snapmirror create the relationship with destination //vs1_m2
[Job 172] Job succeeded: SnapMirror: done

cluster1::> snapmirror create -source-path //vs1/vs1_root -destination-path //vs1/vs1_m3 -type LS
[Job 173] Job is queued: snapmirror create the relationship with destination //vs1_m3
[Job 173] Job succeeded: SnapMirror: done

cluster1::> snapmirror create -source-path //vs1/vs1_root -destination-path //vs1/vs1_m4 -type LS
[Job 174] Job is queued: snapmirror create the relationship with destination //vs1_m4
[Job 174] Job succeeded: SnapMirror: done

Load-sharing mirrors are read-only unless you access them through the admin (/.admin) share.

Check out page 57 (7.2 Accessing Load-Sharing Mirror Volumes) of the following TR:

SnapMirror Configuration and Best Practices Guide for Clustered Data ONTAP -
http://www.netapp.com/us/media/tr-4015.pdf

"By default, all client requests for access to a volume in an LS mirror set are granted read-
only access. Read-write access is granted by accessing a special administrative mount point,
which is the path that servers requiring read-write access into the LS mirror set must mount.
All other clients will have read-only access."
When you are accessing the admin share for write access, you are accessing the source
volume."After changes are made to the source volume, the changes must be replicated to the
rest of the volumes in the LS mirror set using the snapmirror update-ls-set command, or with
a scheduled update."

LS mirrors are for file access (NAS), not for block (SAN).

Netapp Load-Sharing Mirrors on Vserver Root Volume


Why do you create load-sharing mirrors on the root volume of a vserver? For two main reasons:

1. To protect the vserver root volume in case of a disaster in which the root volume is lost

2. To load-balance client requests

In the case of a vserver root volume disaster, any of the load-sharing mirror destinations can be promoted to a full read/write root volume.

In the case of load-balancing client requests, you need a load-sharing mirror set up on each node in your cluster. If a client requests data from a volume on a node that does not hold the root volume, the request must first reference the root volume on the node where it resides before the data can be reached.

For example:

If I have a two-node cluster with Node1 and Node2, where Node1 holds the root volume and Node2 holds a load-sharing mirror of the root, and I have data on a volume on Node2: without the load-sharing mirror, the client would have to reference the root volume on Node1 in order to access data on Node2.

With a load-sharing mirror on Node2, the client can access data on Node2 without having to reference the root volume on Node1.

Netapp Load-Sharing Mirror Setup


Scenario: Node1 and Node2. The root volume is in a vserver called corporate and is named root.

CLUSTER::> volume create -vserver corporate -volume root_m01 -aggregate aggr1node1 -size 1GB -type DP

CLUSTER::> volume create -vserver corporate -volume root_m02 -aggregate aggr1node2 -size 1GB -type DP

CLUSTER::> snapmirror create -source-path CLUSTER://corporate/root -destination-path CLUSTER://corporate/root_m01 -type LS

CLUSTER::> snapmirror create -source-path CLUSTER://corporate/root -destination-path CLUSTER://corporate/root_m02 -type LS

CLUSTER::> snapmirror initialize-ls-set -source-path CLUSTER://corporate/root
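Before adding a schedule, you can confirm that the LS relationships were created and initialized (a quick check; the exact output columns vary by Data ONTAP version):

CLUSTER::> snapmirror show -type LS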

At this point we want to create a schedule so that the load-sharing mirror updates periodically

CLUSTER::> job schedule interval create -name 20mins -minutes 20

Now we will apply the schedule to the snapmirror job

CLUSTER::> snapmirror modify -source-path CLUSTER://corporate/root -destination-path * -schedule 20mins

Let's say we add two more nodes to the cluster to make a four-node cluster. We would create a volume on each new node (as in steps 1 and 2), create the SnapMirror relationships (as in steps 3 and 4), and initialize each relationship separately. So in step 5 we would replace initialize-ls-set with:

CLUSTER::> snapmirror initialize -source-path CLUSTER://corporate/root -destination-path CLUSTER://corporate/root_m03 -type LS

CLUSTER::> snapmirror initialize -source-path CLUSTER://corporate/root -destination-path CLUSTER://corporate/root_m04 -type LS

and we would apply the 20mins job schedule to these new destinations by re-applying the snapmirror modify command:

CLUSTER::> snapmirror modify -source-path CLUSTER://corporate/root -destination-path * -schedule 20mins

____________________________________

Roles
RBAC: PREDEFINED ROLES IN CLUSTERED DATA ONTAP
Clustered Data ONTAP includes administrative access-control roles that can be
used to subdivide
administration duties for SVM administration tasks.
The vsadmin role is the superuser role for an SVM. The admin role is the superuser role for a cluster.
Clustered Data ONTAP 8.1 and later versions support the vsadmin role. The vsadmin role grants the data SVM administrator full administrative privileges for the SVM. Additional roles include the vsadmin-protocol role, the vsadmin-readonly role, and the vsadmin-volume role. Each of these roles provides a unique set of SVM administration privileges.
A cluster administrator with the readonly role can grant read-only capabilities. A cluster
administrator with the none role cannot grant capabilities.

Differences between cluster and SVM


administrators
Cluster administrators administer the entire cluster and the Storage Virtual Machines (SVMs,
formerly known as Vservers) it contains. SVM administrators administer only their own data SVMs.

Cluster administrators can administer the entire cluster and its resources. They can also set up data
SVMs and delegate SVM administration to SVM administrators. The specific capabilities that cluster
administrators have depend on their access-control roles. By default, a cluster administrator with the
admin account name or role name has all capabilities for managing the cluster and SVMs.

SVM administrators can administer only their own SVM storage and network resources, such as
volumes, protocols, LIFs, and services. The specific capabilities that SVM administrators have
depend on the access-control roles that are assigned by cluster administrators.

Managing access-control roles


Role-based access control (RBAC) limits users' administrative access to the level granted for their
role, enabling you to manage users by the role they are assigned to. Data ONTAP provides several
predefined roles. You can also create additional access-control roles, modify them, delete them, or
specify account restrictions for users of a role.

Predefined roles for SVM administrators


The five predefined roles for an SVM administrator are: vsadmin, vsadmin-volume, vsadmin-protocol, vsadmin-backup, and vsadmin-readonly. In addition to these predefined roles, you can create customized SVM administrator roles by assigning a set of capabilities.
Each SVM can have its own user and administration authentication domain. You can delegate the administration of an SVM to an SVM administrator after creating the SVM and user accounts.
The security login role show command displays the commands that a role can access. To create a role, use the security login role create command.

Data ONTAP prevents you from modifying predefined roles.

The number of customized access-control roles that you can create per cluster without any performance degradation depends on the overall Data ONTAP configuration; however, it is best to limit the number of customized access-control roles to 500 or fewer per cluster.
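As an illustration, a customized read-only role scoped to the volume commands might be created and verified like this (a sketch; the role name, SVM, and command directory are assumptions, so grant whatever the role actually needs to see):

cluster1::> security login role create -vserver vs1 -role vol_readonly -cmddirname volume -access readonly
cluster1::> security login role show -vserver vs1 -role vol_readonly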

ROOT
Rules governing node root volumes and root aggregates
A node's root volume contains special directories and files for that node. The root aggregate contains
the root volume. A few rules govern a node's root volume and root aggregate.
A node's root volume is a FlexVol volume that is installed at the factory or by setup software. It is
reserved for system files, log files, and core files. The directory name is /mroot, which is accessible
only through the systemshell by technical support. The minimum size for a node's root volume
depends on the platform model.
The following rules govern the node's root volume:
Unless technical support instructs you to do so, do not modify the configuration or content of
the root volume.
Do not store user data in the root volume.
Storing user data in the root volume increases the storage giveback time between nodes in an
HA pair.
Contact technical support if you need to designate a different volume to be the new root
volume or move the root volume to another aggregate.
The root aggregate must be dedicated to the root volume only.
You must not include or create data volumes in the root aggregate.

The root volume of the SVM is a FlexVol volume that resides at the top level of the namespace hierarchy; additional volumes are mounted to the SVM root volume to extend the namespace. As volumes are created for the SVM, the root volume of the SVM contains junction paths.

How SVM root volumes are used for data access


Every Storage Virtual Machine (SVM) has a root volume that contains the paths where the data
volumes are junctioned into the namespace. NAS clients' data access is dependent on the health of
the root volume in the namespace and SAN clients' data access is independent of the root volume's
health in the namespace.
The root volume serves as the entry point to the namespace provided by that SVM. The root volume
of the SVM is a FlexVol volume that resides at the top level of the namespace hierarchy and contains
the directories that are used as mount points, the paths where data volumes are junctioned into the
namespace.
In the unlikely event that the root volume of an SVM namespace is unavailable, NAS clients cannot
access the namespace hierarchy and therefore cannot access data in the namespace. For this reason, it
is best to create a load-sharing mirror copy for the root volume on each node of the cluster so that the
namespace directory information remains available in the event of a node outage or failover.
You should not store user data in the root volume of an SVM. The root volume of the SVM should be
used for junction paths, and user data should be stored in non-root volumes of the SVM.

Root volume security style


mixed for NFS and CIFS
ntfs for CIFS
unix for NFS, iSCSI and FC

You must ensure that the following requirements are met:


The cluster must have at least one non-root aggregate with sufficient space.
There must be at least 1 GB of space on the aggregate for the SVM root volume.

Create a load-sharing mirror copy for the root volume on each node of the cluster so that the
namespace directory information remains available in the event of a node outage or failover

Restoring the root volume of an SVM


If the root volume of a Storage Virtual Machine (SVM) becomes unavailable, clients cannot mount
the root of the namespace. In such cases, you must restore the root volume by promoting another
volume or creating a new root volume to facilitate data access to the clients.
Before you begin
The SVM root volume must be protected by using a load-sharing mirror copy or a data-protection mirror copy.
About this task
You can promote any volume that does not have other volumes junctioned to it. When a new volume is promoted as the SVM root volume, the data volumes are associated with the new SVM root volume.

Choices
For SVMs with FlexVol volumes, promote one of the following volumes to restore the root volume:
A load-sharing mirror copy (see Promoting a load-sharing mirror copy; a minimal CLI sketch follows this list)
A data-protection mirror copy (see Promoting a data-protection mirror copy)
A new FlexVol volume (see Promoting a new FlexVol volume)
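For the load-sharing mirror case, the promotion itself is a single command. A hedged sketch using the vs1 names from the earlier LS mirror examples (snapmirror promote makes the chosen LS destination the new read/write root volume):

cluster1::> snapmirror promote -destination-path //vs1/vs1_m1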

The Data SVM Root Volume


Exists on each data SVM (one per data SVM)
Is the root of the data SVM namespace
Is a normal flexible volume
Contains junctions
Can be moved, copied, and backed up
Can have Snapshot copies
Is usually mirrored

PHYSICAL LAYER: AGGREGATE TYPES


Each node of an HA pair requires three disks (RAID-DP) to be used in the root
aggregate. The root aggregate is created when the system is first initialized. This
aggregate contains vol0, which contains the configuration information and log
files. The root aggregate should not be used for user data.

___________________________________________

CFO and SFO


When a node is first initialized, a root aggregate is created. In clustered Data
ONTAP, the root aggregate (aggr0) is assigned the controller failover (CFO) HA
policy. Because this aggregate is required for a node to operate, it fails over last
(after all operations are complete and the node shuts down) and is the first to be
given back (so that the node can boot).

NOTE: Do not store data volumes on the root aggregate (aggr0). Volumes on
CFO aggregates are not
available to clients or hosts during failover

Data aggregates are treated a little differently. Data can still be served from the
node that has taken over. Additionally, the client might not even be mounted to
the node in the HA pair that is failing over. When the system creates an
aggregate, it assumes that the aggregate is for data and assigns the storage
failover (SFO) HA policy to the aggregate. With the SFO policy, the data
aggregates will fail over first and fail back last in a serial manner.

Hardware-assisted takeover speeds up the takeover process by using a node's remote management device (Service Processor [SP] or Remote LAN Module [RLM]) to detect failures and quickly initiate the takeover, rather than waiting for Data ONTAP to recognize that the partner's heartbeat has stopped. Without hardware-assisted takeover, if a failure occurs, the partner waits until it notices that the node is no longer giving a heartbeat, confirms the loss of heartbeat, and then initiates the takeover.

HA policy and how it affects takeover and giveback


operations
Data ONTAP automatically assigns an HA policy of CFO or SFO to an aggregate that determines
how storage failover operations (takeover and giveback) occur for the aggregate and its volumes.
HA policy is assigned to and required by each aggregate on the system. The two options, CFO
(controller failover), and SFO (storage failover), determine the aggregate control sequence Data
ONTAP uses during storage failover and giveback operations.
Although the terms CFO and SFO are sometimes used informally to refer to storage failover
(takeover and giveback) operations, they actually represent the HA policy assigned to the aggregates.
For example, the terms SFO aggregate or CFO aggregate simply refer to the aggregate's HA policy
assignment.
Aggregates created on clustered Data ONTAP systems (except for the root aggregate containing
the root volume) have an HA policy of SFO. Manually initiated takeover is optimized for
performance by relocating SFO (non-root) aggregates serially to the partner prior to takeover.
During the giveback process, aggregates are given back serially after the taken-over system boots
and the management applications come online, enabling the node to receive its aggregates.
Because aggregate relocation operations entail reassigning aggregate disk ownership and shifting
control from a node to its partner, only aggregates with an HA policy of SFO are eligible for
aggregate relocation.
The root aggregate always has an HA policy of CFO and is given back at the start of the giveback
operation since this is necessary to allow the taken-over system to boot. All other aggregates are
given back serially after the taken-over system completes the boot process and the management
applications come online, enabling the node to receive its aggregates.
Note: Changing the HA policy of an aggregate from SFO to CFO is a Maintenance mode
operation. Do not modify this setting unless directed to do so by a customer support representative
_______________________________________________________

Information for configuring DNS


You must configure DNS on the SVM before creating a CIFS server.

Configuring DNS services for the SVM


You must configure DNS services for the Storage Virtual Machine (SVM) before creating the CIFS
server. Generally, the DNS name servers are the Active Directory-integrated DNS servers for the
domain that the CIFS server will join.

About this task


Active Directory-integrated DNS servers contain the service location records (SRV) for the domain
LDAP and domain controller servers. If the Storage Virtual Machine (SVM) cannot find the Active
Directory LDAP servers and domain controllers, CIFS server setup fails.
Storage Virtual Machines (SVMs) use the hosts name services ns-switch database to determine which name services to use and in which order to use them when looking up information about hosts. The two supported name services for the hosts database are files and dns.
You must ensure that dns is one of the sources before you create the CIFS server.
Configuring DNS on the Vserver

You must configure DNS on the Vserver before creating the CIFS server. Generally, the DNS
name servers are the Active Directory-integrated DNS servers for the domain that the CIFS
server will join.

About this task


Active Directory-integrated DNS servers contain the service location records (SRV) for the
domain LDAP and domain controller servers. If the Vserver cannot find the Active Directory
LDAP servers and domain controllers, CIFS server setup fails.

Steps

1. Configure DNS services:

vserver services dns create -vserver vserver_name -domains FQDN[,...] -name-servers IP-address[,...]

The domain path is constructed from the values in the -domains parameter.

2. Verify that the DNS configuration is correct and that the service is enabled
by using the vserver services dns show command.

Example
The following example configures the DNS service on Vserver vs1:

cluster1::> vserver services dns create -vserver vs1 -domains iepubs.local,example.com -name-servers 10.1.1.50,10.1.1.51

cluster1::> vserver services dns show -vserver vs1
                                                     Name
Vserver   State     Domains                          Servers
--------  --------  -------------------------------  -------------
vs1       enabled   iepubs.local, example.com        10.1.1.50,
                                                     10.1.1.51

Configuring DNS services for the SVM

You must configure DNS services for the Storage Virtual Machine (SVM) before creating the
CIFS server. Generally, the DNS name servers are the Active Directory-integrated DNS
servers for the domain that the CIFS server will join.

About this task


Active Directory-integrated DNS servers contain the service location records (SRV) for the
domain LDAP and domain controller servers. If the Storage Virtual Machine (SVM) cannot
find the Active Directory LDAP servers and domain controllers, CIFS server setup fails.

Storage Virtual Machines (SVMs) use the hosts name services ns-switch database to determine which name services to use and in which order to use them when looking up information about hosts. The two supported name services for the hosts database are files and dns.

You must ensure that dns is one of the sources before you create the CIFS server.

Steps

1. Determine what the current configuration is for the hosts name services
database by using the vserver services name-service ns-switch show
command.

Example

In this example, the hosts name service database uses the default settings.

vserver services name-service ns-switch show -vserver vs1 -database hosts

Vserver: vs1
Name Service Switch Database: hosts
Name Service Source Order: files, dns

2. If needed, perform the following actions:

a. Add the DNS name service to the hosts name service database in
the desired order or reorder the sources by using the vserver
services name-service ns-switch modify command.

Example

In this example, the hosts database is configured to use DNS and local files in
that order.

vserver services name-service ns-switch modify -vserver vs1 -database hosts -sources dns,files

b. Verify that the name services configuration is correct by using the vserver services name-service ns-switch show command.

Example

vserver services name-service ns-switch show -vserver vs1 -database hosts

Vserver: vs1
Name Service Switch Database: hosts
Name Service Source Order: dns, files

3. Configure DNS services by using the vserver services name-service dns create command.

Example

vserver services name-service dns create -vserver vs1 -domains example.com,example2.com -name-servers 10.0.0.50,10.0.0.51

4. Verify that the DNS configuration is correct and that the service is enabled
by using the vserver services name-service dns show command.

Example

vserver services name-service dns show -vserver vs1

Vserver: vs1
Domains: example.com, example2.com
Name Servers: 10.0.0.50, 10.0.0.51
Enable/Disable DNS: enabled
Timeout (secs): 2
Maximum Attempts: 1

__________________
CIFS server creation (Article Number 000027392)

This article describes the procedure that should be followed to create a CIFS vserver using the CLI and System Manager.

Procedure

CLI:
Perform the following steps:

1. Run the vserver setup command to start the vserver setup wizard:

den-cluster::> vserver setup

Welcome to the Vserver Setup Wizard, which will lead you through
the steps to create a virtual storage server that serves data to clients.

You can enter the following commands at any time:

"help" or "?" if you want to have a question clarified,

"back" if you want to change your answers to previous questions, and

"exit" if you want to quit the Vserver Setup Wizard. Any changes

you made before typing "exit" will be applied.

You can restart the Vserver Setup Wizard by typing "vserver setup". To accept
a default or omit a question, do not enter a value.

Step 1. Create a Vserver.

You can type "back", "exit", or "help" at any question.

2. Enter a name for the vserver:

Enter the Vserver name: vs_cifs

3. Select the protocols you want to configure:

Choose the Vserver data protocols to be configured {nfs, cifs, fcp, iscsi}:

cifs

4. Select the client services:

Choose the Vserver client services to be configured {ldap, nis, dns}:

dns

5. Select the aggregate where you want the vserver root volume to reside:

Enter the Vserver's root volume aggregate [n1_aggr0]:


6. Select the language for the vserver:

Note: The language once selected, cannot be modified.

Enter the Vserver language setting, or "help" to see all languages [C]:

7. Select the vserver root volume security style:

Enter the Vserver root volume's security style {unix, ntfs, mixed}
[ntfs]:

Vserver creation might take some time to finish....

Vserver vs_cifs with language set to C created. The permitted protocols are cifs.

8. The Step 2 below is optional:

Step 2: Create a data volume

You can type "back", "exit", or "help" at any question.

Do you want to create a data volume? {yes, no} [yes]:yes

Enter the volume name [vol1]: vol3

Enter the name of the aggregate to contain this volume [n1_aggr0]:

Enter the volume size: 20m

Enter the volume junction path [/vol/vol3]: /vol3

It can take up to a minute to create a volume...

Volume vol3 of size 20MB created on aggregate n1_aggr0 successfully.

Do you want to create an additional data volume? {yes, no} [no]:

no

9. The Step 3 below is optional:

Step 3: Create a logical interface.

You can type "back", "exit", or "help" at any question.

Do you want to create a logical interface? {yes, no} [yes]: yes

Enter the LIF name [lif1]: nas_lif1

Which protocols can use this interface [cifs]:


Enter the home node [den-cluster-01]:

Enter the home port {e0a, e0b, e0c, e0d, e0e} [e0a]:

Enter the IP address: 10.26.133.136

Enter the network mask: 255.255.255.0

Enter the default gateway IP address: 10.26.133.1

LIF nas_lif1 on node den-cluster-01, on port e0a with IP address 10.26.133.136 was created.

Do you want to create an additional LIF now? {yes, no} [no]:

10. The Step 4 below is required:

Step 4: Configure DNS (Domain Name Service).

You can type "back", "exit", or "help" at any question.

Do you want to configure DNS? {yes, no} [yes]:

Enter the comma separated DNS domain names: usps.den

Enter the comma separated DNS server IP addresses: 10.26.129.50

DNS for Vserver vs_cifs is configured.

11. The Step 5 below is required:

Step 5: Configure CIFS.

You can type "back", "exit", or "help" at any question.

Do you want to configure CIFS? {yes, no} [yes]:

Enter the CIFS server name [VS_CIFS-DEN-CLU]:

Enter the Active Directory domain name: usps.den

In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"usps.den" domain.

Enter the user name: administrator

Enter the password:

12. Set up a CIFS share. This step (at this stage) is optional.

CIFS server "VS_CIFS-DEN-CLU" created and successfully joined the


domain.

Do you want to share a data volume with CIFS clients? {yes, no} [yes]:

yes

Enter the CIFS share name [vol3]:

Enter the CIFS share path [/vol3]:

Select the initial level of access that the group "Everyone" has to the share
{No_access, Read, Change, Full_Control} [No_access]: Full_Control

The CIFS share "vol3" created successfully.

Default UNIX users and groups created successfully.

UNIX user "pcuser" set as the default UNIX user for unmapped CIFS
users.

Default export policy rule created successfully.

Vserver vs_cifs, with protocol(s) cifs, and service(s) dns has been configured successfully.

System Manager:

1. Open System Manager, log in to your cluster, and select the vserver context on the
left pane:
2. Click Create. The Create Vserver Wizard will be displayed:

3. Type a name for the vserver, and then select an aggregate, language, and CIFS:

Note: The language once selected, cannot be modified.


4. Enter the name of the domain and add the IP addresses of the domain servers:

A summary of the information entered will be displayed:


5. Set up the vserver LIFs for the CIFS vserver by following the instructions in the
successive screens:
6. Enter the cifs server name, domain and admin username/password:

7. Create the root user and group in the successive screens:


Note: This is created for internal Cluster-Mode purposes.
For more information, see the clustered Data ONTAP CIFS/SMB Server Configuration
Express Guide.

_______________________________________

LIFS roles

A LIF represents a network access point to a node in the cluster. You can configure LIFs on ports
over which the cluster sends and receives communications over the network.
A cluster administrator can create, view, modify, migrate, or delete LIFs. An SVM administrator can
only view the LIFs associated with the SVM.

Logical interfaces (LIFs): for clustered Data ONTAP only


Clustered Data ONTAP
LIFs can be configured on physical ports, interface groups, or VLANs
LIFs are owned by data SVMs
Ports, interface groups, and VLANs can be used across multiple LIFs and SVMs

Logical Interfaces
An IP address or World Wide Port Name (WWPN) is associated with a LIF
If subnets are configured (recommended), IP addresses are automatically
assigned when a LIF is created
If subnets are not configured, IP addresses must be manually assigned when
LIF is created
WWPNs are automatically assigned when an FC LIF is created
One node-management LIF exists per node
One cluster-management LIF exists per cluster
Two* cluster LIFs exist per node
Multiple data LIFs are allowed per port (Client-facing: NFS, CIFS, iSCSI,
and FC access)
For intercluster peering, intercluster LIFs must be created on each node

What LIFs are


A LIF (logical interface) is an IP address or WWPN with associated characteristics, such as a role, a
home port, a home node, a list of ports to fail over to, and a firewall policy. You can configure LIFs
on ports over which the cluster sends and receives communications over the network.
LIFs can be hosted on the following ports:
Physical ports that are not part of interface groups
Interface groups
VLANs
Physical ports or interface groups that host VLANs
When a SAN protocol such as FC is configured on a LIF, the LIF is associated with a WWPN.

Roles for LIFs


A LIF role determines the kind of traffic that is supported over the LIF, along with the failover rules
that apply and the firewall restrictions that are in place. A LIF can have any one of the five roles:
node management, cluster management, cluster, intercluster, and data.

node management LIF


A LIF that provides a dedicated IP address for managing a particular node in a cluster.
Node management LIFs are created at the time of creating or joining the cluster. These
LIFs are used for system maintenance, for example, when a node becomes inaccessible
from the cluster.

cluster management LIF


A LIF that provides a single management interface for the entire cluster.

A cluster-management LIF can fail over to any node-management or data port in the
cluster. It cannot fail over to cluster or intercluster ports.

cluster LIF
A LIF that is used to carry intracluster traffic between nodes in a cluster. Cluster LIFs
must always be created on 10-GbE network ports.
Cluster LIFs can fail over between cluster ports on the same node, but they cannot be
migrated or failed over to a remote node. When a new node joins a cluster, IP addresses
are generated automatically. However, if you want to assign IP addresses manually to the
cluster LIFs, you must ensure that the new IP addresses are in the same subnet range as
the existing cluster LIFs.

data LIF
A LIF that is associated with a Storage Virtual Machine (SVM) and is used for
communicating with clients.
You can have multiple data LIFs on a port. These interfaces can migrate or fail over
throughout the cluster. You can modify a data LIF to serve as an SVM management LIF
by modifying its firewall policy to mgmt.
For more information about SVM management LIFs, see the Clustered Data ONTAP
System Administration Guide for Cluster Administrators.
Sessions established to NIS, LDAP, Active Directory, WINS, and DNS servers use data
LIFs.

intercluster LIF
A LIF that is used for cross-cluster communication, backup, and replication. You must
create an intercluster LIF on each node in the cluster before a cluster peering relationship
can be established.
These LIFs can only fail over to ports in the same node. They cannot be migrated or failed
over to another node in the cluster.

To create a LIF in Data ONTAP 8.3:

c1::> network interface create -vserver SVM_A-1 -lif SVM_A-1_lif2 -role data
-data-protocol nfs -home-node c1-02 -home-port e0f -subnet-name subnet_A
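For comparison, an intercluster LIF (required on each node before cluster peering, as noted above) is created with -role intercluster. A sketch with assumed names and addresses; whether the owning vserver is the cluster (admin) SVM or a node SVM depends on the Data ONTAP release, so check network interface create in your version:

c1::> network interface create -vserver c1 -lif c1_icl01 -role intercluster -home-node c1-01 -home-port e0e -address 192.0.2.20 -netmask 255.255.255.0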

DATA ONTAP NETWORKING


Data ONTAP systems can be analyzed as having three network layers:
Physical: network ports
Virtual: interface groups (ifgrps) and virtual LANs (VLANs)
Logical interfaces (LIFs): for clustered Data ONTAP only

Port Types
Physical port
Ethernet
FC
Unified Target Adapter (UTA)
UTA is a 10-GbE port
UTA2 is configured as either:
10-GbE
or 16-Gbps FC
Virtual port
Interface group (ifgrp)
Virtual LAN (VLAN)
PORT TYPES
Port types can be either physical or virtual.
Physical:
Ethernet port: 1-Gb or 10-Gb Ethernet (10-GbE) ports that can be used in
NFS, CIFS, and iSCSI
environments
FC port: 4-Gbps, 8-Gbps, or 16-Gbps port that can be used as a target in FC
SAN environment. It can be
configured as an initiator for disk shelves or tape drives.
Unified Target Adapter (UTA) port: 10-GbE port that can be used in NFS,
CIFS, iSCSI and FCoE
environments
Unified Target Adapter 2 (UTA2) port: Configured as either a 10-GbE
Ethernet or 16-Gbps FC port
10-Gb ports can be used in NFS, CIFS, iSCSI, and FCoE environments
16-Gbps FC ports can be used as targets in FC SAN environments

Virtual:
Interface group: An interface group implements link aggregation by
providing a mechanism to group together multiple network interfaces (links) into
one logical interface (aggregate). After an interface group is created, it is
indistinguishable from a physical network interface.
VLAN: Traffic from multiple VLANs can traverse a link that interconnects two
switches by using VLAN tagging. A VLAN tag is a unique identifier that indicates
the VLAN to which a frame belongs. A VLAN tag is included in the header of
every frame that is sent by an end-station on a VLAN. On receiving a tagged
frame, a switch identifies the VLAN by inspecting the tag, then forwards the
frame to the destination in the identified VLAN.

INTERFACE GROUPS
The following network terms are described as they are implemented within Data
ONTAP:
Interface groups aggregate network interfaces into a trunk.
You can implement link aggregation on your storage system to group
multiple network interfaces (links)
into one logical interface (aggregate).
After an interface group is created, the interface group is indistinguishable
from a physical network
interface.
Be aware that different vendors refer to interface groups by the following terms:
Virtual aggregations
Link aggregations
Trunks
EtherChannel
Interface groups can be implemented in two modes: single-mode and multimode.
In single-mode link aggregation, one interface is active, and the other
interface is inactive (on standby).
In multimode, all links in the link aggregation are active.
A dynamic multimode interface group can detect loss of link status and data flow.
Multimode requires a compatible switch to implement configuration.
Data ONTAP link aggregation complies with the static IEEE 802.3ad standard; dynamic multimode
link aggregation uses the Link Aggregation Control Protocol (LACP).

Creating Interface Groups


Clustered Data ONTAP
c1::> network port ifgrp create -node c1-01 -ifgrp a0a
-distr-func {mac|ip|sequential|port}
-mode {multimode|multimode_lacp|singlemode}
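After the interface group exists, member ports are added to it, and a VLAN can be layered on top of it if needed. A brief sketch; the node, ports, and VLAN ID are assumptions:

c1::> network port ifgrp add-port -node c1-01 -ifgrp a0a -port e0c
c1::> network port ifgrp add-port -node c1-01 -ifgrp a0a -port e0d
c1::> network port vlan create -node c1-01 -vlan-name a0a-100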

Failover groups

There are two types of failover groups: those created automatically by the
system when a broadcast domain is created, and those that a system
administrator defines.

Failover Groups
These failover groups are created automatically based on the network ports
that are present in the particular broadcast domain:
A Cluster failover group contains the ports in the Cluster broadcast
domain
These ports are used for cluster communication and include all cluster ports from
all nodes in the cluster
A Default failover group contains the ports in the Default broadcast
domain
These ports are used primarily to serve data, but they are also used for cluster
management and node management
Additional failover groups are created for each broadcast domain that you
create
The failover group has the same name as the broadcast domain, and it contains
the same ports as those in the broadcast domain

Failover Groups
Custom failover groups can be created for specific LIF failover
functionality when:
The automatic failover groups do not meet your requirements
Only a subset of the ports that are available in the broadcast
domain are required
Consistent performance is required
For example, create a failover group consisting of only 10-GbE ports that
enables LIFs to fail over only to high-bandwidth ports

A failover group contains a set of network ports (physical ports, VLANs, and interface groups) from
one or more nodes in a cluster. The network ports that are present in the failover group define the
failover targets available for the LIF. A failover group can have cluster management, node
management, intercluster, and NAS data LIFs assigned to it.

Creating a failover group


You create a failover group of network ports so that a LIF can automatically migrate to a different
port if a link failure occurs on the LIF's current port. This enables the system to reroute network
traffic to other available ports in the cluster.
About this task
You use the network interface failover-groups create command to create the group and
to add ports to the group.
The ports added to a failover group can be network ports, VLANs, or interface groups (ifgrps).
All of the ports added to the failover group must belong to the same broadcast domain.
A single port can reside in multiple failover groups.
If you have LIFs in different VLANs or broadcast domains, you must configure failover groups
for each VLAN or broadcast domain.
Failover groups do not apply in SAN iSCSI or FC environments.
Step
1. Create a failover group:
network interface failover-groups create -vserver vserver_name
-failover-group failover_group_name -targets ports_list
vserver_name is the name of the SVM that can use the failover group.
failover_group_name is the name of the failover group you want to create.
ports_list is the list of ports that will be added to the failover group.
Ports are added in the format <node_name>:<port_number>, for example, node1:e0c.
Example
The following command creates failover group fg3 for SVM vs3 and adds two ports:
cluster1::> network interface failover-groups create -vserver vs3
-failover-group fg3 -targets cluster1-01:e0e,cluster1-02:e0e
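Once the failover group exists, it is attached to a LIF so that the LIF fails over only to the ports in that group. A short sketch; the LIF name and failover policy are assumptions, so pick the policy that matches your design:

cluster1::> network interface modify -vserver vs3 -lif vs3_data1 -failover-group fg3 -failover-policy broadcast-domain-wide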

Port sets are used to manually manage paths. When a LUN is mapped in 8.3, Data ONTAP identifies the node that owns the aggregate with the LUN and its HA partner as reporting nodes (also called local nodes). The reporting nodes report the LUN to the host.

Creating port sets and binding igroups to port sets


In addition to using Selective LUN Map (SLM), you can create a port set and bind the port set to an
igroup to further limit which LIFs can be used by an initiator to access a LUN. If you do not bind a
port set to an igroup, then all the initiators in the igroup can access mapped LUNs through all the
LIFs on the node owning the LUN and the owning node's HA partner.

It is advantageous to use port sets with SLM when you have multiple targets on a node and you want
to restrict access of a certain target to a certain initiator. Without port sets, all targets on the node will
be accessible by all the initiators with access to the LUN through the node owning the LUN and the
owning node's HA partner.

Verify that your port sets and LIFs are correct:

lun portset show -vserver vserver_name

Vserver   Portset    Protocol  Port Names   Igroups
vs3       portset0   iscsi     lif0,lif1    igroup1

Commands for managing port sets


Data ONTAP provides commands to manage your port sets.
See How to limit LUN access in a virtualized environment for more information about how you can
use port sets to limit LUN access.
If you want to...                               Use this command...
Create a new port set                           lun portset create
Add LIFs to a port set                          lun portset add
Display LIFs in a port set                      lun portset show
Display igroups that are bound to port sets     lun portset show
Bind an igroup to a port set                    lun igroup bind
Unbind an igroup from a port set                lun igroup unbind
Remove a LIF from a port set                    lun portset remove

You can also create port sets to make a LUN visible only on specific target ports. A port set
consists of a group of FC target ports. You can bind an igroup to a port set. Any host in the igroup
can access the LUNs only by connecting to the target ports in the port set.
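Putting those commands together, a minimal sketch that builds the port set shown in the verification output above (portset0 with lif0 and lif1, bound to igroup1 on SVM vs3):

cluster1::> lun portset create -vserver vs3 -portset portset0 -protocol iscsi
cluster1::> lun portset add -vserver vs3 -portset portset0 -port-name lif0
cluster1::> lun portset add -vserver vs3 -portset portset0 -port-name lif1
cluster1::> lun igroup bind -vserver vs3 -igroup igroup1 -portset portset0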

giveback and takeover

When takeovers occur


You can initiate takeovers manually or they can occur automatically when a failover event happens,
depending on how you configure the HA pair. In some cases, takeovers occur automatically,
regardless of configuration.
Takeovers can occur under the following conditions:
When you manually initiate takeover with the storage failover takeover command
When a node in an HA pair with the default configuration for immediate takeover on panic
undergoes a software or system failure that leads to a panic
By default, the node automatically performs a giveback, returning the partner to normal operation
after the partner has recovered from the panic and booted up.
When a node in an HA pair undergoes a system failure (for example, a loss of power) and cannot
reboot
Note: If the storage for a node also loses power at the same time, a standard takeover is not
possible.
When a node does not receive heartbeat messages from its partner
This could happen if the partner experienced a hardware or software failure that did not result in a
panic but still prevented it from functioning correctly.
When you halt one of the nodes without using the -inhibit-takeover true parameter
Note: In a two-node cluster with cluster HA enabled, halting or rebooting a node using the
-inhibit-takeover true parameter causes both nodes to stop serving data unless you first
disable cluster HA and then assign epsilon to the node that you want to remain online.
When you reboot one of the nodes without using the -inhibit-takeover true parameter
The -onreboot parameter of the storage failover command is enabled by default.
When hardware-assisted takeover is enabled and it triggers a takeover when the remote
management device (Service Processor) detects failure of the partner node

How hardware-assisted takeover speeds up takeover


Hardware-assisted takeover speeds up the takeover process by using a node's remote management
device (Service Processor) to detect failures and quickly initiate the takeover rather than waiting for
Data ONTAP to recognize that the partner's heartbeat has stopped.
Without hardware-assisted takeover, if a failure occurs, the partner waits until it notices that the node
is no longer giving a heartbeat, confirms the loss of heartbeat, and then initiates the takeover.
The hardware-assisted takeover feature uses the following process to take advantage of the remote
management device and avoid that wait:
1. The remote management device monitors the local system for certain types of failures.
2. If a failure is detected, the remote management device immediately sends an alert to the partner
node.
3. Upon receiving the alert, the partner initiates takeover.
Hardware-assisted takeover is enabled by default.
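Hardware-assisted takeover status can be checked, and enabled or disabled per node, along these lines (a sketch; the node name is an assumption):

cluster1::> storage failover hwassist show
cluster1::> storage failover modify -node cluster1-01 -hwassist true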
What happens during takeover
When a node takes over its partner, it continues to serve and update data in the partner's aggregates
and volumes. To do this, the node takes ownership of the partner's aggregates, and the partner's LIFs
migrate according to network interface failover rules. Except for specific SMB 3.0 connections,
existing SMB (CIFS) sessions are disconnected when the takeover occurs.
The following steps occur when a node takes over its partner:
1. If the negotiated takeover is user-initiated, aggregate relocation is performed to move data
aggregates one at a time from the partner node to the node that is performing the takeover.
The current owner of each aggregate (except for the root aggregate) is changed from the target
node to the node that is performing the takeover. There is a brief outage for each aggregate as
ownership changes. This outage is briefer than an outage that occurs during a takeover without
aggregate relocation.
You can monitor the progress using the storage failover show-takeover command.
The aggregate relocation can be avoided during this takeover instance by using the
-bypass-optimization parameter with the storage failover takeover command. To
bypass aggregate relocation during all future planned takeovers, set the
-bypass-takeover-optimization parameter of the storage failover modify
command to true.
Note: Aggregates are relocated serially during planned takeover operations to reduce client
outage. If aggregate relocation is bypassed, longer client outage occurs during planned takeover
events. Setting the -bypass-takeover-optimization parameter of the storage
failover modify command to true is not recommended in environments that have
stringent outage requirements.
2. If the user-initiated takeover is a negotiated takeover, the target node gracefully shuts down,
followed by takeover of the target node's root aggregate and any aggregates that were not
relocated in Step 1.
3. Before the storage takeover begins, data LIFs migrate from the target node to the node performing
the takeover or to any other node in the cluster based on LIF failover rules.
The LIF migration can be avoided by using the -skip-lif-migration parameter with the
storage failover takeover command.
4. Existing SMB (CIFS) sessions are disconnected when takeover occurs.
Attention: Due to the nature of the SMB protocol, all SMB sessions, except for SMB 3.0
sessions connected to shares with the Continuous Availability property set, will be
disrupted. SMB 1.0 and SMB 2.x sessions cannot reconnect after a takeover event. Therefore,
takeover is disruptive and some data loss could occur.
5. SMB 3.0 sessions established to shares with the Continuous Availability property set can
reconnect to the disconnected shares after a takeover event.
If your site uses SMB 3.0 connections to Microsoft Hyper-V and the Continuous
Availability property is set on the associated shares, takeover will be nondisruptive for those
sessions.
If the node doing the takeover panics
If the node that is performing the takeover panics within 60 seconds of initiating takeover, the
following events occur:
The node that panicked reboots.
After it reboots, the node performs self-recovery operations and is no longer in takeover mode.
Failover is disabled.
If the node still owns some of the partner's aggregates, after enabling storage failover, return these
aggregates to the partner using the storage failover giveback command.
What happens during giveback
The local node returns ownership of the aggregates and volumes to the partner node after you resolve
any issues on the partner node or complete maintenance operations. In addition, the local node
returns ownership when the partner node has booted up and giveback is initiated either manually or
automatically.
The following process takes place in a normal giveback. In this discussion, Node A has taken over
Node B. Any issues on Node B have been resolved and it is ready to resume serving data.
1. Any issues on Node B have been resolved and it displays the following message:
Waiting for giveback
2. The giveback is initiated by the storage failover giveback command or by automatic
giveback if the system is configured for it.
giveback if the system is configured for it.
This initiates the process of returning ownership of Node B's aggregates and volumes from Node
A back to Node B.
3. Node A returns control of the root aggregate first.
4. Node B completes the process of booting up to its normal operating state.
5. As soon as Node B reaches the point in the boot process where it can accept the non-root
aggregates, Node A returns ownership of the other aggregates, one at a time, until giveback is
complete.
You can monitor the progress of the giveback with the storage failover show-giveback
command.
I/O resumes for each aggregate once giveback is complete for that aggregate; this reduces the overall
outage window for each aggregate.
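A hedged sketch of a manually initiated giveback and of monitoring it (the node name is an assumption):

cluster1::> storage failover giveback -ofnode cluster1-01
cluster1::> storage failover show-giveback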


_____________________________________________________________________________
Enabling and reverting LIFs to home ports
During a reboot, some LIFs might have been migrated to their assigned failover ports. Before and
after you upgrade, revert, or downgrade a cluster, you must enable and revert any LIFs that are not on
their home ports.

Reverting a LIF to its home port


You can revert a LIF to its home port after it fails over or is migrated to a different port either
manually or automatically. If the home port of a particular LIF is unavailable, the LIF remains at its
current port and is not reverted.
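A short sketch of finding LIFs that are away from home and reverting them across all SVMs (the wildcards match every LIF, so confirm what will move before running this on a production cluster):

cluster1::> network interface show -is-home false
cluster1::> network interface revert -vserver * -lif *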
