A set of load-sharing mirrors consists of a source volume that can fan out to one or more
destination volumes. Each load-sharing mirror in the set must belong to the same Storage
Virtual Machine (SVM) as the source volume of the set. The load-sharing mirrors should also
be created on different aggregates and accessed by different nodes in the cluster to achieve
proper load balancing of client requests.
Before you can replicate data from the source FlexVol volume to the load-sharing mirror
destination volumes, you must create the mirror relationships by using the snapmirror create
command.
Steps
1. Use the snapmirror create command with the -type LS parameter to create
a load-sharing mirror relationship between the source volume and a
destination volume.
Example
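A minimal sketch, assuming an SVM named vs1 whose source volume is root_vol and a destination volume root_vol_m1 already created on another aggregate (all names are hypothetical):
cluster1::> snapmirror create -source-path vs1:root_vol -destination-path vs1:root_vol_m1 -type LS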
When you create a relationship for a load-sharing mirror, the attributes for
that load-sharing mirror (throttles, update schedules, and so on) are
shared by all of the load-sharing mirrors that share the same source
volume.
Load-sharing mirrors are read-only unless you are accessing them via the admin share.
Check out page 57 (7.2 Accessing Load-Sharing Mirror Volumes) of the following TR:
SnapMirror Configuration and Best Practices Guide for Clustered Data ONTAP -
http://www.netapp.com/us/media/tr-4015.pdf
"By default, all client requests for access to a volume in an LS mirror set are granted read-
only access. Read-write access is granted by accessing a special administrative mount point,
which is the path that servers requiring read-write access into the LS mirror set must mount.
All other clients will have read-only access."
When you are accessing the admin share for write access, you are accessing the source volume. "After changes are made to the source volume, the changes must be replicated to the rest of the volumes in the LS mirror set using the snapmirror update-ls-set command, or with a scheduled update."
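For example, a manual update of the entire LS mirror set might look like this (the source path is the hypothetical one used above):
cluster1::> snapmirror update-ls-set -source-path vs1:root_vol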
LS mirrors are for file access (NAS), not for block (SAN).
Load-sharing mirrors of the root volume serve two purposes:
1. To protect the vserver root volume in case of a disaster in which the root volume is lost. In a vserver root volume disaster, any of the load-sharing mirror destinations can be promoted to become the full read/write root volume.
2. To load-balance client requests. For this, you need a load-sharing mirror set up for each node in your cluster. If a client requests data from a volume on a node that does not hold the root volume (or a mirror of it), the request must first reference the root volume on the node that holds it before the client can be directed to the data.
For example:
If I have a 2-node cluster with Node1 and Node2, where Node1 holds the root volume, and I have data on a volume on Node2: without a load-sharing mirror, the client has to reference the root volume on Node1 in order to access data on Node2.
With a load-sharing mirror of the root set up on Node2, the client can access data on Node2 without having to reference the root volume on Node1.
At this point we want to create a schedule so that the load-sharing mirrors update periodically.
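A sketch of one way to do this, assuming a 20-minute interval schedule and the hypothetical paths used above; because the attributes of an LS mirror set are shared, modifying one relationship applies the schedule to the whole set:
cluster1::> job schedule interval create -name 20mins -minutes 20
cluster1::> snapmirror modify -source-path vs1:root_vol -destination-path vs1:root_vol_m1 -schedule 20mins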
Let's say we add 2 more nodes to the cluster to make a 4-node cluster. We would create a volume on each new node (steps 1, 2), create the snapmirror relationships (steps 3, 4), and initialize each relationship separately. So in step 5 we would replace the initialize-ls-set with an initialize of each new destination:
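For example (the new destination volume names are hypothetical):
cluster1::> snapmirror initialize -source-path vs1:root_vol -destination-path vs1:root_vol_m3
cluster1::> snapmirror initialize -source-path vs1:root_vol -destination-path vs1:root_vol_m4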
and we would apply the 20-minute job schedule to these destinations by re-applying the command:
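For example, re-applying the hypothetical 20mins schedule to one of the new destinations:
cluster1::> snapmirror modify -source-path vs1:root_vol -destination-path vs1:root_vol_m3 -schedule 20mins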
____________________________________
Roles
RBAC: PREDEFINED ROLES IN CLUSTERED DATA ONTAP
Clustered Data ONTAP includes administrative access-control roles that can be used to subdivide administration duties for SVM administration tasks.
The vsadmin role is the superuser role for an SVM. The admin role is the superuser role for a cluster.
Clustered Data ONTAP 8.1 and later versions support the vsadmin role. The vsadmin role grants the data SVM administrator full administrative privileges for the SVM. Additional roles include the vsadmin-protocol role, the vsadmin-readonly role, and the vsadmin-volume role. Each of these roles provides a unique set of SVM administration privileges.
A cluster administrator with the readonly role can grant read-only capabilities. A cluster
administrator with the none role cannot grant capabilities.
Cluster administrators can administer the entire cluster and its resources. They can also set up data
SVMs and delegate SVM administration to SVM administrators. The specific capabilities that cluster
administrators have depend on their access-control roles. By default, a cluster administrator with the
admin account name or role name has all capabilities for managing the cluster and SVMs.
SVM administrators can administer only their own SVM storage and network resources, such as
volumes, protocols, LIFs, and services. The specific capabilities that SVM administrators have
depend on the access-control roles that are assigned by cluster administrators.
The number of customized access-control roles that you can create per cluster without any performance degradation depends on the overall Data ONTAP configuration; however, it is best to limit the number of customized access-control roles to 500 or fewer per cluster.
ROOT
Rules governing node root volumes and root aggregates
A node's root volume contains special directories and files for that node. The root aggregate contains
the root volume. A few rules govern a node's root volume and root aggregate.
A node's root volume is a FlexVol volume that is installed at the factory or by setup software. It is
reserved for system files, log files, and core files. The directory name is /mroot, which is accessible
only through the systemshell by technical support. The minimum size for a node's root volume
depends on the platform model.
The following rules govern the node's root volume:
Unless technical support instructs you to do so, do not modify the configuration or content of the root volume.
Do not store user data in the root volume. Storing user data in the root volume increases the storage giveback time between nodes in an HA pair.
Contact technical support if you need to designate a different volume to be the new root volume or move the root volume to another aggregate.
The root aggregate must be dedicated to the root volume only. You must not include or create data volumes in the root aggregate.
Create a load-sharing mirror copy of the root volume on each node of the cluster so that the namespace directory information remains available in the event of a node outage or failover.
Choices
For SVMs with FlexVol volumes, promote one of the following volumes to restore the root volume:
Load-sharing mirror copy (see Promoting a load-sharing mirror copy)
Data-protection mirror copy (see Promoting a data-protection mirror copy)
New FlexVol volume (see Promoting a new FlexVol volume)
___________________________________________
NOTE: Do not store data volumes on the root aggregate (aggr0). Volumes on CFO aggregates are not available to clients or hosts during failover.
Data aggregates are treated a little differently. Data can still be served from the
node that has taken over. Additionally, the client might not even be mounted to
the node in the HA pair that is failing over. When the system creates an
aggregate, it assumes that the aggregate is for data and assigns the storage
failover (SFO) HA policy to the aggregate. With the SFO policy, the data
aggregates will fail over first and fail back last in a serial manner.
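To check which HA policy is assigned to an aggregate, something like the following should work (the aggregate name is hypothetical):
cluster1::> storage aggregate show -aggregate aggr1_data -fields ha-policy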
You must configure DNS on the Vserver before creating the CIFS server. Generally, the DNS name servers are the Active Directory-integrated DNS servers for the domain that the CIFS server will join.
Steps
1. Configure the DNS service by using the vserver services dns create command. The domain path is constructed from the values in the -domains parameter.
2. Verify that the DNS configuration is correct and that the service is enabled by using the vserver services dns show command.
Example
The following example configures the DNS service on Vserver vs1:
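A minimal sketch (the domain name and name-server addresses are hypothetical):
cluster1::> vserver services dns create -vserver vs1 -domains example.com -name-servers 192.0.2.10,192.0.2.11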
Configuring DNS services for the SVM
You must configure DNS services for the Storage Virtual Machine (SVM) before creating the
CIFS server. Generally, the DNS name servers are the Active Directory-integrated DNS
servers for the domain that the CIFS server will join.
Storage Virtual Machines (SVMs) use the hosts name-services ns-switch database to determine which name services to use and in which order to use them when looking up information about hosts. The two supported name services for the hosts database are files and dns.
You must ensure that dns is one of the sources before you create the CIFS server.
Steps
1. Determine what the current configuration is for the hosts name services
database by using the vserver services name-service ns-switch show
command.
Example
In this example, the hosts name service database uses the default settings.
Vserver: vs1
Name Service Switch Database: hosts
Name Service Source Order: files, dns
2. If dns is missing or the sources are not in the desired order, add the DNS name service to the hosts name service database or reorder the sources by using the vserver services name-service ns-switch modify command.
Example
In this example, the hosts database is configured to use DNS and local files in
that order.
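A sketch of the modify command that produces this configuration:
cluster1::> vserver services name-service ns-switch modify -vserver vs1 -database hosts -sources dns,files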
Vserver: vs1
Name Service Switch Database: hosts
Name Service Source Order: dns, files
3. Create the DNS configuration by using the vserver services name-service dns create command.
Example
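A sketch using the values shown in the verification output below:
cluster1::> vserver services name-service dns create -vserver vs1 -domains example.com,example2.com -name-servers 10.0.0.50,10.0.0.51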
4. Verify that the DNS configuration is correct and that the service is enabled
by using the vserver services name-service dns show command.
Example
Vserver: vs1
Domains: example.com, example2.com
Name Servers: 10.0.0.50, 10.0.0.51
Enable/Disable DNS: enabled
Timeout (secs): 2
Maximum Attempts: 1
__________________
CIFS server creation
Article Number
000027392
Description
This article describes the procedure that should be followed to create a CIFS vserver using the CLI and System Manager.
Procedure
CLI:
Perform the following steps:
1. Run the vserver setup command to start the Vserver Setup Wizard. The welcome banner explains that the wizard will lead you through the setup steps, that you can enter "exit" if you want to quit (any changes you have made before quitting are not undone), that you can restart the wizard later by typing "vserver setup", and that you press Enter to accept a default value.
In the following prompts, select cifs as the protocol and dns as the name service to configure.
5. Select the aggregate where you want the vserver root volume to reside, and set the root volume's security style:
Enter the Vserver root volume's security style {unix, ntfs, mixed} [ntfs]:
In the subsequent steps, create a data LIF for the vserver:
Enter the home port {e0a, e0b, e0c, e0d, e0e} [e0a]:
The wizard reports when the LIF was created and when the CIFS server has been created and joined to the "usps.den" domain.
12. Set up a CIFS share. This step (at this stage) is optional.
Do you want to share a data volume with CIFS clients? {yes, no} [yes]: yes
Select the initial level of access that the group "Everyone" has to the share.
UNIX user "pcuser" set as the default UNIX user for unmapped CIFS users.
Vserver vs_cifs, with protocol(s) cifs, and service(s) dns has been configured successfully.
System Manager:
1. Open System Manager, log in to your cluster, and select the vserver context on the
left pane:
2. Click Create. The Create Vserver Wizard will be displayed:
3. Type a name for the vserver, and then select an aggregate, a language, and CIFS:
_______________________________________
LIF roles
A LIF represents a network access point to a node in the cluster. You can configure LIFs on ports
over which the cluster sends and receives communications over the network.
A cluster administrator can create, view, modify, migrate, or delete LIFs. An SVM administrator can
only view the LIFs associated with the SVM.
Logical Interfaces
An IP address or World Wide Port Name (WWPN) is associated with a LIF
If subnets are configured (recommended), IP addresses are automatically assigned when a LIF is created
If subnets are not configured, IP addresses must be manually assigned when a LIF is created
WWPNs are automatically assigned when an FC LIF is created
One node-management LIF exists per node
One cluster-management LIF exists per cluster
Two* cluster LIFs exist per node
Multiple data LIFs are allowed per port (Client-facing: NFS, CIFS, iSCSI,
and FC access)
For intercluster peering, intercluster LIFs must be created on each node
A cluster-management LIF can fail over to any node-management or data port in the
cluster. It cannot fail over to cluster or intercluster ports.
cluster LIF
A LIF that is used to carry intracluster traffic between nodes in a cluster. Cluster LIFs
must always be created on 10-GbE network ports.
Cluster LIFs can fail over between cluster ports on the same node, but they cannot be
migrated or failed over to a remote node. When a new node joins a cluster, IP addresses
are generated automatically. However, if you want to assign IP addresses manually to the
cluster LIFs, you must ensure that the new IP addresses are in the same subnet range as
the existing cluster LIFs.
data LIF
A LIF that is associated with a Storage Virtual Machine (SVM) and is used for
communicating with clients.
You can have multiple data LIFs on a port. These interfaces can migrate or fail over
throughout the cluster. You can modify a data LIF to serve as an SVM management LIF
by modifying its firewall policy to mgmt.
For more information about SVM management LIFs, see the Clustered Data ONTAP
System Administration Guide for Cluster Administrators.
Sessions established to NIS, LDAP, Active Directory, WINS, and DNS servers use data
LIFs.
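For example, a hypothetical data LIF could be converted into an SVM management LIF like this:
cluster1::> network interface modify -vserver vs1 -lif vs1_lif1 -firewall-policy mgmt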
intercluster LIF
A LIF that is used for cross-cluster communication, backup, and replication. You must
create an intercluster LIF on each node in the cluster before a cluster peering relationship
can be established.
These LIFs can only fail over to ports in the same node. They cannot be migrated or failed
over to another node in the cluster.
Port Types
Physical port
Ethernet
FC
Unified Target Adapter (UTA)
UTA is a 10-GbE port
UTA2 is configured as either:
10-GbE
or 16-Gbps FC
Virtual port
Interface group (ifgrp)
Virtual LAN (VLAN)
PORT TYPES
Port types can be either physical or virtual.
Physical:
Ethernet port: 1-Gb or 10-Gb Ethernet (10-GbE) ports that can be used in NFS, CIFS, and iSCSI environments
FC port: 4-Gbps, 8-Gbps, or 16-Gbps ports that can be used as targets in FC SAN environments; an FC port can also be configured as an initiator for disk shelves or tape drives
Unified Target Adapter (UTA) port: 10-GbE ports that can be used in NFS, CIFS, iSCSI, and FCoE environments
Unified Target Adapter 2 (UTA2) port: configured as either a 10-GbE Ethernet port or a 16-Gbps FC port; the 10-GbE ports can be used in NFS, CIFS, iSCSI, and FCoE environments, and the 16-Gbps FC ports can be used as targets in FC SAN environments
Virtual:
Interface group: An interface group implements link aggregation by providing a mechanism to group together multiple network interfaces (links) into one logical interface (aggregate). After an interface group is created, it is indistinguishable from a physical network interface.
VLAN: Traffic from multiple VLANs can traverse a link that interconnects two switches by using VLAN tagging. A VLAN tag is a unique identifier that indicates the VLAN to which a frame belongs. A VLAN tag is included in the header of every frame that is sent by an end-station on a VLAN. On receiving a tagged frame, a switch identifies the VLAN by inspecting the tag, then forwards the frame to the destination in the identified VLAN.
INTERFACE GROUPS
The following network terms are described as they are implemented within Data ONTAP:
Interface groups aggregate network interfaces into a trunk. You can implement link aggregation on your storage system to group together multiple network interfaces (links) into one logical interface (aggregate). After an interface group is created, the interface group is indistinguishable from a physical network interface.
Be aware that different vendors refer to interface groups by the following terms:
Virtual aggregations
Link aggregations
Trunks
EtherChannel
Interface groups can be implemented in two modes: single-mode and multimode.
In single-mode link aggregation, one interface is active and the other interface is inactive (on standby).
In multimode, all links in the link aggregation are active. A dynamic multimode interface group can detect loss of link status and loss of data flow. Multimode requires a compatible switch to implement the configuration.
Data ONTAP link aggregation complies with the IEEE 802.3ad standard for static multimode; dynamic multimode link aggregation uses the Link Aggregation Control Protocol (LACP).
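A minimal sketch of creating a dynamic multimode (LACP) interface group and adding member ports, assuming hypothetical node and port names:
cluster1::> network port ifgrp create -node cluster1-01 -ifgrp a0a -distr-func ip -mode multimode_lacp
cluster1::> network port ifgrp add-port -node cluster1-01 -ifgrp a0a -port e0c
cluster1::> network port ifgrp add-port -node cluster1-01 -ifgrp a0a -port e0d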
FAILOVER GROUPS
There are two types of failover groups: those created automatically by the system when a broadcast domain is created, and those that a system administrator defines.
Failover Groups
These failover groups are created automatically based on the network ports that are present in the particular broadcast domain:
A Cluster failover group contains the ports in the Cluster broadcast domain. These ports are used for cluster communication and include all cluster ports from all nodes in the cluster.
A Default failover group contains the ports in the Default broadcast domain. These ports are used primarily to serve data, but they are also used for cluster management and node management.
Additional failover groups are created for each broadcast domain that you create. Each such failover group has the same name as the broadcast domain, and it contains the same ports as those in the broadcast domain.
Failover Groups
Custom failover groups can be created for specific LIF failover
functionality when:
The automatic failover groups do not meet your requirements
Only a subset of the ports that are available in the broadcast
domain are required
Consistent performance is required
For example, create a failover group consisting of only 10-GbE ports that
enables LIFs to fail over only to high-bandwidth ports
A failover group contains a set of network ports (physical ports, VLANs, and interface groups) from
one or more nodes in a cluster. The network ports that are present in the failover group define the
failover targets available for the LIF. A failover group can have cluster management, node
management, intercluster, and NAS data LIFs assigned to it.
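For example, the 10-GbE-only failover group mentioned above might be created and assigned to a LIF like this (port and LIF names are hypothetical):
cluster1::> network interface failover-groups create -vserver cluster1 -failover-group fg_10gbe -targets cluster1-01:e0e,cluster1-02:e0e
cluster1::> network interface modify -vserver vs1 -lif vs1_lif1 -failover-group fg_10gbe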
It is advantageous to use port sets with SLM when you have multiple targets on a node and you want to restrict access of a certain target to a certain initiator. Without port sets, all targets on the node will be accessible by all the initiators with access to the LUN through the node owning the LUN and the owning node's HA partner.
For example: on Vserver vs3, port set portset0 (protocol iscsi) contains LIFs lif0 and lif1 and is bound to igroup1.
You can also create port sets to make a LUN visible only on specific target ports. A port set
consists of a group of FC target ports. You can bind an igroup to a port set. Any host in the igroup
can access the LUNs only by connecting to the target ports in the port set.
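A sketch using the names from the example above (SVM vs3, port set portset0, LIFs lif0 and lif1, igroup igroup1):
cluster1::> lun portset create -vserver vs3 -portset portset0 -protocol iscsi -port-name lif0,lif1
cluster1::> lun igroup bind -vserver vs3 -igroup igroup1 -portset portset0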
_____________________________________________________________________________
Enabling and reverting LIFs to home ports
During a reboot, some LIFs might have been migrated to their assigned failover ports. Before and after you upgrade, revert, or downgrade a cluster, you must enable and revert any LIFs that are not on their home ports.
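A minimal sketch of the enable-and-revert sequence, using wildcards to cover all LIFs:
cluster1::> network interface modify -vserver * -lif * -status-admin up
cluster1::> network interface revert -vserver * -lif *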