SNAPVAULT
SnapVault is a disk-based storage backup feature of Data ONTAP. SnapVault enables data stored on
multiple storage systems to be backed up to a central, secondary storage system quickly and efficiently as
read-only Snapshot copies.
In the event of data loss or corruption on a storage system, backed-up data can be restored from the
SnapVault secondary with less downtime and uncertainty than is associated with conventional tape
backup and restore operations.
Additionally, users who wish to perform a restore of their own data may do so without the intervention of
a system administrator. The SnapVault secondary may be configured with NFS exports and CIFS shares
to let users copy the file from the Snapshot copy to the correct location.
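A sketch of such a self-service configuration on the secondary follows (the share name, comment, and export options are illustrative assumptions; the vault volume name matches the examples later in this document):

```
sec> cifs shares -add vault_backups /vol/vault -comment "Self-service restore share"
sec> exportfs -p ro /vol/vault
```

Because SnapVault preserves the original file permissions, a read-only share or export like this is generally sufficient for users to copy their own files back.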

THEORY OF OPERATION
On storage systems running Data ONTAP, the qtree is the basic unit of SnapVault backup and restore.
SnapVault backs up specified qtrees on the primary system to associated qtrees on the SnapVault
secondary system. If data needs to be restored to the primary system, SnapVault transfers the specified
versions of the qtrees back to their associated primary qtrees.
The non-qtree part of a primary system volume can be replicated to a SnapVault secondary qtree. Non-qtree data is any data on a storage system that is not contained in a qtree. The backed-up data can be restored to a qtree on the primary system, but cannot be restored as non-qtree data.
You can also back up a primary volume to a qtree on the secondary system. Any qtrees in the primary
volume become directories in the secondary qtree. SnapVault cannot restore the data back to a volume.
When restoring data, what was a source volume is restored as a qtree.
Note that volume-to-qtree backups are not supported for volumes containing Data ONTAP LUNs.

INITIAL TRANSFER AND BACKUP


Initial transfer
In response to the snapvault start command, the secondary system requests initial transfers of
qtrees specified for backup from the primary system volume to the secondary system volume. These
transfers establish SnapVault relationships between the primary and secondary qtrees. To initialize qtrees,
you do not need to create the qtrees on the secondary; the qtrees are created when the baseline transfers
are started.
Incremental backup
In response to the snapvault snap sched command-line input, the primary system creates
scheduled SnapVault Snapshot copies of the volume containing the qtrees to be backed up.
In response to the snapvault snap sched -x command-line input, the secondary system carries out scheduled update transfers and Snapshot copy creation.
For each secondary qtree, SnapVault retrieves, from the Snapshot data of each corresponding primary
qtree, the incremental changes to the primary qtrees made since the last data transfer. Only the changed
data blocks are sent to the secondary.
When the transfer is completed, the secondary takes a Snapshot copy of its own volume. Note that
SnapVault does not transfer Snapshot copies; it only transfers selected data from within Snapshot copies.

SNAPVAULT CONFIGURATION

PREREQUISITES
You must purchase and install a separate SnapVault license for each primary (sv_ontap_pri) and
secondary (sv_ontap_sec) storage system.
SnapVault evaluation licenses are available upon request on the NOW (NetApp on the Web) site:
now.netapp.com/eservice/evallicense
In Data ONTAP 7.3 and later, you can install both the sv_ontap_pri and the sv_ontap_sec
licenses on the same storage system. This system is then able to send and receive SnapVault backups,
whether from other appliances or locally within itself.
NOTE: You cannot mix primary and secondary qtrees in the same volume, as this is unsupported and
causes undesirable effects.
You cannot license a SnapVault secondary and a SnapVault primary on the same node of an active-active
configured system.
Optionally, you can increase the number of possible concurrent streams on FAS2040, FAS3040,
FAS3070, FAS3100 and FAS6000 storage systems by installing the nearstore_option license. This
license should not be installed on these storage systems if they are intended to handle primary application
workloads.
Port 10566 must be open in both directions for SnapVault backup and restore operations.
If NDMP is in use for control management, then port 10000 must be open on the primary and the
secondary systems.
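The licensing and access prerequisites above can be sketched as the following command sequence (the license codes are placeholders, and the host names pri and sec are assumptions matching the examples later in this document):

```
pri> license add <sv_ontap_pri_code>
pri> options snapvault.enable on
pri> options snapvault.access host=sec

sec> license add <sv_ontap_sec_code>
sec> options snapvault.enable on
sec> options snapvault.access host=pri
```

Access must be granted in both directions: the secondary pulls backups from the primary, and the primary requests restores from the secondary.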

2. CONFIGURATION OF PRIMARY


Turn off the normal Snapshot schedules, which will be replaced by SnapVault Snapshot schedules.
pri> snap sched vol1 0 0 0
pri> snap sched oracle 0 0 0
Set up schedules for the home directory hourly Snapshot copies.
pri> snapvault snap sched vol1 sv_hourly 22@0-22
This schedule takes a Snapshot copy every hour, except for 11:00 p.m. It keeps nearly a full day of hourly
copies, and combined with the daily or weekly backups at 11:00 p.m., ensures that copies from the most
recent 23 hours are always available.
Set up schedules for the home directory daily Snapshot copies.
pri> snapvault snap sched vol1 sv_daily 7@23
This schedule takes a Snapshot copy once each night at 11:00 p.m. and retains the seven most recent
copies.
The schedules created above give 22 hourly and 7 daily Snapshot copies on the source to recover from before needing to access any copies on the secondary. This enables more rapid restores.
However, it is not necessary to retain a large number of copies on the primary; higher retention levels are
configured on the secondary.

3. CONFIGURATION OF SECONDARY


Create a FlexVol volume for use as a SnapVault destination.
sec> aggr create sv_flex 10
sec> vol create vault sv_flex 100g
The size of the volume should be determined by how much data you need to store and other site-specific
requirements, such as the number of Snapshot copies to retain and the rate of change for the data on the
primary FAS system.
Depending on site requirements, you may want to create several different SnapVault destination volumes.
You may find it easiest to use different destination volumes for datasets with different schedules and
Snapshot copy retention needs.
Optional: Set the Snapshot reserve to zero on the SnapVault destination volume.
sec> snap reserve vault 0
Due to the nature of backups using SnapVault, a destination volume that has been in use for a significant
amount of time often has four or five times as many blocks allocated to Snapshot copies as it does to the
active file system. Because this is the reverse of a normal production environment, many users find that it
is easier to keep track of available disk space on the SnapVault secondary if the Snapshot reserve is effectively turned off.
Turn off the normal Snapshot schedules, which will be replaced by SnapVault Snapshot schedules.
sec> snap sched vault 0 0 0



Set up schedules for the hourly backups.
sec> snapvault snap sched -x vault sv_hourly 4@0-22
This schedule checks all primary qtrees backed up to the vault volume once per hour for a new Snapshot
copy called sv_hourly.0. If it finds such a copy, it updates the SnapVault qtrees with new data from the
primary and then takes a Snapshot copy on the destination volume, called sv_hourly.0.
Note that you are keeping only the four most recent hourly Snapshot copies on the SnapVault secondary.
A user who wants to recover from a backup made within the past day has 23 backups to choose from on
the primary FAS system and has no need to restore from the SnapVault secondary. Keeping four hourly
Snapshot copies on the secondary merely ensures that you have at least the most recent four backups in
the event of a major problem affecting the primary system.
NOTE: If you don't use the -x option, the secondary does not contact the primary and transfer the Snapshot copy; a Snapshot copy of the destination volume is merely created.

Set up schedules for the daily backups.
sec> snapvault snap sched -x vault sv_daily 12@23@sun-fri
This schedule checks all primary qtrees backed up to the vault volume once each day at 11:00 p.m.
(except on Saturdays) for a new Snapshot copy called sv_daily.0. If it finds such a copy, it updates the
SnapVault qtrees with new data from the primary and then takes a Snapshot copy on the destination
volume, called sv_daily.0.
In this example, you maintain the most recent 12 daily backups, which, combined with the most recent 2 weekly backups, cover the two most recent weeks of nightly copies.



Set up schedules for the weekly backups.
sec> snapvault snap sched vault sv_weekly 13@23@sat
This schedule creates a new Snapshot copy of the vault volume, called sv_weekly.0, at 11:00 p.m. each Saturday. There is no need to create the weekly schedule on the primary: because you have all the data on the secondary for this Snapshot copy, you simply create and retain the weekly copies on the secondary only.
In this example, you maintain the most recent 13 weekly backups, for a full 3 months of online backups.


4. PERFORM THE INITIAL BASELINE TRANSFER


At this point, you have configured schedules on both the primary and secondary systems, and SnapVault
is enabled and running. However, SnapVault does not yet know which qtrees to back up, or where to store
them on the secondary. Snapshot copies will be taken on the primary, but no data will be transferred to the
secondary.
To provide SnapVault with this information, use the snapvault start command on the secondary:
sec> snapvault start -S pri:/vol/vol1/users /vol/vault/pri_users
sec> snapvault start -S pri:/vol/oracle/- /vol/vault/oracle
If you later create another qtree called otherusers in the vol1 volume on the primary, it can be completely
configured for backups with a single command:
sec> snapvault start -S pri:/vol/vol1/otherusers /vol/vault/pri_otherusers
No additional steps are needed because the Snapshot schedules are already configured on both the
primary and secondary for that volume.

SNAPVAULT ADMINISTRATION


LISTING SNAPSHOT COPIES


Use the snapvault status command either from the primary or the secondary system to check the
status of a data transfer, and to see how recently a qtree has been updated.
snapvault status [option] hostname:/vol/vol_name/qtree_name
Options can be one or more of the following:
-c lists all the secondary system qtrees, their corresponding primary system qtrees, maximum speed of
scheduled transfers, and maximum number of times SnapVault attempts to start a scheduled transfer
before skipping that transfer. This option can be run only from the secondary system.
-l displays the long format of the output, which contains more detailed information.

-s lists all the Snapshot copies scheduled on the primary or secondary storage system. Information
includes volume, Snapshot copy base name, current status, and Snapshot copy schedule.
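As a sketch, the options above might be used as follows (host and path names follow the examples elsewhere in this document):

```
sec> snapvault status -c
sec> snapvault status -l /vol/vault/pri_users
pri> snapvault status -s
```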

MANUALLY UPDATING THE VAULT


Suppose information is updated in the primary source location /vol/vol1/users. Normally, we would have to wait for a scheduled Snapshot copy to occur before the data would be backed up to the vault. We can force an unscheduled update by issuing the following command on the secondary:
sec> snapvault update /vol/vault/pri_users
SnapVault updates the qtree on the secondary storage system with the data from a new Snapshot copy of
the qtree it creates on the primary storage system.
sec> snap list -o /vol/vault/pri_users


MANUAL SNAPSHOT COPIES


Because we just carried out a manual update of a secondary qtree, we might want to immediately
incorporate that update into the retained Snapshot copies on the secondary storage system.
sec> snapvault snap create vault sv_nightly
SnapVault creates a new Snapshot copy and, based on the specified Snapshot copy basename, numbers it
just as if that Snapshot copy had been created by the SnapVault schedule process. SnapVault names the
new Snapshot copy sv_nightly.0, renames the older Snapshot copies, and deletes the oldest sv_nightly
Snapshot copy.
The snapvault snap create command does not update the data in the secondary storage system
qtree from the data in the primary storage system prior to creating the new Snapshot copy.


LOG FILES
The SnapVault logs record whether the transfer finished successfully or failed. If there is a problem with
the updates, it is useful to look at the log file to see what has happened since the last successful update.
The logs include the start and end of each transfer, along with the amount of data transferred.
The SnapVault log information is stored in the root volume of the primary and secondary storage systems, in the /etc/log/snapmirror file.

APPLICATION-CONSISTENT BACKUP

NAMED SNAPSHOT FEATURE FOR SNAPVAULT


This feature allows customers to back up data using SnapVault from any arbitrary Snapshot copy at the
disaster recovery site.
In Data ONTAP 7.3.1 and earlier releases, SnapVault could not back up data from a specified Snapshot
copy that was residing on the volume SnapMirror destination volume. SnapVault would only transfer
data from the latest Snapshot copy that was created by volume SnapMirror.
In Data ONTAP 8.0, SnapVault can back up data from any arbitrary Snapshot copy (either a user-specified or a scheduled Snapshot copy) on the volume SnapMirror destination. The SnapVault backup from the disaster recovery site continues to be the same as the SnapVault backup from the primary storage system to the secondary, with the following restrictions:
For a SnapVault scheduled update from the disaster recovery site, administrators need to set up the
SnapVault primary schedule at the volume SnapMirror source.
In the case of a SnapVault update from a named Snapshot (that is, snapvault update -s
<snapname>), the administrator needs to make sure that the named Snapshot copy exists at the volume
SnapMirror source. To prevent any Snapshot copy from getting deleted by Data ONTAP applications, use
the new snapvault preserve command.
Data ONTAP 8.0 only supports SnapVault in 7-Mode.
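For example, a sketch of a named-Snapshot update at the disaster recovery site follows (the Snapshot copy name dr_consistent is hypothetical, and the destination path follows the examples elsewhere in this document):

```
sec> snapvault update -s dr_consistent /vol/vault/pri_users
```

Before running this, verify that the dr_consistent Snapshot copy exists at the volume SnapMirror source, as noted above.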


RESTORE DATA FROM SECONDARY TO PRIMARY

SINGLE-FILE RESTORE
Users who wish to perform a restore of their own data may do so without the intervention of a system
administrator. The SnapVault secondary may be configured with NFS exports and CIFS shares to let users
copy the file from the Snapshot copy to the correct location.
NOTE: SnapVault backups transfer all of the file permissions and access control lists held by the original
data; if users were not authorized to access a file on the original file system, they will not be authorized to
access the backup copies of that file. This allows self-service restores to be performed safely.
To restore a single file, you can also use the Data ONTAP ndmpcopy command or the NetApp
Protection Manager software (if deployed).


QTREE OR VOLUME QTREE


You use the snapvault restore command to restore a backed-up qtree saved to the secondary
system.
Starting with Data ONTAP 7.3, you can restore the data to an existing qtree on the primary storage system using a baseline restore or an incremental restore.
snapvault restore [options] -s snapname -S sec_system:/vol/volname/sec_qtree pri_system:/vol/volname/pri_qtree
Options can be one or more of the following:
The -f option forces the command to proceed without first asking for confirmation from the user.
The -k option sets the maximum transfer rate in kilobytes per second.
The -r option attempts an incremental restore. The incremental restore can be used to revert the
changes made to a primary system qtree since any backed-up version on the secondary system.
The -s option specifies that the restore operation must be from the Snapshot snapname on the
secondary system.
The -w option causes the command not to return after the baseline transfer starts. Instead, it waits
until the transfer completes (or fails). At that time, it prints the completion status and then returns.
Starting with Data ONTAP 7.3, the SCSI connectivity of applications to all LUNs within the qtree being restored is maintained throughout the restore process, which makes the restore operation nondisruptive to applications. However, I/O operations are not allowed during the restore operation. Only baseline restores and incremental restores can be nondisruptive.
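For example, a sketch of restoring a qtree from a specific Snapshot copy on the secondary, combining the options above (the Snapshot copy name sv_daily.0 and the paths follow the examples elsewhere in this document):

```
pri> snapvault restore -f -s sv_daily.0 -S sec:/vol/vault/pri_users /vol/vol1/users
```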


BASELINE RESTORE TO A QTREE


In this scenario, we have backed up users' home directories with SnapVault. We intend to restore a primary qtree to the exact qtree location on the primary storage system from which we backed it up; therefore, we first need to delete the existing qtree from the primary storage system. We will delete the qtree by way of the regular CIFS or NFS routines.
We will now restore the qtree from the secondary to the primary by issuing the following command:
pri> snapvault restore -S sec:/vol/vault/pri_users /vol/vol1/users
Use the -f flag to override the confirmation prompt and directly proceed with the restore.

INCREMENTAL RESTORE
SnapVault incremental restore is based on qtree SnapMirror resync-style Snapshot copy negotiation.
The primary qtree is resynced to the specified Snapshot copy on the secondary.
The resync rolls back the primary qtree to the specified Snapshot copy and an incremental restore transfer
is initiated from the specified Snapshot copy.
The restore operation transfers only incremental changes from the secondary qtree to the specified
primary qtree.
You use the new -r option to perform a SnapVault incremental restore.
Example:
pri> snapvault restore -r -S sec:/vol/vault/pri_users /vol/vol1/users
Restore will overwrite existing data in /vol/volname/pri_qtree
Are you sure you want to continue (yes/no)? Yes
Transfer started.
Monitor progress with 'snapvault status' or the snapmirror log.

When you want to restore over an existing primary qtree, it is recommended that you first attempt an
incremental restore. If the incremental restore fails due to lack of common Snapshot copies, then attempt
an in-place baseline restore. This is because the incremental restore is a more efficient restore.

RESTARTING OR RELEASING SNAPVAULT


When you use the snapvault restore command to restore a primary qtree, SnapVault places a
residual SnapVault Snapshot copy on the volume of the restored primary qtree. This Snapshot copy is not
automatically deleted. If you have configured this volume to retain the maximum 255 Snapshot copies
allowed by Data ONTAP, you must manually delete this residual Snapshot copy, or else no new Snapshot
copies can be created.
We will now remove a residual SnapVault Snapshot copy to ensure a properly functioning SnapVault relationship.
pri> snap list oracle
Find the residual Snapshot copy. It is distinguished by the following syntax: primaryhost(nvram-id)_primaryvolume_restoredqtree-dst.2. For example: pri(1990911275)_oracle_tree-sec.2
Remove this Snapshot copy with the following command:
pri> snap delete oracle pri(1990911275)_oracle-sec.2


NONDISRUPTIVE RESTORE
SCSI connectivity to LUNs is maintained throughout the in-place and incremental restores by way of the
following process:
The primary qtree is made read-only. The LUN attributes in the primary qtree are preserved in a temporary staging area. LUN maps are updated. External SCSI requests are processed using the information stored in the staging area. Hosts see the same LUNs with the same drive letters at all times.
To display the LUNs in the preserved staging qtrees, use the lun show staging command.
Example:
Original LUN location: /vol/san_vol1/qtree1/LUN1
Staging qtree naming convention:
/vol/<vol_name>/Staging_<Volume_UUID>_<Transaction_ID>
pri> lun show staging
/vol/san_vol1/Staging_19e45590-8948-11dc-bb15-00a09802437a_199999999999999999/LUN1 100m (104857600) (r/w, online, mapped)
When the restore has completed, the LUN attributes that are stored in the staging area are applied to the restored LUNs. The primary qtree is broken to be write-enabled, and I/O operations are resumed to the restored LUNs.


LUN CLONE BACKUP


When integrated with Data ONTAP, SnapDrive for Windows or SnapDrive for Windows with
Microsoft Volume Shadow Copy Service creates two Snapshot copies upon LUN backup:
A backing Snapshot copy containing the LUN to be cloned
A backup Snapshot copy containing both the LUN and the clone
In versions of Data ONTAP earlier than 7.3, SnapVault backs up a LUN clone as a new LUN during the
initial baseline transfer. Therefore, the LUN clone and its backing LUN get replicated as two separate
LUNs on the secondary.
With Data ONTAP 7.3 or later, SnapVault is able to back up LUN clones in optimized mode using SnapDrive for Windows. The LUN clones are transferred as clones, and the space savings with the parent LUN are preserved. The SnapVault initial baseline transfer is performed at the command-line interface, but after the SnapVault relationship is handed off to SnapDrive for Windows, transfers must not be run from the command-line interface.
On the secondary, the backing Snapshot copy is locked after the backup Snapshot copy is transferred.
Limitations:
In optimized mode, the primary qtree must not contain LUN clones.
The transfer fails if the backing Snapshot copy is missing on the secondary.
A SnapVault restore will also fail if the backing Snapshot copy is missing on the primary.
Finally, in optimized mode, cascades from volume SnapMirror destinations are not supported.


SNAPVAULT FEATURE
The SnapVault feature enables you to create and archive Snapshot copies from one volume to another
volume or, typically, from a local controller to a remote controller. The feature provides a consistent,
recoverable, offsite, long-term backup and archive capability.
CONFIGURATION

SnapVault is a licensed feature that must be enabled before it can be configured and used. The SnapVault
feature has two licenses. One license is for the primary controller (the backup source), and the other
license is for the secondary controller (the archive destination).
License the primary controller (sv_ontap_pri).
license add <licnum>
License the secondary controller (sv_ontap_sec).
license add <licnum>
NOTE: The two licenses enable different functionalities, and the correct license must be enabled on the
appropriate controller (for example, the production or the disaster recovery controller).

The second step in configuring a SnapVault relationship (after licensing) is to create the destination
volume. Typically, you create the destination volume on a remote controller that provides lower-cost
storage (for example, Serial Advanced Technology Attachment or SATA disks).
Create a normal destination volume by running the following command on the destination system:
vol create <vol_name> (with parameters to suit)
Check the volume's status and size by running the following command:
vol status -b
NOTE: The volume must be online and in a writable state.
Do not create the destination qtrees. The destination qtrees are created automatically when the
SnapVault relationship is initialized.
NOTE: Although the destination volume remains writable, the individual destination qtrees are read-only.
It is important that you know the requirements and states of the source and destination volumes and that
you understand how SnapMirror requirements for the source and destination volumes differ.

NetApp University

The technology behind the SnapVault feature is based on the qtree SnapMirror function. This function
determines many of the features and limitations of the SnapVault feature. For example, the basic unit of
SnapVault backup is the qtree, and all SnapVault transfers are based on schedules for asynchronous mode.
Before you can enable the SnapVault relationship, you must configure the SnapVault access control
between the source and destination storage controllers. For descriptions of the access control settings,
refer to the Security section.
After the source and destination volumes are defined, you can configure the SnapVault schedules on the
primary and secondary controllers and start the incremental backups. You can also perform the initial
baseline transfer, copying the data from the source qtree to the destination qtree.
1. Configure the primary controller and define a SnapVault schedule.
snapvault snap sched vol1 sv_hourly 5@mon-fri@9-19

2. Configure the secondary controller and perform the baseline transfer.
snapvault start -S pri:/vol/vol1/q1 sec:/vol/vol1/q1

When the baseline transfer is completed, the destination volume is an exact replica of the source volume.
3. Define a SnapVault schedule.
snapvault snap sched -x vol1 sv_hourly 5@mon-fri@9-19

The -x parameter instructs the secondary controller to request a resynchronization with the primary controller. This request replicates the current file system state and then creates a Snapshot copy to retain the data.
NOTE: The SnapVault schedule definition is in the following format:
<snapshots_to_retain>@<day_of_the_week><@hour_of_the_day>
The schedule can be specified in more than one way. Because the day of the week is specified as a
mnemonic expression, you can define the schedule as the following:
<snapshots_to_retain><@hour_of_the_day>@<day_of_the_week>
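For example, the following two commands describe the same schedule (retain 5 copies, taken at hours 9 through 19 on weekdays):

```
snapvault snap sched vol1 sv_hourly 5@mon-fri@9-19
snapvault snap sched vol1 sv_hourly 5@9-19@mon-fri
```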
ADMINISTRATION

The administration of a SnapVault relationship can be performed from either the primary or the secondary
system, although some functions are available only on their respective systems.
You use the snapvault status command to display the state of the currently defined SnapVault
relationships, as shown in Figure 23:

NOTE: You can use the snapvault status -c option to display the SnapVault qtree configuration parameters.
You can use the snapvault command to manage all aspects of the SnapVault relationship, such as
updating the secondary system or restoring the backup. The following are examples of frequently used
snapvault functions:
snapvault update sec_hostname:/vol/vol_name/qtree
When executed on the secondary system, this command triggers a manual (unscheduled) update of the
specified qtree destination.
snapvault release <path> <other_hostname>:<path>
When executed on either system, this command deletes the SnapVault relationship.
snapvault restore -S sec_hostname:/vol/vol_name/qtree pri_hostname:/vol/vol_name/qtree
When executed on the primary system, this command restores the qtree contents from the backup.
To restore to the original qtree location on the primary system, you must break the SnapVault
relationship or restore to a new qtree (and rename later).
To restore a small amount of data (like one file), you may prefer to copy the files from a CIFS share on
the secondary qtree.

snapvault start -r <path>
When executed on the secondary system, this command resynchronizes the relationship and resumes
backup operations after the SnapVault restore is completed.
NOTE: For information about the other SnapVault commands, refer to the product manual.
Some third-party backup applications use SnapVault integration. To enable these applications to
communicate with the controller, you must enable the NDMP protocol and define the user name and
password for the application.
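A sketch of that preparation on the controller follows (the user name is hypothetical; verify the ndmpd syntax against the product manual for your Data ONTAP release):

```
sec> ndmpd on
sec> ndmpd password backup_user
```

The generated NDMP password is then entered into the third-party backup application along with the user name.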
PERFORMANCE

One of the challenges of a new SnapVault configuration is the transfer of the baseline copy from the
primary to the secondary system. Although the WAN connection may be adequate to handle the
incremental backup traffic, it may not be adequate to complete the baseline transfer in a timely manner. In
this case, you should consider using the Logical Replication (LREP) function. The LREP function can
perform the initial baseline transfer by using external disk media, such as a USB drive connected to a
laptop computer.
Because SnapVault backups are scheduled activities (asynchronous), they are constrained only by the
bandwidth of the connection and are not significantly affected by the link latency.
Similar to the qtree SnapMirror process, the SnapVault process accesses the primary qtree at the file-system level and therefore sees (and backs up) the original version of any deduplicated data. Although this process may cause more data to be sent across the WAN than is expected, the secondary qtree is written in the original capacity. Further deduplication can be scheduled on the secondary system.
SECURITY

By default, no access is granted for SnapVault traffic, and specific access must be granted to any remote
controller in a backup relationship.
The primary controller must grant access to the secondary controller so that the secondary controller can
pull backups from the source. And the secondary controller must grant access to the primary
controller so that the primary controller can request restores of the backups.
SnapVault access can be configured on the primary and secondary controllers by using the following
command:
options snapvault.access host=<other controller>
TROUBLESHOOTING

Comprehensive logging of all SnapVault activity is enabled by default. Because the SnapVault function is
based on the qtree SnapMirror function, all log information for the SnapVault and qtree SnapMirror
functions is stored to the same file.
The log information is saved to the /etc/log/snapmirror.[0-5] files. Logging can be enabled or disabled by executing the following command:
options snapmirror.log.enable [on|off]
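For example, the current log can be read directly on the controller console (rdfile is the standard Data ONTAP 7-Mode command for displaying a file):

```
sec> rdfile /etc/log/snapmirror
```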
SnapVault protects data on both NetApp and non-NetApp primary systems by maintaining a number of read-only versions of that data on a SnapVault secondary system and the SnapVault primary system.


What SnapVault is
SnapVault is a disk-based storage backup feature of Data ONTAP. SnapVault enables data stored on
multiple systems to be backed up to a central, secondary system quickly and efficiently as read-only
Snapshot copies.
In the event of data loss or corruption on a system, backed-up data can be restored from the
SnapVault secondary system with less downtime and uncertainty than is associated with conventional
tape backup and restore operations.
The following terms are used to describe the SnapVault feature:
Primary system: a system whose data is to be backed up
Secondary system: a system to which data is backed up
Primary system qtree: a qtree on a primary system whose data is backed up to a secondary qtree on a secondary system
Secondary system qtree: a qtree on a secondary system to which data from a primary qtree on a primary system is backed up
Open systems platform: a server running AIX, Solaris, HP-UX, Red Hat Linux, SUSE Linux, or Windows, whose data can be backed up to a SnapVault secondary system
Open Systems SnapVault agent: a software agent that enables the system to back up its data to a SnapVault secondary system
SnapVault relationship: the backup relationship between a qtree on a primary system (or a directory on an open systems primary platform) and its corresponding secondary system qtree
SnapVault Snapshot copy: the backup images that SnapVault creates at intervals on its primary and secondary systems. SnapVault Snapshot copies capture the state of primary qtree data on each primary system. This data is transferred to secondary qtrees on the SnapVault secondary system, which creates and maintains versions of Snapshot copies of the combined data for long-term storage and possible restore operations.
SnapVault Snapshot basename: a name that you assign to a set of SnapVault Snapshot copies using the snapvault snap sched command. As incremental Snapshot copies for a set are taken and stored on both the primary and secondary systems, the system appends a number (0, 1, 2, 3, and so on) to the basenames to track the most recent and earlier Snapshot updates.
SnapVault baseline transfer: an initial complete backup of a primary storage qtree or an open systems platform directory to a corresponding qtree on the secondary system
SnapVault incremental transfer: a follow-up backup to the secondary system that contains only the changes to the primary storage data between the current and last transfer actions
Advantages of using SnapVault
The SnapVault disk-based backup and restore system enables you to perform fast and simple data
restore operations.
You can also perform the following operations:
- Browse backed-up files online.
- Schedule frequent and efficient backup of large amounts of data.
- Minimize media consumption and system overhead through incremental backup.
- If tape backup is necessary, offload the tape backup task from the primary storage systems to the
SnapVault secondary storage system, which centralizes the operation and saves resources.
- Configure and maintain a single storage system for backing up data stored on multiple platforms:
Data ONTAP, AIX, Solaris, HP-UX, Linux, Windows, or VMware ESX server systems.
Note: SnapVault behavior is independent of the Data ONTAP version that is installed on the
primary or secondary system, and the aggregate type. For example, you can back up data from a
primary system that has Data ONTAP 7.3 installed to a secondary system that has Data ONTAP
7.1 installed. You can also back up data from a primary system that has Data ONTAP 7.1 installed
to a secondary system that has Data ONTAP 7.3 installed.

What data gets backed up and restored through SnapVault
The data structures that are backed up and restored through SnapVault depend on the primary system.
On systems running Data ONTAP, the qtree is the basic unit of SnapVault backup and restore.
SnapVault backs up specified qtrees on the primary system to associated qtrees on the SnapVault
secondary system. If necessary, data is restored from the secondary qtrees back to their associated
primary qtrees.
On open systems storage platforms, the directory is the basic unit of SnapVault backup.
SnapVault backs up specified directories from the native system to specified qtrees in the
SnapVault secondary system. If necessary, SnapVault can restore an entire directory or a specified
file to the open systems platform.
The destination system uses slightly more disk space and directories than the source system.
Note: You can back up the qtrees from multiple primary systems, or directories from multiple open
systems storage platforms, to associated qtrees on a single SnapVault secondary volume.
The following illustration shows the backup of qtrees and directories on different systems to a single
secondary volume:
Types of SnapVault deployment
You can deploy SnapVault in three ways, as per business requirements:
- Basic SnapVault deployment
- Primary to secondary to tape backup variation
- Primary to secondary to SnapMirror variation
What basic SnapVault deployment is
The basic SnapVault backup system deployment consists of a primary system and a secondary
system.
Primary storage systems
Primary systems are the platforms running Data ONTAP and the open systems storage platforms
whose data is to be backed up.
On primary systems, SnapVault backs up primary qtree data, non-qtree data, and entire volumes
to qtree locations on the SnapVault secondary systems.
Supported open systems storage platforms include Windows servers, Solaris servers, AIX servers,
Red Hat Linux servers, SUSE Linux servers, and HP-UX servers. On open systems storage
platforms, SnapVault can back up directories to qtree locations on the secondary system.
SnapVault can restore directories and single files. For more information, see the Open Systems
SnapVault Installation and Administration Guide.
Secondary storage system
The SnapVault secondary system is the central disk-based unit that receives and stores backup data
from the system as Snapshot copies. You can configure any system as a SnapVault secondary system;
however, it is best to enable the NearStore option.
Primary to secondary to tape backup variation
A common variation to the basic SnapVault backup deployment adds a tape backup of the
SnapVault secondary system.
This deployment can serve two purposes:
- It enables you to store an unlimited number of network backups offline while keeping the most
recent backups available online in secondary storage. This can help in the quick restoration of
data. If you run a single tape backup off the SnapVault secondary storage system, the
storage platforms are not subject to the performance degradation, system unavailability, and
complexity of direct tape backup of multiple systems.
- It can be used to restore data to a SnapVault secondary system in case of data loss or corruption
on that system.
Primary to secondary to SnapMirror variation
In addition to the basic SnapVault deployment, you can replicate the SnapVault secondary using
SnapMirror. This protects the data stored on the SnapVault secondary against problems with the
secondary system itself.
- The data backed up to SnapVault secondary storage is replicated to a SnapMirror destination.
- If the secondary system fails, the data mirrored to the SnapMirror destination can be converted to
a secondary system and used to continue the SnapVault backup operation with minimum
disruption.

How SnapVault backup works
Backing up qtrees using SnapVault involves starting the baseline transfers, making scheduled
incremental transfers, and restoring data upon request.
How to start the baseline transfers:
In response to command-line input, the SnapVault secondary system requests initial base transfers
of qtrees specified for backup from a primary storage volume to a secondary storage volume.
These transfers establish SnapVault relationships between the primary and secondary qtrees.
Each primary system, when requested by the secondary system, transfers initial base images of
specified primary qtrees to qtree locations on the secondary system.
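For example, a baseline transfer is requested from the secondary system with the snapvault start command (the system and qtree names below are hypothetical):
sec_system> snapvault start -S pri_system:/vol/vol1/users /vol/sv_vol/users
This establishes the SnapVault relationship and performs the initial complete transfer; subsequent updates to the same secondary qtree are incremental.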
Data compression with SnapVault
SnapVault operates at the logical level and when data compression is enabled on the source system,
the data is uncompressed in memory on the source system before it is backed up.
When data compression is enabled on the source volume, no bandwidth savings are achieved over the
network because the data is uncompressed on the source volume before it is sent for replication. If
inline compression is enabled on the destination volume, the data is compressed inline at the
destination before it is written to the disk. If inline compression is not enabled on the destination
volume, you must manually compress the data after the SnapVault transfer is completed to achieve
storage space savings in the destination volume.
Note: Inline compression does not guarantee compression of all the data that is being transferred
using SnapVault. The space savings at the destination and the source systems are the same if inline
compression is enabled on the source system.

SnapVault primary and secondary on the same system
In Data ONTAP 7.3 and later, the SnapVault primary and secondary features can be on the same
storage system.
The system can be used in the following ways:
- SnapVault destination for one or multiple backup relationships
- Both SnapVault source and SnapVault destination for the same backup relationship. For example,
by using SnapVault you can back up data from FC aggregates to ATA aggregates connected to the
same system.
Note: The source and destination qtrees cannot be within the same volume.
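As a sketch, backing up a qtree from an FC aggregate to an ATA aggregate on the same system might look like the following (the system, volume, and qtree names are hypothetical; vol_fc and vol_ata must be different volumes):
system1> snapvault start -S system1:/vol/vol_fc/q1 system1:/vol/vol_ata/q1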
Setting the snapvault.access option
The snapvault.access option controls which systems can request data transfers. This option
persists across reboots.
Steps
1. On the primary system: To set the primary systems to grant access only to the secondary
systems, enter the following command:
options snapvault.access host=snapvault_secondary
Note: In the snapvault.access option, up to 255 characters are supported after host=.
Setting this option on the SnapVault primary system determines which secondary system can
access data from that primary system.
2. On the secondary system: To allow the primary systems to restore data from the secondary
system, enter the following command:
options snapvault.access host=snapvault_primary1,snapvault_primary2,...
Setting this option on the SnapVault secondary system determines which SnapVault primary
systems can access the secondary system.
The system must be able to resolve the host name entered as snapvault_primary to an IP
address in the /etc/hosts file, or else the system needs to be running DNS or NIS. You can also
use the literal IP address instead of the host name. The syntax for specifying which systems are
allowed access to the secondary system is described in the na_protocolaccess(8) man page. For
more information about the options command, see the na_options(1) man page.
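For example, with a hypothetical primary named filerA and a secondary named filerB, the two options would be set as follows:
filerA> options snapvault.access host=filerB
filerB> options snapvault.access host=filerA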
Guidelines for creating a SnapVault relationship
You need to follow certain guidelines when creating a SnapVault relationship.
When you create a SnapVault relationship, you must be aware of the following guidelines for
volumes and qtrees:
- You must establish a SnapVault relationship between volumes that have the same vol lang
settings.
- After you establish a SnapVault relationship, you must not change the language assigned to the
destination volume.
- You must avoid white space (spaces and tab characters) in names of source and destination qtrees.
- You must not rename volumes or qtrees after establishing a SnapVault relationship.
- The qtree cannot exist on the secondary system before the baseline transfer.
What non-qtree data is
Non-qtree data is any data on a storage system that is not contained in its qtrees.
Non-qtree data can include the following items:
- Configuration and logging directories (for example, /etc or /logs) that are not normally visible
to clients
- Directories and files on a volume that has no qtree configured
What volume data backup involves
When you back up a source volume using SnapVault, the volume is backed up to a qtree on the
secondary system; therefore, any qtrees in the source volume become directories in the
destination qtree.
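As a sketch, a volume-to-qtree backup is started by giving a volume path as the source and a qtree path as the destination (the names below are hypothetical):
sec_system> snapvault start -S pri_system:/vol/srcvol /vol/sv_vol/srcvol_backup
Any qtrees inside srcvol then appear as directories under the srcvol_backup qtree on the secondary.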
Reasons for backing up a volume using SnapVault
- You want to back up a volume that contains many qtrees.
- You want the Snapshot copy management that SnapVault provides.
- You want to consolidate the data from several source volumes on a single destination volume.
Limitations to backing up a volume to a qtree
Before you perform a volume-to-qtree backup, consider the following limitations:
- You lose the qtree as a unit of quota management. Quota information from the qtrees in the
source volume is not saved when they are replicated as directories in the destination qtree.
- You lose qtree security information. If the qtrees in the source volume had different qtree
security styles, those security styles are lost in the replication to the destination qtree and are
replaced by the security style of the volume.
- The use of SnapVault for backing up volumes to qtrees is not integrated with the NetApp
Management Console data protection capability.
- It is not a simple process to restore data. SnapVault cannot restore the data back to a volume.
When restoring data, the original source volume is restored as a qtree. Also, incremental restores
are not supported.
- Volume-to-qtree backup is not supported for volumes containing Data ONTAP LUNs.
Restoring a qtree to the original volume structure
You can use the snapvault restore command so that the source volume you backed up to a
qtree is restored as a qtree on the primary system.
Steps
1. To restore the backed-up qtree to the original volume structure with multiple qtrees on the
primary system, re-create all of the qtrees in the volume on the primary system by using the
qtree create command.
pri_system> qtree create /vol/projs/project_x
2. Restore the data for each qtree by using the ndmpcopy command.
The following command restores data from the backed-up project_x directory on the secondary
system to the re-created project_x qtree on the primary system.
pri_system> ndmpcopy -sa username:password sec_system:/vol/vol1/projs/project_x /vol/projs/project_x
3. Stop qtree updates and remove the qtree on the secondary system by using the snapvault
stop command. The following command removes the projs qtree from the secondary system:
sec_system> snapvault stop /vol/vol1/projs
4. Reinitialize a baseline copy of each qtree to the secondary system by using the snapvault
start command. The following command reinitializes the SnapVault backup:
sec_system> snapvault start -S pri_system:/vol/projs /vol/vol1/projs
How to avoid Snapshot copy schedule conflicts
If SnapVault is scheduled to perform Snapshot copy management at the same time as default snap
sched activity, then the Snapshot copy management operations scheduled using the snap sched
command might fail with the syslog messages "Skipping creation of hourly snapshot" and
"Snapshot already exists".
To avoid this condition, you should disable the conflicting times using snap sched, and use the
snapvault snap sched command to configure equivalent schedules to create Snapshot copies.
Note: You can disable the snap sched schedule and use only the snapvault snap sched
command to create Snapshot copies. Therefore, to track the schedule for creating Snapshot copies,
look at the snapvault snap sched output, and not the snap sched output.
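For example, assuming a volume named vol1, the conflicting default schedule can be disabled and replaced with an equivalent SnapVault schedule as follows (the schedule values are illustrative):
filer> snap sched vol1 0 0 0
filer> snapvault snap sched vol1 sv_hourly 6@0-22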
Attention: The combined total of Snapshot copies retained for this and other Snapshot sets cannot
exceed 251 Snapshot copies per volume. If it does, SnapVault will not create new Snapshot copies.
Preserving older SnapVault Snapshot copies on SnapVault secondary volumes
Data ONTAP 7.3.2 and later enable you to preserve the older SnapVault Snapshot copies on the
SnapVault secondary volumes. The preserved SnapVault Snapshot copies are not deleted
automatically even if the maximum limit for a specified schedule is reached. If required, delete
them manually.
Preserving SnapVault Snapshot copies on the SnapVault secondary system
For example, suppose you want to create and preserve up to 250 SnapVault Snapshot copies and
display a warning message when the number of SnapVault Snapshot copies reaches 240. Because
the system does not back up any more SnapVault Snapshot copies once the configured limit for the
specified schedule is reached, the warning message is required. The following command preserves
250 SnapVault Snapshot copies and issues the warning message when the number of SnapVault
Snapshot copies reaches 240:
snapvault snap sched -x -o preserve=on,warn=10 vol1 sv_nightly 250@-
Checking SnapVault transfers
To ensure SnapVault transfers are taking place as expected, you can check the transfer status
using the snapvault status command.
Step
1. To check the status of a data transfer and see how recently a qtree has been updated, enter the
following command:
snapvault status [-l|-s|-c|-t] [[[system_name:]qtree_path] ...]
-l displays the long format of the output, which contains more detailed information.
-s displays the SnapVault Snapshot copy basename, status, and schedule for each volume.
-c displays the configuration parameters of all SnapVault qtrees on the system. This option
can be run only from the secondary system.
-t displays the relationships that are active.
Note: A relationship is considered active if the source or destination is involved in any one
of the following activities: transferring data to or from the network, reading or writing to a
tape device, waiting for a tape change, or performing local on-disk processing or clean-up.
system_name is the name of the system for which you want to see the status of SnapVault
operations.
qtree_path is the path of the qtree or qtrees for which you want to see the status of SnapVault
operations. You can specify more than one qtree path.
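For example, to see detailed status for one secondary qtree (the path and system names are hypothetical):
sec_system> snapvault status -l /vol/sv_vol/users
Running snapvault status with no arguments lists the state, lag, and status of every SnapVault relationship known to the system.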
Examples for checking the status

What the status fields mean
You can see the information fields that SnapVault can display for the snapvault status and
snapvault status -l commands.
About LUN clones and SnapVault
A LUN clone is a space-efficient copy of another LUN. Initially, the LUN clone and its parent share
the same storage space. More storage space is consumed only when one LUN or the other changes.
In releases prior to Data ONTAP 7.3, SnapVault considers each LUN clone as a new LUN.
Therefore, during the initial transfer of the LUN clone, all data from the clone and the backing LUN
is transferred to the secondary system.
Note: LUNs in this context refer to the LUNs that Data ONTAP serves to clients, not to the array
LUNs used for storage on a storage array.
Starting with Data ONTAP 7.3, SnapVault can transfer LUN clones in an optimized way by using
SnapDrive for Windows. To manage this process, SnapDrive for Windows creates two Snapshot
copies:
- Backing Snapshot copy, which contains the LUN to be cloned
- Backup Snapshot copy, which contains both the LUN and the clone
Modes of transfer
Starting with Data ONTAP 7.3, a SnapVault transfer with LUN clones can run in two modes:
- In non-optimized mode, a LUN clone is replicated as a LUN. Therefore, a LUN clone and its
backing LUN get replicated as two separate LUNs on the destination. SnapVault does not
preserve space savings that come from LUN clones.
- In optimized mode, a LUN clone is replicated as a LUN clone on the destination. Transfers of
LUN clones to the secondary system in optimized mode are possible only with SnapDrive for
Windows.
These modes apply to newly created LUN clones. On successive update transfers, only the
incremental changes are transferred to the destination in both modes.
How to change SnapVault settings
You can use the snapvault modify command to change the primary system (source) qtree that you
specified using the snapvault start command. You can change the SnapVault settings for
transfer speed and number of tries before quitting. You might need to make these changes if there are
hardware or software changes to the systems.
The meaning of the options is the same as for the snapvault start command. If an option is set,
it changes the configuration for that option. If an option is not set, the configuration of that option is
unchanged.
Note: The descriptions and procedures in this section pertain to SnapVault backup of systems
running Data ONTAP only. For descriptions and procedures pertaining to SnapVault backup of
open systems drives and directories, see the Open Systems SnapVault documentation.
The snapvault modify command is available only from the secondary system. You can also use
this command to modify the tries count after the relationship has been set up. This is useful when
there is a planned network outage.
You use the snapvault modify command to change the source if the primary system, volume, or
qtree is renamed. This ensures the continuity of the existing SnapVault relationship between the
primary and secondary systems. However, you cannot copy a primary qtree to another volume or
system and use this command to take backups from that new location.
If you need to change the SnapVault schedule, use the snapvault snap sched command.
Changing settings for SnapVault backup relationships
You can change the settings for SnapVault backup relationships that you entered with the snapvault
start command, by using the snapvault modify command.
Step
1. From the secondary system, enter the following command on a single line:
snapvault modify [-k kbs] [-t n] [-o options] [-S [pri_system:]pri_qtree_path] [sec_system:]sec_qtree_path
-k kbs specifies a value in kilobytes per second for the throttle (transfer speed) for the primary
system. A value of unlimited lets the transfer run as fast as it can. Other valid values are whole
positive numbers.
-t n specifies the number of times to try the transfer before giving up. The default is 2.
If set to 0, the secondary system does not update the qtree. This is one way to temporarily stop
updates to a qtree.
-o options is opt_name=opt_value[[, opt_name=opt_value]...]. For more details about the
available options, see the SnapVault man page.
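For example, the following hypothetical command throttles transfers to 2,000 KB per second and raises the tries count to 5 for one relationship:
sec_system> snapvault modify -k 2000 -t 5 -S pri_system:/vol/vol1/users /vol/sv_vol/users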
Why you manually update a qtree on the secondary system
You can use the snapvault update command to manually update the SnapVault qtree on the
secondary system from a Snapshot copy on the primary system. You might want to update at an
unscheduled time to protect the primary system data.
Manual updates are useful in the following situations:
- A disk failed on the primary system and you want extra protection for the data.
- The nightly backup failed due to a network problem.
- The primary system hardware is going to be reconfigured.
- You want to transfer a Snapshot copy of a quiesced database.
Note: The descriptions and procedures in this section pertain to SnapVault backup of systems
running Data ONTAP only. For SnapVault backup of open systems drives and directories, see the
Open Systems SnapVault documentation.
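As a sketch, a manual update of a secondary qtree is run from the secondary system (the qtree path is hypothetical):
sec_system> snapvault update /vol/sv_vol/users
This pulls the latest available Snapshot data for that qtree from the primary without waiting for the next scheduled transfer.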
Why you create a Snapshot copy manually
In certain cases, you might want to create a manual (unscheduled) Snapshot copy.
Creating a manual Snapshot copy is useful in these situations:
- You anticipate planned downtime or you need to recover from downtime (during which a
Snapshot copy was not taken on time).
- You have just carried out a manual update of a secondary qtree, and you want to immediately
incorporate that update into the retained Snapshot copies on the secondary system.
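For example, a manual Snapshot copy for an existing SnapVault Snapshot set can be created with the snapvault snap create command (the volume and basename below are hypothetical):
sec_system> snapvault snap create sv_vol sv_hourly
This creates a new Snapshot copy using the retention rules already configured for the sv_hourly set on that volume.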
Netapp Snapvault guide
Netapp SnapVault is a heterogeneous disk-to-disk backup solution for Netapp filers and
heterogeneous OS systems (Windows, Linux, Solaris, HP-UX, and AIX). Basically, SnapVault
uses Netapp snapshot technology to take point-in-time snapshots and store them as online
backups. In the event of data loss or corruption on a filer, the backup data can be restored from the
SnapVault filer with less downtime. It has significant advantages over traditional tape backups,
such as:
- Reduced backup windows versus traditional tape-based backup
- Media cost savings
- No backup/recovery failures due to media errors
- Simple and fast recovery of corrupted or destroyed data
SnapVault consists of two major entities: snapvault clients and a snapvault storage server. A
snapvault client (Netapp filers and unix/windows servers) is the system whose data should be
backed up. The SnapVault server is a Netapp filer which gets the data from clients and backs
it up. For server-to-Netapp SnapVault, we need to install the Open System SnapVault client
software provided by Netapp on the servers. Using the snapvault agent software, the SnapVault
server can pull and back up data on to the backup qtrees. SnapVault protects data on a client
system by maintaining a number of read-only versions (snapshots) of that data on a SnapVault
filer. The replicated data on the snapvault server system can be accessed via NFS or CIFS. The
client systems can restore entire directories or single files directly from the snapvault filer.
SnapVault requires primary and secondary licenses.
How snapvault works?
When snapvault is set up, initially a complete copy of the data set is pulled across the network to
the SnapVault filer. This initial, or baseline, transfer may take some time to complete, because it
is duplicating the entire source data set on the server, much like a level-zero backup to tape.
Each subsequent backup transfers only the data blocks that have changed since the previous
backup. When the initial full backup is performed, the SnapVault filer stores the data on a qtree
and creates a snapshot image of the volume for the data that is to be backed up. SnapVault
creates a new Snapshot copy with every transfer, and allows retention of a large number of
copies according to a schedule configured by the backup administrator. Each copy consumes an
amount of disk space proportional to the differences between it and the previous copy.
Snapvault commands:
The initial step to set up Snapvault backup between filers is to install the snapvault license and
enable snapvault on all the source and destination filers.
Source filer (filer1):
filer1> license add XXXXX
filer1> options snapvault.enable on
filer1> options snapvault.access host=filer2
Destination filer (filer2):
filer2> license add XXXXX
filer2> options snapvault.enable on
filer2> options snapvault.access host=filer1
Consider filer2:/vol/snapvault_volume as the snapvault destination volume, where all backups
are done. The source data is filer1:/vol/datasource/qtree1. As we have to manage all the backups
on the destination filer (filer2) using snapvault, manually disable scheduled snapshots on the
destination volumes. The snapshots will be managed by Snapvault. Disable Netapp scheduled
snapshots with the below command:
filer2> snap sched snapvault_volume 0 0 0
Creating Initial backup: Initiate the initial baseline data transfer (the first full backup) of the data
from source to destination before scheduling snapvault backups. On the destination filer execute
the below commands to initiate the base-line transfer. The time taken to complete depends upon
the size of data on the source qtree and the network bandwidth. Check snapvault status on
source/destination filers for monitoring the base-line transfer progress.
filer2> snapvault start -S filer1:/vol/datasource/qtree1 filer2:/vol/snapvault_volume/qtree1
Creating backup schedules: Once the initial base-line transfer is completed, snapvault schedules
have to be created for incremental backups. The retention period of the backup depends on the
schedule created. The snapshot name should be prefixed with sv_. The schedule is in the form
of retention_count[@hour_list][@day_list], as in the examples below.
On source filer:
For example, let us create the schedules on the source as below: 2 hourly, 2 daily, and 2 weekly
snapvault snapshots. These snapshot copies on the source enable administrators to recover directly
from the source filer without accessing any copies on the destination. This enables more rapid
restores. However, it is not necessary to retain a large number of copies on the primary; higher
retention levels are configured on the secondary. The commands below show how to create hourly,
daily, and weekly snapvault snapshots.
filer1> snapvault snap sched datasource sv_hourly 2@0-22
filer1> snapvault snap sched datasource sv_daily 2@23
filer1> snapvault snap sched datasource sv_weekly 2@21@sun
On snapvault filer:
Based on the retention period of the backups you need, the snapvault schedules on the
destination should be configured. Here, the sv_hourly schedule checks all source qtrees once per
hour for a new snapshot copy called sv_hourly.0. If it finds such a copy, it updates the SnapVault
qtrees with new data from the primary and then takes a Snapshot copy on the destination volume,
called sv_hourly.0. If you don't use the -x option, the secondary does not contact the primary and
transfer the Snapshot copy. It just creates a snapshot copy of the destination volume.
filer2> snapvault snap sched -x snapvault_volume sv_hourly 6@0-22
filer2> snapvault snap sched -x snapvault_volume sv_daily 14@23@sun-fri
filer2> snapvault snap sched -x snapvault_volume sv_weekly 6@23@sun
To check the snapvault status, use the command "snapvault status" on either the source or
destination filer. To see the backups, do a "snap list" on the destination volume; that will
give you all the backup copies, their time of creation, and so on.
Restoring data: Restoring data is as simple as that: you have to mount the snapvault destination
volume through NFS or CIFS and copy the required data from the backup snapshot.
You can also try Netapp Protection Manager to manage the snapvault backups either from OSSV
or from Netapp primary storage. Protection Manager is based on Netapp Operations Manager
(aka Netapp DFM). It is a client-based UI, with which you connect to the Ops Manager and
protect your storage.
SNAPVAULT VERSUS SNAPMIRROR: WHAT'S THE DIFFERENCE?
The following list describes some of the key differences between SnapVault software and the qtree-based
SnapMirror feature.
- SnapMirror software uses the same software and licensing on the source appliance and the
destination server. SnapVault software has SnapVault primary systems and SnapVault secondary
systems, which provide different functionality. The SnapVault primaries are the sources for data
that is to be backed up. The SnapVault secondary is the destination for these backups.
NOTE: As of Data ONTAP 7.2.1, SnapVault primary and SnapVault secondary can be installed on
different heads of the same cluster. Data ONTAP 7.3 supports installing both the primary and
secondary on a standalone system.
- SnapVault destinations are typically read-only. Unlike SnapMirror destinations, they cannot be
made into read-write copies of the data. This means that backup copies of data stored on the
SnapVault server can be trusted to be true, unmodified versions of the original data.
Note: A SnapVault destination can be made into read-write with the SnapMirror/SnapVault bundle.
For more information, see Appendix B.
- SnapMirror transfers can be scheduled every few minutes; SnapVault transfers can be scheduled
at most once per hour.
- Multiple qtrees within the same source volume consume one Snapshot copy each (on the source
system) when qtree-based SnapMirror software is used, but consume only one Snapshot copy total
when SnapVault software is used.
- The SnapMirror software deletes SnapMirror Snapshot copies when they are no longer needed for
replication purposes. SnapVault retains or deletes the copies on a specified schedule.
- SnapMirror relationships can be reversed, allowing the source to be resynchronized with changes
made at the destination. SnapVault provides the ability to transfer data from the secondary to the
primary only for restore purposes. The direction of replication cannot be reversed.
- SnapMirror can be used to replicate data only between NetApp storage systems running Data
ONTAP. SnapVault can be used to back up both NetApp and Open Systems primary storage,
although the secondary storage system must be a FAS system or a NearStore system.
4 CONFIGURING SNAPVAULT
This section provides step-by-step procedures for configuring SnapVault and examples of configurations.
The following examples assume that you are configuring backups for a single FAS system named
fas3050-pri, using a single NearStore system named fas3070-sec. The home directories are in a
qtree on fas3050-pri called /vol/vol1/users; the database is on fas3050-pri in the volume
called /vol/oracle, and is not in a qtree.
STEP TWO: SCHEDULE SNAPSHOT COPIES ON THE SNAPVAULT PRIMARIES
The following steps occur on the SnapVault primary, fas3050-pri.
1. License SnapVault and enable it.
fas3050-pri> license add ABCDEFG
fas3050-pri> options snapvault.enable on
fas3050-pri> options snapvault.access host=fas3070-sec
2. Turn off the normal Snapshot schedules, which will be replaced by SnapVault Snapshot schedules.
fas3050-pri> snap sched vol1 0 0 0
fas3050-pri> snap sched oracle 0 0 0
3. Set up schedules for the home directory hourly Snapshot copies.
fas3050-pri> snapvault snap sched vol1 sv_hourly 22@0-22
This schedule takes a Snapshot copy every hour, except for 11 p.m. It keeps nearly a full day of
hourly copies, and, combined with the daily or weekly backups at 11 p.m., makes copies from the
most recent 23 hours always available.
4. Set up schedules for the home directory daily Snapshot copies.
fas3050-pri> snapvault snap sched vol1 sv_daily 7@23
This schedule takes a Snapshot copy once each night at 11 p.m. and retains the seven most recent
copies.
The schedules created in steps 3 and 4 give 22 hourly and 7 daily Snapshot copies on the source
to recover from before needing to access any copies on the secondary. This enables more rapid
restores. However, it is not necessary to retain a large number of copies on the primary; higher
retention levels are configured on the secondary.
STEP THREE: SCHEDULE SNAPSHOT COPIES ON THE SNAPVAULT SECONDARY
The following steps occur on the SnapVault secondary, fas3070-sec.
1. License SnapVault and enable it.
fas3070-sec> license add HIJKLMN
fas3070-sec> options snapvault.enable on
fas3070-sec> options snapvault.access host=fas3050-pri
2. Create a FlexVol volume for use as a SnapVault destination.
fas3070-sec> aggr create sv_flex 10
fas3070-sec> vol create vault sv_flex 100g
The size of the volume should be determined by how much data you need to store and other
site-specific requirements, such as the number of Snapshot copies to retain and the rate of
change for the data on the primary FAS system.
Depending on site requirements, you may want to create several different SnapVault
destination volumes. You may find it easiest to use different destination volumes for data sets
with different schedules and Snapshot copy retention needs.
3. Optional (recommended): Set the Snapshot reserve to zero on the SnapVault destination volume.
fas3070-sec> snap reserve vault 0
Due to the nature of backups using SnapVault, a destination volume that has been in use for a
significant amount of time often has four or five times as many blocks allocated to Snapshot copies
as it does to the active file system. Because this is the reverse of a normal production environment,
many users find that it is easier to keep track of available disk space on the SnapVault secondary if
SnapReserve is effectively turned off.
4. Turn off the normal Snapshot schedules, which will be replaced by SnapVault Snapshot schedules.
fas3070-sec> snap sched vault 0 0 0
5. Set up schedules for the hourly backups.
fas3070-sec> snapvault snap sched -x vault sv_hourly 4@0-22
This schedule checks all primary qtrees backed up to the vault volume once per hour for a new
Snapshot copy called sv_hourly.0. If it finds such a copy, it updates the SnapVault qtrees with new
data from the primary and then takes a Snapshot copy on the destination volume, called
sv_hourly.0.
Note that you are keeping only the four most recent hourly Snapshot copies on the SnapVault
secondary. A user who wants to recover from a backup made within the past day has 23 backups
to choose from on the primary FAS system and has no need to restore from the SnapVault
secondary. Keeping four hourly Snapshot copies on the secondary merely lets you have at least
the most recent four backups in the event of a major problem affecting the primary system.
Note: If you do not use the -x option, the secondary does not contact the primary and transfer the
Snapshot copy. Only a Snapshot copy of the destination volume is created.
6. Set up schedules for the daily backups.
fas3070-sec> snapvault snap sched -x vault sv_daily 12@23@sun-fri
This schedule checks all primary qtrees backed up to the vault volume once each day at 11 p.m.
(except on Saturdays) for a new Snapshot copy called sv_daily.0. If it finds such a copy, it updates
the SnapVault qtrees with new data from the primary and then takes a Snapshot copy on the
destination volume, called sv_daily.0.
In this example, you maintain the most recent 12 daily backups, which, combined with the most
recent 2 weekly backups (see step 7), slightly exceeds the requirements shown in Table 2, in
Section 2.7.
7. Set up schedules for the weekly backups.
fas3070-sec> snapvault snap sched vault sv_weekly 13@23@sat
This schedule takes a Snapshot copy of the vault volume, called sv_weekly.0, at 11 p.m. each Saturday. There is no need to create the weekly schedule on the primary.
Because you have all the data on the secondary for this Snapshot copy, you will simply create and
retain the weekly copies only on the secondary.
In this example, you maintain the most recent 13 weekly backups, for a full three months of online
backups.


STEP FOUR: PERFORM THE INITIAL BASELINE TRANSFER


At this point, you have configured schedules on both the primary and secondary systems, and SnapVault is enabled and running. However, SnapVault does not yet know which qtrees to back up or where to store them on the secondary. Snapshot copies will be taken on the primary, but no data will be transferred to the secondary.
To provide SnapVault with this information, use the snapvault start command on the secondary:
fas3070-sec> snapvault start -S fas3050-pri:/vol/vol1/users
/vol/vault/fas3050-pri_users
fas3070-sec> snapvault start -S fas3050-pri:/vol/oracle /vol/vault/oracle
If you later create another qtree called otherusers in the vol1 volume on fas3050-pri, it can be
completely configured for backups with a single command:
fas3070-sec> snapvault start -S fas3050-pri:/vol/vol1/otherusers
/vol/vault/fas3050-pri_otherusers
No additional steps are needed because the Snapshot schedules are already configured on both primary
and secondary for that volume.

7.2 COMMON MISCONFIGURATIONS


This section examines some common misconfigurations that a user may encounter with SnapVault. You
should consider these possible problems during the planning phase in order to achieve a successful
SnapVault deployment.
TIME ZONES, CLOCKS, AND LAG TIME
One thing to consider when scheduling is that the SnapVault operations are initiated by the clock on the
storage system. For example, on the primary, the Snapshot copies are scheduled by using the snapvault
snap sched command. When this time is reached, the primary storage system creates its copy. On the
secondary, you use the snapvault snap sched -x command (-x tells the secondary to contact the primary
for the Snapshot data) to schedule the SnapVault transfer. This can pose a huge problem with lag times if
the clocks are skewed.
MANAGING THE NUMBER OF SNAPSHOT COPIES
With Data ONTAP 6.4 and later, each volume on the SnapVault secondary system can have up to 255
Snapshot copies. SnapVault software requires the use of 4 Snapshot copies (regardless of the number of
qtrees or data sets being backed up), leaving 251 copies for scheduled or manual Snapshot creation. In
most cases, fewer than 251 copies are maintained due to limitations on available disk space. It is
recommended that you do not attempt to retain more than 250 total Snapshot copies of a volume. With
improper scheduling, this limit can quickly be reached on the secondary because SnapVault takes a
Snapshot copy of the volume after every transfer. Again, it's important to make sure that the qtrees within
a SnapVault destination have the same characteristics to avoid reaching the 250-copy limit.
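The copy budget can be illustrated with a short calculation. The schedule names and retention counts below mirror the examples in this paper; they are assumptions for illustration, not values read from a live system:

```python
# Hypothetical sketch: sum the Snapshot copies retained by each SnapVault
# schedule on a secondary volume and check the total against the per-volume
# limit (255 copies, 4 of which SnapVault reserves for its own use).

VOLUME_SNAPSHOT_LIMIT = 255
SNAPVAULT_RESERVED = 4  # copies SnapVault itself requires per volume

# retention counts from the sv_hourly, sv_daily, and sv_weekly examples above
schedules = {"sv_hourly": 4, "sv_daily": 12, "sv_weekly": 13}

def copies_available(schedules):
    """Return how many schedulable copies remain before hitting the limit."""
    budget = VOLUME_SNAPSHOT_LIMIT - SNAPVAULT_RESERVED  # 251 usable copies
    return budget - sum(schedules.values())

print(copies_available(schedules))  # prints: 222
```

A negative result would indicate that the combined schedules will eventually exhaust the volume's Snapshot copies.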
VOLUME TO QTREE SNAPVAULT
When issuing the snapvault start command, you are not required to specify a qtree name for the source; however, this practice is not recommended. This type of relationship simplifies the initial configuration, but it also increases the time it takes to perform a backup. Because you must specify a qtree for the SnapVault destination, the entire source volume then resides in a qtree on the destination. When it's time for a restore via the Data ONTAP CLI, the entire contents of that qtree, which contain all the data from the source volume, are restored to a qtree on the SnapVault primary system. Once the data is restored, you must then manually copy it back to the appropriate location.


12 APPENDIX C: TROUBLESHOOTING SNAPVAULT ERRORS


It is important to check the logs on both the primary and secondary when troubleshooting SnapVault errors. The errors are logged in /etc/log/snapmirror on both the primary and secondary storage systems. Here are some of the common errors encountered when running SnapVault, displayed either on the console or in the log file.
source contains no new data; suspending transfer to destination
The Snapshot copies on the primary do not contain any new data, so no data is transferred.

destination requested Snapshot that does not exist on the source


The SnapVault secondary has initiated a transfer, but the Snapshot copy doesn't exist on the source. Either the snapvault command was entered incorrectly or the Snapshot copy was deleted on the primary.

request denied by source filer; check access permissions on source


To resolve this error, check options snapvault.access on the primary. You may see this issue if a new
secondary is being configured, or if the hostname or IP address of the secondary has changed.

snapvault is not licensed


The sv_ontap_pri or sv_ontap_sec license is not installed on the storage system. Enter the license key to unlock the snapvault commands.

Transfer aborted: service not enabled on the source


This error appears when a SnapVault secondary contacts the primary for the transfer. If there is a SnapVault
license on the primary, verify that SnapVault is on with the options snapvault.enable command.

snapvault: request while snapvault client not licensed on this filer


This error is displayed on the console of the primary and means that a secondary has requested a SnapVault transfer, but SnapVault is not currently licensed on the primary. Check the licensing on the primary and the command syntax on the secondary.

5 PROTECTING THE SNAPVAULT SECONDARY


Although SnapVault is incredibly effective in protecting the data stored on primary storage systems, some
sites may also want to take measures to protect against disasters that affect the SnapVault secondary itself.
In a SnapVault environment, the loss or failure of a SnapVault secondary does not affect primary systems
any more than does the loss or failure of a tape library in a traditional backup environment. In fact, some
data protection continues, because the loss of a SnapVault secondary does not interrupt the process of
creating Snapshot copies on the primary systems.
You could simply configure a replacement system in response to a lost or failed SnapVault secondary. This
requires restarting backups from each primary qtree, including a complete baseline transfer of each qtree. If
the SnapVault secondary is located on the same network as the primaries, this may not be a problem. You
can perform periodic backups of the SnapVault secondary to tape with an NDMP-enabled backup
application to preserve long-term archive copies of data.
One of the best options is to protect the SnapVault secondary with SnapMirror technology. Simply use
volume-based mirroring to copy all of the SnapVault destination volumes (including all Snapshot copies) to
another SnapVault secondary at a remote site. If the original SnapVault secondary fails, the extra SnapVault
secondary can continue to back up the SnapVault primaries. One other option is to take periodic backups of the SnapVault secondary using the snapmirror store command to copy the entire volume (including all Snapshot copies) to tape.


KNOWN SNAPVAULT BEHAVIORS


This section discusses known SnapVault behaviors that you should be aware of before implementing SnapVault.

6.1 TRANSFER OVERHEAD


For every transferred inode, the SnapVault primary sends a 4kB header. Also, all changed data is rounded
up to 4kB. Thus, a 1-byte file is much more expensive than a 0-byte file. When a file is created, deleted, or
renamed, that changes a directory, causing a 4kB header transfer for that directory. If a file or directory is
larger than 2MB, an additional 4kB header is transferred for every 2MB.
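The per-file cost described above can be sketched as a rough model. This is an illustrative back-of-the-envelope calculation, not an official sizing formula, and the `transfer_cost` helper is hypothetical:

```python
# Approximate bytes on the wire for one changed file, per the rules above:
# a 4kB header per inode, changed data rounded up to whole 4kB blocks, and
# an extra 4kB header for every 2MB of a large file.

KB = 1024
HEADER = 4 * KB
BLOCK = 4 * KB
CHUNK = 2 * 1024 * KB  # 2MB

def transfer_cost(changed_bytes):
    """Estimate transfer size for a single changed file, in bytes."""
    data = ((changed_bytes + BLOCK - 1) // BLOCK) * BLOCK  # round up to 4kB
    extra_headers = changed_bytes // CHUNK if changed_bytes > CHUNK else 0
    return HEADER + extra_headers * HEADER + data

print(transfer_cost(0))  # prints: 4096  (a 0-byte file costs only the header)
print(transfer_cost(1))  # prints: 8192  (a 1-byte file adds a full 4kB block)
```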
In addition to the inodes, the SnapVault primary transfers all the changed ACLs for a volume. Unlike all other
inodes, ACLs are not associated with a qtree. This increases the number of files or directories that can
share an ACL, but can use extra network bandwidth on Qtree SnapMirror. Given the overhead with ACLs,
this also causes the baseline transfer to consume more space on the secondary storage system.

6.2 SNAPMIRROR-SNAPVAULT INTERLOCK


When you are using SnapVault in combination with Volume SnapMirror, it is important to understand their relationship with Snapshot copies. You can use SnapVault to protect a Volume SnapMirror destination, but there are some things that need to be taken into consideration. Schedules must be managed to accommodate the
interlock that keeps SnapVault and SnapMirror from stepping on each other. If a SnapMirror session is
transferring data when SnapVault begins, the current SnapMirror transfer is aborted. The only way to
accomplish this configuration is to suspend SnapMirror transfers until the SnapVault transfers are complete.
In addition, when SnapVault is used to protect a VSM destination, SnapVault ignores any specified Snapshot copy (as part of the snapvault update command or a SnapVault schedule) and uses the most recent VSM-created Snapshot copy.
This issue does not affect using SnapMirror to protect a SnapVault destination.

6.3 QUIESCING A SLOW TRANSFER


Because SnapVault transfers and schedules are based on the volume, it is important to group qtrees with
the same characteristics into the same volume. Obviously, there will be instances where a qtree has an abnormal rate of change, which can't be avoided. What needs to be avoided is grouping qtrees that don't have similar characteristics into the same volume. For example, suppose that you have a volume (/vol/vault) that has 16 qtrees (qtree1 through qtree16). Assume
that each qtree has to transfer 1GB worth of changed data, except for qtree4, which has 10GB worth of
changed data. This volume is scheduled to complete only one daily transfer, at 11 p.m.
Given this scenario, qtree4 holds up the SnapVault transfer because SnapVault cannot take a Snapshot
copy of the destination volume. When you run the snapvault status command on the secondary system, all
completed qtrees show a status of quiescing. The one qtree that is still being transferred shows a status of
transferring, and displays the amount of data that has transferred. The other 15 qtrees in the volume do not
have an available Snapshot copy until the last qtree in the destination volume has completed its transfer. If
there is a slow link between the primary and the secondary system, the 10GB of changed data can take a
long time to transfer. This would clearly be a flaw in the layout of the schedule and qtrees to the secondary
volume. Figure 4 shows an example of a SnapVault transfer with qtrees in a quiescing state.
Figure 4) Example of a transfer in a quiescing state.

Notice that in the example qtree4 is still transferring while all other qtrees are in a quiescing state. It would be a good idea to monitor qtree4 in this SnapVault transfer to see if it continues to cause the other qtrees to remain in a quiescing state. The change rate of qtree4 may not be similar to that of the other qtrees on the destination volume, and it would make more sense to move this qtree to another volume.

6.4 SINGLE FILE RESTORE


When it is necessary to restore a single file, you cannot use the snapvault restore command, because that command restores the entire qtree contents back to the original primary qtree. After you have restored the entire contents of the qtree, you can choose either to resume the scheduled SnapVault backups (snapvault start -r) or to cancel the SnapVault relationship and the corresponding backups (snapvault release).
For single file restores, use the ndmpcopy command in Data ONTAP or Protection Manager (if available); or
use CIFS/NFS and copy the file from the Snapshot copy to the correct location.

6.5 INCREMENTAL RESTORE


Before Data ONTAP 7.3, a restore operation had to be performed to a qtree that did not exist on the source.
Starting with Data ONTAP 7.3, you can restore to an existing qtree, and only the blocks required to recover
to the specified point in time (snapshot) are transferred and stored on the primary system. To use this
functionality, both the primary and secondary systems must be running Data ONTAP 7.3 or later. Details of
the syntax and procedures for performing such a restore are found in the Data ONTAP Data Protection
Online Backup and Recovery Guide.

6.6 TRADITIONAL VOLUMES VERSUS FLEXIBLE VOLUMES


When you are setting up the secondary volumes, it's a good idea to use flexible volumes for maximum flexibility. This allows resizing the volumes as needed, making it easier to retain more Snapshot copies if
necessary. In addition, it allows the user to reduce the size of the volume if the number of Snapshot copies
that need to be retained changes. The configuration of the secondary volume is independent of the primary,
so if the source volumes on the primary are traditional volumes, you can still choose to have the destination
volumes be flexible volumes.
In addition to the resizing feature of flexible volumes, FlexClone and SnapMirror can also be used to make
a copy of the SnapVault destination that is writable. FlexClone volumes are a point-in-time copy of the
parent volume (SnapVault destination). Changes made to the parent volume after the FlexClone volume is
created are not reflected in the FlexClone volume.

6.7 SIZING VOLUMES ON THE SECONDARY


The sizing of volumes on the secondary can vary based on the RTO, RPO, and granularity required, plus
the rate of change for the source volume. In addition to the rate of change on the source volumes and/or
qtrees, you must consider performance and tape backup factors. Because the rate of change can fluctuate,
you should determine the average rate of change for the qtrees and then group like qtrees into the same
destination volume. The ability to manipulate the size of a flexible volume makes it an ideal volume type for
the SnapVault destination. If the rate of change, retention requirements, or size of the primary changes, you
can adjust the size of the destination volume.
Grouping the qtrees by the desired Snapshot schedule and then adding together the disk space
requirements for each group of qtrees determines the overall amount of space required by each group. If
this results in volume sizes larger than desired (or larger than supported by Data ONTAP), the groups should be split into smaller ones.
Also available in Data ONTAP is the snap delta command. This command reports the rate of change
between Snapshot copies. The command compares all copies in a volume, or just the copies specified.
Although snap delta can be used to help determine the rate of change for sizing the secondary volume, the future workload should also be considered.
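Putting this section together, a destination volume size can be roughed out from the baseline size, the average rate of change (for example, as reported by snap delta), and the retention window. The formula and the 20% headroom factor below are illustrative assumptions, not NetApp recommendations:

```python
# Hypothetical sizing sketch for a SnapVault destination volume:
# baseline data plus daily change multiplied by the retention window,
# with some headroom for fluctuation in the rate of change.

def destination_size_gb(baseline_gb, daily_change_gb, retention_days, headroom=1.2):
    """Estimate the required destination volume size in GB."""
    return (baseline_gb + daily_change_gb * retention_days) * headroom

# example: 100GB baseline, 2GB/day average change, 90 days of retained copies
print(round(destination_size_gb(100, 2, 90)))  # prints: 336
```

If the rate of change, retention, or primary size changes later, a flexible volume can simply be resized to the new estimate.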

6.8 CONCURRENT TRANSFERS


There is a maximum number of concurrent replication operations for each NetApp system. A storage system might not reach the maximum number of simultaneous replication operations for the following reasons:
- Storage system resources, such as CPU usage, memory, disk bandwidth, or network bandwidth, are taken away from SnapMirror or SnapVault operations.
- Each storage system in a cluster has a maximum number of simultaneous replication operations. If a failover occurs, the surviving storage system cannot process more than the maximum number of simultaneous replication operations specified for that storage system. These can be operations that were scheduled for the surviving storage system, the failover storage system, or both.
Note: Take this limitation into consideration when you are planning SnapMirror or SnapVault replications
using clusters.
NetApp systems with a NearStore license are optimized as a destination for QSM and SnapVault replication
operations. Replication operations for which the NearStore system is the QSM source, SnapVault source, Volume SnapMirror (VSM) source, or VSM destination count twice against the maximum number.
For details on the maximum concurrent streams, see the Data ONTAP Data Protection Online Backup and
Recovery Guide for your version of Data ONTAP.

6.9 PERFORMANCE IMPACT ON PRIMARY DURING TRANSFER


Because a SnapVault transfer is a pull operation, resource usage on the secondary is expected. Remember
that a SnapVault transfer also requires resource usage on the primary. This is important because you want
to make sure that you don't negatively affect the primary storage system with a SnapVault transfer when
setting up SnapVault schedules. Many factors affect how many resources on the primary are used. For this
example, suppose that you have two data sets, both 10GB in size. The first data set, dataset1, has
approximately a million small files, and the second data set, dataset2, has five files, all 2GB in size. During
the baseline transfer, dataset1 requires more CPU usage on the primary or requires a longer transfer time
than dataset2. For SnapVault, maximum throughput is generally limited by CPU and disk I/O consumption at
the destination.
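The difference between the two hypothetical data sets can be made concrete by counting header overhead alone, using the per-inode cost model from Section 6.1 (a 4kB header per file, plus an extra 4kB header per 2MB of file size). The helper below is illustrative, with an assumed 10kB average size for the small files:

```python
# Compare the header overhead of many small files versus a few large files.

KB, MB, GB = 1024, 1024**2, 1024**3
HEADER = 4 * KB

def header_overhead(file_count, file_size):
    """Approximate total header bytes for file_count files of file_size each."""
    extra_per_file = file_size // (2 * MB) if file_size > 2 * MB else 0
    return file_count * (1 + extra_per_file) * HEADER

d1 = header_overhead(1_000_000, 10 * KB)  # dataset1: a million small files
d2 = header_overhead(5, 2 * GB)           # dataset2: five 2GB files
print(d1 // MB, d2 // MB)  # prints: 3906 20
```

Roughly 3.9GB of header traffic for dataset1 versus about 20MB for dataset2, which suggests why the million-file data set demands more primary CPU and a longer transfer for the same 10GB of data.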

6.10 QUEUING YOUR TRANSFERS


When scheduling transfers, you must take into consideration the size of the transfer and group like qtrees
into the same destination volume. Because scheduling is volume based, not qtree based, poor scheduling
causes many issues. There is a limit on the number of concurrent streams supported by the platform you are
running. For the list of such limits, refer to the Data ONTAP Data Protection Online Backup and Recovery
Guide. If you schedule more than the allowed number of concurrent streams, the remaining qtrees to be
transferred are queued. However, there is a limit to the number of qtrees that you can queue. You can
schedule up to 1024 transfers with Data ONTAP 7.3 (for both SnapMirror and SnapVault). Any queued
transfers in addition to 1024 are not scheduled for transfer, causing backups to be lost.
Prior to Data ONTAP 7.3, the maximum number of concurrent SnapVault targets supported by a storage
system was equal to the maximum number of concurrent SnapVault transfers possible for the system. A
SnapVault target is a process that controls the creation of a scheduled SnapVault Snapshot copy on a
SnapVault destination volume. There will be a SnapVault target for each SnapVault destination volume that
has qtrees being updated.
There is a maximum number of concurrent SnapVault targets for each platform. Only the qtrees in those
volumes can be updated concurrently. If the number of SnapVault targets exceeds the limit, the number of
concurrent SnapVault transfers might be affected. Despite the maximum number of concurrent SnapVault
targets, you can configure SnapVault relationships in as many volumes as required. However, only the
qtrees in the limited number of volumes can be updated. Please see the release notes for the most up-to-date limits.
Note: SnapVault transfers are scheduled based on the volume and not the qtree. Therefore, if a destination
volume has 32 qtrees, all 32 qtrees are transferred when the schedule is run.
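The queuing behavior described above can be sketched as a simple model. The concurrent-stream limit varies by platform (32 is a placeholder assumption here; check the Data Protection guide for your system), while the 1,024-entry scheduled-transfer queue is the Data ONTAP 7.3 limit cited above:

```python
# Simplified model of how scheduled qtree transfers are dispatched:
# up to the concurrent limit run immediately, the next transfers queue,
# and anything beyond the queue limit is dropped (those backups are lost).

CONCURRENT_LIMIT = 32  # assumed per-platform stream limit (placeholder)
QUEUE_LIMIT = 1024     # scheduled-transfer queue limit in Data ONTAP 7.3

def dispatch(requested):
    """Split requested transfers into (running, queued, dropped) counts."""
    running = min(requested, CONCURRENT_LIMIT)
    queued = min(requested - running, QUEUE_LIMIT)
    dropped = requested - running - queued
    return running, queued, dropped

print(dispatch(1200))  # prints: (32, 1024, 144)
```

Any nonzero dropped count means some scheduled backups would never run, which is the scheduling failure this section warns about.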

6.11 SNAPVAULT WITHIN A CLUSTERED SYSTEM


SnapVault in Data ONTAP 7.2.1 includes the ability to use SnapVault within a clustered system. You can
therefore install a SnapVault primary license on one head (or controller) of a clustered system, and a
SnapVault secondary license on the other one. Another type of configuration enabled by this new
functionality includes bidirectional backup between two different clustered systems. This feature enables
customers to use SnapVault within a cluster from FC drives to SATA drives in the same system. In the event
that a cluster fails over, the SnapVault transfers will continue to run, but the maximum number of concurrent
transfers is the same as a single head.

6.12 SNAPVAULT WITHIN A SINGLE SYSTEM


SnapVault in Data ONTAP 7.3 includes the ability to use SnapVault within a standalone system. You can
therefore install a SnapVault primary and SnapVault secondary on a single controller. This functionality lets
you use SnapVault to send the data from FC drives to lower-cost ATA drives and provide local recovery and
retention on a single controller. In addition, you could also use two storage systems to act as a SnapVault
destination for the other system, enabling bidirectional SnapVault transfers between two storage systems.
One limitation is that a SnapVault volume cannot contain both primary and secondary qtrees.

6.13 SNAPVAULT AND DEDUPLICATION ON FAS


Starting with Data ONTAP 7.3, SnapVault and FAS deduplication are integrated to work together on the
SnapVault destination system. After an update transfer completes for all qtrees in the target, a base
Snapshot copy is taken. An archive Snapshot copy is also taken, which is the Snapshot copy used by the retention policy. After the Snapshot copies are in place, SnapVault then calls FAS deduplication to start.
After FAS deduplication completes successfully, SnapVault takes another Snapshot copy and moves the archive designation from the previous copy to the new one. If deduplication fails or aborts, the archive Snapshot copy is not moved, and the duplicate blocks remain locked in that copy until it is recycled.
FAS deduplication might be implemented and scheduled on the SnapVault primary system, but there SnapVault and FAS deduplication are not integrated; the schedules are independent of each other. Because SnapVault replicates at the qtree level, deduplication savings are not maintained during the transfer.
For more information on FAS deduplication, please see TR-3505, NetApp Deduplication for FAS Deployment and Implementation Guide.

6.14 NDMP MANAGEMENT APPLICATIONS AND DATA ONTAP 7.3 CHANGES


Various NDMP-based management applications (Protection Manager, Syncsort, CommVault, BakBone) provide the ability to monitor and manage your SnapVault and Open Systems SnapVault transfers.
Customers using both an NDMP management application and Data ONTAP 7.3 will not benefit from the increased concurrent streams in Data ONTAP 7.3.

Enabling SnapVault
Setting up SnapVault backup on the primary systems means preparing the primary storage system
and SnapVault secondary storage system to perform their backup tasks. In Data ONTAP 8.2 and later,
a single SnapVault license is used for SnapVault primary and SnapVault secondary instead of two
separate SnapVault licenses. You must license and prepare your storage system before you can use
SnapVault to back up data.

Enabling licenses for SnapVault


You need to enable the SnapVault license for the SnapVault primary and secondary system. If you are
using an HA pair, you must enable the SnapVault license on both the nodes.

Guidelines for creating a SnapVault relationship


You need to follow certain guidelines when creating a SnapVault relationship.
When you create a SnapVault relationship, you must be aware of the following guidelines for volumes and qtrees:
- You must establish a SnapVault relationship between volumes that have the same vol lang settings.
- After you establish a SnapVault relationship, you must not change the language assigned to the destination volume.
- You must avoid white space (spaces and tab characters) in names of source and destination qtrees.
- You must not rename volumes or qtrees after establishing a SnapVault relationship.
- The qtree cannot exist on the secondary system before the baseline transfer.
