
ORACLE DATABASE BACKUP AND

RECOVERY WITH VMAX3


EMC VMAX Engineering White Paper

ABSTRACT

With the introduction of the third generation VMAX disk arrays and local replication
and enhanced remote replication capabilities, Oracle database administrators have a
new way to protect their Oracle databases effectively and efficiently with
unprecedented ease of use and management.

June 2015

To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local
representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.

Copyright 2015 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part Number H14232

TABLE OF CONTENTS

EXECUTIVE SUMMARY .............................................................................. 6


Audience ........................................................................................................... 7
Terminology ...................................................................................................... 7
VMAX3 Product Overview .................................................................................... 8
VMAX3 SnapVX Local Replication Overview ........................................................... 8
VMAX3 SRDF Remote Replication Overview ........................................................... 9

ORACLE DATABASE REPLICATION WITH TIMEFINDER AND SRDF CONSIDERATIONS .................................................................................. 11
Number of Snapshots, Frequency, and Retention ................................................. 11
Snapshot Link Copy vs. No Copy Option .............................................................. 11
Oracle Database Restart vs. Recovery Solutions ................................................... 11
RMAN and VMAX3 Storage Replication ................................................................ 12
Command Execution and Host Users................................................................... 12
Oracle DBaaS SnapClone Integration on VMAX3 ................................................... 12
Storage Layout and ASM Disk Group Considerations ............................................. 12
Remote Replication Considerations ..................................................................... 13

ORACLE BACKUP AND DATA PROTECTION TEST CASES........................... 14


Test Configuration ............................................................................................ 14
Test Overview.................................................................................................. 14
Test Case 1: Creating a local restartable database replica for database clones ......... 15
Test Case 2: Creating a local recoverable database replica for backup and recovery . 17
Test Case 3: Performing FULL or incremental RMAN backup from a SnapVX replica .. 19
Test Case 4: Performing database recovery of Production using a recoverable
snapshot ......................................................................................................... 20
Test Case 5: Using SRDF/S and SRDF/A for database disaster recovery .................. 21
Test Case 6: Creating remote restartable copies .................................................. 23
Test Case 7: Creating remote recoverable database replicas.................................. 24
Test Case 8: Parallel recovery from remote backup image ..................................... 26
Test Case 9: Leveraging Access Control List replications for storage snapshots ........ 27

CONCLUSION .......................................................................................... 27
APPENDIX I - CONFIGURING ORACLE DATABASE STORAGE GROUPS FOR REPLICATION ......................................................................................... 28
Creating snapshots for Oracle database storage groups ........................................ 28
Linking Oracle database snapshots for backup offload or repurposing ..................... 29
Restoring Oracle database using storage snapshot ............................................... 30
Creating a cascaded snapshot from an existing snapshot ...................................... 31

APPENDIX II - SRDF MODES AND TOPOLOGIES ..................................... 31


SRDF modes .................................................................................................... 31
SRDF topologies ............................................................................................... 32

APPENDIX III - SOLUTIONS ENABLER CLI COMMANDS FOR TIMEFINDER SNAPVX MANAGEMENT ........................................................................... 33
Creation of periodic snaps ................................................................................. 33
Listing details of a snap .................................................................................... 33
Linking the snap to a storage group ................................................................... 33
Verifying current state of the snap ..................................................................... 34
Listing linked snaps .......................................................................................... 34
Restore from a snap ......................................................................................... 35

APPENDIX IV - SOLUTIONS ENABLER CLI COMMANDS FOR SRDF MANAGEMENT ......................................................................................... 35
Listing local and remote VMAX SRDF adapters ..................................................... 35
Creating dynamic SRDF groups .......................................................................... 35
Creating SRDF device pairs for a storage group ................................................... 35
Listing the status of SRDF group ........................................................................ 36
Restoring SRDF group....................................................................................... 37

APPENDIX V - SOLUTIONS ENABLER ARRAY BASED ACCESS CONTROL MANAGEMENT ......................................................................................... 38
Identifying the unique ID of the Unisphere for VMAX and database management
host ............................................................................................................... 38
Adding the UNIVMAX host to AdminGrp for ACL management ................................ 38
Authenticate UNIVMAX host for ACL management ................................................ 39
Create access group for database host ................................................................ 40
Create database device access pool .................................................................... 41
Install Solutions Enabler on database hosts using non-root user ............................ 42

REFERENCES ........................................................................................... 43

EXECUTIVE SUMMARY
Many applications are required to be fully operational 24x7x365 and the data for these applications continues to grow. At the same
time, their RPO and RTO requirements are becoming more stringent. As a result, there is a large gap between the requirements for
fast and efficient protection and replication, and the ability to meet these requirements without overhead or operations disruption.

DBAs require the ability to create local and remote database replicas in seconds without disruption of Production host CPU or I/O
activity for purposes such as testing patches, running reports, creating development sandbox environments, publishing data to
analytic systems, offloading backups from Production, developing Disaster Recovery (DR) strategy, and more.

Traditional solutions rely on host-based replication. The disadvantages of this approach are the additional host I/O and CPU cycles consumed to create such replicas, the complexity of monitoring and maintaining them, especially across multiple servers, and the elongated time and complexity associated with their recovery. EMC and Oracle have made such replicas more efficient to create, better integrated, faster to restore, and richer in features.

TimeFinder local replication values to Oracle include:

The ability to create instant and consistent database replicas for repurposing, across a single database, multiple databases,
including external data or message queues, and across multiple VMAX3 storage systems.

The ability to perform TimeFinder replica creation or restore operations in seconds, regardless of the database size. The
target devices (in case of a replica) or source devices (in case of a restore) are available immediately with their data, even
as incremental data changes are copied in the background.

The ability to create valid backup images without hot-backup mode that can be taken, and more importantly restored, in seconds regardless of database size, leveraging the Oracle 12c Storage Snapshot Optimization feature. Prior to Oracle 12c, such valid backup images were achieved by placing the database in hot-backup mode for only a few seconds.
The ability to utilize RMAN Block Change Tracking (BCT) file from a TimeFinder replica, when offloading backups from
Production to a backup host. Recovery operations can take place on either the Backup or Production host.
VMAX3 TimeFinder SnapVX snapshots are consistent by default. Each source device can have up to 256 space-efficient
snapshots that can be linked to up to 1024 target devices, maintaining incremental refresh relationships. The linked targets
can remain space-efficient, or a background copy of all the data can take place, making them full copies. In this way, SnapVX allows an unlimited number of cascaded snapshots.
With Oracle 12c Cloud Control DBaaS Snap Clone, DBAs can perform storage provisioning and replications directly from
Enterprise Manager (TimeFinder is called via APIs).
With Oracle VM 3.3 Storage Connect, storage devices can be provisioned to VMs, or VMs with their physical and virtual
storage can be cloned (TimeFinder is called via APIs).
SRDF remote replication values to Oracle include:

Synchronous and Asynchronous consistent replication of a single or multiple databases, including external data or message
queues, across multiple VMAX3 storage array systems if necessary. The point of consistency is created before a disaster
strikes, rather than taking hours to achieve afterwards when using replications that are not consistent across applications
and databases.
Disaster Recovery (DR) protection for two or three sites, including cascaded or triangular relationships, where SRDF always
maintains incremental updates between source and target devices.

SRDF and TimeFinder are integrated. For example, while SRDF replicates the data remotely, TimeFinder can be used on the
remote site to create writable snapshots or backup images of the database. This allows the DBAs to perform remote backup
operations or create remote database copies.

SRDF and TimeFinder can work in parallel to restore remote backups. For example, while a remote TimeFinder backup is
being restored to the remote SRDF devices, in parallel SRDF will copy the restored data to the local site. This parallel restore
capability provides DBAs with faster access to remote backups and shortens recovery times.
VMAX replication and Silent Data Corruption:

Both VMAX and VMAX3 arrays protect all user data with T10-DIF from the moment it enters the storage until it is retrieved by the
host, including for local and remote replication.

With Oracle ASMlib (Oracle 10g and above), Oracle and EMC integrated the T10-DIF standard for end-to-end data integrity
validation. Each read or write between Oracle and VMAX storage is validated. Starting with Oracle 12c, Oracle ASM Filter Driver
(AFD) provides a wider host OS support for T10-DIF with VMAX, and includes other features such as the ability to reclaim deleted

ASM files in VMAX storage, protection from non-Oracle writes to Oracle data, and more. Internally, both TimeFinder and SRDF use
T10-DIF to validate all replicated data.

Together, Oracle ASM and VMAX3 offer the most protected and robust platform for database storage and replications that maintains
data integrity for each database read or write as well as its replications. By utilizing VMAX3 local and remote replications, DBAs gain
the ability to protect and repurpose their databases quickly and easily, without the time and complexity associated with host-based
replications, and without the increasing RTOs associated with growing database sizes.

AUDIENCE
This white paper is intended for database and system administrators, storage administrators, and system architects who are
responsible for implementing, managing, and maintaining Oracle database backup and replication on VMAX3 storage arrays. It is
assumed that readers have some familiarity with Oracle and the EMC VMAX3 family of storage arrays, and are interested in achieving
higher database availability, performance, and ease of storage management.

TERMINOLOGY
The following table explains important terms used in this paper.

Restartable vs. Recoverable database: Oracle distinguishes between a restartable and a recoverable state of the database. A restartable state requires all log, data, and control files to be consistent (see Storage consistent replication). Oracle can simply be started, performing automatic crash/instance recovery without user intervention. A recoverable state, on the other hand, requires database media recovery, rolling forward transaction logs to achieve data consistency before the database can be opened.

RTO and RPO: Recovery Time Objective (RTO) refers to the time it takes to recover a database after a failure. Recovery Point Objective (RPO) refers to the amount of data loss after the recovery completes, where RPO=0 means no loss of committed transactions.

Storage consistent replication: Storage consistent replication refers to storage replications (local or remote) in which the target devices maintain write-order fidelity. That means that for any two dependent I/Os that the application issues, such as a log write followed by a data update, either both will be included in the replica or only the first. To the Oracle database, the snapshot data appears as it would after a host crash or Oracle shutdown abort, a state from which Oracle can simply recover by performing crash/instance recovery when starting.

Starting with Oracle 11g, Oracle allows database recovery from storage consistent replications without the use of hot-backup mode (details in Oracle support note 604683.1). The feature has become more integrated with Oracle 12c and is called Oracle Storage Snapshot Optimization.

VMAX3 HYPERMAX OS: HYPERMAX OS is the industry's first open converged storage hypervisor and operating system. It enables VMAX3 arrays to embed storage infrastructure services like cloud access, data mobility, and data protection directly on the array. This delivers new levels of data center efficiency and consolidation by reducing footprint and energy requirements. In addition, HYPERMAX OS delivers the ability to perform real-time and non-disruptive data services.

VMAX3 Storage Group: A collection of host-addressable VMAX3 devices. A Storage Group can be used to (a) present devices to a host (LUN masking), (b) specify FAST Service Levels (SLOs) for a group of devices, and (c) manage device grouping for replication software such as SnapVX and SRDF.

Storage Groups can be cascaded, such as child storage groups used for setting FAST Service Level Objectives (SLOs) and a parent storage group used for LUN masking of all the database devices to the host.

VMAX3 TimeFinder SnapVX: TimeFinder SnapVX is the latest generation of TimeFinder local replication software, offering higher scale and a wider feature set while maintaining the ability to emulate legacy behavior.

VMAX3 TimeFinder SnapVX Snapshot vs. Clone: Previous generations of TimeFinder referred to Snapshot as a space-saving copy of the source device, where capacity was consumed only for data changed after the snapshot time. Clone, on the other hand, referred to a full copy of the source device. With VMAX3 arrays, TimeFinder SnapVX snapshots are always space-efficient. When they are linked to host-addressable target devices, the user can choose to keep the target devices space-efficient or perform a full copy.

VMAX3 Product Overview
The EMC VMAX3 family of storage arrays is built on the strategy of simple, intelligent, modular storage, and incorporates a Dynamic
Virtual Matrix interface that connects and shares resources across all VMAX3 engines, allowing the storage array to seamlessly grow
from an entry-level configuration into the world's largest storage array. It provides the highest levels of performance and availability
featuring new hardware and software capabilities.

The newest additions to the EMC VMAX3 family, VMAX 100K, 200K, and 400K, deliver the latest in Tier-1 scale-out multi-controller architecture with consolidation and efficiency for the enterprise. They offer dramatic increases in floor tile density, provide high-capacity flash and hard disk drives in dense enclosures for both 2.5" and 3.5" form factors, and support both block and file (eNAS) access.

The VMAX3 family of storage arrays comes pre-configured from factory to simplify deployment at customer sites and minimize time
to first I/O. Each array uses Virtual Provisioning to allow the user easy and quick storage provisioning. While VMAX3 can ship as an
all-flash array with the combination of EFD (Enterprise Flash Drives) and large persistent cache that accelerates both writes and
reads even further, it can also ship as hybrid, multi-tier storage that excels in providing FAST 1 (Fully Automated Storage Tiering) enabled performance management based on Service Level Objectives (SLOs). The new VMAX3 hardware architecture comes with more
CPU power, larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely
fast internal memory-to-memory and data-copy fabric.

1 - 8 redundant VMAX3 Engines
Up to 4 PB usable capacity
Up to 256 FC host ports
Up to 16 TB global memory (mirrored)
Up to 384 cores, 2.7 GHz Intel Xeon E5-2697-v2
Up to 5,760 drives
SSD flash drives: 200/400/800/1,600 GB, 2.5"/3.5"
10K RPM SAS drives: 300 GB - 1.2 TB, 2.5"/3.5"
15K RPM SAS drives: 300 GB, 2.5"/3.5"
7.2K RPM SAS drives: 2 TB/4 TB, 3.5"

Figure 1 VMAX3 storage array

Figure 1 shows possible VMAX3 components. Refer to EMC documentation and release notes to find the most up to date supported
components.

To learn more about VMAX3 and FAST best practices with Oracle databases, refer to the white paper: Deployment Best Practices for Oracle Database with VMAX3 Service Level Objective Management.

VMAX3 SnapVX Local Replication Overview


EMC TimeFinder SnapVX software delivers instant and storage-consistent point-in-time replicas of host devices that can be used for
purposes such as the creation of gold copies, patch testing, reporting and test/development environments, backup and recovery,
data warehouse refreshes, or any other process that requires parallel access to or preservation of the primary storage devices.

The replicated devices can contain the database data, Oracle home directories, data that is external to the database (e.g. image
files), message queues, and so on.

VMAX3 TimeFinder SnapVX combines the best aspects of previous TimeFinder offerings and adds new functionality, scalability, and
ease-of-use features.

1 Fully Automated Storage Tiering (FAST) allows VMAX3 storage to automatically and dynamically manage performance service level goals across the available storage resources to meet the application I/O demand, even as new data is added and access patterns continue to change over time.

Some of the main SnapVX capabilities related to native snapshots (emulation mode for legacy behavior is not covered) include:

With SnapVX, snapshots are natively targetless. They only relate to a group of source devices and cannot be otherwise
accessed directly. Instead, snapshots can be restored back to the source devices, or linked to another set of target devices
which can be made host-accessible.
Each source device can have up to 256 snapshots that can be linked to up to 1024 targets.
Snapshot operations are performed on a group of devices. This group is defined by using either a text file specifying the list
of devices, a device-group (DG), composite-group (CG), a storage group (SG), or simply specifying the devices. The
recommended way is to use a storage group.
Snapshots are taken using the establish command. When a snapshot is established, a snapshot name is provided, and an
optional expiration date. The snapshot time is saved with the snapshot and can be listed. Snapshots also get a generation
number (starting with 0). The generation is incremented with each new snapshot, even if the snapshot name remains the
same.
SnapVX provides the ability to create either space-efficient replicas or full-copy clones when linking snapshots to target
devices.
Use the -copy option to copy the full snapshot point-in-time data to the target devices during link. This will make the
target devices a stand-alone copy. If -copy option is not used, the target devices provide the exact snapshot point-in-time
data only until the link relationship is terminated, saving capacity and resources by providing space-efficient replicas.
SnapVX snapshots themselves are always space-efficient as they are simply a set of pointers pointing to the data source
when it is unmodified, or to the original version of the data when the source is modified. Multiple snapshots of the same
data utilize both storage and memory savings by pointing to the same location and consuming very little metadata.
SnapVX snapshots are always consistent. That means that snapshot creation always maintains write-order fidelity. This
allows easy creation of restartable database copies, or Oracle recoverable backup copies based on Oracle Storage Snapshot
Optimization. Snapshot operations such as establish and restore are also consistent, meaning that the operation either succeeds or fails for all the devices as a unit.
Linked-target devices cannot restore any changes directly to the source devices. Instead, a new snapshot can be taken
from the target devices and linked back to the original source devices. In this way, SnapVX allows an unlimited number of
cascaded snapshots.
FAST Service Levels apply to either the source devices, or to snapshot linked targets, but not to the snapshots themselves.
SnapVX snapshot data resides in the same Storage Resource Pool (SRP) as the source devices, and acquires an Optimized
FAST Service Level Objective (SLO) by default.
See Appendix III for a list of basic TimeFinder SnapVX operations.
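For illustration, a minimal SnapVX lifecycle using the storage group names from the test configuration later in this paper might look like the following sketch (Appendix III and the Solutions Enabler documentation cover the full option set):

# symsnapvx -sg FINDB_SG -name FINDB_Restart establish
# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Restart link
# symsnapvx -sg FINDB_SG -snapshot_name FINDB_Restart restore
# symsnapvx -sg FINDB_SG -snapshot_name FINDB_Restart terminate -restored

The establish command creates the snapshot, link presents its point-in-time data on target devices, restore copies the point-in-time data back to the source devices, and terminate -restored ends the restore session once it has completed (a plain terminate removes the snapshot itself when it is no longer needed).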

For more information on SnapVX, refer to the TechNote EMC VMAX3 Local Replication and the EMC Solutions Enabler CLI guides.

VMAX3 SRDF Remote Replication Overview


The EMC Symmetrix Remote Data Facility (SRDF) family of software is the gold standard for remote replications in mission critical
environments. Built for the industry leading high-end VMAX storage array, the SRDF family is trusted for disaster recovery and
business continuity. SRDF offers a variety of replication modes that can be combined in different topologies, including two, three, and
even four sites. SRDF and TimeFinder are closely integrated to offer a combined solution for local and remote replication.

Some of the main SRDF capabilities include:

SRDF modes of operation:


o SRDF Synchronous (SRDF/S) mode, which is used to create a solution with no data loss of committed transactions. The
target devices are an exact copy of the source devices (Production).

o SRDF Asynchronous (SRDF/A) mode which is used to create consistent replicas at unlimited distances without write
response time penalty to the application. The target devices are typically seconds to minutes behind the source devices
(Production), though consistent (restartable).

o SRDF Adaptive Copy (SRDF/ACP) mode which allows bulk transfers of data between source and target devices
without write-order fidelity and without write performance impact to source devices. SRDF/ACP is typically used for data
migrations as a Point-in-Time data transfer. It is also used to catch up after a long period in which replication was
suspended and many changes are owed to the remote site. SRDF/ACP can be set to continuously send changes in bulk
until the delta between source and target is reduced to a specified skew. At this time SRDF/S or SRDF/A mode can
resume.

SRDF groups:

o An SRDF group is a collection of matching devices in two VMAX3 storage arrays together with the SRDF ports that are
used to replicate these devices between the arrays. HYPERMAX OS allows up to 250 SRDF groups per SRDF director.
The source devices in the SRDF group are called R1 devices, and the target devices are called R2 devices.
o SRDF operations are performed on a group of devices contained in an SRDF group. This group is defined by using either
a text file specifying the list of devices, a device-group (DG), composite/consistency-group (CG), or a storage group
(SG). The recommended way is to use a storage group.
SRDF consistency:
o An SRDF Consistency Group is an SRDF group to which consistency was enabled.

o Consistency can be enabled for either Synchronous or Asynchronous replication mode.


o An SRDF consistency group always maintains write-order fidelity (also called: dependent-write consistency) to make
sure that the target devices always provide a restartable replica of the source application. Note: Even when consistency
is enabled the remote devices may not yet be consistent while SRDF state is sync-in-progress. This happens when SRDF
initial synchronization is taking place before it enters a consistent replication state.
o SRDF consistency also implies that if a single device in a consistency group can't replicate, then the whole group will stop replicating to preserve the consistency of the target devices.
o Multiple SRDF groups set in SRDF/A mode can be combined within a single array or across arrays. Such grouping of
consistency groups is called multi-session consistency (MSC). MSC maintains dependent-write consistent
replications across all the participating SRDF groups.
SRDF sessions:
o An SRDF session is created when replication starts between R1 and R2 devices in an SRDF group.
o An SRDF session can establish replication from R1 to R2 devices. Only the first establish requires a full copy. Any subsequent establish (for example, after an SRDF split or suspend) will be incremental, only passing changed
data.
o An SRDF session can restore the content of R2 devices back to R1. Restore will be incremental, moving only changed data across the links. TimeFinder and SRDF can restore in parallel, for example, to bring back a remote backup image.
o During replication the devices to which data is replicated are write-disabled (read-only).
o An SRDF session can be suspended, temporarily halting replication until a resume command is issued.
o An SRDF session can be split, which not only suspends the replication but also makes the R2 devices read-writable.
o An SRDF checkpoint command will not return the prompt until the content of the R1 devices has reached the R2
devices. This option helps in creating remote database backups when SRDF/A is used.
o An SRDF swap will change R1 and R2 personality, and the replication direction for the session.
o An SRDF failover makes the R2 devices writable. R1 devices, if still accessible, will change to Write_Disabled (read-
only). The SRDF session will be suspended and application operations will proceed on the R2 devices.
o An SRDF failback copies changed data from R2 devices back to R1, and makes the R1 devices writable. R2 devices are
made Write_Disabled (read-only).
o SRDF replication sessions can go in either direction (bi-directional) between the two arrays, where different SRDF
groups can replicate in different directions.
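As a simple illustration only (a sketch; the storage-group-based command form used in this paper appears in Appendix IV, the SRDF group number 10 and array ID 001 are placeholder examples, and exact option names may vary by Solutions Enabler version), an SRDF/A session for the FINDB_SG storage group could be managed as follows:

# symrdf -sid 001 -sg FINDB_SG -rdfg 10 set mode async
# symrdf -sid 001 -sg FINDB_SG -rdfg 10 establish
# symrdf -sid 001 -sg FINDB_SG -rdfg 10 query

Consistency protection for the group should then be enabled as described in the SRDF consistency bullets above, so that the R2 devices always provide a restartable replica.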

Appendix II, SRDF Modes and Topologies, and Appendix IV, Solutions Enabler CLI Commands for SRDF Management, provide
additional information.
For more information on SRDF, refer to the VMAX3 Family with HYPERMAX OS Product Guide.

ORACLE DATABASE REPLICATION WITH TIMEFINDER AND SRDF
CONSIDERATIONS

Number of Snapshots, Frequency, and Retention


VMAX3 TimeFinder SnapVX allows up to 256 snapshots per source device with minimal cache and capacity impact. SnapVX minimizes
the impact of Production host writes by using intelligent Redirect-on-Write and Asynchronous-Copy-on-First-Write. Both methods
allow Production host writes to complete without delay from background data copy, while the snapshot data preserves its Point-in-Time consistency even as Production data is modified.

If snapshots are used as part of a disaster protection strategy then the frequency of creating snapshots can be determined based on
the RTO and RPO needs.

For a restart solution where no roll-forward is planned, snapshots taken at very short intervals (seconds or minutes)
ensure that RPO is limited to that interval. For example, if a snapshot is taken every 30 seconds, there will be no more than
30 seconds of data loss if the database needs to be restored without recovery.
For a recovery solution, frequent snapshots ensure that RTO is short as less data will need recovery during roll forward of
logs to the current time. For example, if snapshots are taken every 30 seconds, roll forward of the data from the last
snapshot will be much faster than rolling forward from a nightly backup or hourly snapshots. Linked targets for the existing
snapshots can be further used to create additional Point-in-Time snapshots for repurposing or backups.
Because snapshots consume storage capacity based on the database change rate, old snapshots should be terminated when no
longer needed.
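As an illustrative sketch (the -ttl and -generation options shown here are based on the snapshot expiration and generation capabilities described in the SnapVX overview; verify the exact syntax for your Solutions Enabler version), a recurring snapshot can be created with an automatic time-to-live, and an older generation terminated explicitly when it is no longer needed:

# symsnapvx -sg FINDB_SG -name FINDB_Hourly establish -ttl -delta 2
# symsnapvx -sg FINDB_SG -snapshot_name FINDB_Hourly -generation 3 terminate

The first command requests that the snapshot expire automatically after two days; the second terminates a specific older generation of the recurring FINDB_Hourly snapshot.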

Snapshot Link Copy vs. No Copy Option


SnapVX snapshots cannot be directly accessed by a host. They can be either restored to the source devices or linked to up to 1024
sets of target devices. When linking any snapshot to target devices, SnapVX allows using the copy or no-copy option where no-copy
is the default. Targets created using either of these options can be presented to the mount host and all the Test Cases described in
this document can be executed on them. A no-copy link can be changed to copy on demand to create a full-copy linked target.

No-copy option: No-copy linked targets remain space efficient by sharing pointers with Production and the snapshot. Only changes
to either the linked targets or Production devices consume additional storage capacity to preserve the original data. However, reads
to the linked targets may affect Production performance as they share their storage via pointers to unmodified data. Another by-
product of no-copy linked targets is that they do not retain their data after they are unlinked. When the snapshot is unlinked, the
target devices no longer provide a coherent copy of the snapshot point-in-time data as before, though they can be relinked later.

Copy option: Alternatively, the linked-targets can be made a stand-alone copy of the source snapshot point-in-time data by using
copy option. When the background copy is complete, the linked targets will have their own copy of the point-in-time data of the
snapshot, and will not be sharing pointers with Production. If at that point the snapshot is unlinked, the target devices will maintain
their own coherent data, and if they are later relinked they will be incrementally refreshed from the snapshot (usually, after the
snapshot is refreshed).

No-copy linked targets are useful for storage capacity efficiency due to shared pointers. They can be used for short-term, lightweight access to avoid affecting Production's performance. When a longer retention period or a heavier workload on the linked targets is anticipated, it is better to perform a link-copy and have them use independent pointers to storage. Note that during the background copy the storage back-end utilization will increase, so the operator may want to schedule such copy operations during periods of low system utilization to avoid application performance overhead.
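For example (a sketch using the snapshot and storage group names from the test cases later in this paper), the same snapshot can be linked space-efficiently with the default no-copy behavior, or as a full copy by adding the -copy option:

# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Restart link
# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Restart link -copy

The first form keeps the targets space-efficient; the second starts a background copy that makes the linked targets a stand-alone copy of the snapshot point-in-time data.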

Oracle Database Restart vs. Recovery Solutions


TimeFinder SnapVX creates consistent snapshots by default, which are well suited for a database restart solution. A restartable
database replica can simply be opened, and it will perform crash or instance recovery just as if the server rebooted or the DBA
performed a shutdown abort. To achieve a restartable solution all data, control, and redo log files must participate in the consistent
snapshot. Archive logs are not required and are not used for a database restart.

Traditionally, hot backup mode is used to create a recoverable database solution. A recoverable database replica can perform database recovery to a desired point in time using archive and redo logs. Oracle Database 12c enhanced the ability to create a recoverable solution based on storage replication by leveraging storage consistency instead of hot-backup mode. This Oracle 12c feature is called Storage Snapshot Optimization and is demonstrated in Test Case 2.

For a snapshot that will be recovered on the Production host and therefore relies on the available logs and archive logs, the snapshot
can include just the data files. However, if the snapshot will be recovered on another host (such as when using linked targets) an

additional snapshot of the archive logs should be taken, following the best practice described in Test Case 2 for recoverable replicas.
Redo logs are not required in the snapshot.

It is possible to create a hybrid replica that can be used for either recovery or restart. This can be done by including all data, control,
and redo logs in the first replica, and archive logs in the second (following the best practice for recoverable database replica). In that
case, if a restartable solution is performed, the archive log replica will not be used. If a recoverable solution is used, the replica of
the online logs will not be restored (especially since we don't want to overwrite Production's redo logs if those are still available).
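A minimal sketch of creating such a hybrid replica, using the storage group names from the test configuration later in this paper, is simply two consistent snapshots taken in sequence, as detailed in Test Case 2:

# symsnapvx -sg FINDB_SG -name Hybrid_Backup establish
# symsnapvx -sg FINFRA_SG -name Hybrid_FRA establish

The first snapshot captures the data, control, and redo log files; the second, taken after the log switch and archiving steps described in Test Case 2, captures the archive logs.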

RMAN and VMAX3 Storage Replication


Oracle recovery manager (RMAN) is integrated with Oracle server for tighter control and integrity of Oracle database backups and
recovery. RMAN validates every block prior to the backup to ensure data integrity and provides several options for higher efficiency,
parallelism, detailed history, cataloging, and backup file retention policies. RMAN backups can be offloaded to an alternate host by
using a linked SnapVX target and mounting an Oracle database instance from this copy. As RMAN backups use DBID when storing
the backup catalog information, such backups can be restored directly to the production database. As the size of the database grows, VMAX3 array snapshot-based backups allow creating recoverable database copies and mounting them to an alternate host for RMAN backups. Alternatively, an Oracle database mounted from the VMAX3-based recoverable snapshots can be registered in the RMAN catalog database for proper tracking of such snapshot-based backup images for use in production database recovery with RMAN.

A typical Oracle database backup strategy involves running full database backups on a periodic basis, supplemented by frequent incremental backups that capture only the changes since the prior backup. To further improve the efficiency of incremental backups,
RMAN allows the use of block change tracking to maintain metadata for the changes in the backup. Once enabled, RMAN incremental
backups use block change tracking files (BCT) to quickly identify the changed blocks. The RMAN block change tracking mechanism
can also be deployed when offloading backups to alternate hosts using VMAX3 snapshots. RMAN based backups are described in Test
Case 3.

Command Execution and Host Users


Typically, an Oracle host user account is used to execute Oracle RMAN or SQL commands. A storage admin host user account is used
to perform storage management operations (such as TimeFinder SnapVX, or multipathing commands). A different host user account
may be used to setup and manage Data Domain systems. This type of role and security segregation is common and often helpful in
large organizations where each group manages their respective infrastructure with a high level of expertise.

There are different ways to address this:

Allow the database backup operator (commonly a DBA) controlled access to commands in Solutions Enabler and Data
Domain, leveraging VMAX Access Controls (ACLs).
Use SUDO, allowing the DBA to execute specific commands for the purpose of their backup (possibly in combination with
Access Controls).
It is beyond the scope of this paper to document how access controls are executed; however, it is important to mention that
Solutions Enabler can be installed for a non-root user as described in Test Case 9. Solutions Enabler has a robust set of Access
Controls that fit this situation. Similarly, for Oracle database replication or backup purposes, additional user accounts other than
sysadmin can be created to manage such processes appropriately. Oracle also allows setting up a backup user and granting it only the specific set of authorizations appropriate for its task.
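For example, a minimal sudoers sketch (assuming the default Solutions Enabler binary location of /usr/symcli/bin; adjust the path and the command list to the site's installation and security policy) that allows the Oracle user to run only the replication commands needed for backups might look like:

oracle ALL=(root) NOPASSWD: /usr/symcli/bin/symsnapvx, /usr/symcli/bin/symrdf

Combined with VMAX Access Controls, this limits the DBA to the specific storage operations required for the backup workflow.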

Oracle DBaaS SnapClone Integration on VMAX3


Oracle Database as a Service (DBaaS) provides self-service deployment of Oracle databases and resource pooling to cater to multi-
tenant environments. Oracle DBaaS SnapClone is a storage-agnostic, self-service approach to creating rapid, space-efficient clones, orchestrated through Oracle Enterprise Manager Cloud Control (EM12c). SnapClone is developed by Oracle and integrated with the VMAX3 SMI-S provider for management and storage provisioning of TEST and DEV copies from a TEST Master database using VMAX-based storage snapshots. It also offers storage ceilings, or capacity quotas per thin pool, to give DBAs control over database storage consumption. At this point SnapClone is available on the VMAX platform and development of VMAX3 support is
underway. Please refer to Oracle note: https://docs.oracle.com/cd/E24628_01/doc.121/e28814/cloud_db_portal.htm#EMCLO619 for
more information about SnapClone functionality.

Storage Layout and ASM Disk Group Considerations

Storage design principles for Oracle on VMAX3


The storage design principles for Oracle on VMAX3 are documented in the white paper: Deployment Best Practices for Oracle
Database with VMAX3 Service Level Objective Management. Two key points are described below:

ASM Disk Groups and Oracle files:
o A minimum of 3 sets of database devices should be defined for maximum flexibility: data/control files, redo logs, and
FRA (archive logs), each in its own Oracle ASM disk group (for example, +DATA, +REDO, +FRA).
o The separation of data, redo and archive log files allows backup and restore of only the appropriate file types at the
appropriate time. For example, Oracle backup procedures require the archive logs to be replicated at a later time than
the data files. Also, during restore, if the redo logs are still available on the Production host, we can restore only data
files without overwriting the Productions redo logs.
o If only a database restart solution is required, then the data and log files can be mixed and replicated together (although there may be other reasons to separate them, such as better performance management).
o When Oracle RAC is used it is recommended to use a separate ASM disk group for Grid infrastructure (for example,
+GRID). The +GRID ASM disk group should not contain user data. In this way, the cluster information is not part of a
database backup and if a recovery is performed on another clustered server, it can already have its +GRID ASM disk
group configured ahead of time.
Partition alignment on x86 based systems

o Oracle recommends creating at least one partition on each storage device on Linux and Windows systems. Due to legacy BIOS conventions, such partitions are rarely aligned by default. It is therefore strongly recommended to move the beginning of the first partition, using fdisk or parted, to an offset of 1 MB (2048 blocks), as shown in the example after this list.

o By having the beginning of the partition aligned, I/O to VMAX3 will be aligned with storage tracks and FAST extents, achieving the best performance.
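A sketch of creating such an aligned partition with parted is shown below; the device name /dev/mapper/ora_data1 is only an example and should be replaced with the actual multipath device:

# parted -s /dev/mapper/ora_data1 mklabel gpt mkpart primary 2048s 100%

This creates a single partition starting at sector 2048 (a 1 MB offset), keeping I/O aligned with VMAX3 tracks and FAST extents.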

Remote Replication Considerations


It is recommended for an SRDF/A solution to always use Consistency Enabled to ensure that if a single device cannot replicate, the
entire SRDF group will stop replicating, maintaining a consistent database replica on the target devices.

SRDF is a restart solution and since database crash recovery never uses archive logs there is no need to include FRA (archive logs) in
the SRDF replication. However, there are two reasons why they could be included:

If Flashback database functionality is required for the target. Replicating the flashback logs in the same consistency group
as the rest of the database allows the use of Flashback database on the target.
To allow offload of backup operations to the remote site as archive logs are required to create a stand-alone backup image
of the database. In this case, the archive logs can use a different SRDF group and mode, potentially leveraging SRDF/A,
even if data, control, and log files are replicated with SRDF/S.
It is always recommended to have a database replica available at the SRDF remote site as a gold copy for protection from rolling disasters. A rolling disaster is a situation in which a first interruption to normal replication activity is followed by a secondary database failure on the source, leaving the database without an immediately available valid replica. For example, if SRDF replication
was interrupted for any reason for a while (planned or unplanned) and changes were accumulated on the source, once the
synchronization resumes and until the target is synchronized (SRDF/S) or consistent (SRDF/A), the target is not a valid database
image. For that reason it is best practice before such resynchronization to take a TimeFinder gold copy replica at the target site. This
preserves the last valid image of the database as a safety measure from rolling disasters.
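For example (a sketch using the R2 storage group name from the test configuration and a placeholder remote array ID), such a gold copy can be taken on the remote array before resynchronization:

# symsnapvx -sid <remote SID> -sg FINDB_R2 -name FINDB_GoldCopy establish

The gold copy snapshot can be terminated once the SRDF session is fully synchronized (SRDF/S) or consistent (SRDF/A) again.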

When using Oracle RAC on the Production host, because RAC uses shared storage, replicating all the database components (data, log, and control files) allows the target database to be started as a clustered or single-instance database. Regardless of the choice, it is not
recommended to replicate the cluster layer (voting disks or cluster configuration devices) since these contain local hosts and subnets
information. It is best practice that if a cluster layer is required at the mount hosts, it should be configured ahead of time, based on
mount hostnames and subnets, and therefore be ready to bring up the database when needed.

ORACLE BACKUP AND DATA PROTECTION TEST CASES

Test Configuration
This section provides examples of Oracle database backup and data protection on VMAX3 arrays. Figure 2 depicts the overall test
configuration used to describe these Test Cases.

Figure 2 Oracle local and D/R test configuration

Test Overview

General test notes


The FINDB database was configured to run an industry standard OLTP workload with 70/30 read/write ratio and 8KB block
size, using Oracle database 12c and ASM. No special database tuning was done as the focus of the test was not on
achieving maximum performance, but rather on the comparative behavior of a standard database workload.
DATA and REDO storage groups (and ASM disk groups) were cascaded into a parent storage group (FINDB_SG) for ease of
provisioning, performance management, and data protection.
Storage groups were created on local and remote VMAX arrays for linking point-in-time snapshots for various Test Cases.
Production Oracle database SnapVX and SRDF operations were run on FINDB_SG.

Database configuration details


The following tables describe the test environment for the Test Cases. Table 1 shows the VMAX3 storage environment, Table 2 shows the host environment, and Table 3 shows the database storage configuration.

Table 1 Test storage environment


Configuration aspect Description

Storage array Single engine VMAX 200K

HYPERMAX OS 5977.596

Drive mix (including spares) 17 x EFDs - RAID5 (3+1)

66 x 15K HDD - RAID1


34 x 1TB 7K HDD - RAID6 (6+2)

Table 2 Test host environment
Configuration aspect Description

Oracle Oracle Grid and Database release 12.1.0.2

Linux Oracle Enterprise Linux 6

Multipathing Linux DM Multipath

Hosts 2 x Cisco C240, 96 GB memory

Volume Manager Oracle ASM

Table 3 Test database configuration


Production database: FINDB, size 1.5 TB

+DATA: 4 x 1 TB thin LUNs, Storage Group DATA_SG
+REDO: 4 x 150 GB thin LUNs, Storage Group REDO_SG
Production parent SG: FINDB_SG; Local linked target SG: FINDB_MNT; SRDF R2 SG: FINDB_R2; SRDF R2 SnapVX target SG: FINDB_R2_TGT

+FRA: 4 x 100 GB thin LUNs, Storage Group FINFRA_SG (no parent SG)
Local linked target SG: FINFRA_MNT; SRDF R2 SG: FINFRA_R2; SRDF R2 SnapVX target SG: FINFRA_R2_TGT

High level test cases:


1. Creating a local restartable database replica for database clones
2. Creating a local recoverable database replica for backup and recovery
3. Performing full or incremental RMAN backups from a SnapVX replica (including Block Change Tracking)

4. Performing database recovery of Production using a recoverable snapshot


5. Using SRDF/S and SRDF/A for database Disaster Recovery
6. Creating remote restartable copies

7. Creating remote recoverable database replicas


8. Parallel recovery from remote backup image
9. Leveraging self-service replications for DBAs

Test Case 1: Creating a local restartable database replica for database clones

Objectives:
The purpose of this Test Case is to demonstrate the use of SnapVX to create a local database restartable copy, also referred to as a
database clone. The database clone can be started on a Mount host for purposes such as logical error detection or creation of Test,
Development, and Reporting environments. These environments can be periodically refreshed from Production.

Note: A restartable database replica must include all database control, data, and redo log files, and therefore the cascaded storage
group FINDB_SG was used.

High level steps:


1. Create a snapshot of Production database containing all control, data, and redo log files.
2. Link the snapshot to target devices and present them to the Mount host.
3. Start the Oracle database on the Mount host.

Groups used:
Server Storage Group ASM Disk Group

Production Host FINDB_SG DATA, REDO

Mount Host FINDB_MNT DATA, REDO

Detailed steps:
On Production host:
Create a snapshot of the Production database containing all control, data, and redo log files.
# symsnapvx -sg FINDB_SG -name FINDB_Restart establish

On Mount host:
Complete pre-requisites:

o GRID infrastructure and Oracle binaries should be installed ahead of time on the mount host. If RAC is used on the
Mount host then it should be pre-configured so the ASM disk groups from the snapshots can simply be mounted into the
existing cluster. If RAC is not used on the Mount host see steps later to bring up Oracle High Availability Services
(HAS).
o The storage group FINDB_MNT contains the linked-target devices of Production's snapshot. It should be added to a
masking view to make the target devices accessible to the Mount host.

If refreshing an earlier snapshot, shut down the database instance that will be refreshed and dismount its ASM disk groups:

o Login to the database instance and shut it down.


SQL> shutdown immediate;

o Log in to the ASM instance and dismount the ASM disk groups.
SQL> alter diskgroup DATA dismount;
SQL> alter diskgroup REDO dismount;

Link the Production snapshot based on FINDB_SG to the target storage group FINDB_MNT. For the first link use the link
option. For all other links use the relink option.

# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Restart relink

If RAC is used on the Mount host then it should be already configured and running using a separate ASM disk group and
therefore +DATA and +REDO can simply be mounted. Skip to the next step. If RAC is not used, an ASM instance may not
be running yet. Bring it up following the procedure below before mounting +DATA and +REDO ASM disk groups.
o As the Grid infrastructure user (ASM instance user), start the Oracle high-availability services.

$ crsctl start has


CRS-4123: Oracle High Availability Services has been started.

o Log in to the ASM instance and update the ASM disk string before mounting the ASM disk groups.

$ sqlplus "/ as sysasm"


SQL> alter system set asm_diskstring='/dev/mapper/ora*p1';

Mount the ASM disk groups that now contain the snapshot point-in-time data.

SQL> alter diskgroup DATA mount;

SQL> alter diskgroup REDO mount;

Log in to the database instance and start up the database (do not perform database recovery). The database will perform
crash (or instance) recovery and will open.
SQL> startup

Note: Since there is no roll forward of transactions, the creation of database clones using SnapVX is very fast. The time it takes
Oracle to complete crash recovery and open the database depends on the amount of transactions in the redo log since the last checkpoint.

Test Case 2: Creating a local recoverable database replica for backup and recovery

Objectives:
The purpose of this Test Case is to demonstrate the use of SnapVX to create a local recoverable database replica. Such a database
replica can be used to recover Production, or can be mounted on a Mount host and used for RMAN backup and running reports.

Note: As long as the database replica is only mounted, or opened in read-only mode, it can be used to recover Production.

High level steps:


1. For Oracle databases prior to 12c, place the Production database in hot-backup mode.

2. Create a consistent snapshot of Production control, data, and redo log files (which are contained in +DATA and +REDO ASM disk
groups).

3. For Oracle databases prior to 12c, end hot-backup mode.

4. As the database user, switch logs, archive the current log, and save backup control file to +FRA ASM disk group.

5. If the replica is used to offload RMAN incremental backups to a Mount host then switch RMAN Block Change Tracking file
manually.

6. Create a snapshot of Production archive logs contained in +FRA ASM disk group.

7. Link both snapshots to target devices and present them to the Mount host.

8. Mount the ASM disk groups on the Mount host.

9. Mount the database instance on the Mount host (do not open it).

10. Optionally, catalog the database backup with RMAN.

Note: See Test Case 3 for details on how the snapshot can be used to perform RMAN backups.

Groups used:
Server Storage Group ASM Disk Group

Production Host FINDB_SG DATA, REDO

FINFRA_SG FRA

Mount Host FINDB_MNT DATA, REDO

FINFRA_MNT FRA

Detailed steps:
On Production host:
Pre-Oracle 12c, place the Production database in hot-backup mode.

SQL> alter database begin backup;

Create a snapshot of Production control, data, and redo log files contained in +DATA and +REDO ASM disk groups.
# symsnapvx -sg FINDB_SG -name Snapshot_Backup establish

Pre-Oracle 12c, end hot-backup mode.

SQL> alter database end backup;

As the database user, switch logs, archive the current log, and save backup control file to +FRA ASM disk group.

SQL> alter system switch logfile;


SQL> alter system archive log current;

SQL> alter database backup controlfile to '+FRA/CTRLFILE_BKUP' REUSE;

If the replica is not used to offload RMAN incremental backups to a Mount host (Test Case 3) then skip to the next step.
Otherwise, as a database user on Production, switch RMAN Block Change Tracking file manually.

SQL> execute dbms_backup_restore.bctswitch();

Note: When RMAN incremental backups are taken using RMAN Block Change Tracking (BCT), then RMAN switches the version of
the file with each backup automatically. However, when the RMAN backup is offloaded to a Mount host, RMAN will update the
BCT file on the Mount host. Oracle provides an API for such cases that switches the BCT file manually on Production after
incremental backups from the Mount host.

Note: By default Oracle only keeps 8 versions in the BCT file for incremental backups. That means that if more than 8
incremental backups are taken before another level 0 (full) backup takes place, RMAN will not be able to use the BCT file and will
revert to scanning the whole database. To increase the number of versions in the BCT file use the init.ora parameter
_bct_bitmaps_per_file (see Oracle support notes 1192652.1 and 1528510.1).
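As an illustration only (a sketch; the value 16 is an arbitrary example, and because this is a hidden parameter the change should be validated against the Oracle support notes above before it is applied), the parameter is set in the spfile and takes effect after an instance restart:

SQL> alter system set "_bct_bitmaps_per_file"=16 scope=spfile;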

Create a snapshot of Production archive logs contained in +FRA ASM disk group.
# symsnapvx -sg FINFRA_SG -name FRA_Backup establish

On Mount host:
Complete pre-requisites:
o GRID infrastructure and Oracle binaries should be installed ahead of time on the mount host. If RAC is used on the
Mount host then it should be pre-configured so the ASM disk groups from the snapshots can simply be mounted into the
existing cluster. If RAC is not used on the Mount host, see steps later to bring up Oracle High Availability Services
(HAS).
o The storage groups FINDB_MNT and FINFRA_MNT contain the linked-target devices of Production's snapshots. They
should be added to a masking view to make the target devices accessible to the Mount host.
If refreshing an earlier snapshot, shut down the database instance and dismount the ASM disk groups.
o Login to the database instance and shut it down.

SQL> shutdown immediate;

o Log in to the ASM instance and dismount the ASM disk groups that will be refreshed.
SQL> dismount diskgroup DATA;
SQL> dismount diskgroup REDO;
SQL> dismount diskgroup FRA;

Link the Production snapshots based on FINDB_SG and FINFRA_SG to the target storage groups FINDB_MNT and FINFRA_MNT, respectively. For the first link use the link option. For all other links use the relink option.

Note: By default SnapVX link uses no-copy mode. To have a stand-alone copy with all the data from the source, a copy mode
can be used by adding -copy to the command.

# symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name Snapshot_Backup relink


# symsnapvx -sg FINFRA_SG -lnsg FINFRA_MNT -snapshot_name FRA_Backup relink

If ASM instance is not running, follow the steps in the Test Case 1 to start the ASM instance and update the ASM disk string.
Mount the ASM disk groups that now contain the snapshot point-in-time data.

SQL> mount diskgroup DATA;


SQL> mount diskgroup REDO;
SQL> mount diskgroup FRA;

Log in to the database instance and mount the database (but do not open it with resetlogs).
SQL> startup mount

Optionally, catalog the backup data files (all in +DATA disk group) with RMAN.

RMAN> catalog start with '+DATA' noprompt;

Test Case 3: Performing FULL or incremental RMAN backup from a SnapVX replica

Objectives:
The purpose of this test case is to offload RMAN backups to a Mount host using SnapVX snapshot. The RMAN backup can be full or
incremental. In incremental backup, RMAN Block Change Tracking (BCT) is used from the Mount host.

High level steps:


1. If RMAN incremental backups are used then enable block change tracking on Production.
2. Perform Test Case 2 to create a recoverable replica of Production and mount it to the Mount host.
3. Perform RMAN full or incremental backup from the Mount host

Groups used:
Server Storage Group ASM Disk Group

Production Host FINDB_SG DATA, REDO

FINFRA_SG FRA

Mount Host FINDB_MNT DATA, REDO

FINFRA_MNT FRA

Detailed steps:
On Production host:
If RMAN incremental backups are used then enable block change tracking on Production. Make sure that the block change
tracking file is created in the +FRA ASM disk group. For example:

SQL> alter database enable block change tracking using file '+FRA/BCT/change_tracking.f' reuse;

Perform Test Case 2 to create a recoverable replica of Production and mount it to the Mount host.
o If RMAN incremental backups are used, make sure to switch the BCT file manually after the step that archives the current
log file, as described in Test Case 2. At the end of this step, ASM disk group +FRA will be mounted to the Mount host
with the Block Change Tracking file included, and Production's BCT file will start tracking block changes with a new
version.

On Mount host:
If RMAN incremental backups are not used, simply run an RMAN backup script and perform a full database backup.
o Example for creating full backup (simplest form):

RMAN> run
{
Backup database;
}

If RMAN incremental backups are used, perform a full backup (also called a level 0 backup) periodically, followed by level 1
backups. For example, run a weekly level 0 backup and daily level 1 backups. The DBA can choose between differential and
cumulative incremental backup strategies (refer to Oracle documentation for more details; a cumulative example follows the level 1 example below).

o Example for creating first full backup as part of incremental backup strategy:
RMAN> run
{
Backup incremental level 0 database;
}

o Example for creating an incremental backup:


RMAN> run
{
Backup incremental level 1 database;
}
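If a cumulative strategy is chosen instead (each level 1 backup contains all changes since the last level 0), the same block becomes:
RMAN> run
{
Backup incremental level 1 cumulative database;
}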

o For level 1 backups, verify that the BCT file was used:

SQL> select count(*) from v$backup_datafile where used_change_tracking='YES';

Test Case 4: Performing database recovery of Production using a recoverable snapshot

Objectives:
The purpose of this Test Case is to leverage a previously taken recoverable snapshot to perform a database recovery of the
Production database. The Test Case demonstrates full and point-in-time recovery. It also demonstrates how to leverage the Oracle
12c Storage Snapshot Optimization feature during database recovery.

High level steps:


1. Perform Test Case 2 to create a recoverable replica of Production; there is no need to mount it to the Mount host.
2. Restore the SnapVX snapshot of +DATA ASM disk group alone to Production. (Do not restore the +REDO ASM disk group to
avoid overwriting the current redo logs on Production if they survived).
3. Recover the Production database using the archive logs and optionally the current redo log.

Groups used:
Server            Storage Group        ASM Disk Group
Production Host   (Parent) FINDB_SG    (DATA, REDO)
                  (Child)  DATA_SG     DATA
                  (Child)  REDO_SG     REDO
                  FINFRA_SG            FRA

Detailed steps:
On Production host during backup:
Perform Test Case 2 to create a recoverable replica of Production. The linked target (or Mount host) will not be used in this
scenario, only the original snapshot of Production.
On Production host during restore:
Restore the SnapVX snapshot of +DATA ASM disk group alone to Production (do not restore the +REDO ASM disk group to
avoid overwriting the current redo logs on Production if they survived). To do so, use the child storage group: DATA_SG
instead of the cascaded storage group FINDB_SG that was used to create the original snapshot.
o If Production's +DATA ASM disk group is still mounted then, as the Grid user, use asmcmd or SQL to dismount it (repeat
on all nodes if RAC is used).
SQL> alter diskgroup DATA dismount;

o Restore the SnapVX snapshot of the +DATA ASM disk group alone.

# symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup restore

o It is not necessary to wait for the snapshot restore to complete; however, at some point after it completes, terminate
the snapshot-restore session as a best practice.
# symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup verify -restored
# symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup terminate -restored

SnapVX allows using the source devices as soon as the restore is initiated, even while the copy of the changed data takes
place in the background. There is no need to wait for the restore to complete. Once the restore
starts, the +DATA ASM disk group can be mounted to the ASM instance on the Production host.

o As the Grid user, use asmcmd or SQL to mount the DATA ASM disk group (repeat on all nodes if RAC is used).
SQL> alter diskgroup DATA mount;

Recover the Production database using the archive logs and optionally the current redo log.
o When performing full recovery (using the current redo log if still available), follow Oracle database recovery procedures.
For example:
SQL> recover automatic database;
SQL> alter database open;

Note: It might be necessary to point to the location of the online redo logs or archive logs if the recovery process didn't
locate them automatically (common in RAC implementations with multiple online or archive log locations). The goal is
to fully apply any necessary archive logs as well as the online logs.
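If needed, the archive log location can be set explicitly in SQL*Plus before running the recovery; a hedged example, where the ASM path below (including the database name FINDB) is an assumption for this sketch:
SQL> set logsource '+FRA/FINDB/ARCHIVELOG/'
SQL> recover automatic database;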

o When performing incomplete recovery while leveraging the Oracle 12c Storage Snapshot Optimization feature, provide
the time of the snapshot during the recovery. If the backup was taken using hot-backup mode, omit the snapshot
time '<time>' clause. An example of using Storage Snapshot Optimization:
SQL> alter session set NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS";
SQL> recover database until time '2015-03-21 12:00:00' snapshot time '2015-03-21 11:50:40';
SQL> alter database open RESETLOGS;

If the recovery process requires archive logs that are no longer available on the server, but exist in the +FRA snapshot, use
the snapshot to retrieve the missing archive logs.

Note: It is recommended that a new snapshot of +FRA be taken prior to restoring the old +FRA snapshot with the missing
archive logs. The new snapshot preserves any archive logs that currently exist on the host but were created after
the old +FRA snapshot was taken, and that would otherwise be lost when it is restored.

o Dismount +FRA ASM disk group.

SQL> alter diskgroup FRA dismount;

o Create a new snapshot of +FRA prior to restoring an old one.


# symsnapvx -sg FINFRA_SG -name FRA_Backup establish

o List the FRA snapshots to choose which snapshot generation to restore (generation 0 is always the latest for a given
snapshot-name)

# symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup list -detail

o Restore the appropriate +FRA snapshot and use its archive logs as necessary during the database recovery process.

# symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup restore -generation <gen_number>

o Mount +FRA ASM disk group.

SQL> alter diskgroup FRA mount;

o It is not necessary to wait for the snapshot restore to complete; however, at some point after it completes, terminate
the snapshot-restore session as a best practice.

# symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup verify -restored

# symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup terminate -restored

Test Case 5: Using SRDF/S and SRDF/A for database disaster recovery

Objectives:
The purpose of this test case is to leverage VMAX3 SRDF to create remote restartable copies and use them for disaster
recovery of the production database.

High level steps:


1. Set up SRDF between production and D/R sites.
2. Perform full establish and set up the appropriate replication mode (synchronous and/or asynchronous).
3. Start the application on the D/R site in the event of disaster at the production site.
4. Perform application restart.

Groups used:
SRDF Site   Server            Storage Group        ASM Disk Group
R1          Production Host   (Parent) FINDB_SG    (DATA, REDO)
                              (Child)  DATA_SG     DATA
                              (Child)  REDO_SG     REDO
R1          Production Host   FINFRA_SG            FRA
R2          D/R host          (Parent) FINDB_R2    (DATA, REDO)
                              (Child)  DATA_R2_SG  DATA
                              (Child)  REDO_R2_SG  REDO
R2          D/R host          FINFRA_R2            FRA

Detailed steps:
SRDF replication setup example (performed from local storage management host):
Create a dynamic SRDF group between production and D/R sites.

# symrdf addgrp -label FINDB -rdfg 20 -sid 535 -dir 1H:10 -remote_sid 536 -remote_dir 1E:7 -remote_rdfg 20

Pair SRDF devices between the Production and remote storage groups that include the database data, control, and redo log
files.
# symrdf -sg FINDB_SG -rdfg 20 createpair -type R1 -remote_sg FINDB_R2 -establish

Optionally pair SRDF devices for the FRA storage group if archive logs and flashback logs are also replicated to the remote site.

# symrdf -sg FINFRA_SG -rdfg 21 createpair -type R1 -remote_sg FINFRA_R2 -establish

Set the SRDF mode to synchronous or asynchronous. If the FRA ASM disk group includes only archive logs, SRDF/A can be
used for more efficient use of bandwidth. If FRA also includes flashback logs, it should be kept consistent with FINDB_SG
by using the same SRDF group and mode.
# symrdf -rdfg 20 set mode synchronous
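If the FRA group is replicated asynchronously instead (assuming RDF group 21, as paired above), the mode would be set along these lines:
# symrdf -sg FINFRA_SG -rdfg 21 set mode async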

Enable replication consistency when using SRDF asynchronous (SRDF/A).


# symrdf -rdfg 20 enable

SRDF replication failover example (performed from remote storage management host):
In the event of an outage at the Production site, the SRDF link fails and replication stops. The R2 devices are consistent but
not yet read-writable. Perform an SRDF failover to make the R2 devices write-enabled. The commands below describe a planned
failover. In the event of a disaster this is done automatically by SRDF.

[Failover DATA and REDO logs]


# symrdf -sid 535 -rdfg 20 failover

[Failover FRA logs]


# symrdf -sid 535 -rdfg 21 failover
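Before mounting the disk groups, the state of the SRDF pairs can be verified, for example with the listing command shown in Appendix IV:
# symrdf -sid 535 list -rdfg 20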

Mount ASM disk groups on the remote site.

SQL> alter diskgroup DATA mount;


SQL> alter diskgroup REDO mount;
SQL> alter diskgroup FRA mount;

Start an Oracle instance on the remote site.


SQL> startup mount

On production host for disaster recovery:
Split the SRDF link and initiate the restore operation. Repeat the steps for FRA if it is also being restored.

# symrdf -sid 535 -sg FINDB_SG -rdfg 20 split

# symrdf -sid 535 -sg FINDB_SG -rdfg 20 restore

When the restore operation is initiated, restart the production site application.

<Use SQLPLUS>
<For ASM instance>
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;
SQL> alter diskgroup FRA mount;

<For Oracle instance>


SQL> startup mount

When the restore operation completes, fail back SRDF to return the roles to the original configuration.

[Failback DATA and REDO logs]


# symrdf -sid 535 -rdfg 20 failback

[Failback FRA logs]


# symrdf -sid 535 -rdfg 21 failback

Test Case 6: Creating remote restartable copies

Objectives:
The purpose of this test case is to leverage VMAX3 SRDF and SnapVX to create remote restartable copies and use them for
production database disaster recovery. The gold copy snapshot created at the D/R site is linked to a separate target storage
group for D/R testing.

High level steps:


1. Use Test Case 5 to set up the remote site.
2. Use SnapVX to create snapshots using R2 devices.
3. Start an application for D/R testing using the snapshots.

Groups used:
SRDF Site   Server            Storage Group            ASM Disk Group
R1          Production Host   (Parent) FINDB_SG        (DATA, REDO)
                              (Child)  DATA_SG         DATA
                              (Child)  REDO_SG         REDO
R2          -                 (Parent) FINDB_R2        (DATA, REDO)
                              (Child)  DATA_SG_R2      DATA
                              (Child)  REDO_SG_R2      REDO
R2          D/R host          (Parent) FINDB_R2_TGT    (DATA, REDO)
                              (Child)  DATA_SG_R2TGT   DATA
                              (Child)  REDO_SG_R2TGT   REDO

Detailed steps:
On production host during normal operation:
Use Test Case 5 to set up the remote site.

On D/R host during normal operation:


Create periodic point-in-time snapshots from R2 devices for D/R testing.

<Create snapshot for DATA and REDO on R2 site to be used for periodic D/R testing>
# symsnapvx -sid 536 -sg FINDB_R2 -name FINDB_R2Gold -nop -v establish
On D/R host during normal operation for D/R testing:
Link the snapshot to the target storage group on R2 site. For subsequent D/R testing, relink can be used to refresh the
existing targets.

# symsnapvx -sid 536 -sg FINDB_R2 -lnsg FINDB_R2_TGT -snapshot_name FINDB_R2Gold link

Mount ASM disk groups.

SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;

Restart the Oracle database.

SQL> startup
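A quick sanity check after startup (optional), using a standard dynamic view:
SQL> select name, open_mode from v$database;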
On production host for disaster recovery using the restartable snapshot:
Split the SRDF link to prepare for the restore, then link the snapshot to the target storage group on the R2 site (relink can be
used if the targets were linked previously).

# symrdf -sid 535 -sg FINDB_SG -rdfg 20 split


# symsnapvx -sid 536 -sg FINDB_R2 -lnsg FINDB_R2_TGT -snapshot_name FINDB_R2Gold link

Follow the rest of the steps in Test Case 5 for production host disaster recovery to perform SRDF restore.

Test Case 7: Creating remote recoverable database replicas

Objectives:
The purpose of this test case is to leverage VMAX3 SRDF and SnapVX to create remote recoverable copies to use for remote backups
or for recovery of the production database from those remote backups. The snapshots generated this way also work with the Oracle 12c
Storage Snapshot Optimization feature. This test case links the snapshot created off the R2 devices to a separate storage group for further backup.

Test execution steps:


1. Use Test Case 5 to set up the remote site.

2. Pre-Oracle 12c, use database backup mode prior to snapshot of DATA and REDO.
3. Create control file copies and FRA snapshot.
4. Use SnapVX to create snapshots using R2 devices.

5. Mount snapshots and Oracle instance to prepare for backups as described in Test Case 2.

Groups used:
SRDF Site   Server            Storage Group            ASM Disk Group
R1          Production Host   (Parent) FINDB_SG        (DATA, REDO)
                              (Child)  DATA_SG         DATA
                              (Child)  REDO_SG         REDO
R1          Production Host   FINDB_FRA                FRA
R2          -                 (Parent) FINDB_R2        (DATA, REDO)
                              (Child)  DATA_SG_R2      DATA
                              (Child)  REDO_SG_R2      REDO
R2          -                 FINDB_FRA_R2             FRA
R2          D/R host          (Parent) FINDB_R2_TGT    (DATA, REDO)
                              (Child)  DATA_SG_R2TGT   DATA
                              (Child)  REDO_SG_R2TGT   REDO
R2          D/R host          FRA_R2_TGT               FRA

Detailed steps:
On production host:
Pre-Oracle 12c, put the database in hot backup mode.

SQL> alter database begin backup;


a) If SRDF asynchronous mode (SRDF/A) is used for SRDF replication of FINDB_SG, use the SRDF checkpoint command to
make sure that the remote target datafiles are also updated with the backup mode.
<Issue SRDF checkpoint command>
# symrdf -sid 535 -sg FINDB_SG checkpoint

Note: The SRDF checkpoint command will return control to the user only after the source device content reached the SRDF
target. This is useful for example when production is placed in hot backup mode before the remote clone is taken.

b) No special action is needed when using SRDF synchronous mode (SRDF/S) on FINDB_SG.
On D/R host:
Create a snapshot for DATA and REDO disk groups on the remote target. Name the snap to identify it as the backup image.
Every time -establish is used with the same snapshot name, the generation number will be incremented while keeping the
older generation as well. This can be avoided by terminating the snap prior to recreating it.
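To review the generations that have accumulated for this snapshot name (group and snapshot names as used in this test case):
# symsnapvx -sid 536 -sg FINDB_R2 -snapshot_name Snapshot_Backup_R2 list -detail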

<Create snapshot for DATA and REDO to be used for backup on remote VMAX3>
# symsnapvx -sid 536 -sg FINDB_R2 -name Snapshot_Backup_R2 -nop -v establish
On production host:
Pre-Oracle 12c, take the database out of backup mode.

<Use SQLPLUS to take database out of backup mode>


SQL> alter database end backup;

Perform a log switch and archive the current log. Also save a backup control file to +FRA/CTRLFILE_BKUP so that it is
available in the FRA snapshot, along with the archived logs, for use with RMAN backup.

<Use SQLPLUS>
SQL> alter system switch logfile;
SQL> alter system archive log current;
SQL> alter database backup controlfile to '+FRA/CTRLFILE_BKUP' REUSE;
a) If SRDF asynchronous mode (SRDF/A) is used for SRDF replication of FINDB_FRA, use the SRDF checkpoint command
to make sure that the remote FRA disk group is updated with the necessary archived logs generated during backup mode.
<Issue SRDF checkpoint command>
# symrdf -sid 535 -sg FINDB_FRA checkpoint

Note: The SRDF checkpoint command will return control to the user only after the source device content reached the SRDF
target devices (SRDF will wait two delta sets). For example, this is useful when production is placed in hot backup mode before
the remote clone is taken.

b) No special action is needed when using SRDF synchronous mode (SRDF/S) on FINDB_FRA.
On D/R host:
Create a snapshot of FRA disk group on the remote target.

<Create VMAX3 Snapshot for FRA disk group >


# symsnapvx -sid 536 -sg FINDB_FRA_R2 -name FINDB_FRABackup_R2 -nop -v establish

Link the snapshots Snapshot_Backup_R2 to FINDB_R2_TGT and FINDB_FRABackup_R2 to FRA_R2_TGT to continue with the
rest of the steps and provision the storage to the D/R host.
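For example (group and snapshot names as above):
# symsnapvx -sid 536 -sg FINDB_R2 -lnsg FINDB_R2_TGT -snapshot_name Snapshot_Backup_R2 link
# symsnapvx -sid 536 -sg FINDB_FRA_R2 -lnsg FRA_R2_TGT -snapshot_name FINDB_FRABackup_R2 link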
Use the backup operations described for the Mount host in Test Case 2 to continue with the backup.

Test Case 8: Parallel recovery from remote backup image

Objectives:
The purpose of this test case is to demonstrate parallel recovery from a remote backup image by initiating a restore of the remote
target from a remote snapshot and simultaneously starting SRDF restore. This test case is similar to Test Case 4 except that it uses a
remote recoverable copy.

Test scenario:
Use remote snapshot to restore SRDF R2 devices and initiate SRDF restore simultaneously.

Test execution steps:


1. Use Test Case 7 to create a remote database snapshot.
2. Use SnapVX to restore R2 devices.
3. Start SRDF restore.
4. Start production data recovery.

Groups used:
SRDF Site   Server            Storage Group         ASM Disk Group
R1          Production host   (Parent) FINDB_SG     (DATA, REDO)
                              (Child)  DATA_SG      DATA
                              (Child)  REDO_SG      REDO
R1          Production host   FINDB_FRA             FRA
R2          -                 (Parent) FINDB_R2     (DATA, REDO)
                              (Child)  DATA_SG_R2   DATA
                              (Child)  REDO_SG_R2   REDO
R2          -                 FINFRA_R2             FRA

Detailed steps:
On production host during normal operation:
Use Test Case 7 to create a recoverable image on the remote site.

Shut down the Production database and dismount ASM disk groups.
a) Shut down Oracle database.
SQL> shutdown immediate;
b) Dismount the DATA, REDO, and FRA ASM disk groups.
SQL> alter diskgroup DATA dismount;
SQL> alter diskgroup REDO dismount;
SQL> alter diskgroup FRA dismount;

Split SRDF groups.

# symrdf -sid 535 -sg FINDB_SG -rdfg 20 split


Restore the remote target snapshot to R2 devices.

<Restore Snap VX remote snapshot>


# symsnapvx -sid 536 -sg FINDB_R2 -snapshot_name Snapshot_Backup_R2 restore

<Verify the completion of the restore>


# symsnapvx -sid 536 -sg FINDB_R2 -snapshot_name Snapshot_Backup_R2 verify -summary

<Terminate once restore completes>


# symsnapvx -sid 536 -sg FINDB_R2 -snapshot_name Snapshot_Backup_R2 terminate -restored

Restore FRA disk group from target snap if needed for Production database recovery.

# symsnapvx -sid 536 -sg FINDB_FRA_R2 -snapshot_name FINDB_FRABackup_R2 restore

<Verify the completion of the restore>


# symsnapvx -sid 536 -sg FINDB_FRA_R2 -snapshot_name FINDB_FRABackup_R2 verify -summary

<Terminate once restore completes>


# symsnapvx -sid 536 -sg FINDB_FRA_R2 -snapshot_name FINDB_FRABackup_R2 terminate -restored

As soon as the restore from the snapshot is initiated, the SRDF restore can be started. SRDF will perform an incremental restore
from R2 to R1. The devices will show 'SyncInProg' to indicate that the restore is in progress; the 'Synchronized' state indicates
that the restore has completed.

# symrdf -sid 536 -sg FINDB_R2 -rdfg 20 restore

<Verify the completion of the restore>


# symrdf -sid 536 list

Mount the ASM disk groups on the R1 side.

Start up and recover the database (see the sketch below).
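A hedged sketch of these final steps, following the same patterns used in Test Case 4 (an open resetlogs may be required after an incomplete recovery):

<For ASM instance>
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;
SQL> alter diskgroup FRA mount;

<For Oracle instance>
SQL> startup mount
SQL> recover automatic database;
SQL> alter database open;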

Test Case 9: Leveraging Access Control List replications for storage snapshots

Objectives:
The purpose of this test case is to demonstrate self-service orchestration of Oracle database snapshots for DBAs. Symmetrix Access
Control Lists are used to grant appropriate privileges to the Oracle user to perform self-service database snapshots.

Test execution steps:


1. Configure the Symmetrix Access Control List as described in Appendix V to create Symmetrix access control groups and pools
with Oracle database devices. Grant BASE, BASECTRL and SNAP privileges to these entities.

2. Install Solutions Enabler as the non-root user of choice that will manage Oracle database backups.

3. Once Symmetrix Access Control is set up, Oracle DBAs can run snapshot operations as non-root user and all the test cases
described earlier in the white paper can be executed.

CONCLUSION
VMAX3 provides a platform for Oracle databases that is easy to provision, manage, and operate with the application performance
needs in mind. This paper provides guidance on the latest features of VMAX3 for local and remote data protection along with various
commonly deployed use cases including backup, D/R and repurposing as TEST/DEV. It also covers self-service database replication
that can be leveraged by database administrators to deploy additional copies under their control.

APPENDIX I - CONFIGURING ORACLE DATABASE STORAGE GROUPS FOR
REPLICATION
VMAX3 TimeFinder SnapVX and SRDF allow the use of VMAX3 Auto-Provisioning Groups (storage groups) both for provisioning storage to
Oracle database clusters and for creating write-order-consistent snapshots based on Enginuity Consistency Assist. Changes to Oracle
database provisioning made through these storage groups are reflected in any new snapshots created afterward, making it very easy to
manage database growth. This simplifies configuring and provisioning Oracle database storage for data protection, availability, and
recoverability. Cascading DATA and REDO into a parent SG allows creation of restartable copies of the database, while separating the archive
logs from this group allows independent management of data protection for the archived logs. In addition to the desired control over SLO
management, this layout allows easy deployment of Oracle 12c database recovery optimization from storage-based snapshots.

This appendix shows how to provision storage for Oracle DATA, REDO, and FRA disk groups to ensure database recovery SLAs are
achievable. Following this provisioning model along with the Test Cases described earlier provides proper deployment guidelines for
Oracle databases on VMAX3 to database and storage administrators.
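As an illustration, a cascaded storage group layout of this kind could be built with Solutions Enabler along the following lines; the SRP and SLO names are assumptions for this sketch, and devices are added to the child groups as appropriate for the environment:

# symsg -sid 535 create DATA_SG -srp SRP_1 -slo Diamond
# symsg -sid 535 create REDO_SG -srp SRP_1 -slo Diamond
# symsg -sid 535 create FINDB_SG
# symsg -sid 535 -sg FINDB_SG add sg DATA_SG,REDO_SG
# symsg -sid 535 create FINFRA_SG -srp SRP_1 -slo Silver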

Figure 1 shows an Oracle server provisioning storage using cascaded storage groups.

Figure 1 Oracle database cascaded storage groups

Creating snapshots for Oracle database storage groups


Figure 2 shows how to create a snapshot for the Oracle database storage group. A new named storage snapshot can be created, or an
existing snapshot can be refreshed, using this screen. It also allows setting a time to live for the snapshot for automatic expiration
after a user-provided number of days. Additional snapshots from the linked target can be created in the same way.

Figure 2 Unisphere create snapshot

Linking Oracle database snapshots for backup offload or repurposing


Figure 3 shows how to select an existing snapshot to link to a target storage group for backup offloading or repurposing. By default,
snapshots are linked in space-saving no-copy mode, in which the copy operation is deferred until the source tracks are written to. If a full
copy is desired, the Copy check box can be used. One snapshot can be linked to multiple target storage groups; if a relink to the same
target storage group is desired, select the existing target storage group option.

Figure 3 Unisphere creating linked target

Restoring Oracle database using storage snapshot


Figure 4 shows how to select an existing snapshot to restore a source storage group.

Figure 4 Unisphere restore from snapshot

Creating a cascaded snapshot from an existing snapshot
TimeFinder SnapVX allows creating snapshots from an existing snapshot, repurposing the same point-in-time copy for other uses.
Figure 5 shows how to use an existing snapshot to create additional point-in-time cascaded snapshots.

Figure 5 Unisphere creating cascaded snapshot

APPENDIX II SRDF MODES AND TOPOLOGIES

SRDF modes
SRDF modes define SRDF replication behavior. These basic modes can be combined to create different replication topologies
(described in this appendix).

SRDF Synchronous (SRDF/S) is used to create a no-data-loss solution for committed transactions.

o In SRDF/S each host write to an R1 device gets acknowledged only after the I/O was copied to the R2 storage system
persistent cache.
o SRDF/S makes sure that data on both the source and target devices is exactly the same.
o Host I/O latency will be affected by the distance between the storage arrays.
SRDF Asynchronous (SRDF/A) is used to create consistent replicas at unlimited distances, without write response time
penalty to the application.

o In SRDF/A each host write to an R1 device gets acknowledged immediately after it registered with the local VMAX3
persistent cache, preventing any write response time penalty to the application.
o Writes to the R1 devices are grouped into cycles. The capture cycle is the cycle that accepts new writes to R1 devices
while it is open. The Transmit cycle is a cycle that was closed for updates and its data is sent from the local to the
remote array. The receive cycle on the remote array receives the data from the transmit cycle. The destaged cycle
on the remote array destages the data to the R2 devices. SRDF software only destages full cycles to the R2 devices.

- The default time for capture cycle to remain open for writes is 30 seconds, though it can be set differently.

- In legacy mode (at least one of the arrays is not a VMAX3), cycle time can increase during peak workloads as
more data needs to be transferred over the links. After the peak, the cycle time will go back to its set time (default
of 30 seconds).

- In multi-cycle mode (both arrays are VMAX3), cycle time remains the same, though during peak workload more
than one cycle can be waiting on the R1 array to be transmitted.

- While the capture cycle is open, only the latest update to the same storage location will be sent to the R2, saving
bandwidth. This feature is called write-folding.

- Write-order fidelity is maintained between cycles. For example, two dependent I/Os will always be in the same
cycle, or the first of the I/Os will be in one cycle and the dependent I/O in the next.

- To limit VMAX3 cache usage by capture cycle during peak workload time and to avoid stopping replication due to
too many outstanding I/Os, VMAX3 offers a Delta Set Extension (DSE) pool which is local storage on the source
side that can help buffer outstanding data to target during peak times.

o The R2 target devices maintain a consistent replica of the R1 devices, though slightly behind, depending on how fast
the links can transmit the cycles and the cycle time. For example, when cycles are received every 30 seconds at the
remote storage array its data will be 15 seconds behind production (if transmit cycle was fully received), or 1 minute
behind (if transmit cycle was not fully received it will be discarded during failover to maintain R2 consistency).

o Consistency should always be enabled when protecting databases and applications with SRDF/A to make sure the R2
devices create a consistent restartable replica.
SRDF Adaptive Copy (SRDF/ACP) mode allows bulk transfers of data between source and target devices without
maintaining write-order fidelity and without write performance impact to source devices.
o While SRDF/ACP is not valid for ongoing consistent replication, it is a good way of transferring changed data in bulk
between source and target devices after replication has been suspended for an extended period of time and many
changes have accumulated on the source. ACP mode can be maintained until a certain skew of leftover changes to transmit is
reached. Once the amount of changed data has been reduced, the SRDF mode can be changed to Sync or Async as
appropriate.
o SRDF ACP is also good for migrations (also referred to as SRDF Data Mobility) as it allows a Point-in-Time data push
between source and target devices.

SRDF topologies
A two-site SRDF topology includes SRDF sessions in SRDF/S, SRDF/A, and/or SRDF/ACP between two storage arrays, where each
RDF group can be set in different mode and each array may contain R1 and R2 devices of different groups.

Three-site SRDF topologies include:

Concurrent SRDF: Concurrent SRDF is a three-site topology in which replication takes place from site A simultaneously to
site B and site C. Source R1 devices are replicated simultaneously to two different sets of R2 target devices on two different
remote arrays. For example, one SRDF group can be set as SRDF/S replicating to a near site and the other as SRDF/A,
replicating to a far site.
Cascaded SRDF: Cascaded SRDF is a three-site topology in which replication takes place from site A to site B, and from
there to site C. R1 devices in site A replicate to site B to a set of devices called R21. R21 devices behave as R2 to site A,
and as R1 to site C. Site C has the R2 devices. In this topology, site B holds the full capacity of the replicated data. If site A
fails and Production operations continue on site C, site B can become the DR site for site C.
SRDF/EDP: Extended Data Protection is an SRDF topology similar to cascaded SRDF: site A replicates to site B, and from
there to site C. However, in EDP, site B doesn't hold R21 devices with real capacity. Instead, this topology offers capacity
and cost savings because site B only uses cache to receive the replicated data from site A and transfer it to site C.

SRDF/STAR: SRDF/STAR offers an intelligent three-site topology similar to concurrent SRDF, where site A replicates
simultaneously to site B and site C. However, if site A failed, site B and site C can communicate to merge the changes and
resume DR. For example, if SRDF/STAR replications between site A and B use SRDF/S and replications between site A and C
use SRDF/A, if site A fails then site B can send the remaining changes to site C for a no-data-loss solution at any distance.
Site B can become a DR site for site C afterwards, until site A can come back.
SRDF/AR: SRDF Automatic Replication (AR) can be set as either a two or a three-site replication topology. It offers slower
replication when network bandwidth is limited and without performance overhead. In a two-site topology, AR uses
TimeFinder to create a PiT replica of production on site A, then uses SRDF to replicate it to site B, in which another
TimeFinder replica is created as a gold copy. Then the process repeats. In a three-site topology, site A replicates to Site B
using SRDF/S. In site B TimeFinder is used to create a replica which is then replicated to site C. In site C the gold copy
replica is created and the process repeats itself.
There are also 4-site topologies, though they are beyond the scope of this paper. For full details on SRDF modes, topologies, and
other details refer to the VMAX3 Family with HYPERMAX OS Product Guide.

APPENDIX III SOLUTIONS ENABLER CLI COMMANDS FOR TIMEFINDER
SNAPVX MANAGEMENT

Creation of periodic snaps


This command allows creation of periodic snapshots from a database storage group. All the devices associated with that storage group
will be included and a consistent point-in-time snapshot will be created. Similar syntax can also be used for linked-target
storage groups. A newer snapshot with the same name can be created; the generation number is incremented, with
generation 0 identifying the most recent snapshot.

# symsnapvx -sid 535 -sg FINDB_SG -name FINDB_Snap_1 establish [-ttl -delta <# of days>]
Execute Establish operation for Storage Group FINDB_SG (y/[n]) ? y

Establish operation execution is in progress for the storage group FINDB_SG. Please wait...
Polling for Establish.............................................Started.
Polling for Establish.............................................Done.
Polling for Activate..............................................Started.
Polling for Activate..............................................Done.

Establish operation successfully executed for the storage group FINDB_SG

Listing details of a snap


This command shows the details of a snapshot, including delta and non-shared tracks and the expiration time. The difference between
the delta tracks and the non-shared tracks gives the number of tracks shared by this snapshot. The command lists all the snapshots for the given
storage group.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 list -detail

Storage Group (SG) Name : FINDB_SG


SG's Symmetrix ID : 000196700535 (Microcode Version: 5977)
                                                                      Total
Sym                                   Flgs                            Deltas      Non-Shared
Dev   Snapshot Name             Gen   FLRG  Snapshot Timestamp        (Tracks)    (Tracks)    Expiration Date
----- ------------------------- ----  ----  ------------------------  ----------  ----------  ------------------------
000BC FINDB_Snap_1              0     ....  Tue Mar 31 10:12:51 2015  3           3           Wed Apr  1 10:12:51 2015

Flgs:
(F)ailed : X = Failed, . = No Failure

(L)ink : X = Link Exists, . = No Link Exists


(R)estore : X = Restore Active, . = No Restore Active
(G)CM : X = GCM, . = Non-GCM

Linking the snap to a storage group


This command shows how to link a snap to target storage group. By default, linking is done using no_copy mode.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 -lnsg FINDB_MNT link [-copy]

Execute Link operation for Storage Group FINDB_SG (y/[n]) ? y


Link operation execution is in progress for the storage group FINDB_SG. Please wait...
Polling for Link..................................................Started.

Polling for Link..................................................Done.


Link operation successfully executed for the storage group FINDB_SG

Verifying current state of the snap


This command provides a summary of the given snapshot. It shows the number of devices included in the snapshot and the
total number of tracks protected but not yet copied. By default, all snapshots are created in no-copy mode. When the link is created using
the -copy option, a 100% copy is indicated by a Total Remaining count of 0. The same command can be used to check
the remaining tracks to copy during a restore operation.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 verify -summary

Storage Group (SG) Name : FINDB_SG


Snapshot State Count
----------------------- ------
Established 8
EstablishInProg 0
NoSnapshot 0
Failed 0
----------------------- ------
Total 8
Track(s)
-----------
Total Remaining 38469660
All devices in the group 'FINDB_SG' are in 'Established' state.

Listing linked snaps


This command lists the named linked snap and specifies the status of the copy or defined operation, indicates whether modified
target tracks exist, and provides other useful information.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 list -linked

Storage Group (SG) Name : FINDB_SG


SG's Symmetrix ID : 000196700535 (Microcode Version: 5977)

-------------------------------------------------------------------------------
Sym Link Flgs
Dev Snapshot Name Gen Dev FCMD Snapshot Timestamp
----- -------------------------------- ---- ----- ---- ------------------------
000BC FINDB_Snap_1 0 00053 ..X. Tue Mar 31 10:12:52 2015
000BD FINDB_Snap_1 0 00054 .... Tue Mar 31 10:12:52 2015
000BE FINDB_Snap_1 0 00055 .... Tue Mar 31 10:12:52 2015
000BF FINDB_Snap_1 0 00056 .... Tue Mar 31 10:12:52 2015
000C0 FINDB_Snap_1 0 00057 ..XX Tue Mar 31 10:12:52 2015
000C1 FINDB_Snap_1 0 00058 ...X Tue Mar 31 10:12:52 2015
000C2 FINDB_Snap_1 0 00059 ...X Tue Mar 31 10:12:52 2015
000C3 FINDB_Snap_1 0 0005A ...X Tue Mar 31 10:12:52 2015

Flgs:

(F)ailed : F = Force Failed, X = Failed, . = No Failure


(C)opy : I = CopyInProg, C = Copied, D = Copied/Destaged, . = NoCopy Link

(M)odified : X = Modified Target Data, . = Not Modified


(D)efined : X = All Tracks Defined, . = Define in progress

Restore from a snap


This command shows how to restore a storage group from a point in time snap. Once the restore operation completes, the restore
session can be terminated while keeping the original point-in-time snap for subsequent use.

# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 restore


# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 verify -summary
# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 terminate -restored

APPENDIX IV SOLUTIONS ENABLER CLI COMMANDS FOR SRDF MANAGEMENT

Listing local and remote VMAX SRDF adapters


This command shows how to list existing SRDF directors, available ports, and dynamic SRDF groups. The command listed below
should be run on both local and remote VMAX to obtain the full listing needed for subsequent commands.

# symcfg -sid 535 list -ra all


Symmetrix ID: 000196700535 (Local)
SYMMETRIX RDF DIRECTORS
             Remote         Local     Remote          Status
Ident  Port  SymmID         RA Grp    RA Grp     Dir      Port
-----  ----  ------------   --------  --------   ------   ------
RF-1H  10    000197200056   1 (00)    1 (00)     Online   Online
       10    000197200056   10 (09)   10 (09)    Online   Online
RF-2H  10    000197200056   1 (00)    1 (00)     Online   Online
       10    000197200056   10 (09)   10 (09)    Online   Online

Creating dynamic SRDF groups


This command shows how to create a dynamic SRDF group. Based on the output generated from the prior command, a new dynamic
SRDF group can be created with proper director ports and group numbers.

# symrdf addgrp -label FINDB -rdfg 20 -sid 535 -dir 1H:10 -remote_sid 536 -remote_dir 1E:7 -remote_rdfg 20
Execute a Dynamic RDF Addgrp operation for group

'FINDB_1' on Symm: 000196700535 (y/[n]) ? y


Successfully Added Dynamic RDF Group 'FINDB_1' for Symm: 000196700535

Creating SRDF device pairs for a storage group


This command shows how to create SRDF device pairs between local and remote VMAX arrays, identify R1 and R2 devices, and start
syncing the tracks from R1 to R2 between those for remote protection.

# symrdf -sid 535 -sg FINDB_SG -rdfg 20 createpair -type R1 -remote_sg FINDB_R2 -establish
Execute an RDF 'Create Pair' operation for storage

group 'FINDB_SG' (y/[n]) ? y

An RDF 'Create Pair' operation execution is

in progress for storage group 'FINDB_SG'. Please wait...


Create RDF Pair in (0535,020)....................................Started.
Create RDF Pair in (0535,020)....................................Done.

Mark target device(s) in (0535,020) for full copy from source....Started.


Devices: 00BC-00C3 in (0535,020).................................Marked.
Mark target device(s) in (0535,020) for full copy from source....Done.

Merge track tables between source and target in (0535,020).......Started.


Devices: 00BC-00C3 in (0535,020).................................Merged.
Merge track tables between source and target in (0535,020).......Done.

Resume RDF link(s) for device(s) in (0535,020)...................Started.


Resume RDF link(s) for device(s) in (0535,020)...................Done.

The RDF 'Create Pair' operation successfully executed for


storage group 'FINDB_SG'.

Listing the status of SRDF group


This command shows how to get information about the existing SRDF group.

# symrdf -sid 535 list -rdfg 20


Symmetrix ID: 000196700535

Local Device View


---------------------------------------------------------------------------
STATUS MODES RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MDATE Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- ------- ------- --- ---- -------------
000BC 00068 R1:20 RW RW NR S..1. 0 3058067 RW RW Split
000BD 00069 R1:20 RW RW NR S..1. 0 3058068 RW RW Split

Total ------- -------


Track(s) 0 13141430

MB(s) 0 1642679
Legend for MODES:
M(ode of Operation) : A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy

: M = Mixed
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off

(Mirror) T(ype) : 1 = R1, 2 = R2

(Consistency) E(xempt): X = Enabled, . = Disabled, M = Mixed, - = N/A

Restoring SRDF group


This command shows how to restore an SRDF group from R2 to R1.

# symrdf -sid 536 -sg FINDB_R2 -rdfg 20 restore


Execute an RDF 'Incremental Restore' operation for storage
group 'FINDB_R2' (y/[n]) ? y

An RDF 'Incremental Restore' operation execution is


in progress for storage group 'FINDB_R2'. Please wait...

Write Disable device(s) in (0536,020) on SA at source (R1).......Done.


Write Disable device(s) in (0536,020) on RA at target (R2).......Done.

Suspend RDF link(s) for device(s) in (0536,020)..................Done.


Mark Copy Invalid Tracks in (0536,020)...........................Started.
Devices: 0068-006B in (0536,020).................................Marked.
Mark Copy Invalid Tracks in (0536,020)...........................Done.
Mark source device(s) in (0536,020) to refresh from target.......Started.
Devices: 00BC-00C0, 00C3-00C3 in (0536,020)......................Marked.
Mark source device(s) in (0536,020) to refresh from target.......Done.
Merge track tables between source and target in (0536,020).......Started.
Devices: 00BC-00C3 in (0536,020).................................Merged.
Merge track tables between source and target in (0536,020).......Done.
Resume RDF link(s) for device(s) in (0536,020)...................Started.
Resume RDF link(s) for device(s) in (0536,020)...................Done.
Read/Write Enable device(s) in (0536,020) on SA at source (R1)...Done.

The RDF 'Incremental Restore' operation successfully initiated for


storage group 'FINDB_R2'.

APPENDIX V - SOLUTIONS ENABLER ARRAY BASED ACCESS CONTROL
MANAGEMENT
VMAX Solutions Enabler array-based Access Control (ACL) allows DBA users to perform VMAX management from the database host
or a host under DBA control. By setting ACLs on database devices, DBAs can perform data protection operations with better control,
isolation, and security.

The components of Array Based Access Controls are:

Access Groups: Groups that contain the unique Host ID and descriptive Host Name of non-root users. The Host ID is
obtained by running the symacl -unique command on the appropriate host.
Access Pools: Pools that specify the set of devices for operations.

Access Control Entry (ACE): Entries in the Access Control Database that specify the permission level for the Access
Control Groups and on which pools they can operate.
This appendix illustrates ACL management using Unisphere, but array-based Access Control can also be performed using the
Solutions Enabler command line interface with the general syntax: symacl -sid <SymmID> -file <command_file> preview | prepare | commit. With this syntax,
preview verifies the syntax, prepare runs preview and checks whether execution is possible, and commit runs the prepare
checks and then executes the commands.

The Storage Admin PIN can be set in an environment variable, SYMCLI_ACCESS_PIN, or entered manually.

The high-level steps to set ACLs are:

1. Initialize the SYMACL database, identify and add an admin host with host access to run SYMACL commands.

2. Identify unique IDs of the UNIVMAX and database management hosts: use SYMCLI for this step.

3. Add UNIVMAX host to AdminGrp for ACL management: use SYMCLI for this step.

4. Create an access group for the database host.

5. Create an access pool for database devices.

6. Grant base management and data protection management privileges to the access group.

7. Install Solutions Enabler as non-Root Oracle user and run SnapVX operations from Oracle user on the devices granted access to
that user.

Identifying the unique ID of the Unisphere for VMAX and database management host
Run this SYMCLI command on both UNIVMAX and database management hosts to retrieve their unique IDs

# symacl -sid 535 -unique


The unique id for this host is: XXXXXXXX-XXXXXXXX-XXXXXXXX

Adding the UNIVMAX host to AdminGrp for ACL management


VMAX3 contains a pre-created AdminGrp which allows full VMAX administrative control of ACLs. Adding UNIVMAX host to this group
will allow management of access groups and pools from UNIVMAX.

Note: On VMAX3, the SymmACL database comes with this group pre-created, but the database must first be initialized if that has not been done already. For
Unisphere access, the VMAX SymmWin procedure wizard must be used to add the Host_Based access ID of the Unisphere host to the
SymmACL database. Once Unisphere is added to the SymmACL database, the following command can also be run from the Unisphere
graphical user interface instead of using the Solutions Enabler command line from a host with granted access. A PIN can also be set up
using the SymmWin procedure wizard.

<Set the Environment variable to specify the SYMCLI access PIN>


# export SYMCLI_ACCESS_PIN=<XXXX>

<Add the host access ID to the AdminGrp>


# symacl -sid 535 commit << DELIM
add host accid <XXXXXXXX-XXXXXXXX-XXXXXXXX> name <UNIVMAX Host Name> to accgroup AdminGrp;

DELIM
Command file: (stdin)

PREVIEW............................................................................................Started.
PREVIEW............................................................................................Done.
PREPARE............................................................................................Started.

Adding Host access id DSIB1134 to group AdminGrp................................................Done.


PREPARE............................................................................................Done.
Starting COMMIT....................................................................................Done.

Authenticate UNIVMAX host for ACL management


Enter the PIN to authenticate UNIVMAX for SYMACL management.

Figure 6 Unisphere enabling access control using PIN

Create access group for database host
Create a database host management access group using the database management host access ID.

Figure 7 Unisphere creating access group using host unique ID

Create database device access pool
Create a database device pool by selecting the devices in FINDB_SG.

Figure 8 Unisphere creating access pool

Grant base and snap privileges
Create an Access Control Entry (ACE) for the database access group and the database access pool created in the earlier steps. Assign BASE,
BASECTRL, and SNAP privileges to the access group for that access pool to allow running snapshot operations.
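The same grant can also be expressed through the symacl command file interface described earlier in this appendix; a hedged sketch, where the access group name (OraDBAGrp), access pool name (OraDBPool), host name, and device range are assumptions for the example:

# symacl -sid 535 commit << DELIM
create accgroup OraDBAGrp;
add host accid <XXXXXXXX-XXXXXXXX-XXXXXXXX> name oradbhost to accgroup OraDBAGrp;
create accpool OraDBPool;
add dev 0BC:0C3 to accpool OraDBPool;
grant access=BASE to accgroup OraDBAGrp for accpool OraDBPool;
grant access=BASECTRL to accgroup OraDBAGrp for accpool OraDBPool;
grant access=SNAP to accgroup OraDBAGrp for accpool OraDBPool;
DELIM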

Figure 9 Unisphere grant base and snap privileges to the group

Install Solutions Enabler on database hosts using non-root user


Typically Solutions Enabler is installed as the root user; however, with VMAX3, Solutions Enabler can also be installed so that certain
daemons run as a non-root user, and management operations can be run from that account. Once ACLs are set as described above, non-root
users can run snapshot operations on the access groups for their host. The following examples illustrate running SnapVX operations from the
Oracle user account.

On the Application Management host, install Solutions Enabler for the Oracle user. The installation has to be performed as root user,
though the option for allowing a non-root user is part of the installation.

<Install Solutions Enabler with the non-root daemon option>


# ./se8020_install.sh install
...
Install root directory of previous Installation : /home/oracle/SE

Working root directory [/usr/emc] : /home/oracle/SE


...
Do you want to run these daemons as a non-root user? [N]:Y

Please enter the user name : oracle


...
#-----------------------------------------------------------------------------

# The following HAS BEEN INSTALLED in /home/oracle/SE via the rpm utility.

#-----------------------------------------------------------------------------
ITEM PRODUCT VERSION

01 EMC Solutions Enabler V8.0.2.0


RT KIT
#-----------------------------------------------------------------------------

To allow the Oracle user to run symcfg discover and list commands, permission is required to use the Solutions Enabler
daemons. Update the daemon_users file.

# cd /var/symapi/config
# vi daemon_users
# Add entry to allow user access to base daemon
oracle storapid
oracle storgnsd

Test Oracle user access.

# su oracle
# symcfg disc
# sympd list -gb

Below is a test showing snapshot operations run as Oracle user.

<Running SYMSNAPVX command on a storage group with devices in access group>


# symsnapvx -sid 535 -sg FINDB_SG -name DSIB0122_FINDB_Oracle establish
Execute Establish operation for Storage Group FINDB_SG (y/[n]) ? y
Establish operation execution is in progress for the storage group FINDB_SG. Please wait...
Polling for Establish.............................................Started.
Polling for Establish.............................................Done.
Polling for Activate..............................................Started.
Polling for Activate..............................................Done.
Establish operation successfully executed for the storage group FINDB_SG

<Running SYMSNAPVX command on a storage group with devices NOT in access group>
# symsnapvx -sid 535 -sg FINDB_FRA -name FINDB_FRA_SNAP establish
Execute Establish operation for Storage Group FINDB_FRA (y/[n]) ? y
Establish operation execution is in progress for the storage group FINDB_FRA. Please wait...
Symmetrix access control denied the request

REFERENCES
EMC VMAX3 Family with HYPERMAX OS Product Guide
Unisphere for VMAX3 Documentation Set
EMC VMAX3 Local Replication TechNote
Deployment Best Practices for Oracle Database with VMAX3 SLO management

