ABSTRACT
With the introduction of the third generation VMAX disk arrays and local replication
and enhanced remote replication capabilities, Oracle database administrators have a
new way to protect their Oracle databases effectively and efficiently with
unprecedented ease of use and management.
June, 2015
EMC WHITE PAPER
To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local
representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
TABLE OF CONTENTS
CONCLUSION .......................................................................................... 27
APPENDIX I - CONFIGURING ORACLE DATABASE STORAGE GROUPS FOR
REPLICATION ......................................................................................... 28
Creating snapshots for Oracle database storage groups ........................................ 28
Linking Oracle database snapshots for backup offload or repurposing ..................... 29
Restoring Oracle database using storage snapshot ............................................... 30
Creating a cascaded snapshot from an existing snapshot ...................................... 31
REFERENCES ........................................................................................... 43
EXECUTIVE SUMMARY
Many applications are required to be fully operational 24x7x365 and the data for these applications continues to grow. At the same
time, their RPO and RTO requirements are becoming more stringent. As a result, there is a large gap between the requirements for
fast and efficient protection and replication, and the ability to meet these requirements without overhead or operations disruption.
DBAs require the ability to create local and remote database replicas in seconds without disruption of Production host CPU or I/O
activity for purposes such as testing patches, running reports, creating development sandbox environments, publishing data to
analytic systems, offloading backups from Production, developing Disaster Recovery (DR) strategy, and more.
Traditional solutions rely on host based replication. The disadvantages of this solution are the additional host I/O and CPU cycles
consumed by the need to create such replicas, the complexity of monitoring and maintaining them, especially across multiple
servers, and the elongated time and complexity associated with their recovery. EMC and Oracle have made the creation of such
replicas more efficient and integrated: easier to create, faster to restore, and rich in features.
TimeFinder local replication values to Oracle include:
The ability to create instant and consistent database replicas for repurposing, across a single database, multiple databases
(including external data or message queues), and across multiple VMAX3 storage systems.
The ability to perform TimeFinder replica creation or restore operations in seconds, regardless of the database size. The
target devices (in the case of a replica) or source devices (in the case of a restore) are available immediately with their data,
even as incremental data changes are copied in the background.
The ability to create valid backup images without hot-backup mode that can be taken, and more importantly restored, in
seconds regardless of database size, leveraging the Oracle 12c Storage Snapshot Optimization feature. Prior to
Oracle 12c, such valid backup images were achieved by using hot backup mode for only a few seconds.
The ability to utilize the RMAN Block Change Tracking (BCT) file from a TimeFinder replica when offloading backups from
Production to a backup host. Recovery operations can take place on either the Backup or Production host.
VMAX3 TimeFinder SnapVX snapshots are consistent by default. Each source device can have up to 256 space-efficient
snapshots that can be linked to up to 1024 target devices, maintaining incremental refresh relationships. The linked targets
can remain space-efficient, or a background copy of all the data can take place, making it a full copy. In this way, SnapVX
allows an unlimited number of cascaded snapshots.
With Oracle 12c Cloud Control DBaaS Snap Clone, DBAs can perform storage provisioning and replications directly from
Enterprise Manager (TimeFinder is called via APIs).
With Oracle VM 3.3 Storage Connect, storage devices can be provisioned to VMs, or VMs with their physical and virtual
storage can be cloned (TimeFinder is called via APIs).
SRDF remote replication values to Oracle include:
Synchronous and Asynchronous consistent replication of a single or multiple databases, including external data or message
queues, across multiple VMAX3 storage array systems if necessary. The point of consistency is created before a disaster
strikes, rather than taking hours to achieve afterwards when using replications that are not consistent across applications
and databases.
Disaster Recovery (DR) protection for two or three sites, including cascaded or triangular relationships, where SRDF always
maintains incremental updates between source and target devices.
SRDF and TimeFinder are integrated. For example, while SRDF replicates the data remotely, TimeFinder can be used on the
remote site to create writable snapshots or backup images of the database. This allows the DBAs to perform remote backup
operations or create remote database copies.
SRDF and TimeFinder can work in parallel to restore remote backups. For example, while a remote TimeFinder backup is
being restored to the remote SRDF devices, in parallel SRDF will copy the restored data to the local site. This parallel restore
capability provides DBAs with faster access to remote backups and shortens recovery times.
VMAX replication and Silent Data Corruption:
Both VMAX and VMAX3 arrays protect all user data with T10-DIF from the moment it enters the storage until it is retrieved by the
host, including for local and remote replication.
With Oracle ASMlib (Oracle 10g and above), Oracle and EMC integrated the T10-DIF standard for end-to-end data integrity
validation. Each read or write between Oracle and VMAX storage is validated. Starting with Oracle 12c, Oracle ASM Filter Driver
(AFD) provides a wider host OS support for T10-DIF with VMAX, and includes other features such as the ability to reclaim deleted
ASM files in VMAX storage, protection from non-Oracle writes to Oracle data, and more. Internally, both TimeFinder and SRDF use
T10-DIF to validate all replicated data.
Together, Oracle ASM and VMAX3 offer the most protected and robust platform for database storage and replications that maintains
data integrity for each database read or write as well as its replications. By utilizing VMAX3 local and remote replications, DBAs gain
the ability to protect and repurpose their databases quickly and easily, without the time and complexity associated with host-based
replications, or the increasing RTOs associated with growing database sizes.
AUDIENCE
This white paper is intended for database and system administrators, storage administrators, and system architects who are
responsible for implementing, managing, and maintaining Oracle database backup and replication on VMAX3 storage arrays. It is
assumed that readers have some familiarity with Oracle and the EMC VMAX3 family of storage arrays, and are interested in achieving
higher database availability, performance, and ease of storage management.
TERMINOLOGY
The following table explains important terms used in this paper.
Term Description
Restartable vs. Recoverable database
Oracle distinguishes between a restartable and a recoverable state of the database. A restartable
state requires all log, data, and control files to be consistent (see Storage consistent replication).
Oracle can then simply be started, performing automatic crash/instance recovery without user
intervention. A recoverable state, on the other hand, requires database media recovery, rolling
forward the transaction logs to achieve data consistency before the database can be opened.
RTO and RPO
Recovery Time Objective (RTO) refers to the time it takes to recover a database after a failure.
Recovery Point Objective (RPO) refers to the amount of data loss after the recovery completes,
where RPO=0 means no data loss of committed transactions.
Storage consistent replication
Storage consistent replication refers to storage replications (local or remote) in which the target
devices maintain write-order fidelity. That means that for any two dependent I/Os that the
application issues, such as a log write followed by a data update, either both will be included in the
replica or only the first. To the Oracle database, the snapshot data appears as it would after a host
crash or Oracle shutdown abort, a state from which Oracle can simply recover by performing
crash/instance recovery when starting.
Starting with Oracle 11g, Oracle allows database recovery from storage consistent replications
without the use of hot-backup mode (details in Oracle support note 604683.1). The feature has
become more integrated with Oracle 12c and is called Oracle Storage Snapshot Optimization.
VMAX3 HYPERMAX OS
HYPERMAX OS is the industry's first open converged storage hypervisor and operating system. It
enables VMAX3 arrays to embed storage infrastructure services like cloud access, data mobility,
and data protection directly on the array. This delivers new levels of data center efficiency and
consolidation by reducing footprint and energy requirements. In addition, HYPERMAX OS delivers
the ability to perform real-time and non-disruptive data services.
VMAX3 Storage Group
A collection of host-addressable VMAX3 devices. A Storage Group can be used to (a) present
devices to a host (LUN masking), (b) specify FAST Service Levels (SLOs) for a group of devices, and
(c) manage device grouping for replication software such as SnapVX and SRDF.
Storage Groups can be cascaded, with child storage groups used for setting FAST Service
Level Objectives (SLOs) and a parent storage group used for LUN masking of all the database
devices to the host.
VMAX3 TimeFinder SnapVX
TimeFinder SnapVX is the latest generation of TimeFinder local replication software, offering
higher scale and a wider feature set while maintaining the ability to emulate legacy behavior.
VMAX3 TimeFinder SnapVX Snapshot vs. Clone
Previous generations of TimeFinder referred to a Snapshot as a space-saving copy of the source
device, where capacity was consumed only for data changed after the snapshot time. A Clone, on
the other hand, referred to a full copy of the source device. With VMAX3 arrays, TimeFinder SnapVX
snapshots are always space-efficient. When they are linked to host-addressable target devices,
the user can choose to keep the target devices space-efficient or perform a full copy.
VMAX3 Product Overview
The EMC VMAX3 family of storage arrays is built on the strategy of simple, intelligent, modular storage, and incorporates a Dynamic
Virtual Matrix interface that connects and shares resources across all VMAX3 engines, allowing the storage array to seamlessly grow
from an entry-level configuration into the world's largest storage array. It provides the highest levels of performance and availability
featuring new hardware and software capabilities.
The newest additions to the EMC VMAX3 family, VMAX 100K, 200K and 400K, deliver the latest in Tier-1 scale-out multi-controller
architecture with consolidation and efficiency for the enterprise. They offer dramatic increases in floor tile density, high capacity flash
and hard disk drives in dense enclosures for both 2.5" and 3.5" drives, and supports both block and file (eNAS).
The VMAX3 family of storage arrays comes pre-configured from factory to simplify deployment at customer sites and minimize time
to first I/O. Each array uses Virtual Provisioning to allow the user easy and quick storage provisioning. While VMAX3 can ship as an
all-flash array with the combination of EFD (Enterprise Flash Drives) and large persistent cache that accelerates both writes and
reads even further, it can also ship as hybrid, multi-tier storage that excels in providing FAST 1 (Fully Automated Storage Tiering)
enabled performance management based on Service Level Objectives (SLO). The new VMAX3 hardware architecture comes with more
CPU power, larger persistent cache, and a new Dynamic Virtual Matrix dual InfiniBand fabric interconnect that creates an extremely
fast internal memory-to-memory and data-copy fabric.
Figure 1 shows possible VMAX3 components. Refer to EMC documentation and release notes to find the most up to date supported
components.
To learn more about VMAX3 and FAST best practices with Oracle databases, refer to the white paper: Deployment best practice for
Oracle database with VMAX3 Service Level Object Management.
The replicated devices can contain the database data, Oracle home directories, data that is external to the database (e.g. image
files), message queues, and so on.
VMAX3 TimeFinder SnapVX combines the best aspects of previous TimeFinder offerings and adds new functionality, scalability, and
ease-of-use features.
1. Fully Automated Storage Tiering (FAST) allows VMAX3 storage to automatically and dynamically manage performance service level goals across the
available storage resources to meet the application I/O demand, even as new data is added, and access patterns continue to change over time.
Some of the main SnapVX capabilities related to native snapshots (emulation mode for legacy behavior is not covered) include:
With SnapVX, snapshots are natively targetless. They only relate to a group of source devices and cannot be otherwise
accessed directly. Instead, snapshots can be restored back to the source devices, or linked to another set of target devices
which can be made host-accessible.
Each source device can have up to 256 snapshots that can be linked to up to 1024 targets.
Snapshot operations are performed on a group of devices. This group is defined by using either a text file specifying the list
of devices, a device-group (DG), composite-group (CG), a storage group (SG), or simply specifying the devices. The
recommended way is to use a storage group.
Snapshots are taken using the establish command. When a snapshot is established, a snapshot name is provided, and an
optional expiration date. The snapshot time is saved with the snapshot and can be listed. Snapshots also get a generation
number (starting with 0). The generation is incremented with each new snapshot, even if the snapshot name remains the
same.
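These operations can be sketched as follows, assuming Solutions Enabler 8.x syntax and the FINDB_SG storage group used later in this paper; the snapshot name and one-day expiration are illustrative:

```shell
# Establish a named snapshot with an optional time-to-live of 1 day
symsnapvx -sg FINDB_SG -name FINDB_Hourly establish -ttl -delta 1

# List the snapshots with their timestamps and generation numbers
symsnapvx -sg FINDB_SG list -v
```

Re-running the establish command with the same name creates a new generation of the snapshot.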
SnapVX provides the ability to create either space-efficient replicas or full-copy clones when linking snapshots to target
devices.
Use the -copy option to copy the full snapshot point-in-time data to the target devices during the link. This makes the
target devices a stand-alone copy. If the -copy option is not used, the target devices provide the exact snapshot point-in-time
data only until the link relationship is terminated, saving capacity and resources by providing space-efficient replicas.
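The two link modes can be sketched as follows, assuming Solutions Enabler 8.x syntax and the hypothetical snapshot name from the previous example, with target storage group FINDB_MNT:

```shell
# Space-efficient linked targets (pointers shared with the snapshot)
symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Hourly link

# Stand-alone full copy on the target devices
symsnapvx -sg FINDB_SG -lnsg FINDB_MNT -snapshot_name FINDB_Hourly link -copy
```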
SnapVX snapshots themselves are always space-efficient as they are simply a set of pointers pointing to the data source
when it is unmodified, or to the original version of the data when the source is modified. Multiple snapshots of the same
data utilize both storage and memory savings by pointing to the same location and consuming very little metadata.
SnapVX snapshots are always consistent. That means that snapshot creation always maintains write-order fidelity. This
allows easy creation of restartable database copies, or Oracle recoverable backup copies based on Oracle Storage Snapshot
Optimization. Snapshot operations such as establish and restore are also consistent: the operation either
succeeds or fails for all the devices as a unit.
Linked-target devices cannot restore any changes directly to the source devices. Instead, a new snapshot can be taken
from the target devices and linked back to the original source devices. In this way, SnapVX allows an unlimited number of
cascaded snapshots.
FAST Service Levels apply to either the source devices, or to snapshot linked targets, but not to the snapshots themselves.
SnapVX snapshot data resides in the same Storage Resource Pool (SRP) as the source devices, and acquires an Optimized
FAST Service Level Objective (SLO) by default.
See Appendix III for a list of basic TimeFinder SnapVX operations.
For more information on SnapVX, refer to the TechNote: EMC VMAX3™ Local Replication and the EMC Solutions Enabler CLI Guides.
SRDF modes of operation include:
o SRDF Synchronous (SRDF/S) mode, which maintains a zero-data-loss (RPO=0) replica at limited distances. Each write is
acknowledged to the host only after it is secured in both the local and the remote array.
o SRDF Asynchronous (SRDF/A) mode which is used to create consistent replicas at unlimited distances without write
response time penalty to the application. The target devices are typically seconds to minutes behind the source devices
(Production), though consistent (restartable).
o SRDF Adaptive Copy (SRDF/ACP) mode which allows bulk transfers of data between source and target devices
without write-order fidelity and without write performance impact to source devices. SRDF/ACP is typically used for data
migrations as a Point-in-Time data transfer. It is also used to catch up after a long period during which replication was
suspended and many changes are owed to the remote site. SRDF/ACP can be set to continuously send changes in bulk
until the delta between source and target is reduced to a specified skew. At this time SRDF/S or SRDF/A mode can
resume.
SRDF groups:
o An SRDF group is a collection of matching devices in two VMAX3 storage arrays together with the SRDF ports that are
used to replicate these devices between the arrays. HYPERMAX OS allows up to 250 SRDF groups per SRDF director.
The source devices in the SRDF group are called R1 devices, and the target devices are called R2 devices.
o SRDF operations are performed on a group of devices contained in an SRDF group. This group is defined by using either
a text file specifying the list of devices, a device-group (DG), composite/consistency-group (CG), or a storage group
(SG). The recommended way is to use a storage group.
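As a sketch, assuming Solutions Enabler 8.x syntax, an existing SRDF group number 10, and the FINDB_SG storage group used later in this paper:

```shell
# Query the SRDF state of the devices in the storage group
symrdf -sg FINDB_SG -rdfg 10 query

# Start (or resume) replication from the R1 to the R2 devices
symrdf -sg FINDB_SG -rdfg 10 establish
```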
SRDF consistency:
o An SRDF Consistency Group is an SRDF group for which consistency has been enabled.
Appendix II, SRDF Modes and Topologies, and Appendix IV, Solutions Enabler CLI Commands for SRDF Management, provide
additional information.
For more information on SRDF, refer to the VMAX3 Family with HYPERMAX OS Product Guide.
ORACLE DATABASE REPLICATION WITH TIMEFINDER AND SRDF
CONSIDERATIONS
If snapshots are used as part of a disaster protection strategy then the frequency of creating snapshots can be determined based on
the RTO and RPO needs.
For a restart solution where no roll-forward is planned, snapshots taken at very short intervals (seconds or minutes)
ensure that RPO is limited to that interval. For example, if a snapshot is taken every 30 seconds, there will be no more than
30 seconds of data loss if the database must be restored without recovery.
For a recovery solution, frequent snapshots ensure that RTO is short as less data will need recovery during roll forward of
logs to the current time. For example, if snapshots are taken every 30 seconds, roll forward of the data from the last
snapshot will be much faster than rolling forward from a nightly backup or hourly snapshots. Linked targets for the existing
snapshots can be further used to create additional Point-in-Time snapshots for repurposing or backups.
Because snapshots consume storage capacity based on the database change rate, old snapshots should be terminated when no
longer needed.
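For example, a snapshot that is no longer needed can be removed as follows (the name and generation are illustrative; a linked snapshot must be unlinked before it can be terminated):

```shell
# Terminate a specific generation of a named snapshot
symsnapvx -sg FINDB_SG -snapshot_name FINDB_Hourly -generation 2 terminate
```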
No-copy option: No-copy linked targets remain space efficient by sharing pointers with Production and the snapshot. Only changes
to either the linked targets or Production devices consume additional storage capacity to preserve the original data. However, reads
to the linked targets may affect Production performance, as they share their storage via pointers to unmodified data. Another
by-product of no-copy linked targets is that they do not retain their data after they are unlinked. When the snapshot is unlinked, the
target devices no longer provide a coherent copy of the snapshot point-in-time data as before, though they can be relinked later.
Copy option: Alternatively, the linked-targets can be made a stand-alone copy of the source snapshot point-in-time data by using
copy option. When the background copy is complete, the linked targets will have their own copy of the point-in-time data of the
snapshot, and will not be sharing pointers with Production. If at that point the snapshot is unlinked, the target devices will maintain
their own coherent data, and if they are later relinked they will be incrementally refreshed from the snapshot (usually, after the
snapshot is refreshed).
No-copy linked targets are useful for storage capacity efficiency due to shared pointers. They can be used for short-term and
lightweight access to avoid affecting Production's performance. When a longer retention period or a heavier workload on the linked
targets is anticipated, it may be better to perform a link-copy and have them use independent pointers to storage. Note that
during the background copy the storage back-end utilization will increase, so the operator may want to schedule such copy operations
for periods of low system utilization to avoid application performance overhead.
Traditionally, hot backup mode is used to create a recoverable database solution. A recoverable database replica can perform
database recovery to a desired point in time using archive and redo logs. Oracle Database 12c enhanced the ability to create a
recoverable database solution based on storage replications by leveraging storage consistency instead of hot-backup mode. This
Oracle 12c feature is called Oracle Storage Snapshot Optimization and is demonstrated in Test Case 2.
For a snapshot that will be recovered on the Production host and therefore relies on the available logs and archive logs, the snapshot
can include just the data files. However, if the snapshot will be recovered on another host (such as when using linked targets) an
additional snapshot of the archive logs should be taken, following the best practice described in Test Case 2 for recoverable replicas.
Redo logs are not required in the snapshot.
It is possible to create a hybrid replica that can be used for either recovery or restart. This can be done by including all data, control,
and redo logs in the first replica, and archive logs in the second (following the best practice for recoverable database replica). In that
case, if a restartable solution is performed, the archive log replica will not be used. If a recoverable solution is used, the replica of
the online logs will not be restored (especially since we don't want to overwrite Production's redo logs if those are still available).
A typical Oracle database backup strategy involves running full database backups on a periodic basis, with more frequent incremental
backups that capture only the changes since the prior backup. To further improve the efficiency of incremental backups,
RMAN can use block change tracking to maintain metadata about changed blocks. Once enabled, RMAN incremental
backups use the block change tracking (BCT) file to quickly identify the changed blocks. The RMAN block change tracking mechanism
can also be deployed when offloading backups to alternate hosts using VMAX3 snapshots. RMAN based backups are described in Test
Case 3.
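Block change tracking is enabled once on the Production database. A minimal sketch, where the tracking-file location in the +DATA disk group is an assumption (any shared location works):

```shell
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATA';
SQL> SELECT status, filename FROM v$block_change_tracking;
```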
There are two common approaches for giving a backup operator controlled access to replication commands:
Allow the database backup operator (commonly a DBA) controlled access to commands in Solutions Enabler and Data
Domain, leveraging VMAX Access Controls (ACLs).
Use SUDO, allowing the DBA to execute specific commands for the purpose of their backup (possibly in combination with
Access Controls).
It is beyond the scope of this paper to document how access controls are executed; however, it is important to mention that
Solutions Enabler can be installed for a non-root user as described in Test Case 9. Solutions Enabler has a robust set of Access
Controls that fit this situation. Similarly, for Oracle database replication or backup purposes, additional user accounts other than
sysadmin can be created to manage such processes appropriately. Oracle also allows setting up a backup user with only
a specific set of authorizations appropriate for their task.
ASM Disk Groups and Oracle files:
o A minimum of 3 sets of database devices should be defined for maximum flexibility: data/control files, redo logs, and
FRA (archive logs), each in its own Oracle ASM disk group (for example, +DATA, +REDO, +FRA).
o The separation of data, redo and archive log files allows backup and restore of only the appropriate file types at the
appropriate time. For example, Oracle backup procedures require the archive logs to be replicated at a later time than
the data files. Also, during restore, if the redo logs are still available on the Production host, we can restore only data
files without overwriting the Productions redo logs.
o If only a database restart solution is required, then the data and log files can be mixed and replicated together (although
there may be other reasons to separate them, such as for better performance management).
o When Oracle RAC is used it is recommended to use a separate ASM disk group for Grid infrastructure (for example,
+GRID). The +GRID ASM disk group should not contain user data. In this way, the cluster information is not part of a
database backup and if a recovery is performed on another clustered server, it can already have its +GRID ASM disk
group configured ahead of time.
Partition alignment on x86 based systems
o Oracle recommends, on Linux and Windows systems, creating at least one partition on each storage device. Due to
legacy BIOS conventions, such partitions are rarely aligned by default. It is therefore strongly recommended to move the
beginning of the first partition, using fdisk or parted, to an offset of 1 MB (2048 blocks).
o By having the beginning of the partition aligned, I/O to VMAX3 will be aligned with storage tracks and FAST extents
achieving best performance.
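A sketch of creating an aligned partition with parted; the device name is an example and should be replaced with the actual multipath or PowerPath device:

```shell
# Create a GPT label and one partition starting at sector 2048 (1 MB offset)
parted -s /dev/emcpowera mklabel gpt mkpart primary 2048s 100%

# Verify optimal alignment of partition 1
parted /dev/emcpowera align-check opt 1
```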
SRDF is a restart solution and since database crash recovery never uses archive logs there is no need to include FRA (archive logs) in
the SRDF replication. However, there are two reasons why they could be included:
If Flashback database functionality is required for the target. Replicating the flashback logs in the same consistency group
as the rest of the database allows the use of Flashback database on the target.
To allow offload of backup operations to the remote site as archive logs are required to create a stand-alone backup image
of the database. In this case, the archive logs can use a different SRDF group and mode, potentially leveraging SRDF/A,
even if data, control, and log files are replicated with SRDF/S.
It is always recommended to have a database replica available at the SRDF remote site as a gold copy protection from rolling
disasters. A rolling disaster occurs when a first interruption to normal replication activities is followed by a secondary
database failure on the source, leaving the database without an immediately available valid replica. For example, if SRDF replication
was interrupted for any reason for a while (planned or unplanned) and changes were accumulated on the source, once the
synchronization resumes and until the target is synchronized (SRDF/S) or consistent (SRDF/A), the target is not a valid database
image. For that reason it is best practice before such resynchronization to take a TimeFinder gold copy replica at the target site. This
preserves the last valid image of the database as a safety measure from rolling disasters.
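Such a gold copy can be taken with SnapVX at the target site before resuming synchronization; the snapshot name is illustrative and FINDB_R2 is the remote storage group used in this paper's test configuration:

```shell
# On the remote array, preserve the last valid image before resynchronization
symsnapvx -sg FINDB_R2 -name FINDB_Gold establish
```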
When Oracle RAC is used on the Production host, since RAC uses shared storage, then by virtue of replicating all the database
components (data, log, and control files) the target database can be started as either a cluster or a single instance. Regardless of the choice, it is not
recommended to replicate the cluster layer (voting disks or cluster configuration devices) since these contain local hosts and subnets
information. It is best practice that if a cluster layer is required at the mount hosts, it should be configured ahead of time, based on
mount hostnames and subnets, and therefore be ready to bring up the database when needed.
ORACLE BACKUP AND DATA PROTECTION TEST CASES
Test Configuration
This section provides examples using Oracle database backup and data protection on VMAX3 arrays. Figure 2 depicts the overall test
configuration used to describe these Test Cases.
Test Overview
HYPERMAX OS 5977.596
Table 2 Test host environment
Configuration aspect Description
Database: Name: FINDB; Size: 1.5 TB
ASM disk groups: +DATA: 4 x 1 TB thin LUNs; +REDO: 4 x 150 GB thin LUNs; +FRA: 4 x 100 GB thin LUNs
Production storage groups: DATA_SG and REDO_SG (children of the cascaded FINDB_SG); FINFRA_SG for +FRA
Mount host storage groups: FINDB_MNT; FINFRA_MNT
Remote (SRDF) storage groups: FINDB_R2 with linked targets FINDB_R2_TGT; FINFRA_R2 with linked targets FINFRA_R2_TGT
Test Case 1: Creating a local restartable database replica for database clones
Objectives:
The purpose of this Test Case is to demonstrate the use of SnapVX to create a local database restartable copy, also referred to as a
database clone. The database clone can be started on a Mount host for purposes such as logical error detection or creation of Test,
Development, and Reporting environments. These environments can be periodically refreshed from Production.
Note: A restartable database replica must include all database control, data, and redo log files, and therefore the cascaded storage
group FINDB_SG was used.
Groups used:
Server Storage Group ASM Disk Group
Detailed steps:
On Production host:
Create a snapshot of the Production database containing all control, data, and redo log files.
# symsnapvx -sg FINDB_SG -name FINDB_Restart establish
On Mount host:
Complete pre-requisites:
o GRID infrastructure and Oracle binaries should be installed ahead of time on the mount host. If RAC is used on the
Mount host then it should be pre-configured so the ASM disk groups from the snapshots can simply be mounted into the
existing cluster. If RAC is not used on the Mount host see steps later to bring up Oracle High Availability Services
(HAS).
o The storage group FINDB_MNT contains the linked-target devices of Production's snapshot. It should be added to a
masking view to make the target devices accessible to the Mount host.
If refreshing an earlier snapshot, shut down the database instance that will be refreshed and dismount its ASM disk groups:
o Log in to the ASM instance and dismount the ASM disk groups.
SQL> ALTER DISKGROUP DATA DISMOUNT;
SQL> ALTER DISKGROUP REDO DISMOUNT;
Link the Production snapshot based on FINDB_SG to the target storage group FINDB_MNT. For the first link use the link
option. For all other links use the relink option.
If RAC is used on the Mount host then it should be already configured and running using a separate ASM disk group and
therefore +DATA and +REDO can simply be mounted. Skip to the next step. If RAC is not used, an ASM instance may not
be running yet. Bring it up following the procedure below before mounting +DATA and +REDO ASM disk groups.
o As the Grid infrastructure user (ASM instance user), start the Oracle high-availability services.
o Log in to the ASM instance and update the ASM disk string before mounting the ASM disk groups.
Mount the ASM disk groups that now contain the snapshot point-in-time data.
Log in to the database instance and start up the database (do not perform database recovery). The database will perform
crash (or instance) recovery and will open.
SQL> startup
Note: Since there is no roll forward of transactions, the creation of database clones using SnapVX is very fast. The time it takes
Oracle to complete crash recovery and open the database depends on the number of transactions in the redo log since the last checkpoint.
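The mount-host sequence above can be consolidated into a short script. The sketch below is a dry run that only echoes the commands it would execute, so it can be reviewed safely; the storage group and snapshot names are taken from this test case and should be verified against your environment.

```shell
#!/bin/sh
# Dry-run sketch of the Test Case 1 mount-host flow: each step is
# echoed instead of executed.
SRC_SG="FINDB_SG"            # Production storage group (from this test case)
TGT_SG="FINDB_MNT"           # Linked-target storage group on the Mount host
SNAP_NAME="FINDB_Restart"

# The first presentation uses 'link'; refreshes of an existing target use 'relink'.
LINK_CMD="symsnapvx -sg ${SRC_SG} -snapshot_name ${SNAP_NAME} -lnsg ${TGT_SG} relink"
echo "${LINK_CMD}"

# As the Grid user, mount the disk groups that now hold the snapshot data.
echo "SQL> alter diskgroup DATA mount;"
echo "SQL> alter diskgroup REDO mount;"

# Plain startup: Oracle performs crash recovery automatically on open.
echo "SQL> startup"
```

In a real run, replace the echo statements with the actual symsnapvx and sqlplus invocations.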
Test Case 2: Creating a local recoverable database replica for backup and recovery
Objectives:
The purpose of this Test Case is to demonstrate the use of SnapVX to create a local recoverable database replica. Such a database
replica can be used to recover Production, or can be mounted on a Mount host and used for RMAN backup and running reports.
Note: As long as the database replica is only mounted, or opened in read-only mode, it can be used to recover Production.
2. Create a consistent snapshot of Production control, data, and redo log files (which are contained in +DATA and +REDO ASM disk
groups).
4. As the database user, switch logs, archive the current log, and save backup control file to +FRA ASM disk group.
5. If the replica is used to offload RMAN incremental backups to a Mount host then switch RMAN Block Change Tracking file
manually.
6. Create a snapshot of Production archive logs contained in +FRA ASM disk group.
7. Link both snapshots to target devices and present them to the Mount host.
9. Mount the database instance on the Mount host (do not open it).
Note: See Test Case 3 for details on how the snapshot can be used to perform RMAN backups.
Groups used:
Server Storage Group ASM Disk Group
FINFRA_SG FRA
FINFRA_MNT FRA
Detailed steps:
On Production host:
Pre-Oracle 12c, place the Production database in hot-backup mode.
Create a snapshot of Production control, data, and redo log files contained in +DATA and +REDO ASM disk groups.
# symsnapvx -sg FINDB_SG -name Snapshot_Backup establish
As the database user, switch logs, archive the current log, and save backup control file to +FRA ASM disk group.
SQL> alter database backup controlfile to '+FRA/CTRLFILE_BKUP' REUSE;
If the replica is not used to offload RMAN incremental backups to a Mount host (Test Case 3) then skip to the next step.
Otherwise, as a database user on Production, switch RMAN Block Change Tracking file manually.
Note: When RMAN incremental backups are taken using RMAN Block Change Tracking (BCT), RMAN switches the version of
the file with each backup automatically. However, when the RMAN backup is offloaded to a Mount host, RMAN updates the
BCT file on the Mount host instead. Oracle provides an API for such cases that switches the BCT file manually on Production after
an incremental backup from the Mount host.
Note: By default Oracle only keeps 8 versions in the BCT file for incremental backups. That means that if more than 8
incremental backups are taken before another level 0 (full) backup takes place, RMAN will not be able to use the BCT file and will
revert to scanning the whole database. To increase the number of versions in the BCT file, use the init.ora parameter
_bct_bitmaps_per_file (see Oracle support notes 1192652.1 and 1528510.1).
Create a snapshot of Production archive logs contained in +FRA ASM disk group.
# symsnapvx -sg FINFRA_SG -name FRA_Backup establish
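The production-side steps of this test case can be sketched as a dry-run script. The commands below are echoed, not executed; the snapshot and group names follow this test case, and the ordering (database snapshot first, FRA snapshot last) matters so that the FRA snapshot captures the freshly archived logs and backup control file.

```shell
#!/bin/sh
# Dry-run sketch of the Test Case 2 production-side flow.
DB_SNAP_CMD="symsnapvx -sg FINDB_SG -name Snapshot_Backup establish"
FRA_SNAP_CMD="symsnapvx -sg FINFRA_SG -name FRA_Backup establish"

echo "${DB_SNAP_CMD}"                              # snapshot DATA + REDO first
echo "SQL> alter system switch logfile;"           # then switch and archive logs
echo "SQL> alter system archive log current;"
echo "SQL> alter database backup controlfile to '+FRA/CTRLFILE_BKUP' REUSE;"
echo "${FRA_SNAP_CMD}"                             # snapshot FRA last, so it holds the new logs
```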
On Mount host:
Complete pre-requisites:
o Grid Infrastructure and Oracle binaries should be installed ahead of time on the Mount host. If RAC is used on the
Mount host, it should be pre-configured so the ASM disk groups from the snapshots can simply be mounted into the
existing cluster. If RAC is not used on the Mount host, see the steps later for bringing up Oracle High Availability
Services (HAS).
o The storage groups FINDB_MNT and FINFRA_MNT contain the linked-target devices of Production's snapshots. They
should be added to a masking view to make the target devices accessible to the Mount host.
If refreshing an earlier snapshot, shut down the database instance and dismount the ASM disk groups.
o Log in to the database instance and shut it down.
o Log in to the ASM instance and dismount the ASM disk groups that will be refreshed.
SQL> alter diskgroup DATA dismount;
SQL> alter diskgroup REDO dismount;
SQL> alter diskgroup FRA dismount;
Link the Production snapshots based on FINDB_SG and FINFRA_SG to the target storage group: FINDB_MNT and
FINFRA_MNT respectively. For the first link use the link option. For all other links use the relink option.
Note: By default, SnapVX link uses no-copy mode. To have a stand-alone copy with all the data from the source, copy mode
can be used by adding -copy to the command.
If the ASM instance is not running, follow the steps in Test Case 1 to start the ASM instance and update the ASM disk string.
Mount the ASM disk groups that now contain the snapshot point-in-time data.
Log in to the database instance and mount the database (but do not open it with resetlogs).
SQL> startup mount
Optionally, catalog the backup data files (all in +DATA disk group) with RMAN.
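The mount-host side can likewise be sketched as a dry run (commands echoed only; names follow this test case). Note that the replica is only ever mounted, never opened with resetlogs, so it stays usable for recovering Production.

```shell
#!/bin/sh
# Dry-run sketch of the Test Case 2 mount-host flow. 'relink' refreshes
# existing targets; the very first presentation would use 'link' instead.
RELINK_DB="symsnapvx -sg FINDB_SG -snapshot_name Snapshot_Backup -lnsg FINDB_MNT relink"
RELINK_FRA="symsnapvx -sg FINFRA_SG -snapshot_name FRA_Backup -lnsg FINFRA_MNT relink"

echo "${RELINK_DB}"
echo "${RELINK_FRA}"
echo "SQL> alter diskgroup DATA mount;"
echo "SQL> alter diskgroup REDO mount;"
echo "SQL> alter diskgroup FRA mount;"
echo "SQL> startup mount"    # mount only; never open the replica with resetlogs
```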
Test Case 3: Performing FULL or incremental RMAN backup from a SnapVX replica
Objectives:
The purpose of this test case is to offload RMAN backups to a Mount host using a SnapVX snapshot. The RMAN backup can be full or
incremental. For incremental backups, RMAN Block Change Tracking (BCT) is used from the Mount host.
Groups used:
Server Storage Group ASM Disk Group
FINFRA_SG FRA
FINFRA_MNT FRA
Detailed steps:
On Production host:
If RMAN incremental backups are used then enable block change tracking on Production. Make sure that the block change
tracking file is created in the +FRA ASM disk group. For example:
SQL> alter database enable block change tracking using file +FRA/BCT/change_tracking.f reuse;
Perform Test Case 2 to create recoverable replica of Production and mount it to the Mount host.
o If RMAN incremental backups are used, make sure to switch BCT file manually after the step that archives the current
log file, as described in Test Case 2. At the end of this step, ASM disk group +FRA will be mounted to the Mount host
with the Block Change Tracking file included, and Productions BCT file will start tracking block changes with a new
version.
On Mount host:
If RMAN incremental backups are not used, simply run the RMAN backup script and perform a full database backup.
o Example for creating full backup (simplest form):
RMAN> run
{
Backup database;
}
If RMAN incremental backups are used, perform a full backup (also called a level 0 backup) periodically, followed by level 1
backups; for example, a weekly level 0 backup and daily level 1 backups. The DBA can choose between differential and
cumulative incremental backup strategies (refer to the Oracle documentation for more details).
o Example for creating first full backup as part of incremental backup strategy:
RMAN> run
{
Backup incremental level 0 database;
}
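A corresponding daily level 1 run from the Mount host might be driven by a wrapper like the sketch below. It writes the RMAN script to a file and only echoes the rman invocation; the script path and the bare "target /" connect string are illustrative placeholders, not taken from the original paper.

```shell
#!/bin/sh
# Sketch of a level 1 (differential) incremental backup wrapper for the
# Mount host. The rman call itself is echoed rather than executed.
RMAN_SCRIPT=/tmp/level1_backup.rman
cat > "${RMAN_SCRIPT}" <<'EOF'
run
{
  backup incremental level 1 database;
}
EOF
echo "rman target / cmdfile=${RMAN_SCRIPT}"
```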
o Verify for level 1 backups that the BCT file was used:
Test Case 4: Recovering the Production database from a SnapVX snapshot
Objectives:
The purpose of this Test Case is to leverage a previously taken recoverable snapshot to perform a database recovery of the
Production database. The Test Case demonstrates full and point-in-time recovery. It also demonstrates how to leverage the Oracle
12c Storage Snapshot Optimization feature during database recovery.
Groups used:
Server Storage Group ASM Disk Group
FINFRA_SG FRA
Detailed steps:
On Production host during backup:
Perform Test Case 2 to create a recoverable replica of Production. The linked target (or Mount host) will not be used in this
scenario, only the original snapshot of Production.
On Production host during restore:
Restore the SnapVX snapshot of the +DATA ASM disk group alone to Production (do not restore the +REDO ASM disk group, to
avoid overwriting the current redo logs on Production if they survived). To do so, use the child storage group DATA_SG
instead of the cascaded storage group FINDB_SG that was used to create the original snapshot.
o If Production's +DATA ASM disk group is still mounted, then as the Grid user, use asmcmd or SQL to dismount it (repeat
on all nodes if RAC is used).
SQL> alter diskgroup DATA dismount;
o Restore the SnapVX snapshot of the +DATA ASM disk group alone.
o It is not necessary to wait for the snapshot restore to complete; however, at some point after it completes, terminate
the snapshot-restore session as a best practice.
# symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup verify -restored
# symsnapvx -sg DATA_SG -snapshot_name Snapshot_Backup terminate -restored
SnapVX allows using the source devices as soon as the restore is initiated, even while the copy of the changed data takes
place in the background. There is no need to wait for the restore to complete. Once the restore starts, the +DATA ASM disk
group can be mounted to the ASM instance on the Production host.
o As the Grid user, use asmcmd or SQL to mount the +DATA ASM disk group (repeat on all nodes if RAC is used).
SQL> alter diskgroup DATA mount;
Recover the Production database using the archive logs and optionally the current redo log.
o When performing full recovery (using the current redo log if still available), follow Oracle database recovery procedures.
For example:
SQL> recover automatic database;
SQL> alter database open;
Note: It might be necessary to point to the location of the online redo logs or archive logs if the recovery process didn't
locate them automatically (common in RAC implementations with multiple online or archive log locations). The goal is
to fully apply any necessary archive logs as well as the online logs.
o When performing incomplete recovery and leveraging the Oracle 12c Storage Snapshot Optimization feature, provide
the time of the snapshot during the recovery. If the backup was taken using hot-backup mode, remove the snapshot
time <time> clause. An example of using Storage Snapshot Optimization:
SQL> alter session set NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS";
SQL> recover database until time '2015-03-21 12:00:00' snapshot time '2015-03-21 11:50:40';
SQL> alter database open RESETLOGS;
If the recovery process requires archive logs that are no longer available on the server, but exist in the +FRA snapshot, use
the snapshot to retrieve the missing archive logs.
Note: It is recommended that a new snapshot of +FRA be taken prior to restoring the old +FRA snapshot with the missing
archive logs. The new snapshot preserves any archive logs that currently exist on the host but were created after the old
+FRA snapshot was taken, and that would otherwise be lost when it is restored.
o List the FRA snapshots to choose which snapshot generation to restore (generation 0 is always the latest for a given
snapshot name).
o Restore the appropriate +FRA snapshot and use its archive logs as necessary during the database recovery process.
o It is not necessary to wait for the snapshot restore to complete; however, at some point after it completes, terminate
the snapshot-restore session as a best practice.
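The restore-and-recover flow of this test case can be summarized as a dry-run sketch (commands are echoed, not executed; group and snapshot names follow this test case). The key points it captures: only the DATA_SG child group is restored, +DATA is usable as soon as the restore starts, and the restored session is terminated later as a best practice.

```shell
#!/bin/sh
# Dry-run sketch of the Test Case 4 flow: restore only the DATA_SG child
# group, remount +DATA while copy-back continues, recover, and later
# terminate the restored session.
SG="DATA_SG"; SNAP="Snapshot_Backup"
RESTORE_CMD="symsnapvx -sg ${SG} -snapshot_name ${SNAP} restore"
VERIFY_CMD="symsnapvx -sg ${SG} -snapshot_name ${SNAP} verify -restored"
TERM_CMD="symsnapvx -sg ${SG} -snapshot_name ${SNAP} terminate -restored"

echo "SQL> alter diskgroup DATA dismount;"   # on every RAC node
echo "${RESTORE_CMD}"
echo "SQL> alter diskgroup DATA mount;"      # usable while copy-back continues
echo "SQL> recover automatic database;"
echo "SQL> alter database open;"
echo "${VERIFY_CMD}"                         # once the background copy completes
echo "${TERM_CMD}"
```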
Test Case 5: Using SRDF/S and SRDF/A for database disaster recovery
Objectives:
The purpose of this test case is to leverage VMAX3 SRDF to create remote restartable copies and use them for production
database disaster recovery.
Groups used:
SRDF Site Server Storage Group ASM Disk Group
Detailed steps:
SRDF replication setup example (performed from local storage management host):
Create a dynamic SRDF group between production and D/R sites.
# symrdf addgrp -label FINDB -rdfg 20 -dir 1H:10 -remote_sid 536 -remote_dir 1E:7 -remote_rdfg 20
Pair SRDF devices between the Production and remote storage groups that include the database data, control, and redo log
files.
# symrdf -sg FINDB_SG -rdfg 20 createpair -type R1 -remote_sg FINDB_R2 -establish
Optionally, pair SRDF devices for the FRA storage group if archive logs and flashback logs are also replicated to the remote site.
Set the SRDF mode to synchronous or asynchronous. If the FRA ASM disk group includes only archive logs, SRDF/A can be
used for more efficient use of bandwidth. If FRA also includes flashback logs, it should be kept consistent with FINDB_SG
and use the same SRDF group/mode.
# symrdf -rdfg 20 set mode synchronous
SRDF replication failover example (performed from remote storage management host):
In the event of an outage at the Production site, the SRDF link fails and replication stops. The R2 devices are consistent but
not yet read-writable. Perform an SRDF failover to make the R2 devices write-enabled. The commands below describe a planned
failover; in the event of an actual disaster this is done automatically by SRDF.
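As a sketch, a planned failover for this configuration might be issued as below. This is a dry run that only echoes the command; the storage group and RDF group number follow the setup example above, and the exact options should be confirmed against your Solutions Enabler version.

```shell
#!/bin/sh
# Dry-run sketch of a planned SRDF failover, run from the remote storage
# management host; it makes the R2 devices write-enabled.
FAILOVER_CMD="symrdf -sg FINDB_SG -rdfg 20 failover"
echo "${FAILOVER_CMD}"
```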
On production host for disaster recovery:
Split the SRDF link and initiate the restore operation. Repeat the steps for FRA if it is also being restored.
Once the restore operation is initiated, restart the application at the production site.
<Use SQLPLUS>
<For ASM instance>
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup REDO mount;
SQL> alter diskgroup FRA mount;
When the restore operation completes, fail back SRDF to revert the roles to the original configuration.
Test Case 6: Using SRDF and SnapVX for remote restartable replicas and D/R testing
Objectives:
The purpose of this test case is to leverage VMAX3 SRDF and SnapVX to create remote restartable copies and use them for
production database disaster recovery. The gold copy snapshot created on the D/R site is linked to a separate target storage
group for D/R testing.
Groups used:
SRDF Site Server Storage Group ASM Disk Group
Detailed steps:
On production host during normal operation:
Use Test Case 5 to set up the remote site.
<Create snapshot for DATA and REDO on R2 site to be used for periodic D/R testing>
# symsnapvx -sid 536 -sg FINDB_R2 -name FINDB_R2Gold -nop -v establish
On D/R host during normal operation for D/R testing:
Link the snapshot to the target storage group on R2 site. For subsequent D/R testing, relink can be used to refresh the
existing targets.
SQL> startup
On production host for disaster recovery using the restartable snapshot:
Split SRDF link to prepare for the restore. Link the snapshot to the target storage group on R2 site. For subsequent D/R
testing, relink can be used to refresh the existing targets.
Follow the rest of the steps in Test Case 5 for production host disaster recovery to perform SRDF restore.
Test Case 7: Using SRDF and SnapVX to create remote recoverable replicas for backup
Objectives:
The purpose of this test case is to leverage VMAX3 SRDF and SnapVX to create remote recoverable copies to use for remote backups
or for recovery of the production database from remote backups. The snapshots generated this way also work with the Oracle 12c
Storage Snapshot Optimization feature. This test case links the snapshot created off the R2 devices with a separate storage group for further backup.
2. Pre-Oracle 12c, use database backup mode prior to snapshot of DATA and REDO.
3. Create control file copies and FRA snapshot.
4. Use SnapVX to create snapshots using R2 devices.
5. Mount snapshots and Oracle instance to prepare for backups as described in Test Case 2.
Groups used:
SRDF Site Server Storage Group ASM Disk Group
(Child) DATA_SG_R2 DATA
R2 - FINDB_FRA_R2 FRA
Detailed steps:
On production host:
Pre-Oracle 12c, put the database in hot backup mode.
Note: The SRDF checkpoint command returns control to the user only after the source device content has reached the SRDF
target. This is useful, for example, when production is placed in hot backup mode before the remote clone is taken.
b) No special action is needed when using SRDF synchronous mode (SRDF/S) on FINDB_SG.
On D/R host:
Create a snapshot of the DATA and REDO disk groups on the remote target. Name the snapshot to identify it as the backup image.
Every time establish is used with the same snapshot name, the generation number is incremented and the older generations are
kept as well. This can be avoided by terminating the snapshot prior to recreating it.
<Create snapshot for DATA and REDO to be used for backup on remote VMAX3>
# symsnapvx -sid 536 -sg FINDB_R2 -name Snapshot_Backup_R2 -nop -v establish
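To keep only a single generation of the backup image, the terminate-then-establish cycle described above can be sketched as a dry run (commands echoed only; names follow this test case):

```shell
#!/bin/sh
# Dry-run sketch: terminate the previous snapshot generation before
# re-establishing, so generations do not accumulate on the R2 array.
SID=536; SG="FINDB_R2"; SNAP="Snapshot_Backup_R2"
TERM_CMD="symsnapvx -sid ${SID} -sg ${SG} -snapshot_name ${SNAP} terminate"
EST_CMD="symsnapvx -sid ${SID} -sg ${SG} -name ${SNAP} establish"
echo "${TERM_CMD}"
echo "${EST_CMD}"
```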
On production host:
Pre-Oracle 12c, take the database out of backup mode.
Perform a log switch and archive the current log. Also save a backup control file to +FRA/CTRLFILE_BKUP so that it is
available in the FRA snapshot, along with the archived logs, for use with RMAN backup.
<Use SQLPLUS>
SQL> alter system switch logfile;
SQL> alter system archive log current;
SQL> alter database backup controlfile to '+FRA/CTRLFILE_BKUP' REUSE;
a) If SRDF asynchronous mode (SRDF/A) is used for SRDF replication of FINDB_FRA, use the SRDF checkpoint command
to make sure that the remote FRA disk group is updated with the necessary archived logs generated during backup mode.
<Issue SRDF checkpoint command>
# symrdf -sid 535 -sg FINDB_SG checkpoint
Note: The SRDF checkpoint command returns control to the user only after the source device content has reached the SRDF
target devices (SRDF will wait two delta sets). For example, this is useful when production is placed in hot backup mode before
the remote clone is taken.
b) No special action is needed when using SRDF synchronous mode (SRDF/S) on FINDB_SG.
On D/R host:
Create a snapshot of FRA disk group on the remote target.
Link the snapshots Snapshot_Backup_R2 to FINDB_R2_TGT and FINDB_FRABackup_R2 to FRA_R2_TGT to continue with the
rest of the steps and provision the storage to the D/R host.
Use the backup operations described on Mount host in Test Case 2 to continue with further backup.
Test Case 8: Parallel database recovery using a remote snapshot and SRDF restore
Objectives:
The purpose of this test case is to demonstrate parallel recovery from a remote backup image by initiating a restore of the remote
target from a remote snapshot and simultaneously starting SRDF restore. This test case is similar to Test Case 4 except that it uses a
remote recoverable copy.
Test scenario:
Use remote snapshot to restore SRDF R2 devices and initiate SRDF restore simultaneously.
Groups used:
SRDF Site Server Storage Group ASM Disk Group
R2 - FINFRA_R2 FRA
Detailed steps:
On production host during normal operation:
Use Test Case 4 to create a recoverable image on the remote site.
Shut down the Production database and dismount ASM disk groups.
a) Shut down Oracle database.
SQL> shutdown immediate;
b) Dismount the DATA, REDO, and FRA ASM disk groups.
SQL> alter diskgroup DATA dismount;
SQL> alter diskgroup REDO dismount;
SQL> alter diskgroup FRA dismount;
Restore the FRA disk group from the target snapshot if it is needed for Production database recovery.
As soon as the restore from the snapshot is initiated, the SRDF restore can be started. SRDF will perform an incremental
restore from R2 to R1. The devices show the SyncInProg state while the restore is in progress; the Synchronized state
indicates that the restore is complete.
Test Case 9: Leveraging Access Control List replications for storage snapshots
Objectives:
The purpose of this test case is to demonstrate self-service orchestration of Oracle database snapshots for DBAs. Symmetrix Access
Control Lists are used to grant the Oracle user the privileges needed to perform self-service database snapshots.
2. Install Solutions Enabler as the non-root user of choice that will manage Oracle database backups.
3. Once Symmetrix Access Control is set up, Oracle DBAs can run snapshot operations as a non-root user, and all the test cases
described earlier in this white paper can be executed.
CONCLUSION
VMAX3 provides a platform for Oracle databases that is easy to provision, manage, and operate with application performance
needs in mind. This paper provides guidance on the latest VMAX3 features for local and remote data protection, along with
commonly deployed use cases including backup, D/R, and repurposing for Test/Dev. It also covers self-service database replication
that database administrators can leverage to deploy additional copies under their control.
APPENDIX I - CONFIGURING ORACLE DATABASE STORAGE GROUPS FOR
REPLICATION
VMAX3 TimeFinder SnapVX and SRDF allow using VMAX3 Auto-Provisioning Groups (storage groups) both for provisioning storage for
Oracle database clusters and for creating write-order-consistent snapshots based on Enginuity Consistency Assist. Changes to Oracle
database provisioning using these storage groups are reflected in any new snapshots created afterward, making it very easy to
manage database growth. This simplifies configuring and provisioning Oracle database storage for data protection, availability, and
recoverability. Cascading DATA and REDO into a parent SG allows creation of restartable copies of the database. Separating archive
logs from this group allows independent management of data protection for archived logs. While providing the desired control over SLO
management, this also allows easy deployment of Oracle 12c database recovery optimization from storage-based snapshots.
This appendix shows how to provision storage for Oracle DATA, REDO, and FRA disk groups to ensure database recovery SLAs are
achievable. Following this provisioning model along with the Test Cases described earlier provides proper deployment guidelines for
Oracle databases on VMAX3 to database and storage administrators.
Figure 3 shows an Oracle server provisioning storage using cascaded storage groups.
Figure 2 Unisphere create snapshot
Figure 3 Unisphere creating linked target
Creating a cascaded snapshot from an existing snapshot
TimeFinder SnapVX allows creating snapshots from an existing snapshot, repurposing the same point-in-time copy for other uses.
Figure 7 shows how to use an existing snapshot to create additional point-in-time cascaded snapshots.
SRDF modes
SRDF modes define SRDF replication behavior. These basic modes can be combined to create different replication topologies
(described in this appendix).
SRDF Synchronous (SRDF/S) is used to create a consistent replica with no data loss, typically at limited distances.
o In SRDF/S each host write to an R1 device gets acknowledged only after the I/O was copied to the R2 storage system
persistent cache.
o SRDF/S makes sure that data on both the source and target devices is exactly the same.
o Host I/O latency will be affected by the distance between the storage arrays.
SRDF Asynchronous (SRDF/A) is used to create consistent replicas at unlimited distances, without write response time
penalty to the application.
o In SRDF/A each host write to an R1 device gets acknowledged immediately after it registered with the local VMAX3
persistent cache, preventing any write response time penalty to the application.
o Writes to the R1 devices are grouped into cycles. The capture cycle is the cycle that accepts new writes to R1 devices
while it is open. The Transmit cycle is a cycle that was closed for updates and its data is sent from the local to the
remote array. The receive cycle on the remote array receives the data from the transmit cycle. The destaged cycle
on the remote array destages the data to the R2 devices. SRDF software only destages full cycles to the R2 devices.
- The default time for capture cycle to remain open for writes is 30 seconds, though it can be set differently.
- In legacy mode (at least one of the arrays is not a VMAX3), cycle time can increase during peak workloads as
more data needs to be transferred over the links. After the peak, the cycle time will go back to its set time (default
of 30 seconds).
- In multi-cycle mode (both arrays are VMAX3), cycle time remains the same, though during peak workload more
than one cycle can be waiting on the R1 array to be transmitted.
- While the capture cycle is open, only the latest update to the same storage location will be sent to the R2, saving
bandwidth. This feature is called write-folding.
- Write-order fidelity is maintained between cycles. For example, two dependent I/Os will always be in the same
cycle, or the first of the I/Os will be in one cycle and the dependent I/O in the next.
- To limit VMAX3 cache usage by capture cycle during peak workload time and to avoid stopping replication due to
too many outstanding I/Os, VMAX3 offers a Delta Set Extension (DSE) pool which is local storage on the source
side that can help buffer outstanding data to target during peak times.
o The R2 target devices maintain a consistent replica of the R1 devices, though slightly behind, depending on how fast
the links can transmit the cycles and the cycle time. For example, when cycles are received every 30 seconds at the
remote storage array its data will be 15 seconds behind production (if transmit cycle was fully received), or 1 minute
behind (if transmit cycle was not fully received it will be discarded during failover to maintain R2 consistency).
o Consistency should always be enabled when protecting databases and applications with SRDF/A to make sure the R2
devices create a consistent restartable replica.
SRDF Adaptive Copy (SRDF/ACP) mode allows bulk transfers of data between source and target devices without
maintaining write-order fidelity and without write performance impact to source devices.
o While SRDF ACP is not valid for ongoing consistent replication, it is a good way of transferring changed data in bulk
between source and target devices after replication has been suspended for an extended period of time, accumulating
many changes on the source. ACP mode can be maintained until a certain skew of leftover changes to transmit is
reached. Once the amount of changed data has been reduced, the SRDF mode can be changed to synchronous or
asynchronous as appropriate.
o SRDF ACP is also good for migrations (also referred to as SRDF Data Mobility) as it allows a Point-in-Time data push
between source and target devices.
SRDF topologies
A two-site SRDF topology includes SRDF sessions in SRDF/S, SRDF/A, and/or SRDF/ACP between two storage arrays, where each
RDF group can be set in different mode and each array may contain R1 and R2 devices of different groups.
Concurrent SRDF: Concurrent SRDF is a three-site topology in which replication takes place from site A simultaneously to
site B and site C. Source R1 devices are replicated simultaneously to two different sets of R2 target devices on two different
remote arrays. For example, one SRDF group can be set as SRDF/S replicating to a near site and the other as SRDF/A,
replicating to a far site.
Cascaded SRDF: Cascaded SRDF is a three-site topology in which replication takes place from site A to site B, and from
there to site C. R1 devices in site A replicate to site B to a set of devices called R21. R21 devices behave as R2 to site A,
and as R1 to site C. Site C has the R2 devices. In this topology, site B holds the full capacity of the replicated data. If site A
fails and Production operations continue on site C, site B can become the DR site for site C.
SRDF/EDP: The Extended Data Protection SRDF topology is similar to cascaded SRDF: site A replicates to site B, and from
there to site C. However, in EDP, site B doesn't hold R21 devices with real capacity. Instead, this topology offers capacity
and cost savings as site B only uses cache to receive the replicated data from site A and transfer it to site C.
SRDF/STAR: SRDF/STAR offers an intelligent three-site topology similar to concurrent SRDF, where site A replicates
simultaneously to site B and site C. However, if site A failed, site B and site C can communicate to merge the changes and
resume DR. For example, if SRDF/STAR replications between site A and B use SRDF/S and replications between site A and C
use SRDF/A, if site A fails then site B can send the remaining changes to site C for a no-data-loss solution at any distance.
Site B can become a DR site for site C afterwards, until site A can come back.
SRDF/AR: SRDF Automatic Replication (AR) can be set as either a two or a three-site replication topology. It offers slower
replication when network bandwidth is limited and without performance overhead. In a two-site topology, AR uses
TimeFinder to create a PiT replica of production on site A, then uses SRDF to replicate it to site B, in which another
TimeFinder replica is created as a gold copy. Then the process repeats. In a three-site topology, site A replicates to Site B
using SRDF/S. In site B TimeFinder is used to create a replica which is then replicated to site C. In site C the gold copy
replica is created and the process repeats itself.
There are also 4-site topologies, though they are beyond the scope of this paper. For full details on SRDF modes, topologies, and
other details refer to the VMAX3 Family with HYPERMAX OS Product Guide.
APPENDIX III SOLUTIONS ENABLER CLI COMMANDS FOR TIMEFINDER
SNAPVX MANAGEMENT
# symsnapvx -sid 535 -sg FINDB_SG -name FINDB_Snap_1 establish [-ttl -delta <#of days>]
Execute Establish operation for Storage Group FINDB_SG (y/[n]) ? y
Establish operation execution is in progress for the storage group FINDB_SG. Please wait...
Polling for Establish.............................................Started.
Polling for Establish.............................................Done.
Polling for Activate..............................................Started.
Polling for Activate..............................................Done.
Dev   Snapshot Name                    Gen  FLRG Snapshot Timestamp       (Tracks)   (Tracks)   Expiration Date
----- -------------------------------- ---- ---- ------------------------ ---------- ---------- ----------------
Flgs:
(F)ailed : X = Failed, . = No Failure
# symsnapvx -sid 535 -sg FINDB_SG -snapshot_name FINDB_Snap_1 -lnsg FINDB_MNT link [-copy]
-------------------------------------------------------------------------------
Sym Link Flgs
Dev Snapshot Name Gen Dev FCMD Snapshot Timestamp
----- -------------------------------- ---- ----- ---- ------------------------
000BC FINDB_Snap_1 0 00053 ..X. Tue Mar 31 10:12:52 2015
000BD FINDB_Snap_1 0 00054 .... Tue Mar 31 10:12:52 2015
000BE FINDB_Snap_1 0 00055 .... Tue Mar 31 10:12:52 2015
000BF FINDB_Snap_1 0 00056 .... Tue Mar 31 10:12:52 2015
000C0 FINDB_Snap_1 0 00057 ..XX Tue Mar 31 10:12:52 2015
000C1 FINDB_Snap_1 0 00058 ...X Tue Mar 31 10:12:52 2015
000C2 FINDB_Snap_1 0 00059 ...X Tue Mar 31 10:12:52 2015
000C3 FINDB_Snap_1 0 0005A ...X Tue Mar 31 10:12:52 2015
Flgs:
# symrdf addgrp -label FINDB -rdfg 20 -sid 535 -dir 1H:10 -remote_sid 536 -remote_dir 1E:7 -remote_rdfg 20
Execute a Dynamic RDF Addgrp operation for group
# symrdf -sid 535 -sg FINDB_SG -rdfg 20 createpair -type R1 -remote_sg FINDB_R2 -establish
Execute an RDF 'Create Pair' operation for storage group 'FINDB_SG' (y/[n]) ? y
MB(s) 0 1642679
Legend for MODES:
M(ode of Operation) : A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
: M = Mixed
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
(Consistency) E(xempt): X = Enabled, . = Disabled, M = Mixed, - = N/A
APPENDIX V - SOLUTIONS ENABLER ARRAY BASED ACCESS CONTROL
MANAGEMENT
VMAX Solutions Enabler array-based Access Control (ACL) allows DBA users to perform VMAX management from the database host
or a host under DBA control. By setting ACLs on database devices, DBAs can perform data protection operations with better control,
isolation, and security.
Access Groups: Groups that contain the unique host ID and descriptive host name of non-root users. The host ID is
obtained by running the symacl -unique command on the appropriate host.
Access Pools: Pools that specify the set of devices for operations.
Access Control Entry (ACE): Entries in the Access Control Database that specify the permission level for the Access
Control Groups and on which pools they can operate.
This appendix illustrates ACL management using Unisphere, but array-based Access Control can also be performed using the
Solutions Enabler command line interface with the general syntax: symacl -sid <SymmID> -file <filename> preview | prepare | commit.
With this syntax, preview verifies the syntax, prepare runs preview and checks whether execution is possible, and commit runs
the prepare checks and then executes the command.
The Storage Admin PIN can be set in an environment variable, SYMCLI_ACCESS_PIN, or entered manually.
1. Initialize the SymmACL database, then identify and add an admin host with host access to run symacl commands.
2. Identify the unique IDs of the Unisphere for VMAX and database management hosts (use SYMCLI for this step).
3. Add the Unisphere for VMAX host to AdminGrp for ACL management (use SYMCLI for this step).
4. Create a database host management access group using the database management host access ID.
5. Create a database device access pool containing the devices in FINDB_SG.
6. Grant base management and data protection management privileges to the access group.
7. Install Solutions Enabler as the non-root Oracle user and run SnapVX operations as the Oracle user on the devices to which
that user was granted access.
Identifying the unique IDs of the Unisphere for VMAX and database management hosts
Run the following SYMCLI command on both the Unisphere for VMAX and database management hosts to retrieve their unique IDs.
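A sketch of this step, assuming Solutions Enabler is installed on each host; the ID returned is host-specific:

```shell
# Print the unique access ID of this host, for use when adding the host
# to a SymmACL access group
symacl -unique
```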
Note: On VMAX3, the SymmACL database comes pre-initialized with this group, which has to be initialized first if that has not
already been done. For Unisphere access, the VMAX SymmWin procedure wizard must be used to add the host-based access ID of the
Unisphere host to the SymmACL database. Once the Unisphere host has been added to the SymmACL database, the following commands
can also be run from the Unisphere graphical user interface instead of the Solutions Enabler command line on a host with granted
access. A PIN can also be set up using the SymmWin procedure wizard.
Command file: (stdin)
PREVIEW............................................................................................Started.
PREVIEW............................................................................................Done.
PREPARE............................................................................................Started.
Create access group for database host
Create a database host management access group using the database management host access ID.
Create database device access pool
Create a database device pool by selecting the devices in FINDB_SG.
Grant base and snap privileges
Create an Access Control Entry for the database access group and database access pool created in the earlier steps. Assign BASE,
BASECTRL, and SNAP privileges to the access group for the access pool so that snapshot operations can be run.
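For reference, the same three steps can be scripted in a symacl command file instead of Unisphere. The group name, pool name, host access ID, and device range below are placeholders, and the exact grammar should be checked against the Solutions Enabler Array Controls documentation:

```shell
# acl_cmds.txt -- processed with: symacl -sid 535 -file acl_cmds.txt commit

# Access group for the database management host
create accgroup DbaGrp;
add host accid 1234-5678 name dbahost to accgroup DbaGrp;   # ID from 'symacl -unique'

# Access pool containing the database devices (the devices in FINDB_SG)
create accpool DbaPool;
add dev 00100:0010F to accpool DbaPool;

# ACE granting base and snap privileges to the group for the pool
grant access=BASE,BASECTRL,SNAP to accgroup DbaGrp for accpool DbaPool;
```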
On the application management host, install Solutions Enabler for the Oracle user. The installation itself must be performed as the
root user, but the installer provides an option to allow a non-root user to run Solutions Enabler commands.
# The following HAS BEEN INSTALLED in /home/oracle/SE via the rpm utility.
#-----------------------------------------------------------------------------
ITEM PRODUCT VERSION
To allow the Oracle user to run symcfg discover and list commands, permission is required to use the Solutions Enabler
daemons. Update the daemon_users file.
# cd /var/symapi/config
# vi daemon_users
# Add entry to allow user access to base daemon
oracle storapid
oracle storgnsd
# su - oracle
$ symcfg discover
$ sympd list -gb
<Running SYMSNAPVX command on a storage group with devices NOT in access group>
# symsnapvx -sid 535 -sg FINDB_FRA -name FINDB_FRA_SNAP establish
Execute Establish operation for Storage Group FINDB_FRA (y/[n]) ? y
Establish operation execution is in progress for the storage group FINDB_FRA. Please wait...
Symmetrix access control denied the request
REFERENCES
EMC VMAX3 Family with HYPERMAX OS Product Guide
Unisphere for VMAX Documentation Set
EMC VMAX3 Local Replication TechNote
Deployment Best Practices for Oracle Database with VMAX3 SLO Management