
Oracle GoldenGate Best Practices:

GoldenGate Capture from a DataGuard with Cascaded Redo Logs

Version 12c
Document ID 2395049.1

ORACLE WHITE PAPER | AUGUST 2018

Tracy West
Consulting Solution Architect
Sourav Bhattacharya
Consulting Solution Architect

A-Team – Cloud Solution Architects


DISCLAIMER
This sample code is provided for educational purposes only and not supported by Oracle Support
Services. It has been tested internally, however, and works as documented. We do not guarantee
that it will work for you, so be sure to test it in your environment before relying on it.

Proofread this sample code before using it! Due to the differences in the way text editors, e-mail
packages and operating systems handle text formatting (spaces, tabs and carriage returns), this
sample code may not be in an executable state when you first receive it. Check over the sample
code to ensure that errors of this type are corrected.

This document touches briefly on many important and complex concepts and does not provide a
detailed explanation of any one topic since the intent is to present the material in the most
expedient manner. The goal is simply to help the reader become familiar enough with the
product to successfully design and implement an Oracle GoldenGate environment. To that end, it
is important to note that the activities of design, unit testing and integration testing which are
crucial to a successful implementation have been intentionally left out of the guide. All the
sample scripts are provided as is. Oracle consulting service is highly recommended for any
customized implementation.

Contents

Introduction
Prerequisites
Additional OGG Considerations
Overview
Configure Cascading Redo Log Shipping
Source RDBMS Setup
ADG Standby Setup
Downstream Mining Server Setup
Instantiation Considerations
Oracle GoldenGate Highlights
Source RDBMS Configuration Steps
Enable Cascading Redo Log Shipping
Set LOG_ARCHIVE_CONFIG
Enable Minimal Supplemental Logging
Create OGG Extract User
Open SQL*Net Port
ADG Standby Configuration Steps
Enable Cascading Redo Log Shipping
Set LOG_ARCHIVE_CONFIG
Set LOG_ARCHIVE_DEST_3
Open SQL*Net Port
Create TNSNAMES entry for Mining System
Downstream Mining Server Configuration Steps
Enable Cascading Redo Log Shipping
Set LOG_ARCHIVE_CONFIG
Set LOG_ARCHIVE_DEST_1
Set LOG_ARCHIVE_DEST_2
Set Standby Logfile Groups
Copy password file from Source to Downstream
Open SQL*Net Port
Create TNSNAMES entry for Source System
Create TNSNAMES entry for ADG Standby System
Create Mining Database Capture User
Enable Supplemental Logging
Create & Start Downstream Extract Process
Instantiation Methods
Using Oracle expdp
CSN Filtering Applied to All Mapped Objects in Replicat
CSN Filtering with the parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING
Using Other Data Tools (e.g. ODI or Network Import)
CSN Filtering Applied to All Mapped Objects in Replicat
CSN Filtering Applied to specific tables in Replicat
Where to Go for More Information

Introduction
In some circumstances, Oracle GoldenGate customers may wish to eliminate installation and workload
dependencies on the primary Oracle Database. Typically, this is an important consideration when the primary
database workload is already very high or when the primary database operates on a protected host that is not a
suitable host for a GoldenGate installation. In many Cloud situations, it is desirable to completely decouple the
primary application database from the downstream change data capture process with GoldenGate.

This paper provides guidance and examples for setting up downstream extraction from an Oracle Data Guard
Standby system that has been configured with cascading redo logs. We outline how to configure the cascaded log
shipping from the standby to the mining database, as well as how to configure the Extract process on the mining
server to fetch necessary data from the Data Guard standby system. Instantiation of target systems from a
Data Guard standby has special considerations and is also addressed in this document. This architecture,
using cascaded redo, is the most decoupled solution that can still provide real-time change data capture.
The document is intended for Oracle Database Administrators (DBAs) and Oracle developers with basic knowledge
of Oracle GoldenGate and Oracle Data Guard, and is meant to be a supplement to the existing series of
documentation available from Oracle.

The following assumptions have been made during the writing of this document:

» The reader has basic knowledge of Oracle GoldenGate products and concepts
» Referencing Oracle GoldenGate Version 12.2 and above
» Referencing Oracle RDBMS Version 11.2.0.4 and above
» Referencing OS: All Oracle GoldenGate supported platforms for Oracle

Oracle GoldenGate is the market-leading data integration tool; it provides data capture from transaction logs and
delivery for homogeneous/heterogeneous databases, big data, and messaging systems. A key strength of Oracle
GoldenGate is that it provides a flexible, de-coupled architecture that can be used to implement virtually any
replication scenario.



Prerequisites
If you plan to execute the instructions in this best practice, make sure all software is already installed. The reader
should be familiar with basic OGG architecture and functionality. For Oracle RDBMS 12.1.0.2 and above, the
init.ora parameter enable_goldengate_replication must be set to TRUE in the source, target, and
mining databases.

The following table describes items that are referred to throughout the document. You will need to identify your
installation-specific values and substitute them as you go.

Item                           Reference           Description

Unix Programs                  /ggs                Directory of the Unix GoldenGate installation.

Unix Parameter Files           /ggs/dirprm         Directory for GoldenGate parameter files.

Unix Report Files              /ggs/dirrpt         Directory for output from GoldenGate programs.

Unix Definitions Files         /ggs/dirdef         Directory for generated Oracle DDL and definition files.

GGS Temporary Storage          /ggs/dirdat         Directory to hold temporary Extract trails.

Oracle Logon                   userid, password    User ID and password for the source or target database. When implementing
                                                   Integrated Extract or Replicat, this user must be granted admin privileges with the
                                                   DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE procedure on both the source and target databases.

Unix System Network Address    Target Server       IP address/hostname of the target Unix system on the network.
Unix System Network Address    Source Server       IP address/hostname of the source Unix system on the network.

Additional OGG Considerations


Key Management
» The encryption key used for trail file encryption must be shared between the OGG source and the OGG target.
Password Security
» Passwords used by GoldenGate for database access are encrypted using AES.
Command Security
» Manager is configured with CMDSEC security. Restrict IPC messages by IP address and user with the
ACCESSRULE parameter.
Process Management
» GoldenGate can be configured to automatically start/restart processes upon any failure, including network
failure (a combined Manager parameter sketch follows this list).
Monitoring
» All GoldenGate deployments can be monitored by Enterprise Manager with the optional Monitor Agent, which is
not covered in this document.
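
The following is a minimal Manager parameter sketch that illustrates the autostart/restart and command-security
options listed above. The port, retry counts, and IP range are placeholders only, not values from this
configuration, and should be adjusted for your environment.

-- mgr.prm (illustrative)
PORT 7809
-- Start all Extract and Replicat groups when Manager starts
AUTOSTART ER *
-- Restart failed processes up to 5 times, waiting 2 minutes between attempts
AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60
-- Allow IPC connections only from the 10.0.0.* range (placeholder range)
ACCESSRULE, PROG *, IPADDR 10.0.0.*, ALLOW

Passwords referenced in parameter files can be encrypted from GGSCI, for example with
ENCRYPT PASSWORD <password> AES256 ENCRYPTKEY <keyname>, assuming a key of that name has been defined in
the ENCKEYS file.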



Overview
This document will address how to configure Oracle GoldenGate to extract data from an ADG Standby by utilizing
cascading redo log shipping and a downstream mining server. It assumes that the user already has the following
systems configured: 1) a source RDBMS, 2) an ADG Standby for the source system, and 3) a database that will be
used as the mining database. This document outlines the steps to enable those environments to work together for
implementing downstream extraction from an ADG Standby.

[Architecture diagram: the production database (on-premises or DBCS) ships redo to its ADG Standby; the standby
cascades redo over an SSL connection, across the firewalls, to the downstream mining database
(DIPC/GGCS/on-premises), where GoldenGate Capture writes trail files and a GoldenGate Pump delivers them to
GoldenGate Replicat on the target database or big data system (e.g., OEH/BDC at Cloud@Customer).]

Configure Cascading Redo Log Shipping


» This is a Data Guard configuration option. There are steps that need to be performed at the source, the ADG
standby, and the downstream mining databases.

Source RDBMS Setup


» There are a few steps that have to be executed on the source to enable downstream extraction from an
ADG Standby: enabling supplemental logging and registering the Extract are the main ones. After this
preliminary work is completed, the source system is not impacted by OGG.

ADG Standby Setup


» This document will not cover the complete setup of an ADG Standby. This document assumes the ADG
Standby already exists. However, in order for the redo logs to be shipped to the mining database server,
cascade redo log shipping will need to be enabled from the ADG Standby.

Downstream Mining Server Setup


» The majority of the work is done on this server. The Extract process needs to be configured, and the
database has to be configured to receive the cascading redo logs shipped from the ADG Standby server.
The downstream mining server RDBMS version can differ from the ADG Standby; it is only required to be the
same version or higher. The mining server can be patched without impacting the source systems.

Instantiation Considerations
» Instantiation has similar requirements whether it is done from the source system or the ADG Standby
system. There are certain, slightly different, steps that users need to be aware of to get a consistent copy of
data from the Standby. This paper does not cover instantiation to the extent of Document 1276058.1,
"Instantiation from an Oracle Source Database with Oracle GoldenGate 12c"; it focuses on the unique
requirements of the ADG Standby server.



Oracle GoldenGate Highlights
Extract, Extract pump, and Replicat work together to keep the databases in sync in near real-time via incremental
transaction replication. In all examples this function is accomplished by:

» Starting the Manager program on all OGG-installed systems.
» Adding supplemental transaction log data for update operations on the source system.
» Creating/running the real-time Extract to retrieve and store the incremental changed data from the Oracle
tables into local trail files.
» Creating/running the real-time Extract pump to send incremental changed data from the source
environment to the target environment.

After initial instantiation (heterogeneous/homogeneous):

» Creating/starting the real-time Replicat to replicate extracted data.

Once Extract and Replicat are running, changes are replicated perpetually.

Notes on Command Syntax: Commands throughout the document make specific references to directories, file
names, checkpoint group names, begin times, etc. Unless otherwise noted, these items do not have to correspond
exactly in your environment; they are used to illustrate concrete examples. For exact syntax, consult the Oracle
GoldenGate Reference Guide.



Source RDBMS Configuration Steps

Enable Cascading Redo Log Shipping

At the source database, the database parameter LOG_ARCHIVE_CONFIG has to be set to include the primary
database, the standby database, and the mining database.

Set LOG_ARCHIVE_CONFIG
SQL> Alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(<primary database DB_UNIQUE_NAME>, <standby database DB_UNIQUE_NAME>, <mining database DB_UNIQUE_NAME>)';

SQL> Alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(SRC_01, STBY_02, MINING)';

Enable Minimal Supplemental Logging


To enable minimal supplemental logging at the PDB level, issue the following command on the source Unix system.

$(Source System) sqlplus / as sysdba

SQL> alter session set container=pdb1 ;

Session altered.

SQL> ALTER PLUGGABLE DATABASE ADD SUPPLEMENTAL LOG DATA;

Pluggable Database altered.
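
To confirm that minimal supplemental logging is in place, the logging flag can be queried from the same container.
This is only an illustrative check; a value of YES (or IMPLICIT, when database-level logging already covers the PDB)
indicates that minimal supplemental logging is effective.

SQL> select supplemental_log_data_min from v$database;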

Create OGG Extract User


The Extract user for a Multitenant environment must be a common user and must log into the root container. This
user must be created in the source database. In the following example, the extract userid is c##ggadmin using the
password ggadmin. If your environment has other security features that require additional permissions, grant those
as well. This user will be used to fetch, when required, data from the ADG Standby Database.

$(Source System) sqlplus / as sysdba


SQL> create user c##ggadmin identified by ggadmin;
User created.

SQL> exec
dbms_goldengate_auth.grant_admin_privilege('C##GGADMIN',container=>'ALL');
PL/SQL procedure successfully completed.

SQL> grant dba to c##ggadmin container=all;


Grant succeeded.

SQL> connect c##ggadmin/ggadmin


Connected.



Open SQL*Net Port
For cloud environments, a SQL*Net connection must be opened so that the mining database can connect to the
source database to register the Extract and enable table-level supplemental logging. In Oracle Cloud, this is
configured through Access Rules; in an on-premise environment, this would be done by the network administration team.
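
Once the port is open and the to_source alias (defined later on the mining server) is in place, connectivity can be
verified from the mining host. This is only an illustrative check using standard Oracle client tools:

$(Mining DB Server) tnsping to_source
$(Mining DB Server) sqlplus c##ggadmin@to_source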



ADG Standby Configuration Steps

Enable Cascading Redo Log Shipping


Redo needs to be shipped from the ADG Standby to the Downstream Mining Server. In a cascading redo shipping
configuration, it is important to note that even if redo is not being applied to the ADG Standby, redo will continue to
be shipped to the downstream mining server. The database parameters LOG_ARCHIVE_CONFIG and
LOG_ARCHIVE_DEST_3 need to be set. LOG_ARCHIVE_DEST_2 is usually set for the original standby
configuration.

Set LOG_ARCHIVE_CONFIG
SQL> Alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(<primary database DB_UNIQUE_NAME>, <standby database DB_UNIQUE_NAME>, <mining database DB_UNIQUE_NAME>)';

SQL> Alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(SRC_01, STBY_02, MINING)';

Set LOG_ARCHIVE_DEST_3
SQL> Alter system set LOG_ARCHIVE_DEST_3='SERVICE=<connect string for the mining database> ASYNC NOREGISTER VALID_FOR=(STANDBY_LOGFILES, STANDBY_ROLE) REOPEN=10 DB_UNIQUE_NAME=<db unique name of the mining server>';

SQL> Alter system set LOG_ARCHIVE_DEST_3='SERVICE=to_mining ASYNC NOREGISTER VALID_FOR=(STANDBY_LOGFILES, STANDBY_ROLE) REOPEN=10 DB_UNIQUE_NAME=mining';
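
Once the to_mining alias below exists and redo transport has started, the health of the new destination can be
checked on the ADG Standby. This is an illustrative query; a STATUS of VALID and an empty ERROR column indicate
that redo is shipping.

SQL> select dest_id, status, error from v$archive_dest where dest_id = 3;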

Open SQL*Net Port


Much like what was done on the source database, a SQL*Net connection is required so that the mining database
Extract can connect to the ADG Standby database to fetch required data.

Create TNSNAMES entry for Mining System


Create an entry in the tnsnames.ora to enable a connection to the mining database for the shipping of redo from the
standby to the mining database.
to_mining =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)
(HOST = <mining db ip or host name>)
(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL.us.cloud.internal)
)
)



Downstream Mining Server Configuration Steps

Enable Cascading Redo Log Shipping


On the mining server, the database parameters LOG_ARCHIVE_CONFIG, LOG_ARCHIVE_DEST_1, and
LOG_ARCHIVE_DEST_2 will need to be set, the standby logfile groups created, and the password file
copied from the source.

Set LOG_ARCHIVE_CONFIG
SQL> Alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(<primary database DB_UNIQUE_NAME>, <standby database DB_UNIQUE_NAME>, <mining database DB_UNIQUE_NAME>)';

SQL> Alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(SRC_01, STBY_02, MINING)';

Set LOG_ARCHIVE_DEST_1
SQL> Alter system set LOG_ARCHIVE_DEST_1='LOCATION=<location for local archives> VALID_FOR=(ONLINE_LOGFILE, PRIMARY_ROLE)';

SQL> Alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILE, PRIMARY_ROLE)';

Set LOG_ARCHIVE_DEST_2
SQL> Alter system set LOG_ARCHIVE_DEST_2='LOCATION=<landing path for foreign archives that are cascading through the standby db> VALID_FOR=(STANDBY_LOGFILES, ALL_ROLES)';

SQL> Alter system set LOG_ARCHIVE_DEST_2='LOCATION=/u01/app/oracle/foreign_archives VALID_FOR=(STANDBY_LOGFILES, ALL_ROLES)';

Set Standby Logfile Groups


The size and number of the standby redo logs depend on your operating environment. The standby redo log size
needs to be the same as the current redo log size of the source system. The number of standby logs in the mining
database is determined using the following formula:
(maximum # of logfile groups on the source db + 1) * maximum # of threads

In this case, we have a non-RAC setup with 3 groups.

Standby redo logs = (3 + 1) * 1 = 4
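
To gather the inputs for this formula, the redo log size, group count, and thread count can be queried on the
source database (an illustrative check):

SQL> select thread#, group#, bytes/1024/1024 as size_mb from v$log order by thread#, group#;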

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oracle/dbs/slog4a.rdo', '/oracle/dbs/slog4b.rdo') SIZE 1024M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/oracle/dbs/slog5.rdo', '/oracle/dbs/slog5b.rdo') SIZE 1024M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/oracle/dbs/slog6.rdo', '/oracle/dbs/slog6b.rdo') SIZE 1024M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/oracle/dbs/slog7.rdo', '/oracle/dbs/slog7b.rdo') SIZE 1024M;
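
The newly created groups can be confirmed on the mining database (an illustrative query; groups typically show a
STATUS of UNASSIGNED until redo is received):

SQL> select group#, thread#, bytes/1024/1024 as size_mb, status from v$standby_log;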

Copy password file from Source to Downstream

Copy the "sys" user password file (orapw<SID>) from the source database host to the mining database host,
renaming it to match the mining database SID.

$ cp orapw<source database SID> $ORACLE_HOME/dbs/orapw<mining database SID>
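
When the source and mining databases run on different hosts, the copy is typically done over the network. A sketch
using scp with hypothetical SIDs (SRC_01 on the source, MINING on the mining server) and placeholder paths:

# run on the source host; the remote host name and ORACLE_HOME path are placeholders
$(Source System) scp $ORACLE_HOME/dbs/orapwSRC_01 oracle@<mining host>:<mining ORACLE_HOME>/dbs/orapwMINING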

Open SQL*Net Port


Much like what was done on the source database and standby database, a SQL*Net connection for the standby to
cascade the redo logs to the mining database is required.

Create TNSNAMES entry for Source System


Create an entry in the tnsnames.ora to enable a connection to the source database for the initial setup of
supplemental logging and registering the extract.
to_source =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)
(HOST = <source db ip or host name>)
(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL.us.cloud.internal)
)
)

Create TNSNAMES entry for ADG Standby System


Create an entry in the tnsnames.ora to enable a connection to the standby database for the extract. The
downstream extract will need to connect to the standby to fetch data when required. This will be used in the extract
parameter file.
to_stby =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)
(HOST = <standby db ip or host name>)
(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL.us.cloud.internal)
)
)



Create Mining Database Capture User
The Extract user for a Multitenant environment must be a common user and must log into the root container. This
user must be created in the mining database. In the following example, the extract userid is c##min using the
password ogg. If your environment has other security features that require additional permissions, grant those as
well.

$(Mining DB Server) sqlplus / as sysdba

SQL> create user c##min identified by ogg;


User created.

SQL> exec dbms_goldengate_auth.grant_admin_privilege('C##MIN',container=>'ALL');


PL/SQL procedure successfully completed.

SQL> grant dba to c##min container=all;


Grant succeeded.

SQL> connect c##min/ogg


Connected.

Enable Supplemental Logging


In order to replicate data from any system, OGG requires that table-level supplemental logging be enabled at
the source system. As of GoldenGate (OGG) version 12.2, there is transparent integration of OGG with
Oracle Data Pump; this feature requires OGG version 12.2 or above on both the source and target systems. The CSN
for each table is captured during an Oracle Data Pump export. The CSN is then applied to system tables and views on
the target database during the import. These views and system tables are referenced by Replicat when applying data to
the target database. When using Oracle Data Pump to instantiate a target system, users must make sure the tables
being instantiated have been prepared for instantiation.

At Mining Database:

» Login to Source Database via GGSCI

GGSCI > DBLOGIN USERID c##ggadmin@to_source PASSWORD ggadmin

Successfully logged into database CDB$ROOT.

» Source system tables are automatically prepared when issuing the command ADD TRANDATA / ADD
SCHEMATRANDATA with PREPARECSN

GGSCI > ADD SCHEMATRANDATA pdb1.apps PREPARECSN

2018-04-20 23:16:44 INFO OGG-01788 SCHEMATRANDATA has been added on schema apps.
2018-04-20 23:16:44 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema apps.
2018-04-20 23:16:44 INFO OGG-10154 Schema level PREPARECSN set to mode NOWAIT on schema apps

GGSCI > INFO SCHEMATRANDATA pdb1.apps

2018-04-20 23:32:41 INFO OGG-06480 Schema level supplemental logging, excluding non-validated keys, is enabled on schema APPS.
2018-04-20 23:32:41 INFO OGG-01980 Schema level supplemental logging is enabled on schema APPS for all scheduling columns.
2018-04-20 23:32:41 INFO OGG-10462 Schema APPS have 2 prepared tables for instantiation.

GGSCI > info trandata pdb1.apps.*

2018-04-20 23:31:06 INFO OGG-06480 Schema level supplemental logging, excluding non-validated keys, is enabled on schema APPS.
2018-04-20 23:31:06 INFO OGG-01980 Schema level supplemental logging is enabled on schema APPS for all scheduling columns.

Logging of supplemental redo log data is enabled for table PDB1.APPS.TCUSTMER.
Columns supplementally logged for table PDB1.APPS.TCUSTMER: CUST_CODE.
Prepared CSN for table PDB1.APPS.TCUSTMER: 12454843

Logging of supplemental redo log data is enabled for table PDB1.APPS.TCUSTORD.
Columns supplementally logged for table PDB1.APPS.TCUSTORD: CUST_CODE, ORDER_DATE, ORDER_ID, PRODUCT_CODE.
Prepared CSN for table PDB1.APPS.TCUSTORD: 12454847

» A query against the SOURCE or STANDBY database can also verify whether individual tables have been
prepared for instantiation.

SQL> select table_name, scn from dba_capture_prepared_tables where table_owner = 'APPS';

TABLE_NAME                     SCN
------------------------------ ----------
TCUSTMER                       12454843
TCUSTORD                       12454847

Note: The SCN is the smallest system change number for which the table can be instantiated. It is not the export SCN.



Create & Start Downstream Extract Process
» Login to Source Database via GGSCI on the mining server

GGSCI > DBLOGIN USERID c##ggadmin@to_source PASSWORD ggadmin

Successfully logged into database CDB$ROOT.

» Login to Mining Database

GGSCI > MININGDBLOGIN USERID c##min@orcl PASSWORD ogg

Successfully logged into mining database.

» Register the Extract with the SOURCE database. This will be the last time OGG will need a connection to
the source database. Registering the extract will dump the required data dictionary to the redo logs.

GGSCI > REGISTER EXTRACT eapps DATABASE CONTAINER (PDB1)

2018-04-21 02:12:54 INFO OGG-02003 Extract EAPPS successfully registered with database at SCN 12481586.

» Add the Extract & Trail File

GGSCI > ADD EXTRACT eapps INTEGRATED TRANLOG BEGIN NOW
EXTRACT (Integrated) added.

GGSCI > ADD EXTTRAIL ./dirdat/lt, EXTRACT eapps
EXTTRAIL added.

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT STOPPED EAPPS 00:00:00 00:03:03

» Start the mining Extract

GGSCI > start eapps

Sending START request to MANAGER ...
EXTRACT EAPPS starting

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EAPPS 00:00:44 00:00:06
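
At this point, it can also be useful to confirm on the mining database that the integrated capture is running and that
cascaded redo is arriving. These are illustrative checks only; the available views and columns may differ slightly
across database versions:

SQL> select capture_name, state from v$goldengate_capture;
SQL> select count(*) from v$foreign_archived_log;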



» Contents of Extract Parameter File
The key parameters for this configuration, shown in this parameter file, are NOUSERID, FETCHUSERID,
DBOPTIONS FETCHTIMEOUT, and DBOPTIONS FETCHCHECKFREQ.

EXTRACT EAPPS
-- Not logging into Source Database. Must use NOUSERID
NOUSERID
-- When using NOUSERID, FETCHUSERID of standby must be specified
FETCHUSERID c##ggadmin@to_stby, password ggadmin
-- Force Extract to Abend after the default of 30 seconds if the ADG is
-- behind the mining extract on a Fetch.
DBOPTIONS FETCHTIMEOUT
-- Extract will wait the default of 3 seconds between each check while
-- waiting for ADG to catch up
DBOPTIONS FETCHCHECKFREQ
-- Mining Database Login
TRANLOGOPTIONS MININGUSER c##min@orcl, MININGPASSWORD ogg
-- Specify Real-Time Mode not Archive Log Only mode
TRANLOGOPTIONS INTEGRATEDPARAMS (DOWNSTREAM_REAL_TIME_MINE Y)
EXTTRAIL ./dirdat/lt
TABLE PDB1.APPS.*;

It is highly recommended that a job be created on the source system that will dump the data dictionary periodically.
This will enable the extract to be repositioned in the event of an issue later on. The command to dump the
dictionary is as follows:

EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
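
One way to schedule this is with DBMS_SCHEDULER on the source database. The job name and frequency below are
illustrative choices only, not values from this configuration:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'OGG_DICT_BUILD',          -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_LOGMNR_D.BUILD(options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',     -- once per day at 02:00; adjust as needed
    enabled         => TRUE);
END;
/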



Instantiation Methods

This is a guideline for how to instantiate a target database from an ADG standby. For more detailed steps for
instantiation of a target database, please refer to Document ID 1276058.1, "Instantiation from an Oracle Source
Database with Oracle GoldenGate 12c".

In detailing these steps, it is assumed that all long-running transactions that were active when the Extract was added
have since committed. The Extract could have been added in the middle of a long-running transaction, and it would
not know about the operations in that transaction that occurred before it was added. So, in order not to miss
data, it is important to verify that all transactions that were open when the Extract was added have since
committed (a sample check follows).
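
A sketch of such a check, run against the source database: list any open transactions and compare their start SCN
with the SCN reported when the Extract was registered (12481586 in the example above). Any row with a lower
START_SCN represents a transaction that must commit or roll back before instantiation.

SQL> select s.sid, s.username, t.start_time, t.start_scn
     from v$transaction t join v$session s on s.taddr = t.addr
     order by t.start_scn;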

This section will address how to pull data from the ADG Standby via expdp or other data tools like ODI. Oracle's
expdp requires write access to the database, so in order to use expdp, the ADG standby has to be temporarily
converted to a Snapshot Standby. Other data tools, like ODI or network import, only require read access; in that
case, only the process applying the logs to the standby needs to be temporarily stopped.

Using Oracle expdp


There are 2 approaches with OGG 12.2 and above that can be used on the target for applying data after a particular
SCN. The replicat can be started with AFTERCSN or the parameter DBOPTIONS
ENABLE_INSTANTIATION_FILTERING can be enabled. The steps required at the ADG Standby, however, are the
same in both cases.

At Mining Database:

Cascade Redo Log shipping will continue even when the redo log apply process on the standby has stopped.
In order to avoid the extract failing because it requires data to be fetched from the ADG Standby, it is
recommended to stop the extract process before stopping the redo log apply process on the standby.

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EAPPS 00:00:50 00:00:09

GGSCI > stop eapps


Sending STOP request to EXTRACT EAPPS ...
Request processed.
GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT STOPPED EAPPS 00:00:30 00:00:16



At Target Database:

Once the replicat is current, stop the replicat process

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING RAPPS 00:00:00 00:00:02

GGSCI > stop rapps

Sending STOP request to REPLICAT RAPPS ...


Request processed.

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT STOPPED RAPPS 00:00:00 00:02:19

At ADG Standby:
Stop the redo apply process and note the current_scn. That SCN will be used later to start the Replicat (AFTERCSN) or to set the instantiation CSN.

SQL> select open_mode, current_scn from v$database;

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ ONLY WITH APPLY 13948988

SQL> alter database recover managed standby database cancel ;

Database altered.

SQL> select open_mode, current_scn from v$database ;

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ ONLY 13949075

SQL> /

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ ONLY 13949075



Convert to Snapshot Standby, which opens the database read/write so that expdp can be used.
SQL> alter database convert to snapshot standby ;

Database altered.

SQL> select open_mode, current_scn from v$database ;

OPEN_MODE CURRENT_SCN
-------------------- -----------
MOUNTED 0

Open database for expdp

SQL> alter database open;

Database altered.

SQL> select open_mode, current_scn from v$database ;

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ WRITE 13950137

Perform the required export: multiple schemas, a single schema, or a single table.

>expdp directory=dumpdir schemas=apps parallel=4 dumpfile=ora102_%u.dmp

>Username: system
Note: Any DB user with DBA privileges will do
>Password:
Note: The export log needs to be checked for errors.

Restore the database back to Standby Mode and Restart the Redo Apply Process
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount ;
ORACLE instance started.

Total System Global Area 2768240640 bytes


Fixed Size 2928248 bytes
Variable Size 704643464 bytes
Database Buffers 2046820352 bytes
Redo Buffers 13848576 bytes
Database mounted.

SQL> alter database convert to physical standby ;

Database altered.

SQL> alter database open;

Database altered.



SQL> select open_mode, current_scn from v$database ;

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ ONLY WITH APPLY 13953343

Note: If the OPEN_MODE was READ ONLY, then you would need to issue the following
command to ensure apply was enabled: SQL> alter database recover managed standby
database disconnect ;

At Mining Database:

Restart the Downstream Extract


GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT STOPPED EAPPS 00:00:30 01:50:36

GGSCI > start eapps

Sending START request to MANAGER ...


EXTRACT EAPPS starting

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EAPPS 00:00:42 00:00:07

Import into Target Database:


Import the data to the target system.
>impdp system/password DIRECTORY=dumpdir DUMPFILE=ora102_%u.dmp

CSN Filtering Applied to All Mapped Objects in Replicat


During an initial load of all tables from an ADG Standby, this is the most efficient option. It forces the
Replicat to filter all DDL and DML for all objects in the Replicat based on the CSN entered at startup.
Start the Replicat using the SCN of the Standby Database (13949075) captured before it was opened as a Snapshot
Standby.
GGSCI > START REP rapps, AFTERCSN 13949075

Sending START request to MANAGER ...


REPLICAT RAPPS starting

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING RAPPS 00:00:00 00:00:06



CSN Filtering with the parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING
Enabling this parameter in the Replicat allows CSN filtering of specific tables in the Replicat instead of all the tables
in the Replicat. There is some extra work involved, but it gives the flexibility to re-instantiate just one table if needed.

» Datapump import will populate system tables and views with instantiation SCNs

SQL> select source_database, source_object_name, instantiation_scn, ignore_scn
     from dba_apply_instantiated_objects where source_object_owner = 'APPS';

SOURCE_DATABASE                 SOURCE_OBJECT_NAME   INSTANTIATION_SCN IGNORE_SCN
------------------------------- -------------------- ----------------- ----------
PDB1.ORACLECLOUD.MYDB           TCUSTORD             13950137          0
PDB1.ORACLECLOUD.MYDB           TCUSTMER             13950137          0

Change the Instantiation SCN to the SCN of the Standby Database (13949075) before it was opened as a
SnapShot Standby. This is set for each table that was imported.

GGSCI > dblogin userid apps@pdb1 password apps

Successfully logged into database PDB1.

GGSCI > SET_INSTANTIATION_CSN 13949075 FOR APPS.TCUSTMER FROM PDB1.ORACLECLOUD.MYDB

2018-04-27 22:10:39 INFO OGG-10463 Instantiation CSN has been set successfully.

GGSCI > SET_INSTANTIATION_CSN 13949075 FOR APPS.TCUSTORD FROM PDB1.ORACLECLOUD.MYDB

2018-04-27 22:10:39 INFO OGG-10463 Instantiation CSN has been set successfully.

Verify Change in Database

SQL> select source_database, source_object_name, instantiation_scn
     from dba_apply_instantiated_objects where source_object_owner = 'APPS';

SOURCE_DATABASE                SOURCE_OBJ INSTANTIATION_SCN
------------------------------ ---------- -----------------
PDB1.ORACLECLOUD.MYDB          TCUSTORD   13949075
PDB1.ORACLECLOUD.MYDB          TCUSTMER   13949075



Add the Replicat parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING to enable table-level instantiation
filtering, then start the Replicat. The Replicat will query the instantiation CSN on any new mapping and filter records
accordingly, filtering out DDL and DML records based on each table's instantiation CSN. Output in the report file will
show the table name and the CSN from which the Replicat will start applying data.
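
A minimal Replicat parameter file sketch for this approach, using the same illustrative login and mappings shown
elsewhere in this paper (the group name and credentials are examples, not required values):

REPLICAT RAPPS
-- Target PDB login (illustrative credentials)
USERID apps@pdb1, PASSWORD apps
-- Filter DDL/DML per table using the instantiation CSNs set above
DBOPTIONS ENABLE_INSTANTIATION_FILTERING
MAP PDB1.APPS.*, TARGET PDB1.APPS.*;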

2018-04-27 15:02:51 INFO OGG-10155 Instantiation CSN filtering is enabled on table APPS.TCUSTMER at CSN 13,949,075.

2018-04-27 15:02:51 INFO OGG-10155 Instantiation CSN filtering is enabled on table APPS.TCUSTORD at CSN 13,949,075.



Using Other Data Tools (e.g. ODI or Network Import)
In this scenario, the Replicat can be started with AFTERCSN, or filtering can be applied at the individual table level.
ODI, network import, and even the OGG initial load do not require the database to be in read/write mode. When using
network import, the approach used above can be used to set the instantiation SCN in the database along with the
parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING. The only difference is that the standby database does not
need to be converted to a Snapshot Standby.

At Mining Database:

Cascade Redo Log shipping will continue even when the redo log apply process on the standby has stopped.
In order to avoid the extract failing because it requires data to be fetched from the ADG Standby, it is
recommended to stop the extract process before stopping the redo log apply process on the standby.

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EAPPS 00:00:50 00:00:09

GGSCI > stop eapps


Sending STOP request to EXTRACT EAPPS ...
Request processed.
GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT STOPPED EAPPS 00:00:30 00:00:16

At Target Database:

Once the replicat is current, stop the replicat process

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING RAPPS 00:00:00 00:00:02

GGSCI > stop rapps

Sending STOP request to REPLICAT RAPPS ...


Request processed.

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT STOPPED RAPPS 00:00:00 00:02:19

At ADG Standby:
Stop the redo apply process and note the current_scn; that SCN will be used to position the Replicat.
Stopping the apply process ensures that the database is in a consistent state for the initial data pull.

SQL> select open_mode, current_scn from v$database;

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ ONLY WITH APPLY 13948988

SQL> alter database recover managed standby database cancel ;

Database altered.

SQL> select open_mode, current_scn from v$database ;

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ ONLY 13949075

SQL> /

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ ONLY 13949075

Pull the required data for instantiation: multiple schemas, a single schema, or a single table.
Any tool not requiring write access to the standby database can be used.

Once the data pull is complete, restart the redo apply process on the standby.

SQL> alter database recover managed standby database disconnect ;

Database altered.

SQL> select open_mode, current_scn from v$database ;

OPEN_MODE CURRENT_SCN
-------------------- -----------
READ ONLY WITH APPLY 13953343

At Mining Database:

Restart the Downstream Extract


GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT STOPPED EAPPS 00:00:30 01:50:36



GGSCI > start eapps

Sending START request to MANAGER ...


EXTRACT EAPPS starting

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EAPPS 00:00:42 00:00:07

Load Target Database:

Load the data to the target system.


This would be done using the tool that initially pulled the data from the Standby
Database.

CSN Filtering Applied to All Mapped Objects in Replicat


During an initial load of all tables from an ADG Standby, this is the most efficient option. It forces the
Replicat to filter all DDL and DML for all objects in the Replicat based on the CSN entered at startup.

Start the Replicat using the SCN of the Standby Database (13949075) noted when the redo apply process was
stopped.
GGSCI > START REP rapps, AFTERCSN 13949075

Sending START request to MANAGER ...


REPLICAT RAPPS starting

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING RAPPS 00:00:00 00:00:06

CSN Filtering Applied to specific tables in Replicat


This requires modifying the Replicat parameter file and having specific filters at the mapping level. This would be
used when filtering by CSN on a specific table. Make the modification, then start the Replicat.
MAP PDB1.APPS.TCUSTMER, TARGET PDB1.APPS.TCUSTMER,
FILTER ( @GETENV ("TRANSACTION", "CSN") > 13949075);
MAP PDB1.APPS.*, TARGET PDB1.APPS.* ;



Where to Go for More Information
Hopefully, this white paper has provided a basic understanding of how to configure Downstream Capture from an
ADG Standby. Undoubtedly, you will eventually fine-tune this process in your own environment.

Reference the Oracle Database 12.1 Documentation for additional information on the Oracle 12.1 RDBMS.

Reference the Data Guard Concepts and Administration Section of the Oracle 12c Documentation on High
Availability for more information on configuring redo log transport.

Reference the Oracle GoldenGate 12c Reference Guide and the Oracle GoldenGate 12c Administration Guide for
additional information on:

» Extract Parameters for Windows and Unix
» Replicat Parameters for Windows and Unix
» Extract Management Considerations
» Replicat Management Considerations



Oracle Corporation, World Headquarters: 500 Oracle Parkway, Redwood Shores, CA 94065, USA
Worldwide Inquiries: Phone +1.650.506.7000 | Fax +1.650.506.7200

CONNECT WITH US
blogs.oracle.com/oracle
facebook.com/oracle
twitter.com/oracle
oracle.com
A-Team Chronicles: ateam-oracle.com

Copyright © 2014, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the
contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means,
electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and
are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are
trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0818

DOWNSTREAM CAPTURE FROM AN ORACLE ADG STANDBY

August 2018
Authors: Tracy West, Sourav Bhattacharya
