
Golden Gate Created By: Ankit Kumar

ORACLE GOLDEN GATE CONCEPT AND ARCHITECTURE

Oracle GoldenGate is a tool provided by Oracle for transactional data replication among Oracle databases and other RDBMS platforms (SQL Server, DB2, etc.). Its modular architecture gives you the flexibility to easily decouple or combine its components to provide the best solution for your business requirements.

Because of this flexibility in the architecture, GoldenGate supports numerous business requirements:

● High availability
● Data integration
● Zero-downtime upgrade and migration
● Live reporting database

DIFFERENT TOPOLOGIES

Oracle Golden Gate Architecture

Oracle Golden Gate Architecture is composed of the following Components:

● Extract

● Data pump

● Replicat

● Trails or extract files

● Checkpoints

● Manager

● Collector

EXTRACT:

Extract runs on the source system and is the data capture mechanism of Oracle GoldenGate (it captures the changes that happen in the source database).



The Extract process extracts the necessary data from the database transaction logs. For an Oracle database, the transaction logs are nothing but the redo log files. Unlike Streams, which runs inside the Oracle database itself, Oracle GoldenGate runs outside the database, and it extracts only committed transactions from the online redo log files.

Whenever a long-running transaction generates a large amount of redo data, it forces redo log switches and, in turn, more archive logs are generated. In these cases the Extract process needs to read the archive log files to get the data.

The Extract process captures all the changes that are made to objects that are configured for synchronization. Multiple Extract processes can operate on different objects at the same time. For example, one process could continuously extract transactional data changes and stream them to a decision-support database while another process performs batch extracts for periodic reporting, or two Extract processes could extract and transmit in parallel to two Replicat processes (with two trails) to minimize target latency when the databases are large.

DATA PUMP

The data pump is a secondary Extract process within the source Oracle GoldenGate configuration. You can also configure the source without a data pump, but in that case the primary Extract process has to send the data directly to the trail file on the target. If a data pump is configured, the primary Extract process writes the data to the source trail file, and the data pump reads this trail file and propagates the data over the network to the target trail file. The data pump adds storage flexibility and isolates the primary Extract process from TCP/IP activity.

You can configure the primary Extract process and the data pump Extract to extract data online or during batch processing.

REPLICAT

The Replicat process runs on the target system. Replicat reads the extracted transactional data changes and DDL changes (if configured) that are specified in the Replicat configuration, and then replicates them to the target database.

TRAILS OR EXTRACT FILES

To support the continuous extraction and replication of source database changes, Oracle GoldenGate stores the captured changes temporarily on disk in a series of files called a trail. A trail can exist on the source or target system, and even on an intermediate system, depending on how the configuration is done. On the local system it is known as an extract trail and on the remote system it is known as a remote trail.

The use of a trail also allows extraction and replication activities to occur independently of each other. Since these two activities are independent, you have more choices for how data is delivered.

CHECKPOINT

Checkpoints store the current read and write positions of a process to disk for recovery purposes. These checkpoints ensure that data changes that are marked for synchronization are extracted by Extract and replicated by Replicat.

Checkpoints work with inter-process acknowledgments to prevent messages from being lost in the network; Oracle GoldenGate has a proprietary guaranteed-message-delivery technology.

Checkpoint information is maintained in checkpoint files within the dirchk sub-directory of the Oracle GoldenGate installation directory. Optionally, Replicat checkpoints can also be maintained in a checkpoint table within the target database, in addition to the standard checkpoint file.

MANAGER

The Manager process runs on both the source and target systems and is the heart, or control process, of Oracle GoldenGate. Manager must be up and running before you create Extract or Replicat processes. Manager monitors and restarts Oracle GoldenGate processes, reports errors and events, and maintains trail files and logs.

COLLECTOR

Collector is a process that runs in the background on the target system. Collector receives the extracted database changes that are sent across the TCP/IP network and writes them to a trail or extract file.

Oracle 12c GoldenGate Step-by-Step Configuration (Unidirectional Method)

Source Database

Hostname: ggsource.doyensys.com

Oracle database SID: GGSOURCE

Oracle version: 12.2.0

Oracle GG version: 12.2.0

Target Database

Hostname: ggtarget.doyensys.com

Oracle database SID: GGTARGET

Oracle version: 12.2.0

Oracle GG version: 12.2.0

Check the connectivity from source to target for replication. On both the source and target sides, add the host information to the /etc/hosts file.
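A minimal sketch of the host entries and a basic connectivity check, assuming placeholder IP addresses (192.168.1.10 and 192.168.1.20 are illustrative; substitute the real addresses of your servers):

# /etc/hosts on both servers (IP addresses are placeholders)
192.168.1.10   ggsource.doyensys.com   ggsource
192.168.1.20   ggtarget.doyensys.com   ggtarget

# from the source host, confirm the target is reachable
ping -c 3 ggtarget.doyensys.com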

Archive log mode must be enabled on the source side because, if we are using classic capture, the Extract process captures the change information from the redo and archive log files, so it is mandatory for classic-capture replication.

select log_mode from v$database;

shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
select log_mode from v$database;

Verify that supplemental logging and forced logging are set properly.

ALTER DATABASE FORCE LOGGING;


ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
select SUPPLEMENTAL_LOG_DATA_MIN,FORCE_LOGGING from v$database;

Prepare the database to support DDL replication.

SQL> alter system set recyclebin=off scope=spfile;

Then bounce (restart) the database so that the change takes effect.

The following parameter must be set on both the source and target databases:

SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;

SQL> show parameter ENABLE_GOLDENGATE_REPLICATION

Create the administrator and user/schema owners on both the source and target databases.

create user gguser identified by gguser default tablespace goldengate quota unlimited on goldengate;

grant create session, connect, resource, alter system to gguser;

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee=>'gguser', privilege_type=>'CAPTURE', grant_optional_privileges=>'*');

(NOTE: Create a tablespace named goldengate before creating the user; a sketch follows.)
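A minimal sketch of creating that tablespace, assuming an illustrative datafile path and size (adjust both for your environment):

SQL> create tablespace goldengate
     datafile '/u01/app/oracle/oradata/GGSOURCE/goldengate01.dbf' -- placeholder path
     size 200m autoextend on;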

Go to the GoldenGate installation location (in our scenario /u01/gghome) and run the following GoldenGate built-in scripts to create all of the objects necessary to support DDL replication (a sketch of the session follows the list):

1. @marker_setup.sql
2. @ddl_setup.sql
3. @role_setup.sql
4. GRANT GGS_GGSUSER_ROLE TO <loggedUser>;
5. @ddl_enable.sql
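A minimal sketch of that session, run as SYSDBA from the GoldenGate installation directory; marker_setup.sql and ddl_setup.sql prompt for the GoldenGate schema name, for which we supply gguser, and the role grantee shown is an assumption based on this scenario:

cd /u01/gghome
sqlplus / as sysdba

SQL> @marker_setup.sql
SQL> @ddl_setup.sql
SQL> @role_setup.sql
SQL> GRANT GGS_GGSUSER_ROLE TO gguser;
SQL> @ddl_enable.sql

ddl_enable.sql enables the DDL trigger that captures DDL operations on the source database.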

Start GGSCI and log in to the database using the dblogin command.

dblogin userid gguser, password gguser
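A minimal sketch of the full sequence on the source host, assuming the installation path /u01/gghome used in this scenario (the GGSCI prompt is abbreviated):

cd /u01/gghome
./ggsci

GGSCI> dblogin userid gguser, password gguser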



By default, the Manager parameter file is created while installing the GoldenGate software; we just add the port and user information to the Manager parameter file (see the sketch below).

PORT 7811
USERIDALIAS gguser
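A minimal sketch of editing the Manager parameter file from GGSCI; edit params mgr opens dirprm/mgr.prm under the GoldenGate home. USERIDALIAS assumes a credential-store alias named gguser has been created; otherwise USERID gguser, PASSWORD gguser can be used, as in the Extract and Replicat parameter files later in this guide.

GGSCI> edit params mgr

-- dirprm/mgr.prm
PORT 7811
USERIDALIAS gguser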

Check the Manager parameter file and status.

view param mgr
info mgr

(Prior to 12c, the Manager's default status will be STOPPED until it is started manually.)



Start the Manager process if it is stopped:

start manager

On the source side, add trandata for the tables whose data we want to replicate to the target database (see the sketch below).

add trandata demo.*
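A minimal sketch of running this from GGSCI and verifying it (add trandata requires the dblogin performed above):

GGSCI> add trandata demo.*
GGSCI> info trandata demo.*

info trandata reports whether supplemental logging is enabled for the matched tables.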

Create the primary Extract parameter file.

EXTRACT ext1
USERID gguser@ggsource, PASSWORD gguser
EXTTRAIL /u01/gghome/dirdat/aa
DDL INCLUDE ALL
TABLE demo.*;
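A minimal sketch of creating this parameter file from GGSCI; edit params opens dirprm/ext1.prm under the GoldenGate home in the default editor, where the lines shown above are pasted and saved:

GGSCI> edit params ext1
GGSCI> view params ext1

view params displays the saved contents so the file can be checked before the group is created.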

Create the Extract group and the local Extract trail file, and start the Extract process.

add extract ext1, tranlog, begin now

add exttrail /u01/gghome/dirdat/aa, extract ext1

start extract ext1



Check the status of the primary Extract process.

info ext1

Create the secondary Extract (data pump) parameter file.

EXTRACT dpump1
USERID gguser@ggsource, PASSWORD gguser
RMTHOST ggtarget, MGRPORT 7810
RMTTRAIL /u01/gghome/dirdat/ab
DDL INCLUDE ALL
TABLE demo.*;

Create the data pump group and the remote Extract trail file, and start the data pump process.

add extract dpump1, exttrailsource /u01/gghome/dirdat/aa

add rmttrail /u01/gghome/dirdat/ab, extract dpump1

start extract dpump1

To check the status of the data pump process.

info dpump1
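At this point it is useful to confirm that all source-side processes are running; a minimal sketch using standard GGSCI commands:

GGSCI> info all
GGSCI> info extract ext1, detail
GGSCI> info extract dpump1, detail

info all lists the Manager and both Extract groups with their status, lag at checkpoint, and time since checkpoint.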

**********ON TARGET***********

TARGET (GGTARGET.DOYENSYS.COM):

Start GGSCI and log in to the database using the dblogin command.

./ggsci

dblogin userid gguser, password gguser

To check the Manager parameter file and status on the target server.

view param mgr

info mgr

Create a checkpoint table in the target database.

add checkpointtable gguser.chkpt

info checkpointtable gguser.chkpt



Create the Replicat parameter file.

REPLICAT rep1
USERID gguser@ggtarget, PASSWORD gguser
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE
MAP demo.*, TARGET demo.*;

Create and start the replicat process.

add replicat rep1, exttrail /u01/gghome/dirdat/ab, checkpointtable gguser.chkpt

start replicat rep1

To check the status of the Replicat process.

info rep1
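A minimal sketch of additional checks on the target using standard GGSCI commands, run after some activity has been replicated:

GGSCI> info replicat rep1, detail
GGSCI> stats replicat rep1, total
GGSCI> lag replicat rep1

stats shows the number of insert, update, delete, and DDL operations applied; lag shows how far Replicat is behind the incoming trail data.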

To check the replication from the source to the target database:



Create a sample table, perform some insert operations on it, and verify that the table and rows were replicated to the target database; a sketch of such a test follows.
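A minimal sketch of such a test, assuming the demo schema exists on the source; the table name gg_test and its columns are illustrative:

-- on the source database (GGSOURCE)
SQL> create table demo.gg_test (id number primary key, name varchar2(30));
SQL> insert into demo.gg_test values (1, 'replication check');
SQL> commit;

-- on the target database (GGTARGET), after allowing a few seconds for replication
SQL> select * from demo.gg_test;

Because DDL INCLUDE ALL is configured, the CREATE TABLE itself is replicated along with the committed row.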

SUPPLEMENTAL LOGGING
What is supplemental logging?

Redo log files are generally used for instance recovery and media recovery, and the data required for those recoveries is automatically recorded in the redo log files. However, a redo-log-based application may require that additional columns be logged in the redo log files. The process of adding these additional columns to the redo log files is called supplemental logging.

Supplemental logging is not the default behavior of the Oracle database; it has to be enabled manually after the database is created. You can enable supplemental logging at two levels:

1. DATABASE LEVEL
2. TABLE LEVEL

What is the use of supplemental logging in replication?

Supplemental logging of certain columns on the source database side is required to ensure that the changes made to those columns are applied successfully at the target database. With the help of these additional logged columns, Oracle decides which rows need to be updated on the destination side. This is why supplemental logging is a critical requirement for replication.

DATABASE LEVEL SUPPLEMENTAL LOGGING:

How to check whether supplemental logging is enabled?

SQL> SELECT supplemental_log_data_min FROM v$database;

How to enable supplemental logging at database level?

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

How to disable supplemental logging at database level?

SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

TABLE LEVEL SUPPLEMENTAL LOGGING:

TABLE LEVEL UNCONDITIONAL SUPPLEMENTAL LOGGING:

An unconditional supplemental log group can be created for:

● Primary key columns
● All columns
● Selected columns

To specify an unconditional supplemental log group for PRIMARY KEY column(s):

SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

To specify an unconditional supplemental log group that includes ALL TABLE columns:

SQL > ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

To specify an unconditional supplemental log group that includes SELECTED columns:

SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG GROUP t1_g1 (C1,C2) ALWAYS;

TABLE LEVEL CONDITIONAL SUPPLEMENTAL LOGGING:

A conditional supplemental log group can be created for:

● Foreign key columns
● Unique columns
● Any columns

To specify a conditional supplemental log group that includes all FOREIGN KEY columns:

SQL> ALTER TABLE SCOTT.DEPT ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

To specify a conditional supplemental log group for UNIQUE column(s) and/or BITMAP index column(s):

SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

To specify a conditional supplemental log group that includes ANY columns:

SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG GROUP t1_g1 (c1,c3);

To drop supplemental logging:

SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
