Oracle GoldenGate is a tool provided by Oracle for transactional data replication among Oracle databases and other
RDBMS platforms (SQL Server, DB2, etc.). Its modular architecture gives you the flexibility to easily decouple or
combine its processes to suit your environment.
Because of this flexibility in the architecture, GoldenGate supports numerous business requirements:
High Availability
Data Integration
Zero downtime upgrade and migration
Live reporting database
Different topologies
Golden Gate Created By: Ankit Kumar
The main components of Oracle GoldenGate are:
● Extract
● Data pump
● Replicat
● Trails or extract files
● Checkpoints
● Manager
● Collector
EXTRACT:
Extract runs on the source system and is the extraction (capture) mechanism of Oracle GoldenGate: it captures the
changes made to the objects configured for synchronization.
The Extract process extracts the necessary data from the database transaction logs. For an Oracle database, the
transaction logs are nothing but the redo log data. Unlike Oracle Streams, which runs inside the Oracle database
itself and needs access to the database, Oracle GoldenGate does not need access to the Oracle database, and it
extracts only the committed transactions from the online redo log files.
Whenever there is a long-running transaction that generates a large amount of redo data, it forces redo log
switches, and in turn more archived logs are generated. In these cases the Extract process needs to read the
changes from the archived redo logs as well.
The Extract process captures all the changes that are made to objects that are configured for synchronization.
Multiple Extract processes can operate on different objects at the same time. For example, one process could
continuously extract transactional data changes and stream them to a decision-support database while another
process performs batch extracts for periodic reporting. Alternatively, two Extract processes could extract and
transmit in parallel to two Replicat processes (with two trails) to minimize target latency when the databases are
large.
DATAPUMP
The data pump is a secondary Extract process within the source Oracle GoldenGate configuration. You can configure
the source without a data pump, but in that case the primary Extract process has to send the data directly to the
trail file on the target. If a data pump is configured, the primary Extract process writes the data to the
source trail file, and the data pump reads this trail file and propagates the data over the network to the target
trail file. The data pump adds storage flexibility and isolates the primary Extract process from TCP/IP activity.
You can configure the primary Extract process and the data pump Extract to capture changes online or during batch
processing.
REPLICAT
The Replicat process runs on the target system. Replicat reads the extracted transactional data changes and DDL
changes (if configured) that are specified in the Replicat configuration, and then it replicates them to the target
database.
TRAILS OR EXTRACTS
To support the continuous extraction and replication of source database changes, Oracle GoldenGate stores the
captured changes temporarily on disk in a series of files called a TRAIL. A trail can exist on the source or target
system, and it can even be on an intermediate system, depending on how the configuration is done. On the local
system it is known as an EXTRACT TRAIL and on the remote system it is known as a REMOTE TRAIL.
The use of a trail also allows extraction and replication activities to occur independently of each other. Since
these two activities are decoupled, you have more choices for how data is delivered.
CHECKPOINT
Checkpoints store the current read and write positions of a process to disk for recovery purposes. These
checkpoints ensure that data changes that are marked for synchronization are extracted by Extract and replicated
by Replicat.
Checkpoints work with inter-process acknowledgments to prevent messages from being lost in the network. Oracle
GoldenGate has a proprietary guaranteed-message-delivery technology.
Checkpoint information is maintained in checkpoint files within the dirchk sub-directory of the Oracle GoldenGate
directory. Optionally, Replicat checkpoints can also be maintained in a checkpoint table within the target database.
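As a sketch, a checkpoint table can be created from GGSCI after logging in to the target database; the table name
gguser.ggchkpt below is only an example, not taken from the original setup:
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD CHECKPOINTTABLE gguser.ggchkpt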
MANAGER
The Manager process runs on both the source and target systems, and it is the heart or control process of Oracle
GoldenGate. Manager must be up and running before you create Extract or Replicat processes. Manager performs
monitoring, restarts Oracle GoldenGate processes, reports errors, reports events, and maintains trail files and logs.
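A minimal Manager parameter file (edited with EDIT PARAMS MGR) might look like the following sketch; the port
number and trail path are examples only:
PORT 7809
AUTOSTART ER *
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5
PURGEOLDEXTRACTS /u01/gghome/dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3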
COLLECTOR
Collector is a process that runs in the background on the target system. Collector receives extracted database
changes that are sent across the TCP/IP network and it writes them to a trail or extract file.
Source Database
Hostname: ggsource.doyensys.com
Target Database
Hostname: ggtarget.doyensys.com
On both the source and target sides, add the host information to the /etc/hosts file.
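For example, the /etc/hosts entries might look like the following; the IP addresses are placeholders, not taken
from the original setup:
192.168.1.10   ggsource.doyensys.com   ggsource
192.168.1.11   ggtarget.doyensys.com   ggtarget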
Archive log mode must be enabled on the source side because, if we are using classic capture, the Extract process
captures the change information through the redo and archived logs only, so it is mandatory for classic capture
replication.
Verify that supplemental logging and forced logging are set properly.
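These settings can be checked and, if needed, enabled with SQL such as the following (run as a DBA user):
SQL> ARCHIVE LOG LIST
SQL> SELECT supplemental_log_data_min, force_logging FROM v$database;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE FORCE LOGGING;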
create user gguser identified by gguser default tablespace goldengate quota unlimited on
goldengate;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee=>'gguser',
privilege_type=>'CAPTURE', grant_optional_privileges=>'*');
(NOTE- Create a Tablespace with name goldengate before creating the user)
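The tablespace can be created first with something along these lines; the datafile path and size are examples only:
SQL> CREATE TABLESPACE goldengate DATAFILE '/u01/app/oracle/oradata/goldengate01.dbf' SIZE 200M AUTOEXTEND ON;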
Run the following DDL support scripts as SYSDBA from the GoldenGate home:
1. @marker_setup.sql
2. @ddl_setup.sql
3. @role_setup.sql
4. GRANT GGS_GGSUSER_ROLE TO <loggedUser>;
5. @ddl_enable.sql
PORT 7811
USERIDALIAS gguser
START MANAGER
On the source side, add trandata for the particular tables whose data we want to replicate
to the target database.
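From GGSCI, after logging in to the source database, this can be done with, for example:
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD TRANDATA demo.*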
EXTRACT ext1
USERID gguser@ggsource, PASSWORD gguser
EXTTRAIL /u01/gghome/dirdat/aa
DDL INCLUDE ALL
TABLE demo.*;
Create the Extract group and the local Extract trail file and start the extract
process.
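The Extract group and local trail referenced above can be created and started from GGSCI along these lines (the
BEGIN point is an example):
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL /u01/gghome/dirdat/aa, EXTRACT ext1
GGSCI> START EXTRACT ext1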
info ext1
EXTRACT dpump1
USERID gguser@ggsource, PASSWORD gguser
RMTHOST ggtarget, MGRPORT 7810
RMTTRAIL /u01/gghome/dirdat/ab
DDL INCLUDE ALL
TABLE demo.*;
Create the data pump group and the remote Extract trail file and start the
data pump process.
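The data pump group reading the local trail and writing to the remote trail can be created and started with
commands such as:
GGSCI> ADD EXTRACT dpump1, EXTTRAILSOURCE /u01/gghome/dirdat/aa
GGSCI> ADD RMTTRAIL /u01/gghome/dirdat/ab, EXTRACT dpump1
GGSCI> START EXTRACT dpump1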
info dpump1
**********ON TARGET***********
TARGET (GGTARGET.DOYENSYS.COM):
./ggsci
info mgr
REPLICAT rep1
USERID gguser@ggtarget, PASSWORD gguser
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE
MAP demo.*, TARGET demo.*;
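The Replicat group reading the remote trail can be created and started with, for example (assuming a checkpoint
table has been configured; otherwise add NODBCHECKPOINT to the ADD REPLICAT command):
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD REPLICAT rep1, EXTTRAIL /u01/gghome/dirdat/ab
GGSCI> START REPLICAT rep1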
info rep1
Create one sample table and generate insert operation into that table.
Verify that the table and rows were replicated into the target database.
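For example, on the source (the table name and values are examples only):
SQL> CREATE TABLE demo.gg_test (id NUMBER PRIMARY KEY, name VARCHAR2(30));
SQL> INSERT INTO demo.gg_test VALUES (1, 'replicated row');
SQL> COMMIT;
and then on the target:
SQL> SELECT * FROM demo.gg_test;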
SUPPLEMENTAL LOGGING
What is supplemental logging?
Redo log files are generally used for instance recovery and media recovery. The data required for instance recovery
and media recovery is automatically recorded in the redo log files. However, a redo-log-based application may
require that additional columns be logged into the redo log files. The process of adding these additional
columns to the redo log files is called supplemental logging.
Supplemental logging is not the default behavior of the Oracle database. It has to be enabled manually after the
database is created. You can enable supplemental logging at two levels:
1. DATABASE LEVEL
2. TABLE LEVEL
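At the database level, minimal supplemental logging can be enabled and disabled with:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;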
Supplemental logging of certain columns at the source database is required to ensure that changes to those columns
are applied successfully at the target database. With the help of these additional logged columns, Oracle decides
which rows need to be updated on the destination side. This is why supplemental logging is a critical requirement
for replication.
To specify an unconditional supplemental log group that includes PRIMARY KEY columns:
SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
To specify an unconditional supplemental log group that includes ALL TABLE columns:
SQL > ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
To specify a user-defined unconditional supplemental log group for specific columns:
SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG GROUP t1_g1 (C1,C2) ALWAYS;
Conditional supplemental log groups can be created for:
Foreign key
Unique
Any columns
To specify a conditional supplemental log group that includes all FOREIGN KEY columns:
SQL> ALTER TABLE SCOTT.DEPT ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
To specify a conditional supplemental log group for UNIQUE column(s) and/or BITMAP index
column(s):
SQL> ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
To drop supplemental logging from a table:
SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER TABLE <TABLE NAME> DROP SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;