
Developer and DBA Tips for Pro-Actively Optimizing SQL Apply Performance

Introduction
What is SQL Apply
Application Developers
    Applications involving Array processing
        Use Case #1
        Use Case #2
    Applications involving LOB columns
        Use Case
Database Administrators
    Background of how SQL Apply works
    How many processes
        Sizing the number of Appliers
    What is an LCR
    What is the LCR Cache
        Sizing the LCR Cache
    What is an Eager Transaction
        Why Eager Transactions
        Why not use Eager all the time
        How many Eager Transactions may there be concurrently
        How many LCRs until a transaction is deemed eager
        The problem of having too large an eager size
    Transactional Dependencies
        The Hash Table
        Computing Dependencies
        Hash Entries per LCR
        The Watermark Dependency SCN
        Appliers and transactional dependencies
        Piggybacking commit approval
        DDL Transaction Dependencies

Introduction
Utilizing Data Guard SQL Apply (logical standby database) will have zero impact on the
primary database when configured with asynchronous redo transport. Some users,
however, will be challenged to achieve standby apply performance that can keep pace
with peak periods of primary workload. Keeping pace with primary workload is
important to minimize failover time, and for queries and reports running on the logical
standby database to return results that are up-to-date with primary database transactions.

Tuning of SQL Apply (logical standby) has improved significantly with every release, to
the point that SQL Apply 11g can keep up with very high loads. However, there are certain
workload profiles where SQL Apply rates may be sub-optimal compared to the rate at
which the primary database is generating workload. This note focuses on specific
application use cases where SQL Apply performance may be sub-optimal and describes
the best practices and potential application changes to accelerate SQL Apply
performance.

While the information contained in this note focuses on Oracle Database 11g Release 1
(11gR1), many of the same principles can be applied to Oracle Database 10g.

What is SQL Apply


SQL Apply by design is optimized for one-way replication of the entire database for the
purpose of providing a disaster recovery solution that can also be easily utilized for
reporting purposes or any other activity that requires read-write access to the standby
database. Note that while the standby database is open read-write, SQL Apply prevents
any of the data it is synchronizing with the primary database from being changed. Thus
customers get the best of both worlds: guaranteed data protection with the flexibility of a
standby database open read-write. While Oracle SQL Apply shares technology with
Oracle LogMiner and Oracle Streams, it is highly optimized for its primary mission:
simple one-way replication of the entire primary database.

SQL Apply provides a near real time image of the primary database, which can be
utilized for reporting and other purposes, thereby offloading these activities from the
primary database.

The processes involved in SQL Apply are:

- The coordinator process that is primarily responsible for the messaging between the
  different processes and for ensuring transactions are applied to the standby database
  in the correct order.
- The reader process that is responsible for reading redo records from the Standby
  Redo Log or Archived Redo Log.
- The preparer processes that transform redo records into one or more logical change
  records (LCRs) (see What is an LCR).
- The builder process that is responsible for gathering up the different DDL or DML
  statements into transactions that can then be passed on to the applier processes.
- The analyzer process that is responsible for ensuring transactions that are dependent
  upon each other are applied to the standby database in the correct order.
- The applier processes that are responsible for executing the DDL and DML
  statements that make up a transaction.

The information below assumes a basic understanding of SQL Apply from the Oracle
Data Guard Concepts and Administration guide and from the SQL Apply Best Practices
papers available on the Oracle Maximum Availability Architecture (MAA) website.

Application Developers
The following section is intended primarily for Application Developers who are
responsible for writing applications that will function in an environment that utilizes a
logical standby database. DBAs should also be aware of these design considerations so
that they can identify them and work with the Application Development teams.

Applications involving Array processing


An application that takes advantage of array processing, either by constructing an array
inside the program and using the FORALL syntax, or by keeping a count of the number
of rows modified and committing periodically, requires special consideration in SQL
Apply environments.

Each modified row equates to its own LCR. When the SQL Apply builder process
sees that a transaction has more than a predefined number of LCRs (201 by default), it
considers it an eager transaction (see What is an Eager Transaction). The builder
process forwards the partial transaction to the analyzer process and ultimately, via the
coordinator process, to an available applier process that can process eager transactions,
so that the transaction can be applied eagerly.

The Application Developer should attempt to keep the array size utilized in the
application to less than this predefined value. If the application schema utilizes triggers or
referential constraints that can result in additional rows being modified, this must also be
taken into account when determining the array processing size.
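
The following PL/SQL sketch illustrates the batching pattern discussed above. The table
names (orders_stage and orders) and the batch size of 150 are hypothetical, chosen to
keep each transaction's LCR count below the default eager threshold of 201 even if
triggers or constraints add a few extra rows.

DECLARE
  -- Illustrative batch size: stays under the 201-LCR default with headroom
  -- for trigger and referential-constraint activity.
  c_batch_size CONSTANT PLS_INTEGER := 150;
  TYPE t_order_tab IS TABLE OF orders%ROWTYPE;
  l_orders t_order_tab;
  CURSOR c_src IS SELECT * FROM orders_stage;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_orders LIMIT c_batch_size;
    EXIT WHEN l_orders.COUNT = 0;
    FORALL i IN 1 .. l_orders.COUNT
      INSERT INTO orders VALUES l_orders(i);
    COMMIT;  -- one transaction per batch bounds the LCR count per transaction
  END LOOP;
  CLOSE c_src;
END;
/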

Use Case #1
An Oracle customer that utilizes array processing extensively modified 10,000 rows per
transaction. While the limit of 10,000 rows was optimal for the primary database, it had
an adverse effect on the standby database, resulting in the inability of the standby
database to stay synchronized with the primary database.

Approximately 20% of all DML for the application involved transactions greater than the
default value of the _EAGER_SIZE parameter (see What is an Eager Transaction).

When the SQL Apply _EAGER_SIZE parameter was increased to 11,000 to accommodate
the transactions generated by the array processing, SQL Apply would appear to hang
while each completed transaction was applied to the database (see The problem of having
too large an eager size).

The application was changed so that it performed array processing of 1,000 rows per
transaction, and the _EAGER_SIZE parameter was set to 1,100. While the application
committed more frequently, the impact on the logical standby database was significantly
reduced: the smaller transactions were applied more efficiently, and the apply lag was
reduced as well.
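
For reference, a logical standby parameter such as _EAGER_SIZE is set with the
DBMS_LOGSTDBY.APPLY_SET procedure, which requires SQL Apply to be stopped
first. A minimal sketch, using the value of 1,100 from this use case:

ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.APPLY_SET('_EAGER_SIZE', 1100);
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;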

Use Case #2
An Oracle customer's application was written to perform array processing in units of 200
rows. However, auditing requirements meant that whenever a row was inserted into a
specific table, a row was also populated in an application audit table. On average, 1/4th of
the application's 200-row array size resulted in an audit record being generated. This
meant that the transaction actually modified 250 rows on average, resulting in the
transaction being considered eager.

The application developer was able to easily reduce the array size to 100 rows for this
particular transaction so that, even with the auditing records, the database transaction
modified fewer than 201 rows. This approach was taken because, in this case, it was easy
to modify the array size, whereas changing _EAGER_SIZE could have an adverse effect
on the rest of the application.

Applications involving LOB columns


LOB columns that are stored out-of-line in the database are stored in CHUNKs that are
defined when the table is created. Each CHUNK appears as an LCR in the redo stream,
and preceding the first CHUNK there is a record locator that contains the non-LOB part
of the row.

If the primary database inserts a 2 MB document into a LOB column, then with an 8 KB
block size this equates to approximately 270 LCRs. SQL Apply considers each CHUNK
of the LOB column as its own LCR, and as such each CHUNK is counted when the
builder process is constructing the transactions.

When the builder process sees that the transaction has more than 201 LCRs, it considers
it an eager transaction and notifies the coordinator process; if there is an available applier
process that can process eager transactions, the transaction is assigned to that applier to
be applied eagerly.

If the application regularly inserts documents that are 2 MB in size, then the application
developer could consider placing the LOB in a tablespace that utilizes a larger block size
and specifying a larger chunk size. However, if the database design stipulates that the
block size should be 8 KB, then the application developer needs to work with the DBA
staff to ensure the _EAGER_SIZE parameter is set to more than 270 LCRs.

To determine the optimal chunk size for an application, please review the Oracle
Database SecureFiles and Large Objects Developer's Guide 11g Release 1 (11.1) and in
particular Chapter 5 LOB Storage.

Use Case
An Oracle customer that utilizes LOB columns extensively loads documents that are
typically less than 64 KB in size. A 64 KB LOB converts into five 16 KB chunks for this
customer, so when they wrote the application they committed every 100 documents,
which equates to approximately 600 LCRs.

This is greater than the default value of the _EAGER_SIZE parameter, so the DBA team
explicitly raised the value of the parameter to 1,001.

A side effect of raising the _EAGER_SIZE parameter to 1,001 is that a single transaction
could utilize approximately 8 MB of the LCR Cache (see What is the LCR Cache). For
this reason, the customer also runs with an LCR Cache of 1 GB so that they can hold the
transactions in the LCR Cache without paging.

Database Administrators
The following sections are intended primarily for Database Administrators who are
responsible for administering the logical standby database. Application Developers
should also be aware of these considerations so that they can identify them and work
proactively with the DBA team.

Background of how SQL Apply works


The following is a high level overview of how SQL Apply works. Additional information
will be provided below.

1. The primary database transfers redo records to the standby database and stores the
redo records in either the Standby Redo Log or Archived Redo Log.
2. The reader process then reads the redo records out of the Standby Redo Log file and
forwards them to the preparer processes.
3. The preparer processes receive the individual redo records and, using a local copy of
the primary database's data dictionary, translate the redo into DDL or DML LCRs.
4. The builder process then looks at the individual LCRs and constructs transaction
chunks, which are groupings of individual LCRs that relate to the same transaction. A
transaction chunk may or may not contain a commit record. If it does not contain a
commit record, then it is considered to be an eager transaction, and is applied eagerly.
5. The analyzer process picks up a certain number of transaction chunks that have been
built by the builder process and computes the dependencies between the transactions
already processed and those that have been picked up by the analyzer process. Each
LCR is updated with the dependency information that is found. Additionally, see the
discussion on the hash table below. (See The Hash Table)
6. The coordinator process picks up the newly analyzed transaction chunks and assigns
them out to the appliers in commit SCN order. If a transaction chunk does not contain a
commit SCN, then this is an eager transaction, and it is assigned to an available applier if
possible (see How many Eager Transactions may there be concurrently).
7. The applier processes apply the transaction chunks that they have been assigned. If one
of the LCRs has a dependency upon another transaction, then the applier will wait until
that dependency has been resolved and will then continue.
   - If the transaction chunk is an entire transaction (i.e., non-eager) and transactions
     are being applied in absolute order (PRESERVE_COMMIT_ORDER=TRUE), then
     when the applier reaches the commit record, it will check to see if it can commit.
       - If the applier knows it holds the lowest Commit SCN (CSCN), then it will
         commit the transaction.
       - If the applier does not know, or if it does not hold the lowest CSCN, then it
         will message the coordinator process, saying it is ready to commit, and will
         wait until the coordinator notifies the applier that it is now the lowest CSCN.
   - If the transaction chunk is not an entire transaction, then when the applier
     completes applying the current chunk, it will signal the coordinator for additional
     chunks associated with the transaction. When it gets the transaction chunk that
     contains the commit record, it will commit the transaction after first messaging the
     coordinator for commit approval.

For more information, see Appliers and transactional dependencies.
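
One way to observe the processes described above on a running logical standby is to
query the V$LOGSTDBY_PROCESS view; a minimal sketch:

select type, status_code, status
  from v$logstdby_process
 order by type;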

How many processes


Prior to 11gR1, the SQL Apply processes were taken from the parallel query processes
running on the system, which is limited by the init.ora parameter
PARALLEL_MAX_SERVERS. However, the number of processes could be further
reduced via the MAX_SERVERS logical standby parameter.

Starting with 11gR1, the SQL Apply processes are allocated explicitly via the
MAX_SERVERS logical standby parameter setting rather than as part of the
PARALLEL_MAX_SERVERS parameter setting. For more information on setting SQL
Apply parameters, please refer to chapter 73 of the Oracle Database PL/SQL Packages
and Types Reference 11g Release 1 (11.1.)

There is always one Reader process, one Builder process, and one Analyzer process, in
addition to the one Coordinator process. It is possible to have multiple Preparer
processes, and this defaults to one preparer process per 20 applier processes. The number
of preparer processes running on the system can be explicitly set via the
PREPARER_SERVERS logical standby parameter.

It is desirable to have multiple Applier processes, and this defaults to all remaining
processes that SQL Apply has access to. The number of applier processes running on the
system can be explicitly set via the APPLY_SERVERS logical standby parameter.
Note that if the APPLY_SERVERS and/or PREPARER_SERVERS parameters are set
explicitly, then the total number of processes must be less than the number set either
explicitly or implicitly via the MAX_SERVERS logical standby parameter.
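
For illustration, the following statements size the process pool explicitly via
DBMS_LOGSTDBY.APPLY_SET (SQL Apply must be stopped first); the values are
hypothetical and respect the constraint above:

ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 20);
EXECUTE DBMS_LOGSTDBY.APPLY_SET('PREPARER_SERVERS', 2);
-- 14 appliers + 2 preparers + the reader, builder, and analyzer = 19,
-- within the MAX_SERVERS setting of 20
EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 14);
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;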

Sizing the number of Appliers


The coordinator always assigns transactions to the lowest applier, which is to say, the
coordinator always attempts to assign a transaction to Applier #1, and if Applier #1 is
busy, then it tries to assign the transaction to Applier #2 and so on. The
v$streams_apply_server view records the number of transactions each applier has
processed. With this information, it is possible to determine how many transactions the
last applier has serviced, and if system resources are limited, then it might be possible to
reduce the number of applier processes.

If all applier processes have serviced an even percentage of the transactions and system
resources are plentiful, then it might be advantageous to increase the number of applier
processes.

To determine if all appliers are being used evenly, execute the following query.
select min(pct_applied) pct_applied_min
     , max(pct_applied) pct_applied_max
     , avg(pct_applied) pct_applied_avg
     , count(server_id) number_of_appliers
  from ( select server_id
              , (greatest(nvl(s.total_assigned,0),0.00000001) /
                 greatest(nvl(c.total_assigned,1),1)) * 100 pct_applied
           from v$streams_apply_server s
              , v$streams_apply_coordinator c
       );

PCT_APPLIED_MIN PCT_APPLIED_MAX PCT_APPLIED_AVG NUMBER_OF_APPLIERS
--------------- --------------- --------------- ------------------
          1.152           4.913           2.857                 35

This output indicates that 4.9% of all transactions were processed by the busiest applier
while only 1.1% of all transactions were processed by the quietest applier. If all appliers
had applied an even number of transactions, then each would have applied 2.8% of the
transactions. This output indicates that if system resources are limited, the number of
appliers could be reduced.

On a system that was busy, the same script generated the following output.
PCT_APPLIED_MIN PCT_APPLIED_MAX PCT_APPLIED_AVG NUMBER_OF_APPLIERS
--------------- --------------- --------------- ------------------
2.854 2.858 2.857 35

This output indicates that the difference between the busiest and quietest applier is
relatively small, so all appliers are being used evenly. This output indicates that if system
resources are plentiful, the number of appliers could be increased.

What is an LCR
An LCR is a Logical Change Record, which in SQL Apply terms relates to a DML
statement for an individual row of a table. An LCR can also relate to a DDL statement
or, in the case of LOB data, to a CHUNK of the LOB data.

What is the LCR Cache


The LCR Cache is a structure in the SGA, more precisely a portion of the shared pool.
Starting in 10gR2, the LCR Cache maximum size defaults to one quarter of the size of
the shared pool (if set) or one quarter of the SGA size, and must be at least 30 MB in
size. If there is insufficient memory to allocate 30 MB from the shared pool, then SQL
Apply will fail to start.

The desired size of the LCR Cache can be explicitly set via the MAX_SGA logical
standby parameter and the LCR Cache is only created when SQL Apply is started on the
standby database. See Sizing the LCR Cache.
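
As with other logical standby parameters, MAX_SGA is set with
DBMS_LOGSTDBY.APPLY_SET while SQL Apply is stopped; the 1,024 MB value
below is purely illustrative:

-- value is in megabytes
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 1024);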

The easiest way to think of the LCR cache is as a bucket or barrel. The barrel is open at
the top and the reader and preparer processes are responsible for filling up the barrel. At
the bottom of the barrel is a small funnel where transactions are funneled through and
assigned to the different applier processes.

- If there is a backlog of work to be applied to the standby database, then the
  coordinator will try to keep the bucket at least half full by signaling the reader
  process to read more LCRs, which in turn causes the preparer and builder processes
  to construct more DML statements until the bucket becomes approximately 95%
  full. Then the reader process will stop until the coordinator process signals it to fill
  the bucket again.
- If there is no backlog, then as transactions are received into the standby redo log,
  they are immediately read, prepared and built.

The Applier processes will apply any and all transactions in the bucket assuming they
have been successfully analyzed and no dependencies exist.
To determine the gap between the last transaction applied and the last transaction
received from the primary database, execute the following query periodically.

select numtodsinterval(latest_time - applied_time,'DAY')
  from v$logstdby_progress;

NUMTODSINTERVAL(LATEST_TIME-APPLIED_TIME,'DAY')
-------------------------------------------------------------------------
+000000000 00:00:06.000000000

The value returned in the example shows the most recently applied transaction on the
standby database is 6 seconds behind the last transaction received from the primary
database. If a redo log GAP has formed due to a network outage, then this query will only
show how much lag exists between the data received and the data applied.

If a lag is reported, then this is an indication that the standby database might benefit from
additional applier processes if system resources are available.

Note that if the primary database is idle and the standby database is up to date, then there
can be an apparent lag reported that is typically less than 10 seconds.

Additionally, if the log transport used to send data to the standby database is ARCH, then
when a log switch occurs on the primary database and the standby database registers the
log file, a gap will occur. This gap will reduce until either the entire log has been applied
or another log switch occurs. If a log switch occurs on the primary database before SQL
Apply finishes applying the previously switched log, then again consider increasing the
number of appliers if system resources are available.

Sizing the LCR Cache


The LCR Cache should be large enough that paging does not occur, yet small enough
that it does not prevent memory from being allocated to other parts of the SGA,
including the buffer cache.

If the LCR Cache fills up, then SQL Apply has the ability to page the LCR Cache out to
a SPILL table that resides in the SYSAUX tablespace. SQL Apply paging is an
EXTREMELY EXPENSIVE operation, and paging of the LCR Cache should be avoided.
To determine if paging of the LCR cache is occurring, execute the following query
periodically.

select name
     , to_number(value) value
  from v$logstdby_stats
 where name = 'bytes paged out';

Statistic                      Bytes Paged Out
------------------------------ ----------------
bytes paged out                               0

The value returned shows the total number of bytes that have been paged out since SQL
Apply was started. If the query returns a non-zero value, paging is occurring; run the
query on a regular basis to attempt to identify whether a particular transaction on the
primary database is responsible for the paging. If the number of bytes paged out is
constantly increasing, consider increasing the value of the MAX_SGA logical standby
parameter.

If the LCR Cache is too large, then the instance will not be able to redeploy the reserved
memory to other parts of the SGA including the buffer cache. To determine if the LCR
Cache is too large, the peak size of the LCR Cache will be reported in the v$sgastat view.
To determine if the LCR cache is too large, execute the following query periodically.

select name, (least(max_sga,bytes)/max_sga) * 100 pct_utilization
  from ( select *
           from v$sgastat
          where name = 'Logminer LCR c'
       )
     , ( select value*(1024*1024) max_sga
           from dba_logstdby_parameters
          where name = 'MAX_SGA'
       );

NAME PCT_UTILIZATION
-------------------------- ---------------
Logminer LCR c 5.43263626

The value returned in this example shows that only 5.4% of the maximum possible size
of the LCR Cache has ever been utilized, indicating that the LCR Cache might be
oversized.

NOTE: The MAX_SGA parameter specifies the desired size of the LCR Cache, but the
LCR Cache actually allocated can exceed the value specified by the MAX_SGA
parameter. In this case, the query would return a PCT_Utilization of 100%.

What is an Eager Transaction


An eager transaction is typically a transaction that modifies a large number of rows or,
more precisely, is constructed of a large number of LCRs. Starting with 10gR2, a
transaction becomes eager when it consists of more than 201 LCRs, but this number can
be overridden by explicitly setting the _EAGER_SIZE logical standby parameter.
A transaction can also become eager if it is a long-running transaction, in which case the
builder process will pass along a transaction chunk that consists of fewer than 201 LCRs.

Why Eager Transactions


The reason for having eager transactions is twofold:

- To reduce the amount of memory required to apply a very large transaction
- To reduce the amount of time it takes to apply a very large transaction

Reducing LCR cache usage


When a transaction is applied eagerly, rather than having to stage the entire transaction in
memory and only then pass the transaction to an applier, the coordinator process
trickle-feeds the DML to the applier in an optimistic way. Once the DML statements
have been applied to the database, the applier process asks for the next batch of DML
statements for the same transaction. SQL Apply knows that the first set of DML
statements has been applied to the database and can free up the memory that those
statements used in the LCR Cache.

Reducing transaction commit time


When a transaction is applied eagerly, the DML statements that are being trickle-fed to
the applier process can be executed against the database as they arrive. Therefore, by the
time the commit statement has been identified by the LogMiner processes, most of the
transaction has already been applied to the standby database, and only the final part of
the transaction remains to be applied.

Why not use Eager all the time


In order to apply a transaction eagerly, SQL Apply bypasses the analyzer process, since
it does not yet know what DML will be read next. We know that, by the time the LCR
was created on the primary database, there were no locks held on the rows that the LCR
relates to, so we can safely apply all the changes of the eager transaction up to this point.
However, it is possible that a small transaction that has a dependency on the rows that the
eager transaction was modifying may have been executing concurrently on the primary
database and committed after the eager transaction.

Therefore, SQL Apply cannot apply any transactions that have a commit SCN greater
than the SCN of the last part of the eager transaction that was passed to the applier
process executing the eager transaction.

This blocking of the small transactions means that they take longer to be applied to the
database, thereby generating a larger lag between the primary and the standby database.

How many Eager Transactions may there be concurrently


SQL Apply must reserve a number of applier processes for applying safe (non-eager)
transactions; this is 1/6th the number of apply servers, and there must always be at least
one applier process that can apply safe transactions.

Therefore, if the number of appliers is set to 6, then the maximum number of transactions
that can be applied eagerly is limited to 5, with one applier reserved for safe transactions.
If the builder process identifies an additional transaction that could be considered eager,
but there are already too many transactions being applied eagerly, then the transaction
will remain in the LCR Cache as a normal transaction. It will remain in this state until
either the commit record is found, in which case the transaction will be analyzed and
subsequently assigned to an applier as a normal transaction, or an existing eager
transaction commits and its applier process frees up.

At this time, SQL Apply does not maintain any statistics on the number of eager
transactions or the number of concurrent eager transactions. However, if during a certain
period of time, such as month-end or quarterly processing, it is known that 6 jobs execute
concurrently on the primary database and that these jobs are unable to control the number
of rows processed per transaction, then the number of appliers could temporarily be
increased to at least 8 for the duration of the job execution, to allow for 6 applier
processes that can process eager transactions.

How many LCRs until a transaction is deemed eager


Ideally, only an extremely small number of transactions should be applied to the database
eagerly. It is therefore suggested that less than 0.001% of all transactions be applied
eagerly; that is, less than 1 in 100,000 transactions.

In order to achieve this ratio, two options exist: either the SQL Apply parameter
_EAGER_SIZE can be increased, or the application can be changed so that it executes
smaller transactions. While it might not be possible to change an OLTP application to
operate on smaller units of work, batch processes can often be modified to commit more
frequently (see Applications involving Array processing).

The problem of having too large an eager size


It is not advisable to simply increase the setting of the _EAGER_SIZE parameter.
Consider the following example:

- Assume an LCR takes 1/100th of a second to be read, prepared, and built, but 1/10th
  of a second to be executed by the Applier process.

- If there are 10,000 DML statements in a transaction, and the transaction is applied
  normally (which is to say, _EAGER_SIZE is greater than 10,000), then it will take
  the reader, preparer and builder processes 100 seconds to construct the DML
  statements before they are passed to the applier process. The applier process would
  then take 1,000 seconds to apply and ultimately commit this transaction to the
  database.
- During this time, there might be hundreds of smaller transactions that started
  concurrently with the large transaction and which committed shortly after the large
  transaction.
- These smaller transactions will have to wait the 1,000 seconds that the large
  transaction takes to apply, because when the smaller transactions ask for approval to
  commit, they will need to wait for the large transaction to commit first.
- The standby database will appear to be making no progress for 1,000 seconds, and
  will appear 1,000 seconds behind the primary database. However, with a larger
  number of applier processes, more transactions will be queued up and waiting to be
  committed once the large transaction is committed on the standby database.

If we take the same scenario, but this time the application commits every 100 DML
statements, then we would have 100 transactions that make up the 10,000 DML
statements that the application operated on.

- Assume again that an LCR takes 1/100th of a second to be read, prepared, and built,
  but 1/10th of a second to be executed by the Applier process.
- Concurrently with the application of the first transaction by the first applier process,
  the LogMiner processes are constructing the subsequent transactions. Each 100-DML
  transaction would take 1 second to be mined.
- The second transaction is assigned to another Applier process, and again takes 10
  seconds to be applied. However, this transaction commits approximately 1 second
  after the first transaction was committed.
- This continues, and 100 seconds after the first transaction was mined, the last
  transaction has been mined. An additional 10 seconds after this, the last of the
  transactions has been committed to the database.

Therefore, the 10,000 DML statements would be replicated to the standby database in
110 seconds using the smaller array size, compared to 1,100 seconds using the large
array size.

Transactional Dependencies
Computing transactional dependencies is the responsibility of the Analyzer process, but
additional considerations apply when a transaction is deemed eager. The analyzer
process utilizes a hash table when computing the dependencies, as well as a number of
memory structures.

The Hash Table


The Hash Table exists in the PGA of the Analyzer process, and as such does not consume
space in the Shared Pool of the database. The size of the hash table defaults to 1,000,000
entries but is configurable via the _HASH_TABLE_SIZE logical standby parameter.
Each entry in the hash table contains the transaction ID and Commit SCN for the most
recent transaction that hashed to that entry.

Computing Dependencies
When a transaction chunk is picked up from the queue by the analyzer process, each LCR
has its dependencies computed. If the table has a Primary Key and/or Non-Null Unique
Indexes, then the analyzer process utilizes the Primary Key columns and all the Non-Null
Unique Indexes to compute the dependencies. If the table has neither a Primary Key nor
a Non-Null Unique Index, then all the columns of the table are used when computing the
dependencies.
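
Tables that fall into the latter, more expensive category can be identified ahead of time.
One hedged approach is to query the DBA_LOGSTDBY_NOT_UNIQUE view on the
primary database, which lists tables without a primary key or non-null unique index:

select owner, table_name
  from dba_logstdby_not_unique;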

Once the hash key has been computed, the Analyzer process looks up the hash entry and
determines whether a previous transaction hashed to the same key. If a previous
transaction ID and Commit SCN are present, then that information is associated with the
current LCR being analyzed. What happens next depends upon the type of transaction
chunk:

- If the transaction chunk is a single chunk and contains a commit record, then the hash
  entry is updated with the current LCR's transaction ID and commit SCN.
- If the transaction chunk refers to an eager transaction, then the hash entry is NOT
  updated.

Hash Entries per LCR


For an LCR that relates to an INSERT, a hash dependency will be computed for each
primary key and unique index on the table for the row being inserted.

For an LCR that relates to an UPDATE, a hash dependency will be computed for each
primary key and unique index on the table for BOTH the original value for the index as
well as the new value for the index if the indexed columns changed.

For an LCR that relates to a DELETE, a hash dependency will be computed for each
primary key and unique index on the table for the row being deleted.

Therefore, if a table has a Primary Key and a single Unique Index and both the primary
key and unique index columns are being updated by a transaction, then 4 dependencies
will be computed, and assuming this is a single chunk transaction, the 4 hash entries for
the computed dependencies will be updated with the transaction ID and commit SCN.

The Watermark Dependency SCN


In addition to the dependencies identified by the analyzer process via the hash table,
there is also a Watermark Dependency, which contains a commit SCN and is part of the
LCR. The Watermark Dependency is used to say that all transactions with a commit SCN
less than the current apply low-watermark may commit if the coordinator grants
approval. The apply low-watermark is broadcast to the applier processes frequently via
the coordinator messages.

Appliers and transactional dependencies


Before an LCR can be applied to the database, the dependent transaction information
stored in the LCR is checked.

- If the SCN stored in the LCR is less than the SCN of the apply low-watermark, then
  the prior dependent transaction has been committed to the database and, therefore,
  there is no outstanding dependency for this LCR.
- If the SCN stored in the LCR is higher than the SCN of the apply low-watermark,
  then the prior dependent transaction has not yet been committed and, therefore, this
  dependency is still valid. In this case, the applier sets its status to 16124 (waiting on
  another transaction).
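
Appliers blocked on a dependency can be spotted with a query along these lines (the
16124 status code corresponds to the "waiting on another transaction" message):

select sid, type, status_code, status
  from v$logstdby_process
 where status_code = 16124;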

Additionally, when an eager transaction is being applied, the analyzer process has not
updated the hash entries in the hash table, so we do not know whether a dependency
exists between this transaction and another transaction. However, we know that the eager
transaction was able to update the rows on the primary database, so we can safely say
that if the transactions are processed in the same order, then the primary and standby
databases will not be out of sync.

Therefore, when the last transaction chunk for an eager transaction is received,
containing the commit SCN, the watermark dependency is raised to the commit SCN of
the eager transaction. This allows all transactions with a commit SCN prior to the commit
of the eager transaction to be committed, but it also prevents any transaction with an
SCN after this SCN from being applied. Once all transactions prior to the eager
transaction's commit SCN have been committed, the eager transaction is allowed to
commit. Once the eager transaction has committed, the apply low-watermark is raised to
the commit SCN of the eager transaction, thereby allowing other transactions to proceed.

Piggybacking commit approval


In order to speed up the process of committing transactions to the database, whenever the
coordinator process communicates with the applier processes, each applier is advised
whether or not it needs to request commit approval.

- When SQL Apply is running with the logical standby parameter
  PRESERVE_COMMIT_ORDER set to false, the bit is set saying that the transaction
  may commit without first requesting approval.
- When SQL Apply is running with the logical standby parameter
  PRESERVE_COMMIT_ORDER set to true, the coordinator sets a bit identifying the
  transaction as having the lowest Commit SCN, notifying the Applier to commit the
  transaction. If the transaction does not have the lowest Commit SCN when the
  transaction is assigned out, then the bit is not set, meaning the applier process must
  request commit approval before proceeding. If, however, during the course of
  applying the transaction to the standby database, the applier has to message the
  coordinator, then when the coordinator responds to the message, the coordinator will
  re-evaluate the bit, and if the transaction is now the lowest Commit SCN, the bit will
  be set to indicate the applier may proceed to commit the transaction without the need
  to first request commit approval.

DDL Transaction Dependencies


Up until this point, we have primarily discussed DML transactions. DDL transactions
have special requirements and as such are handled differently.

When a DDL transaction is mined, mining of subsequent transactions is suspended and a
DDL Barrier is established. The DDL statement is prepared and a transaction is built.
Depending upon the type of DDL transaction, there may be no need to analyze the
dependencies; this would be the case if a table were created. If the DDL involves a
Create Table As Select (CTAS) operation, then this type of DDL does need to be
analyzed, as it results in the creation of new rows.

The DDL transaction is ultimately assigned to an Applier process, which must wait until
all preceding transactions have committed. Once the previous transactions have been
committed, the watermark dependency is raised to allow the DDL transaction to be
executed. As part of the application of the transaction, the logical standby dictionary used
by the preparer processes to map LCRs to tables and columns is also updated. Once the
transaction has been applied, the watermark dependency is raised again, and the DDL
Barrier is removed, allowing the miner process to mine additional transactions.

Why do we need a DDL Barrier?


Consider two transactions that are executed serially on the primary database. The first
transaction creates a table called newemp, and the second transaction inserts a row into
the newly created table. Without the DDL Barrier that is raised when the miner process
finds the DDL statement creating the table, the preparer process would not be able to
correctly identify which table the subsequent transaction relates to. This is because the
LCR will say something like "a row is inserted into file 5, block 243," but the logical
standby dictionary used by the preparer process will not yet have been populated with the
information saying that file 5, blocks 242 through 265 refer to table newemp, and that
column 1 is empid, column 2 is ename, and so on. This information is populated into the
logical standby dictionary as part of the application of the first transaction.

Why does SQL Apply slow down during a DDL Barrier?


As previously mentioned, the mining process must stop when a DDL statement is mined
and the DDL Barrier is raised. If we use the analogy of the LCR Cache being a bucket,
then no more transactions are being poured into the bucket. However, the applier
processes are still emptying the bucket through the funnel at the bottom. The bucket
empties while the DDL transaction is applied to the database. Once the DDL transaction
is applied, the DDL Barrier is removed and the miner process is signaled to mine new
transactions. It takes some time for these new transactions to go through the process of
mining, preparing, building and analyzing before finally being available for application
by an applier.

DDL Transactions that also create DML LCRs


Certain DDL transactions also result in the creation of DML LCRs on the primary
database. One such operation is a CTAS statement. A CTAS operation creates the object
definition as the first part of the transaction, but it also generates additional LCRs that
refer to the rows that are populated in the table as a result of the SELECT statement.

On the standby database, SQL Apply cannot execute the CTAS statement in the same
way it was executed on the primary database. SQL Apply must first create the table with
no rows, and then insert each of the DML LCRs into the newly created table. This is
done because SQL Apply might not maintain the source table, or the source table might
have a different set of data from the primary database, as could be the case if the standby
database has a SKIP rule defined for the source table.
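
For context, a SKIP rule of the kind mentioned above is registered with the
DBMS_LOGSTDBY.SKIP procedure while SQL Apply is stopped; the schema and table
names here are hypothetical:

ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', -
  schema_name => 'APP', object_name => 'STAGE_TAB');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;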
