The presence of a histogram changes the formula used by the Optimizer to estimate the
cardinality, and allows it to generate a more accurate execution plan.
Oracle automatically determines the columns that need histograms based on the column usage
information (SYS.COL_USAGE$) and the presence of data skew.
For example, Oracle will not automatically create a histogram on a unique column if it is only
seen in equality predicates.
There are two types of histograms: frequency and height-balanced. Oracle determines the type of
histogram to be created based on the number of distinct values in the column.
Frequency Histograms
Frequency histograms are created when the number of distinct values in the column is less than
254.
Height-Balanced Histograms
Height-balanced histograms are created when the number of distinct values in the column is
greater than 254. In a height-balanced histogram, column values are divided into buckets so that
each bucket contains approximately the same number of rows.
You can use the V$EVENT_HISTOGRAM view and the DBA_HIST_SYSTEM_EVENT table to plot the
distribution of physical disk read speeds.
Gathering Statistics
For database objects that are constantly changing, statistics must be regularly gathered so that
they accurately describe the database object. The PL/SQL package DBMS_STATS is Oracle's
preferred method for gathering statistics, and replaces the now obsolete ANALYZE command for
collecting statistics. The DBMS_STATS package contains over 50 different procedures for
gathering and managing statistics, but the most important of these are the GATHER_*_STATS
procedures. These procedures can be used to gather table, column, and index statistics. You will
need to be the owner of the object, or have the ANALYZE ANY system privilege or the DBA role,
to run these procedures. The parameters used by these procedures are nearly identical, so this
paper will focus on the GATHER_TABLE_STATS procedure.
GATHER_TABLE_STATS
The DBMS_STATS.GATHER_TABLE_STATS procedure allows you to gather table, partition,
index, and column statistics. Although it takes 15 different parameters, only the first two or three
need to be specified to run the procedure, and they are sufficient for most customers.
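As a sketch, a typical call might look like the following (the schema and table names are illustrative, not from the paper):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',      -- schema owning the table (example value)
    tabname          => 'SALES',   -- table to analyze (example value)
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle choose the sample size
    cascade          => TRUE);     -- also gather statistics for the table's indexes
END;
/
```

Leaving estimate_percent at AUTO_SAMPLE_SIZE is generally recommended from 11g onwards, as it lets Oracle balance accuracy against gathering time.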
The primary difference is that Oracle internally prioritizes the database objects that require
statistics, so that the objects which most need updated statistics are processed first. You can
verify that the automatic statistics gathering job exists by querying the
DBA_AUTOTASK_CLIENT_JOB view or through Enterprise Manager.
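For example, one way to check from SQL*Plus is to query the 11g autotask views (the client name shown is the standard one for the statistics task):

```sql
-- Confirm the automatic optimizer statistics collection task is enabled
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';
```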
Restoring Statistics
From Oracle Database 10g onwards, when you gather statistics using DBMS_STATS, the original
statistics are automatically kept as a backup in dictionary tables, and can be easily restored by
running DBMS_STATS.RESTORE_TABLE_STATS if the newly gathered statistics lead to any
kind of problem. The dictionary view DBA_TAB_STATS_HISTORY contains a list of timestamps
when statistics were saved for each table.
The example below restores the statistics for the table SALES to what they were yesterday, and
automatically invalidates all of the cursors referencing the SALES table in the shared pool. We
want to invalidate all of the cursors because we are restoring yesterday's statistics and want them
to affect any cursor immediately. The value of the NO_INVALIDATE parameter determines
whether the cursors referencing the table will be invalidated.
BEGIN
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'SH',
    tabname         => 'SALES',
    as_of_timestamp => SYSTIMESTAMP - 1,
    no_invalidate   => FALSE);
END;
/
Extended Statistics
In Oracle Database 11g, extensions to column statistics were introduced. Extended statistics
encompass two additional types of statistics: column groups and expression statistics.
Index Statistics
Index statistics provide information on the number of distinct values in the index (DISTINCT_KEYS),
the depth of the index (BLEVEL), the number of leaf blocks in the index (LEAF_BLOCKS), and the
clustering factor. The Optimizer uses this information in conjunction with other statistics to
determine the cost of an index access. For example, the Optimizer will use BLEVEL,
LEAF_BLOCKS, and the table statistic NUM_ROWS to determine the cost of an index range scan
(when all predicates are on the leading edge of the index).
EVENTS
DB File Sequential Read -- A single-block read (i.e., an index fetch by ROWID)
If the top segments are incurring heavy physical reads, consider creating or rebuilding indexes
on them.
The Oracle process wants a block that is currently not in the SGA, and it is waiting for
the database block to be read into the SGA from disk.
A sequential read is a single-block read. Single-block I/Os are usually the result of using
indexes.
The actual object being waited on can be determined from the P1 (file#), P2 (block#), and P3
values in V$SESSION_WAIT. The two important numbers to look for are TIME_WAITED and
AVERAGE_WAIT by individual sessions.
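Given P1 and P2 for this event, the waited-on segment can be located with a query along these lines (the substitution variables are placeholders for the values seen in V$SESSION_WAIT):

```sql
-- For 'db file sequential read': P1 = absolute file number, P2 = block number
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = &p1
AND    &p2 BETWEEN block_id AND block_id + blocks - 1;
```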
Hence, to reduce this wait event, consider the following points:
Tune Oracle - tuning SQL statements to reduce unnecessary I/O requests is the only
guaranteed way to reduce "db file sequential read" wait time.
Tune physical devices - distribute (stripe) the data on different disks to reduce the I/O.
Logical distribution is useless; "physical" I/O performance is governed only by the
independence of devices.
Faster disks - buy faster disks to reduce the cost of the remaining I/O requests.
Increase DB_BLOCK_BUFFERS - a larger buffer cache can (not will, "might") help.
DB File Scattered Read -- A multiblock read (a full-table scan, OPQ, sorting)
This event means reading data from disk into discontiguous buffers in the buffer cache.
A db file scattered read issues a scatter-read to read the data into multiple discontinuous
memory locations; a scattered read is usually a multiblock read. It may be caused by
insufficient indexes or by the unavailability of updated statistics.
The db file scattered read wait event identifies that a full table scan is occurring.
This is why the corresponding wait event is called 'db file scattered read': multiblock reads
(up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) due to full table scans into the buffer
cache show up as waits for 'db file scattered read'.
If you are doing a lot of partition activity then expect to see this wait event; it could be
a table or an index partition.
Remedies: tune the SQL, tune indexing, tune disk I/O, increase the buffer cache.
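To find candidate statements for tuning, a sketch like the following lists the SQL doing the most disk reads (the column choices are illustrative):

```sql
-- Top 10 SQL statements by physical disk reads in the cursor cache
SELECT * FROM (
  SELECT sql_id, disk_reads, executions
  FROM   v$sql
  ORDER  BY disk_reads DESC)
WHERE ROWNUM <= 10;
```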
DB File Parallel Write
'db file parallel write' occurs when the database writer (DBWn) is performing a parallel write
to files and blocks. Check AVERAGE_WAIT in V$SYSTEM_EVENT; if it is greater than
10 milliseconds, it signals slow I/O throughput.
Tuning options - the main blocker for this wait event is the OS I/O subsystem, so use OS
monitoring tools (sar -d, iostat) to check write performance. To improve the average wait
time you can consider the following: if the data files reside on raw devices, use
asynchronous writes; however, if the data files reside on cooked file systems, use
synchronous writes with direct I/O.
Note: if the average wait time for db file parallel write is high, you may also see the
system waiting on the 'free buffer waits' event.
Control File Parallel Write
This event occurs while a session is writing physical blocks to all control files, for example when:
the session starts a control file transaction (to make sure that the control files are up to date in
case the session crashes before committing the control file transaction)
the session commits a transaction to a control file
This may be a case where too many checkpoints are generated as a result of excessive log
switches. Use V$LOGHIST to check how many log switches have occurred.
Add the /*+ APPEND */ hint and NOLOGGING to INSERT statements. This can reduce the
rate at which the log files fill.
Recreate larger log files using something like the following:
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3
  ('/oracle/oradata/CURACAO9/redo03.log') SIZE 500M;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
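To judge whether log switches are in fact excessive, a query along these lines counts switches per hour from V$LOG_HISTORY:

```sql
-- Log switches per hour; a common rule of thumb is to aim for a few per hour at most
SELECT TRUNC(first_time, 'HH') AS hour, COUNT(*) AS switches
FROM   v$log_history
GROUP  BY TRUNC(first_time, 'HH')
ORDER  BY hour;
```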
Troubleshooting:
First, ensure that you have placed your control files on disks that are not under excessively
heavy load. Also consider using operating system mirroring instead of Oracle multiplexing of
the control files, take and verify your control file backups daily, and create a new backup
control file whenever a structural change is made. The common practice is to have multiple
copies of the control files, keep them on separate physical disks, and not put them on heavily
accessed disks. Finally, reduce frequent log switches by finding the optimal size and switch
interval for the redo logs.
Buffer Busy Waits
The two main cases where this wait can occur are:
Another session is reading the block into the buffer
Another session holds the buffer in an incompatible mode to our request
Buffer busy waits can be reduced by tuning the SQL to access rows with fewer block reads
(by adding indexes), adjusting the database writer, or adding freelists to tables and indexes.
They can occur even with a huge DB_CACHE_SIZE.
Direct path read:
This wait occurs when a session reads buffers from disk directly into the PGA (for sorts spilling
to temp, the closely related wait is 'direct path read temp'). If the I/O subsystem doesn't support
asynchronous I/O, then each wait corresponds to a physical read request. If the I/O subsystem
supports asynchronous I/O, then the process overlaps read requests with processing the blocks
already in the PGA. When the process attempts to access a block in the PGA that has not yet
been read from disk, it issues a wait call and updates the statistics for this event. So, the number
of waits is not always the same as the number of read requests.
Using optimizer_index_cost_adj
The optimizer_index_cost_adj parameter was created to allow us to change the relative costs of
full-scan versus index operations. This is one of the most important optimizer parameters, and
the default setting of 100 is inappropriate for many Oracle systems. For some OLTP systems,
resetting this parameter to a smaller value (between 10 and 30) may result in huge performance
gains!
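As a hedged illustration, the parameter can be tried at session level before any system-wide change (the value 20 is just an example within the suggested 10-30 range):

```sql
-- Session-level experiment: make index access look 5x cheaper to the CBO.
-- Test representative workloads before considering a system-wide setting.
ALTER SESSION SET optimizer_index_cost_adj = 20;
```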
Unfortunately, the Oracle hash join is more memory-intensive than a nested loop join. To be
faster than a nested loop join, we must set HASH_AREA_SIZE large enough to hold the entire
hash table in memory (about 1.6 times the size of the driving table's rows). If the hash join
overflows the HASH_AREA_SIZE memory, it will page into the TEMP tablespace, severely
degrading performance. A script can be used to dynamically set an appropriate
HASH_AREA_SIZE for your SQL query based on the size of the hash join's driving table.
Indexes
An index is basically used for faster access to tables. Over a period of time an index can become
fragmented because of the DML running against its table.
When the index gets fragmented, data inside the index is scattered, rows per block decrease, the
index consumes more space, and scanning the index takes more time and more blocks for the
same set of queries.
In index terminology: we still have a single root block, but as fragmentation increases there are
more branch blocks and more leaf blocks, and the height of the index may increase.
To fix this, we rebuild the index. During an index rebuild, the data inside the index is
reorganized and compressed to fit in the minimum number of blocks, the height of the index is
reduced to the minimum possible level, and query performance improves: searches become
faster and queries read fewer blocks.
There are 2 methods to rebuild the index.
1) Offline index rebuild alter index <index name> rebuild;
2) Online index rebuild alter index <index name> rebuild online;
1) Offline index rebuild alter index <index name> rebuild;
With an offline index rebuild, the table and index are locked in exclusive mode, preventing any
transactions on the table. This is the most intrusive method and is rarely used in production
unless we know for sure that no modules will access the table and we have complete downtime.
2) Online index rebuild alter index <index name> rebuild online;
With an online index rebuild, transactions can still access the table and index. The lock on the
table (taken by the index rebuild operation) is held only very briefly; for the rest of the time the
table and index are available for transactions.
During index creation you can use CREATE INDEX ... ONLINE to create an index without
placing an exclusive lock on the table. The CREATE INDEX ONLINE statement can speed
things up because it works even while reads or updates are happening on the table.
ALTER INDEX ... REBUILD ONLINE can be used to rebuild the index, resume failed
operations, perform batch DML, add stop words to the index, or optimize the index.
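A minimal sketch of the two online operations (the index, table, and column names are invented for illustration):

```sql
-- Build a new index without taking an exclusive lock on the table
CREATE INDEX sales_cust_idx ON sales(cust_id) ONLINE;

-- Rebuild an existing index while DML continues against the table
ALTER INDEX sales_cust_idx REBUILD ONLINE;
```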
Table Locks:
A table lock is required on the index's base table at the start of the CREATE or REBUILD
process to guarantee consistent data dictionary information. A lock at the end of the process is
also required in order to merge changes into the final index structure.
The time taken to complete the indexing process will increase because the process will hang if
there is an active transaction on the base table at the time one of these locks is required.
Another important issue is that any other transaction on the base table that starts after the
indexing process will also be blocked until the indexing process releases its locks.
This can have a serious impact on response time in highly concurrent systems: the backlog of
blocked transactions can be quite significant, depending on the time taken by the initial
transactions to commit or roll back.
Oracle 11g
Oracle 11g provides significant improvements in the locking implications of creating or
rebuilding indexes online. Creating or rebuilding indexes online in Oracle 11g still requires the
two associated locks on the base table: one at the start of the indexing process and one at the
end.
The indexing process still hangs until all prior transactions have completed if there is an active
transaction on the base table at the time one of these locks is required. However, if the indexing
process is locked out, subsequent transactions on the base table that start afterwards are no
longer blocked and complete successfully.
In Oracle 11g the indexing process no longer affects other concurrent transactions on the base
table; the only process potentially left hanging while waiting to acquire its lock is the indexing
process itself. Below we compare the index rebuild locking mechanism in Oracle 10g and
Oracle 11g.
Oracle Global Index vs. Local Index
Global Index: a global index is a one-to-many relationship, allowing one index partition to map
to many table partitions. The documentation states that a "global index can be partitioned by the
range or hash method, and it can be defined on any type of partitioned, or nonpartitioned, table".
Local Index: a local index is a one-to-one mapping between an index partition and a table
partition. In general, local indexes allow for a cleaner divide-and-conquer approach for
generating fast SQL execution plans with partition pruning.
Local partitioned indexes are easier to manage, and each partition of a local index is associated
with the corresponding table partition. They also offer greater availability and are common in
DSS environments. When we take any action (MERGE, SPLIT, EXCHANGE, etc.) on a local
partition, it impacts only that partition; the other partitions remain available. We cannot
explicitly add a local index to a new partition; the local index is added implicitly when we add a
new partition to the table. Likewise, we cannot drop the local index on a specific partition; it is
dropped automatically when we drop the partition from the underlying table. Local indexes can
be unique when the partition key is part of the composite index; unique local indexes are useful
for OLTP environments. We can create bitmap indexes on partitioned tables, with the restriction
that the bitmap indexes must be local to the partitioned table; they cannot be global indexes.
Global indexes are used in OLTP environments and offer efficient access to any individual
record. There are two types of global index: global nonpartitioned indexes and global
partitioned indexes. Global nonpartitioned indexes behave just like an index on a
nonpartitioned table.
Detection: migrated and chained rows in a table or cluster can be identified by using the
ANALYZE command with the LIST CHAINED ROWS option. This command collects
information about each migrated or chained row and places it into a specified output table. To
create the table that holds the chained rows, execute the script UTLCHAIN.SQL.
SQL> ANALYZE TABLE scott.emp LIST CHAINED ROWS;
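After running UTLCHAIN.SQL and the ANALYZE above, the affected rows can be inspected in the output table, for example:

```sql
-- CHAINED_ROWS is the default output table created by utlchain.sql
SELECT owner_name, table_name, head_rowid
FROM   chained_rows
WHERE  table_name = 'EMP';
```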
2. AWR is a 10g new feature, but Statspack can still be used in 10g.
3. The AWR repository holds all the statistics available in Statspack, as well as some additional
statistics which are not (10g new features).
4. Statspack does not store the ASH statistics which are available in the AWR
DBA_HIST_ACTIVE_SESS_HISTORY view.
5. An important difference between the two is that Statspack doesn't store history for the new
metric statistics introduced in Oracle 10g. The key AWR views are
DBA_HIST_SYSMETRIC_HISTORY and DBA_HIST_SYSMETRIC_SUMMARY.
6. AWR contains views such as dba_hist_service_stat, dba_hist_service_wait_class and
dba_hist_service_name.
7. The latest version of Statspack included with 10g contains specific tables that track the
history of statistics reflecting the performance of the Oracle Streams feature. These tables are
STATS$STREAMS_CAPTURE, STATS$STREAMS_APPLY_SUM,
STATS$BUFFERED_SUBSCRIBERS, and STATS$RULE_SET.
8. The AWR does not contain specific tables that reflect Oracle Streams activity. Therefore, if a
DBA relies on Oracle Streams, it would be useful to monitor its performance using the
Statspack utility.
9. AWR snapshots are scheduled every 60 minutes by default.
10. Statspack snapshot purges must be scheduled manually but AWR snapshots are purged
automatically by MMON every night.
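The AWR snapshot interval and retention can be changed with DBMS_WORKLOAD_REPOSITORY; the values below are only an example (both parameters are in minutes):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,           -- take a snapshot every 30 minutes
    retention => 7 * 24 * 60); -- keep snapshots for 7 days
END;
/
```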
A block can become fractured during a hot backup: the top of the block is written at one point
in time while the bottom of the block is written at another point in time. If you restore a file
containing a fractured block and Oracle reads the block, then the block is considered corrupt.
Oracle does this by reading the "before image" of changed rows from the online undo
segments. If you have lots of updates, long-running SQL, and too little UNDO, the ORA-01555
error will appear.
From the docs we see that ORA-01555 relates to insufficient undo storage or a too-small value
for the UNDO_RETENTION parameter:
Action: If in Automatic Undo Management mode, increase the setting of UNDO_RETENTION.
Otherwise, use larger rollback segments.
You can get an ORA-01555 error with a too-small UNDO_RETENTION even with a large undo
tablespace. However, you can also set a very high value for UNDO_RETENTION and still get an
ORA-01555 error; commit frequency also plays a part.
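A practical sketch: check the longest query the undo subsystem has observed, then size UNDO_RETENTION (in seconds) at least that high. The retention value shown is only an example:

```sql
-- Longest running query (in seconds) seen in recent undo statistics
SELECT MAX(maxquerylen) FROM v$undostat;

-- Example only: set undo retention to 3 hours (requires adequate undo space)
ALTER SYSTEM SET undo_retention = 10800;
```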
Locks, Latches, Enqueues
Locks are used to protect data or resources from simultaneous use by multiple sessions, which
might otherwise leave them in an inconsistent state. Locks are an external mechanism: users can
also set locks on objects by using various Oracle statements.
Latches serve the same purpose but work at an internal level. Latches are used to protect and
control access to internal data structures such as various SGA buffers. They are handled and
maintained by Oracle, and we can't access or set them; this is the main difference.
Locks are held at the object level; latches are the locking mechanism that protects Oracle's
memory structures.
Enqueues are low-level serialization mechanisms that serialize access to a resource.
About Opening with the RESETLOGS Option
The RESETLOGS option is always required after incomplete media recovery or recovery
using a backup control file. Resetting the redo log does the following:
Archives the current online redo logs (if they are accessible) and then erases the
contents of the online redo logs and resets the log sequence number to 1. For
example, if the current online redo logs are sequence 1000 and 1001 when you
open RESETLOGS, then the database archives logs 1000 and 1001 and then
resets the online logs to sequence 1 and 2.
Creates the online redo log files if they do not currently exist.
Reinitializes the control file metadata about online redo logs and redo threads.
Updates all current datafiles and online redo logs and all subsequent archived
redo logs with a new RESETLOGS SCN and time stamp.
Because the database will not apply an archived log to a datafile unless the RESETLOGS
SCN and time stamps match, the RESETLOGS prevents you from corrupting datafiles
with archived logs that are not from direct parent incarnations of the current incarnation.
In prior releases, it was recommended that you back up the database immediately after
the RESETLOGS. Because you can now easily recover a pre-RESETLOGS backup like
any other backup, making a new database backup is optional. In order to perform
recovery through resetlogs you must have all archived logs generated since the last
backup and at least one control file (current, backup, or created).
Figure 18-1 shows the case of a database that can only be recovered to log sequence
2500 because an archived redo log is missing. When the online redo log is at sequence
4000, the database crashes. You restore the sequence 1000 backup and prepare for
complete recovery. Unfortunately, one of your archived logs is corrupted. The log before
the missing log contains sequence 2500, so you recover to this log sequence and open
RESETLOGS. As part of the RESETLOGS, the database archives the current online logs
(sequence 4000 and 4001) and resets the log sequence to 1.
You generate changes in the new incarnation of the database, eventually reaching log
sequence 4000. The changes between sequence 2500 and sequence 4000 for the new
incarnation of the database are different from the changes between sequence 2500 and
sequence 4000 for the old incarnation. You cannot apply logs generated after 2500 in
the old incarnation to the new incarnation, but you can apply the logs generated before
sequence 2500 in the old incarnation to the new incarnation. The logs from after
sequence 2500 are said to be orphaned in the new incarnation because they are
unusable for recovery in that incarnation.
http://docs.oracle.com/cd/E25178_01/server.1111/e16638/technique.htm
http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e14496/psr_trouble.htm
http://www.oracle.com/technetwork/database/bi-datawarehousing/pres-what-to-expect-from-optimizer--128499.pdf
With SQL Plan Management:
A SQL statement is parsed for the first time and a plan is generated.
Check the log to see if this is a repeatable SQL statement.
Add the SQL statement signature to the log and execute it.
Plan performance is still verified by execution.
If the new plan is not the same as the baseline, the new plan is not executed but is marked for
verification.
Execute the known plan baseline; its performance is verified by history.
(Controlled by the optimizer_use_sql_plan_baselines parameter.)
Monitoring SPM
Dictionary view DBA_SQL_PLAN_BASELINES
Via SQL Plan Control in EM DBControl
Managing SPM
PL/SQL package DBMS_SPM or via SQL Plan Control in EM DBControl
Requires the ADMINISTER SQL MANAGEMENT OBJECT privilege
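As an illustration of managing SPM with DBMS_SPM, the plans for a statement already in the cursor cache can be loaded as accepted baselines (the sql_id is a made-up placeholder):

```sql
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  -- Load the cached plan(s) for one statement as accepted SQL plan baselines
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '9babjv8yq8ru3');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || plans_loaded);
END;
/
```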
Previously I wrote about how to view the plan of a SQL statement. Today I will tell you about a
good feature, the DBMS_XPLAN.DISPLAY_AWR function, which comes with Oracle 10g and
helps you view the plan of an old SQL statement. If you have licenses for the Tuning Pack and
Diagnostics Pack, you can get historical information about old SQL that ran on your database.
For more info about the licensing of these packs, refer to the Oracle Database Licensing
Information 10g Release 1 (10.1) manual.
DBMS_XPLAN.DISPLAY_AWR displays the contents of an execution plan stored in the AWR.
The syntax is:
DBMS_XPLAN.DISPLAY_AWR(
sql_id IN VARCHAR2,
plan_hash_value IN NUMBER DEFAULT NULL,
db_id IN NUMBER DEFAULT NULL,
format IN VARCHAR2 DEFAULT TYPICAL);
If the db_id parameter is not specified, the function will use the id of the local database.
If you don't specify the plan_hash_value parameter, the function will bring back all the stored
execution plans for the given sql_id.
The format parameter has many options; you can get the full list from the manual.
Simple demonstration (all tests are done with 10.2.0.1 Express Edition):
You can also use the DBA_HIST_SQL_PLAN view for viewing historic plan info.
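For example (the sql_id is a placeholder; use one found in DBA_HIST_SQLTEXT on your own database):

```sql
-- Show all execution plans stored in the AWR for one statement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('9babjv8yq8ru3'));
```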
SQL> select num_rows, blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
  29933962     119585

SQL> select count(*) from sales;

  COUNT(*)
----------
  30000000

SQL> exec dbms_stats.set_table_stats('ADAM','SALES',numrows=>100,numblks=>1)

PL/SQL procedure successfully completed.

SQL> select num_rows, blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
       100          1
SQL> alter session set NLS_TIMESTAMP_TZ_FORMAT='yyyy-mm-dd:hh24:mi:ss';
Session altered.
SQL> select table_name, stats_update_time from user_tab_stats_history;

TABLE_NAME   STATS_UPDATE_TIME
------------ -------------------
SALES        2010-05-18:09:47:16
SALES        2010-05-18:09:47:38
We see two rows, representing the old statistics of the SALES table. The first is from the time
when there were NULL entries (before the first gather_table_stats). The second row represents
the accurate statistics. I am going to restore them:
SQL> begin
dbms_stats.restore_table_stats('ADAM','SALES',
to_timestamp('2010-05-18:09:47:38','yyyy-mm-dd:hh24:mi:ss'));
end;
/
PL/SQL procedure successfully completed.
SQL> select num_rows, blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
  29933962     119585
The Oracle Instructor
Execution statistics are available for cursors that have been compiled with the
STATISTICS_LEVEL initialization parameter set to ALL.
The V$SQL_PLAN_STATISTICS_ALL view enables side-by-side comparisons of the optimizer's
estimates for the number of rows and elapsed time against the actual values. This view
combines both V$SQL_PLAN and V$SQL_PLAN_STATISTICS information for every cursor.
The maximum query length can be found in the MAXQUERYLEN column of V$UNDOSTAT.
Optimizer
The optimizer is one of the most fascinating components of the Oracle Database, since it is
essential to the processing of every SQL statement. The optimizer determines the most efficient
execution plan for each SQL statement based on the structure of the given query, the available
statistical information about the underlying objects, and all the relevant optimizer and execution
features.
The RULE (and CHOOSE) OPTIMIZER_MODE has been deprecated and desupported in 11g.
(The only way to get rule-based behavior in 11g is by using the RULE hint in a query, which is
not supported either). In general, using the RULE hint is not recommended, but for individual
queries that need it, it is there. Consult with Oracle support before using the RULE hint in 11g.
In 11g, the cost-based optimizer has two modes: NORMAL and TUNING.
In NORMAL mode, the cost-based optimizer considers a very small subset of possible execution
plans to determine which one to choose. The number of plans considered is far smaller than in
past versions of the database in order to keep the time to generate the execution plan within strict
limits. SQL profiles (statistical information) can be used to influence which plans are considered.
The TUNING mode of the cost-based optimizer can be used to perform more detailed analysis of
SQL statements and make recommendations for actions to be taken and for auxiliary statistics to
be accepted into a SQL profile for later use when running under NORMAL mode. TUNING
mode is also known as the Automatic Tuning Optimizer mode, and the optimizer can take several
minutes for a single statement (good for testing). See the Oracle Database Performance Tuning
Guide Automatic SQL Tuning (Chapter 17 in the 11.2 docs).
Oracle states that the NORMAL mode should provide an acceptable execution path for most
SQL statements. SQL statements that do not perform well in NORMAL mode may be tuned in
TUNING mode for later use in NORMAL mode. This should provide a better performance
balance for queries that have defined SQL profiles, with the majority of the optimizer work for
complex queries being performed in TUNING mode once, rather than repeatedly, each time the
SQL statement is parsed.
With each new release the optimizer evolves to take advantage of new functionality and new
statistical information to generate better execution plans. Oracle Database 12c takes this
evolution a step further with the introduction of a new adaptive approach to query optimization.
Adaptive Query Optimization
By far the biggest change to the optimizer in Oracle Database 12c is Adaptive Query
Optimization. Adaptive Query Optimization is a set of capabilities that enable the optimizer to
make run-time adjustments to execution plans and discover additional information that can lead
to better statistics. This new approach is extremely helpful when existing statistics are not
sufficient to generate an optimal plan. There are two distinct aspects of Adaptive Query
Optimization: adaptive plans, which focus on improving the initial execution of a query, and
adaptive statistics, which provide additional information to improve subsequent executions.
Adaptive Plans
Adaptive plans enable the optimizer to defer the final plan decision for a statement until
execution time. The optimizer instruments its chosen plan (the default plan) with statistics
collectors so that at runtime it can detect if its cardinality estimates differ greatly from the
actual number of rows seen by the operations in the plan. If there is a significant difference,
then the plan, or a portion of it, can be automatically adapted to avoid suboptimal performance
on the first execution of a SQL statement.
Adaptive Statistics
The quality of the execution plans determined by the optimizer depends on the quality of the
statistics available. However, some query predicates are too complex to rely on base table
statistics alone, and the optimizer can now augment these statistics with adaptive statistics.
optimizer_index_caching        integer    60
optimizer_index_cost_adj       integer    20
optimizer_mode                 string     ALL_ROWS
optimizer_secure_view_merging  boolean    TRUE
plsql_optimize_level           integer    2
How Optimization Looks at the Data
Rule-based optimization is Oracle-centric, whereas cost-based optimization is data-centric. The
optimizer mode under which the database operates is set via the initialization parameter
OPTIMIZER_MODE. The possible optimizer modes are as follows:
All_rows attempts to optimize the query to get the very last row as
fast as possible. This makes sense in a stored procedure for example
where the client does not regain control until the stored procedure
completes. You don't care if you have to wait to get the first row if the
last row gets back to you twice as fast. In a client server/interactive
application you may well care about that.
FIRST_ROWS Gets the first row faster (generally forces index use).
This is good for untuned systems that process lots of single
transactions.
Rather, this allows you to pass the information you have learned on to the CBO so it can make
better decisions on your system. It also points out why just looking at the cost of a query plan in
an attempt to determine which plan will be faster is an exercise in futility: take two identical
plans with two different costs; which one is faster? Neither is.
The effect of adjusting these two parameters is that they have a profound and immediate impact
on the CBO: they radically change its costing of index access versus full scans.
ALL_ROWS Gets all rows faster (generally forces index suppression). This is
good for untuned, high-volume batch systems. This is the default.
FIRST_ROWS Gets the first row faster (generally forces index use). This is
good for untuned systems that process lots of single transactions.
FIRST_ROWS (1|10|100|1000) Gets the first n rows faster. This is good for
applications that routinely display partial results to users such as paging data
to a user in a web application.
CHOOSE Now obsolete and unsupported but still allowed. Uses cost-based
optimization for all analyzed tables. This is a good mode for well-built and
well-tuned systems (for advanced users). This option is not documented for
11gR2 but is still usable.
RULE Now obsolete and unsupported but still allowed. Always uses rule-based optimization.
If you are still using this, you need to start using cost-based optimization, as
rule-based optimization is no longer supported under Oracle 10g Release 2 and higher.
The default optimizer mode for Oracle 11g Release 2 is ALL_ROWS. Also, cost-based
optimization is used even if the tables are not analyzed. Although RULE/CHOOSE are
definitely desupported and obsolete, and people are often scolded for even talking about
them, I was able to set the mode to RULE in 11gR2. Compare that with the error I received
when I set OPTIMIZER_MODE to a mode that doesn't exist (SUPER_FAST):
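A quick sketch of switching the mode for a session; FIRST_ROWS_10 is one of the documented first-n-rows variants of the parameter:

```sql
-- Valid values include ALL_ROWS, FIRST_ROWS, FIRST_ROWS_1,
-- FIRST_ROWS_10, FIRST_ROWS_100 and FIRST_ROWS_1000.
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- Check the current setting:
SELECT value FROM v$parameter WHERE name = 'optimizer_mode';
```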
ALL_ROWS
The ALL_ROWS hint instructs the optimizer to optimize a statement block with
a goal of best throughput, which is minimum total resource consumption. For
example, the optimizer uses the query optimization approach to optimize this
statement for best throughput:
SELECT /*+ ALL_ROWS */ employee_id, last_name, salary, job_id
FROM employees
WHERE employee_id = 107;
If you specify either the ALL_ROWS or the FIRST_ROWS hint in a SQL
statement, and if the data dictionary does not have statistics about tables
accessed by the statement, then the optimizer uses default statistical values,
such as allocated storage for such tables, to estimate the missing statistics
and to subsequently choose an execution plan. These estimates might not be
as accurate as those gathered by the DBMS_STATS package, so you should
use the DBMS_STATS package to gather statistics.
If you specify hints for access paths or join operations along with either the
ALL_ROWS or FIRST_ROWS hint, then the optimizer gives precedence to the
access paths and join operations specified by the hints.
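For comparison, a FIRST_ROWS(n) hint telling the optimizer to optimize for retrieval of the first ten rows might look like this (table and columns borrowed from the earlier ALL_ROWS example):

```sql
-- Optimize for the first 10 rows, e.g. for paging results to a user.
SELECT /*+ FIRST_ROWS(10) */ employee_id, last_name, salary, job_id
FROM employees
ORDER BY last_name;
```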
First_rows attempts to optimize the query to get the very first row back to
the client as fast as possible. This is good for an interactive client server
environment where the client runs a query and shows the user the first 10
rows or so and waits for them to page down to get more.
All_rows attempts to optimize the query to get the very last row as fast as
possible. This makes sense in a stored procedure for example where the
client does not regain control until the stored procedure completes. You don't
care if you have to wait to get the first row if the last row gets back to you
twice as fast. In a client server/interactive application you may well care
about that.
In TOAD or SQL Navigator, when we select data it displays immediately, but that does not
mean it is faster: if we scroll down, the tool may still be fetching data in the
background. FIRST_ROWS is a good fit for an OLTP environment. It is also a good option in
some reporting environments, where the user wants to see the initial data first and the
rest later. When we run a query inside a stored procedure, FIRST_ROWS would not be a good
choice; ALL_ROWS is the better option there, because there is no benefit in fetching the
first few records immediately inside a stored procedure.
http://www.oracle.com/technetwork/database/bi-datawarehousing/pres-what-toexpect-from-optimizer--128499.pdf
optimizer_use_sql_plan_baselines
Monitoring SPM
Dictionary view DBA_SQL_PLAN_BASELINES
Via SQL Plan Control in EM DBControl
Managing SPM
PL/SQL package DBMS_SPM or via SQL Plan Control in EM DBControl
Requires the ADMINISTER SQL MANAGEMENT OBJECT privilege
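As a sketch of the DBMS_SPM route (the sql_id shown is a hypothetical placeholder), existing plans can be loaded from the cursor cache into a SQL plan baseline:

```sql
-- Requires the ADMINISTER SQL MANAGEMENT OBJECT privilege.
-- The sql_id below is a placeholder; take a real one from V$SQL.
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id => 'abcd1234efgh5');
  DBMS_OUTPUT.PUT_LINE(l_plans || ' plan(s) loaded');
END;
/
```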
Previously I wrote about how to view the plan of a SQL statement. Today I will tell you
about a good feature, the DBMS_XPLAN.DISPLAY_AWR function, which comes with Oracle 10g
and helps you view the plan of an old SQL statement. If you have licenses for the Tuning
Pack and Diagnostics Pack, you can get historical information about old SQL statements
that ran on your database. For more information about the licensing of these packs, refer
to the Oracle Database Licensing Information 10g Release 1 (10.1) manual.
DBMS_XPLAN.DISPLAY_AWR displays the contents of an execution plan stored in the AWR.
Syntax is;
DBMS_XPLAN.DISPLAY_AWR(
sql_id IN VARCHAR2,
plan_hash_value IN NUMBER DEFAULT NULL,
db_id IN NUMBER DEFAULT NULL,
format IN VARCHAR2 DEFAULT TYPICAL);
If the db_id parameter is not specified, the function uses the id of the local database.
If you don't specify the plan_hash_value parameter, the function returns all the stored
execution plans for the given sql_id.
The format parameter has many capabilities; you can get the full list from the manual.
A simple demonstration follows (all tests were done with 10.2.0.1 Express Edition).
You can also use DBA_HIST_SQL_PLAN table for viewing the historic plan info.
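A minimal invocation sketch; the sql_id is a placeholder, so substitute a real one taken from DBA_HIST_SQLTEXT or an AWR report:

```sql
-- All stored plans for this sql_id, default TYPICAL format:
SELECT plan_table_output
FROM   TABLE(DBMS_XPLAN.DISPLAY_AWR('1fkh93md0802n'));

-- Same sql_id with maximum detail:
SELECT plan_table_output
FROM   TABLE(DBMS_XPLAN.DISPLAY_AWR('1fkh93md0802n', NULL, NULL, 'ALL'));
```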
I just created a demo user, a table and an index on that table. Notice that the two
segments take about 1.5 GB of space, should you like to reproduce the demo yourself.
Right now, there are no optimizer statistics for the table:
SQL> select num_rows,blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
(NULL values here)
I am now going to gather statistics on the table manually; the same would be done
automatically by the standard scheduler job during the night:
SQL> exec dbms_stats.gather_table_stats('ADAM','SALES')
PL/SQL procedure successfully completed.
SQL> select num_rows,blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
  29933962     119585

SQL> select count(*) from sales;

  COUNT(*)
----------
  30000000
As we can see, the statistics are quite accurate, reflecting well the actual size of the table. The
index is used for the following query, as we can tell by runtime already:
SQL> set timing on
I am now going to introduce a problem with the optimizer statistics artificially, by
simply setting them to very inaccurate values. A real-world problem caused by new
optimizer statistics is a little harder to come up with; probably you will never
encounter one during your career.
SQL> exec dbms_stats.set_table_stats('ADAM','SALES',numrows=>100,numblks=>1)
PL/SQL procedure successfully completed.
SQL> select num_rows,blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
       100          1
With the above (completely misleading) statistics, the optimizer will think that a full
table scan of the sales table is fairly cheap. Please notice that I query for id 4712 and
not 4711 again, because the already computed execution plan for 4711 (an index range
scan) might still be available for reuse in the library cache. I could also flush the
shared pool here to make sure that a new execution plan has to be generated for id 4711.
SQL> select amount_sold from sales where id=4712;

AMOUNT_SOLD
-----------
       5000

Elapsed: 00:00:01.91
We can tell by the runtime of almost 2 seconds here that this was a full table scan. Proof would
be to retrieve the execution plan from the library cache. I leave that to your studies. Please be
aware that the autotrace feature might be misleading here. For our scope, it is enough to say that
we have an issue caused by generation of new optimizer statistics. We want to get back our good
old statistics! Therefore, we look at the historic optimizer statistics:
SQL> alter session set NLS_TIMESTAMP_TZ_FORMAT='yyyy-mm-dd:hh24:mi:ss';
Session altered.
SQL> select table_name,stats_update_time from user_tab_stats_history;

TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ -------------------------------------
SALES                          2010-05-18:09:47:16
SALES                          2010-05-18:09:47:38
We see two rows, representing the old statistics of the sales table. The first is from
the time when there were NULL entries (before the first gather_table_stats). The second
row represents the accurate statistics. I am going to restore them:
SQL> begin
dbms_stats.restore_table_stats('ADAM','SALES',
to_timestamp('2010-05-18:09:47:38','yyyy-mm-dd:hh24:mi:ss'));
end;
/
PL/SQL procedure successfully completed.
SQL> select num_rows,blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
  29933962     119585
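Restoring like this only works within the statistics history retention window (31 days by default). A hedged sketch of checking, and if needed changing, that window:

```sql
-- Oldest timestamp to which statistics can be restored:
SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM dual;

-- Current retention in days:
SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;

-- Change the retention, e.g. to 60 days:
EXEC DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60)
```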
After the statement has executed, you can display the plan by querying the V$SQL_PLAN view.
V$SQL_PLAN contains the execution plan for every statement stored in the cursor cache. Its
definition is similar to the PLAN_TABLE. See "PLAN_TABLE Columns".
The advantage of V$SQL_PLAN over EXPLAIN PLAN is that you do not need to know the
compilation environment that was used to execute a particular statement. For EXPLAIN PLAN, you
would need to set up an identical environment to get the same plan when executing the
statement.
The V$SQL_PLAN_STATISTICS view provides the actual execution statistics for every operation
in the plan, such as the number of output rows and elapsed time. All statistics, except the number
of output rows, are cumulative. For example, the statistics for a join operation also includes the
statistics for its two inputs. The statistics in V$SQL_PLAN_STATISTICS are available for cursors
that have been compiled with the STATISTICS_LEVEL initialization parameter set to ALL.
The V$SQL_PLAN_STATISTICS_ALL view enables side-by-side comparison of the optimizer's
estimates for the number of rows and elapsed time with the actual values. This view
combines the V$SQL_PLAN and V$SQL_PLAN_STATISTICS information for every cursor.
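A sketch of putting this together with DBMS_XPLAN.DISPLAY_CURSOR, which reads these V$ views and can print estimated versus actual row counts for each plan step (table name reused from the earlier demo):

```sql
-- Needed so that per-operation execution statistics are collected:
ALTER SESSION SET statistics_level = ALL;

SELECT count(*) FROM sales WHERE id = 4712;

-- NULL, NULL means "the last statement executed in this session";
-- ALLSTATS LAST shows E-Rows next to A-Rows for every operation.
SELECT plan_table_output
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```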