PERFORMANCE TUNING
ORACLE DATABASE PERFORMANCE AND TUNING

Table and Index Compression


Oracle tables can be compressed at the block level.
This feature is available from Oracle 9i onwards and is very useful for data warehouses and large read-only tables. Table compression can reduce storage space requirements (sometimes drastically) and may make queries run faster, because fewer blocks of data need to be read.
You can enable compression on a table at creation time or by altering the table. Remember that the existing data in the table is neither compressed nor uncompressed when you do the alter; only data loaded afterwards is affected.
CREATE TABLE COMPRESSTABLE (
COMPRESSCOL1 VARCHAR2 (20),
COMPRESSCOL2 DATE)
TABLESPACE TABLESPACEname
NOLOGGING
COMPRESS
PCTFREE 0;

ALTER TABLE COMPRESSTABLE COMPRESS;


Data compression is transparent to the user: you run queries against the table the same way as before. Oracle compresses data blocks only when the data is loaded via direct path. Qualifying statements include:
INSERT with the APPEND hint
INSERT with the PARALLEL hint (parallel DML)
CREATE TABLE ... AS SELECT
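For example, a direct-path load into the COMPRESSTABLE created above will produce compressed blocks (the source query here is purely illustrative):

INSERT /*+ APPEND */ INTO COMPRESSTABLE
SELECT SUBSTR(owner, 1, 20), created FROM all_objects;
COMMIT;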
Table compression is suitable for large tables where updates and deletes are close to none. If there are updates or deletes, you may end up using more space: to update a compressed row, Oracle has to uncompress it and insert it again, and the space freed by a deleted row may not be sufficient for the next inserted row, because conventional inserts are not compressed and direct-path inserts always load above the HWM.
You can compress either the entire table or selected partitions. It may be a good idea to compress the older data in a partitioned table. To do this, you perform:
ALTER TABLE MYPARTTABLE MOVE PARTITION JAN04 TABLESPACE COMP_DATA COMPRESS PCTFREE 0;
After the partition move, you may also have to do:
ALTER TABLE MYPARTTABLE
MODIFY PARTITION JAN04 REBUILD UNUSABLE LOCAL INDEXES;


Another place to use compression is when you create materialized views, because most MVs are read only. If the MV already exists, you may do:
ALTER MATERIALIZED VIEW MYMV COMPRESS;
The data will be compressed when the materialized view is refreshed.
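A complete refresh can be triggered with the standard DBMS_MVIEW package so that the compression takes effect right away; for example:

EXECUTE DBMS_MVIEW.REFRESH('MYMV', 'C');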
Restrictions:
We cannot specify data segment compression for an index-organized table, for any overflow segment or partition of an
overflow segment, or for any mapping table segment of an index-organized table.
We cannot specify data segment compression for hash partitions or for either hash or list sub-partitions.
We cannot specify data segment compression for an external table.
The dictionary views DBA_TABLES and DBA_TAB_PARTITIONS have a column named COMPRESSION, which will be either DISABLED or ENABLED.
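For example, to verify the setting on the table created earlier:

SELECT table_name, compression FROM dba_tables WHERE table_name = 'COMPRESSTABLE';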
Index Key Compression
Oracle index key compression lets you compress the leading columns of an index (or index-organized table) to save space. Oracle compresses only nonpartitioned indexes that are non-unique, or unique indexes of at least two columns. Bitmap indexes cannot be compressed.
Usually, keys in an index have two pieces, a grouping piece and a unique piece. If the key is not defined to have a unique
piece, Oracle provides one in the form of a rowid appended to the grouping piece. Key compression is a method of breaking
off the grouping piece and storing it so it can be shared by multiple unique pieces.
Key compression is achieved by breaking the index entry into two pieces: a prefix entry (the grouping piece) and a suffix entry (the unique piece). Key compression is done within an index block but not across multiple index blocks. Suffix entries form the compressed version of index rows. Each suffix entry references a prefix entry, which is stored in the same index block as the suffix entry.
Although key compression reduces the storage requirements of an index, it can increase the CPU time required to reconstruct
the key column values during an index scan. It also incurs some additional storage overhead, because every prefix entry has
an overhead of 4 bytes associated with it.
Example creating a compressed index-organized table:

CREATE TABLE INDEXKEYCOM
(OWNER VARCHAR2(30),
TABLE_NAME VARCHAR2(30),
TABLESPACE_NAME VARCHAR2(30),
PRIMARY KEY (OWNER, TABLE_NAME))
ORGANIZATION INDEX
COMPRESS;

Example creating a compressed index:

CREATE INDEX pidx_INDEXKEYCOM
ON INDEXKEYCOM (OWNER, TABLE_NAME, TABLESPACE_NAME)
TABLESPACE IKEYCOMPRESS_TS
COMPRESS;
We can specify an integer along with the COMPRESS clause, which specifies the number of prefix columns to compress. For
unique indexes, the valid range of prefix length values is from 1 to the number of key columns minus 1. The default is the
number of key columns minus 1. For non-unique indexes, the valid range is from 1 to the number of key columns. The default
is the number of key columns.
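For instance, to compress only the first two columns of the non-unique index shown above:

CREATE INDEX pidx_INDEXKEYCOM
ON INDEXKEYCOM (OWNER, TABLE_NAME, TABLESPACE_NAME)
TABLESPACE IKEYCOMPRESS_TS
COMPRESS 2;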

Oracle Resumable Space Allocation


ORA-0165* errors are common and always need special attention; otherwise the entire workload may be impacted.
If you often have issues with batch jobs running out of space and producing "unable to extend" errors in the database, Oracle can now suspend the session in error until you add more space, and then resume the session from where it left off. Resumable space allocation can be used for the following errors:

ORA-1653 unable to extend table ... in tablespace ...


ORA-1654 unable to extend index ... in tablespace ...
ORA-1650 unable to extend rollback segment ... in tablespace ...
ORA-1628 max # extents ... reached for rollback segment ...
ORA-1632 max # extents ... reached in index ...
ORA-1631 max # extents ... reached in table ...

The session needs to enable resumable mode using:

ALTER SESSION ENABLE RESUMABLE;
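A timeout in seconds and a name can also be supplied, which makes a suspended statement easier to identify (the values here are illustrative):

ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'nightly_batch_load';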
Oracle Dictionary views can be queried to obtain information about the status of resumable statements:
V$SESSION_WAIT
When a statement is suspended, the session invoking the statement is put into a wait state. A row is inserted into this view for the session, with the EVENT column containing "statement suspended, wait error to be cleared".
DBA_RESUMABLE and USER_RESUMABLE
These views contain rows for all currently executing or suspended resumable statements. They can be used by a DBA, an AFTER SUSPEND trigger, or another session to monitor the progress of, or obtain specific information about, resumable statements.
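For example, a minimal query against the documented DBA_RESUMABLE columns to see what is suspended and why:

SELECT name, status, suspend_time, error_msg FROM dba_resumable WHERE status = 'SUSPENDED';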

Invisible Indexes in Oracle 11g

Introduction

Oracle 11g allows indexes to be marked as invisible. Invisible indexes are maintained like any other index, but they are ignored by the optimizer unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session level.
An index can be created as invisible by using the INVISIBLE keyword, and its visibility can be toggled using the ALTER INDEX command.
CREATE INDEX index_name ON table_name(column_name) INVISIBLE;
ALTER INDEX index_name INVISIBLE;
ALTER INDEX index_name VISIBLE;
A query using the indexed column in the WHERE clause ignores the invisible index and does a full table scan.
Create a table with an invisible index and run a query with AUTOTRACE enabled:
SET AUTOTRACE ON
SELECT * FROM invisible_table WHERE id = 9999;
---------------------------------------------------------------------------------------
| Id  | Operation         | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                 |     1 |     3 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| INVISIBLE_TABLE |     1 |     3 |     7   (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Changing the OPTIMIZER_USE_INVISIBLE_INDEXES parameter makes the index available to the optimizer:
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;
SELECT * FROM invisible_table WHERE id = 9999;
------------------------------------------------------------------------------------------
| Id  | Operation        | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                    |     1 |     3 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| INVISIBLE_TABLE_ID |     1 |     3 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Making the index visible means it is available to the optimizer even after the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is reset:

ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=FALSE;
ALTER INDEX invisible_table_id VISIBLE;
------------------------------------------------------------------------------------------
| Id  | Operation        | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                    |     1 |     3 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| INVISIBLE_TABLE_ID |     1 |     3 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Invisible indexes can be useful for processes with specific indexing needs, where the presence of the indexes may adversely affect other functional areas. They are also useful for testing the impact of dropping an index.
The visibility status of an index is indicated by the VISIBILITY column of the [DBA|ALL|USER]_INDEXES views.
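For example, assuming the INVISIBLE_TABLE_ID index used above:

SELECT index_name, visibility FROM user_indexes WHERE index_name = 'INVISIBLE_TABLE_ID';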
Oracle Virtual Indexes
Introduction

Virtual indexes are another undocumented Oracle feature. As the name suggests, they are pseudo-indexes that do not behave the way normal indexes do, and they are meant for a very specific purpose.
A virtual index is created in a slightly different manner than normal indexes. A virtual index has no segment pegged to it, i.e., the DBA_SEGMENTS view will not show an entry for it. Oracle handles such indexes internally, and the few required dictionary tables are updated so that the optimizer is aware of its presence and can generate an execution plan that considers it.
This functionality is not intended for standalone usage. It is part of the Oracle Enterprise Manager Tuning Pack (Virtual Index Wizard). The virtual index wizard functionality allows the user to test a potential new index prior to actually building it in the database. It allows the CBO to evaluate the potential new index for a selected SQL statement by building an explain plan that is aware of the potential new index. This lets the user determine whether the optimizer would use the index once implemented.
This feature is meant to be used from Enterprise Manager, not standalone. I went a bit further and actually tested it using SQL*Plus, basically trying to use the same feature without Enterprise Manager. From a developer's angle, I could not see much use for virtual indexes, since we can create and drop indexes while testing. However, this feature could prove handy if a query or group of queries has to be tested in production (for want of simulation or urgency!) to determine whether a new index will improve performance, without impacting existing or new sessions.
Some attributes of virtual indexes:
a. They are permanent and continue to exist unless we drop them.
b. Their creation will not affect existing or new sessions. Only sessions marked for virtual index usage become aware of their existence.
c. Virtual indexes are used only when the hidden parameter "_use_nosegment_indexes" is set to true.
d. The rule-based optimizer does not recognize virtual indexes; the CBO does. In any case, the RBO is obsolete from Oracle 10g onwards.
e. The dictionary view DBA_SEGMENTS will not show an entry for virtual indexes. DBA_INDEXES and DBA_OBJECTS have an entry for them in Oracle 8i; from Oracle 9i onwards, DBA_INDEXES no longer shows virtual indexes.
f. Virtual indexes cannot be altered; attempting to do so throws a "fake index" error!
g. Virtual indexes can be analyzed using the ANALYZE command or the DBMS_STATS package, but the statistics cannot be viewed (in Oracle 8i, DBA_INDEXES will not show this either). Oracle may be generating artificial statistics and storing them somewhere for later reference.

Creating Virtual Index


create unique index am304_u1 on am304(col2) nosegment;

Parameter _USE_NOSEGMENT_INDEXES

This is a hidden/internal parameter and therefore undocumented. Such parameters should not be altered in Oracle databases unless Oracle Support advises or recommends it. In our case we make an exception (!), but only at the session level; do not set it for the entire instance.
Setting the "_use_nosegment_indexes" parameter enables the optimizer to use virtual indexes.
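For example, at the session level only:

ALTER SESSION SET "_use_nosegment_indexes" = TRUE;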

Examples:

Creating the virtual index:


create index vinew on tb(a1) nosegment;
Checking some dictionary tables:
01. select segment_name, segment_type, bytes from dba_segments where segment_name = 'VINEW';
This returns "no rows selected", confirming the virtual index has no segment.
02. select object_name, object_type, status from dba_objects where object_name = 'VINEW';

OBJECT_NAME  OBJECT_TYPE  STATUS
------------ ------------ -------
VINEW        INDEX        VALID

03. select index_name, index_type, status from dba_indexes where index_name = 'VINEW';

INDEX_NAME  INDEX_TYPE  STATUS
----------- ----------- -------
VINEW       NORMAL      VALID

Virtual Index will not prevent the creation of an index with the same column(s).
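For example, continuing with the hypothetical tb(a1) used above, a real index on the same column can be created alongside the virtual one:

create index vinew_real on tb(a1);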

How to find virtual indexes in the database?


A virtual index can be created in an Oracle database even though it has no physical segment or storage location. It is created with the NOSEGMENT clause for testing purposes.

SQL> create table test11 (a number,b number);


Table created.
SQL> create index v_test11 on test11(a) nosegment;
Index created.
SQL> select index_name,owner from dba_indexes
where index_name='V_TEST11' and owner='SYS';
no rows selected
SQL> select index_owner,index_name,column_name,table_name from dba_ind_columns
2 where index_owner='SYS' and index_name='V_TEST11';
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
SYS V_TEST11 A TEST11
SQL> select index_name from dba_ind_columns
  2  minus
  3  select index_name from dba_indexes;

INDEX_NAME
------------------------------
AAA_V
V_T1_A
V_TEST11
V_TEST1_A
SQL> select owner,object_id
  2  from dba_objects
  3  where object_name='V_TEST11';

OWNER                           OBJECT_ID
------------------------------ ----------
SYS                                  7718

SQL> select owner,object_id,object_type,status,temporary from dba_objects
  2  where object_name='V_TEST11';

OWNER                           OBJECT_ID OBJECT_TYPE STATUS  T
------------------------------ ---------- ----------- ------- -
SYS                                  7718 INDEX       VALID   N
SQL> create index test11_b on test11(b);
Index created.
SQL> select object_name,object_id,object_type from dba_objects
  2  where object_type='INDEX' and owner='SYS' and object_name like '%TEST%';

OBJECT_NAM  OBJECT_ID OBJECT_TYPE
---------- ---------- -----------
TEST11_B         7730 INDEX
V_TEST11         7718 INDEX

SQL> select obj#,ts#,file#,block#,type#,flags,property from ind$ where obj# in (7718,7730);

      OBJ#        TS#      FILE#     BLOCK#      TYPE#      FLAGS   PROPERTY
---------- ---------- ---------- ---------- ---------- ---------- ----------
      7730          0          1      15832          1          0          0
      7718          0          0          0          1       4096          0

The query above shows that in IND$ the FLAGS value for the virtual index is 4096, while for the normal index it is 0. That means the virtual indexes can be found ONLY with the following queries:
SQL> select index_name from dba_ind_columns
2 minus
3 select index_name from dba_indexes;
AND
SQL> select obj#,ts#,file#,block#,type#,flags,property from ind$ where flags=4096
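When testing is over, a virtual index is removed with an ordinary DROP INDEX:

SQL> drop index v_test11;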

STATSPACK Statistics Level


"DBAs can change the amount of information or detail of statistics Statspack gathers by specifying a snapshot level. The level
you choose dictates how much data Statspack collects. Level 5 is the default.

Level 0: Statspack collects general performance statistics such as wait statistics, system events, system statistics, rollback-segment data, row cache, SGA, background events, session events, lock statistics, buffer-pool statistics, and parent latch data.
Level 5: Statspack collects all the statistics it gathers at level 0 plus performance data about high-resource-usage SQL
statements.
Level 10: Statspack collects all the statistics from level 0 and level 5 as well as child-latch information. At level 10, the
snapshot can sometimes take longer to gather data because level 10 can be resource-intensive. You should use it only on the
advice of Oracle personnel.
Levels 5 and 10 capture high-resource SQL statements that exceed any of the following four threshold parameters:
the number of executions of the SQL statement (default = 100)
the number of disk reads the SQL statement performs (default = 1,000)
the number of parse calls the SQL statement performs (default = 1,000)
the number of buffer gets the SQL statement performs (default = 10,000)
If a SQL statement's resource usage exceeds any one of these threshold values, Statspack captures the statement when it takes a snapshot.
To specify the statistics level for a particular statspack snapshot, use the command;
SQL> execute statspack.snap (i_snap_level=> statistics_level);
To change the default value for this and all future snapshots, use the command;
SQL> execute statspack.snap (i_snap_level=> statistics_level, i_modify_parameter=> 'true');
Bear in mind, though, that the default statistics level is actually 5, which is usually enough to capture all the information you need (long-running SQL queries, in my case). With Oracle 9i, level 6 stores the explain plans for these SQL statements, whilst with 9.2, level 7 gathers segment statistics. As the article says, only use 10 if you're asked to by Oracle Support.
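The documented MODIFY_STATSPACK_PARAMETER procedure offers another way to change the default level for future snapshots; for example:

SQL> execute statspack.modify_statspack_parameter(i_snap_level => 7);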
Setting the Statistics Levels
In order for Oracle to collect those statistics, you must have the proper initialization parameter set in the instance. The parameter is STATISTICS_LEVEL and is set in the init.ora. The good news is that it is modifiable via the ALTER SYSTEM command, and some underlying parameters are even modifiable via ALTER SESSION. This parameter can take three values:
1. BASIC: At this setting Oracle does not collect any stats. Although this is not recommended, you may decide to set it in a fine-tuned production system to save some overhead.
2. TYPICAL: This is the default value. In this setting, Oracle collects the following statistics.

Buffer Cache: These statistics advise the DBA how to tune the multiple buffer pools. They can also be collected independently by setting the DB_CACHE_ADVICE parameter in the initialization file, the stored parameter file, or via ALTER SYSTEM or ALTER SESSION. If it is set independently, that setting takes precedence over the statistics level setting.

Mean Time to Recover: These statistics help the DBA set an acceptable Mean Time to Recover (MTTR), sometimes driven by the requirements of Service Level Agreements with the users.

Shared Pool Sizing: Oracle can provide valuable clues for sizing the shared pool effectively based on usage, and these statistics provide that information.

Segment Level Statistics: These statistics are collected at the segment level to help determine the wait events occurring at each segment. We are interested in these statistics.

PGA Target: These statistics help tune the Program Global Area effectively based on usage.

Timed Statistics: This is an old concept. Timed statistics were enabled in earlier versions with the initialization parameter TIMED_STATISTICS. However, the statistic was so useful that Oracle made it a default with the setting of STATISTICS_LEVEL. It can be set independently, too; and if set, it overrides the STATISTICS_LEVEL setting.

3. ALL: In this setting, all of the above statistics are collected as well as an additional two.
Row Source Execution Stats: These statistics help tune SQL statements by storing the execution statistics with the parser. This can be an extremely useful tool in the development stages.
Timed OS Statistics: Along with the timed statistics, if the operating system permits it, Oracle can also collect timed statistics from the host. Certain operating systems, like Unix, allow it. It too can be set independently; and if set, it overrides the STATISTICS_LEVEL setting.
If you set these via any of the three methods, Initialization File, ALTER SYSTEM or ALTER SESSION, you can find out the
current setting by querying the view V$STATISTICS_LEVEL as follows:
SELECT ACTIVATION_LEVEL, STATISTICS_NAME, SYSTEM_STATUS, SESSION_STATUS FROM V$STATISTICS_LEVEL ORDER
BY ACTIVATION_LEVEL, STATISTICS_NAME;
So, set STATISTICS_LEVEL to TYPICAL either by ALTER SYSTEM or in the initialization parameter file. Do not forget to restart the database if you choose the latter.
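For example, a dynamic change that also persists across restarts (assuming an spfile is in use):

ALTER SYSTEM SET statistics_level = TYPICAL SCOPE = BOTH;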

Table Partition Performance analysis


Collection of Statistics for Cost-Based Optimization: DBMS_STATS vs. ANALYZE
The cost-based approach relies on statistics; if it is used, then statistics should be generated for all tables, clusters, and all types of indexes accessed by your SQL statements. If the size and data distribution of your tables change frequently, then generate statistics regularly to ensure that the statistics accurately represent the data in the tables.
SELECT * FROM TEST;
This uses global statistics but no predicate.
SELECT * FROM TEST S WHERE S.AMOUNT_OF_SHIFT > 1000;
This uses a predicate that may span more than one partition and may use global statistics.
SELECT * FROM TEST PARTITION (SEP2009) S WHERE S.AMOUNT_OF_SHIFT > 1000;
This uses global statistics and a predicate restricted to one partition.
Gathering global statistics with the DBMS_STATS package is more useful because ANALYZE always runs serially, while DBMS_STATS can run in serial or parallel. Whenever possible, DBMS_STATS calls a parallel query to gather statistics with the specified degree of parallelism; otherwise, it calls a serial query or the ANALYZE statement. Index statistics are not gathered in parallel.
ANALYZE gathers statistics for the individual partitions and then calculates the global statistics from the partition statistics. DBMS_STATS can gather separate statistics for each partition as well as global statistics for the entire table or index. Depending on the SQL statement being optimized, the optimizer may choose to use either the partition (or subpartition) statistics or the global statistics.
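A side-by-side sketch of the two approaches (schema name and degree are illustrative):

ANALYZE TABLE test COMPUTE STATISTICS;
EXECUTE dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'TEST', degree => 4);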


CREATE TABLE PARTTAB(
ordid NUMBER,
PARTCOL DATE,
DETAILS NUMBER,
AMOUNT NUMBER)
PARTITION BY RANGE(PARTCOL)
SUBPARTITION BY HASH(DETAILS) SUBPARTITIONS 2
(PARTITION q1 VALUES LESS THAN(TO_DATE('01-04-2009','DD-MM-YYYY')) TABLESPACE TBLSPCE1,
PARTITION q2 VALUES LESS THAN(TO_DATE('01-07-2009','DD-MM-YYYY')) TABLESPACE TBLSPCE2,
PARTITION q3 VALUES LESS THAN(TO_DATE('01-10-2009','DD-MM-YYYY')) TABLESPACE TBLSPCE3,
PARTITION q4 VALUES LESS THAN(TO_DATE('01-12-2009','DD-MM-YYYY')) TABLESPACE TBLSPCE4
);
A local non-prefixed index will be associated with it:
CREATE INDEX IDX_PARTTAB ON PARTTAB (ordid) LOCAL;
The PARTTAB table was populated before running the following examples.
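A minimal population sketch consistent with the statistics shown below (400 rows, 100 per quarterly partition, ORDID and AMOUNT left NULL, 100 distinct DETAILS values; purely illustrative):

INSERT INTO PARTTAB (ordid, PARTCOL, DETAILS, AMOUNT)
SELECT NULL,
       ADD_MONTHS(TO_DATE('15-01-2009','DD-MM-YYYY'), 3 * MOD(rownum, 4)),
       MOD(rownum, 100) + 1,
       NULL
FROM dual CONNECT BY level <= 400;
COMMIT;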

GATHER_TABLE_STATS
Collects table, column, and index statistics.
Compute, serial mode, without histograms, Default granularity.
SQL> execute dbms_stats.gather_table_stats( -
>  ownname          => 'test', -
>  tabname          => 'PARTTAB', -
>  partname         => null, -                    --> gather stats on all partitions
>  estimate_percent => null, -                    --> compute mode
>  block_sample     => false, -                   --> default value; no sense in compute mode
>  method_opt       => 'FOR ALL COLUMNS SIZE 1', - --> table and column statistics, no histograms generated
>  degree           => null, -                    --> default parallel degree, based on the DOP set on PARTTAB
>  granularity      => 'default', -               --> gather global and partition statistics
>  cascade          => true, -                    --> index statistics gathered as well
>  stattab          => null, -                    --> statistics stored in the dictionary
>  statid           => null, -
>  statown          => null);
PL/SQL procedure successfully completed.

Note that index statistics are gathered only because CASCADE => TRUE is specified; they are not calculated by default.

SQL> select table_name, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, AVG_ROW_LEN,
  2  GLOBAL_STATS, USER_STATS, sample_size from user_tables
  3  where table_name = 'PARTTAB';

TABLE_NAME NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE AVG_ROW_LEN GLOBAL_STATS USER_STATS SAMPLE_SIZE
---------- -------- ------ ------------ --------- ----------- ------------ ---------- -----------
PARTTAB         400      8            0         0          11 YES          NO                 400

Now the statistics have been updated, and the GLOBAL_STATS column has also been initialized.
SQL> select partition_name "Partition", NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, AVG_ROW_LEN,
  2  SAMPLE_SIZE, global_stats, user_stats
  3  from user_tab_partitions
  4  where table_name = 'PARTTAB'
  5  order by partition_position
  6  /

Partition NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE AVG_ROW_LEN GLOBAL_STATS USER_STATS SAMPLE_SIZE
--------- -------- ------ ------------ --------- ----------- ------------ ---------- -----------
Q1             100      2            0         0          11 YES          NO                 100
Q2             100      2            0         0          11 YES          NO                 100
Q3             100      2            0         0          11 YES          NO                 100
Q4             100      2            0         0          11 YES          NO                 100

The statistics are again obtained at the table level, as shown by GLOBAL_STATS.
SQL> select partition_name "Partition", subpartition_name "Subpartition", NUM_ROWS, BLOCKS, EMPTY_BLOCKS,
  2  SAMPLE_SIZE, global_stats, user_stats
  3  from user_tab_subpartitions
  4  where table_name = 'PARTTAB'
  5  order by partition_name, subpartition_position
  6  /

Partition Subpartition  NUM_ROWS BLOCKS EMPTY_BLOCKS SAMPLE_SIZE GLOBAL_STATS USER_STATS
--------- ------------- -------- ------ ------------ ----------- ------------ ----------
Q1        SYS_SUBP10365                                          NO           NO
Q1        SYS_SUBP10366                                          NO           NO
Q2        SYS_SUBP10367                                          NO           NO
Q2        SYS_SUBP10368                                          NO           NO
Q3        SYS_SUBP10369                                          NO           NO
Q3        SYS_SUBP10370                                          NO           NO
Q4        SYS_SUBP10371                                          NO           NO
Q4        SYS_SUBP10372                                          NO           NO

The statistics are not computed at the subpartition level, which is consistent with the 'DEFAULT' granularity.

SQL> select COLUMN_NAME, NUM_DISTINCT, DENSITY, NUM_NULLS, NUM_BUCKETS, LAST_ANALYZED
  2  from user_tab_col_statistics where table_name = 'PARTTAB'
  3  /

COLUMN_NAME NUM_DISTINCT DENSITY NUM_NULLS NUM_BUCKETS LAST_ANALYZED
----------- ------------ ------- --------- ----------- -------------
ORDID                  0       0       400           1 12-DEC-02
PARTCOL                4     .25         0           1 12-DEC-02
DETAILS              100     .01         0           1 12-DEC-02
AMOUNT                 0       0       400           1 12-DEC-02

NUM_BUCKETS is set to 1 because no histograms were generated, but the column statistics are properly initialized.
The same result is shown below for the columns of each partition:
SQL> select partition_name, COLUMN_NAME, NUM_DISTINCT, DENSITY, NUM_NULLS, NUM_BUCKETS,
  2  LAST_ANALYZED from user_part_col_statistics
  3  where table_name = 'PARTTAB'
  4  /

PARTITION_ COLUMN_NAME NUM_DISTINCT DENSITY NUM_NULLS NUM_BUCKETS LAST_ANALYZED
---------- ----------- ------------ ------- --------- ----------- -------------
Q1         ORDID                  0       0       100           1 12-DEC-07
Q1         PARTCOL                1       1         0           1 12-DEC-07
Q1         DETAILS              100     .01         0           1 12-DEC-07
Q1         AMOUNT                 0       0       100           1 12-DEC-07
Q2         ORDID                  0       0       100           1 12-DEC-07
Q2         PARTCOL                1       1         0           1 12-DEC-07
Q2         DETAILS              100     .01         0           1 12-DEC-07
Q2         AMOUNT                 0       0       100           1 12-DEC-07
Q3         ORDID                  0       0       100           1 12-DEC-07
Q3         PARTCOL                1       1         0           1 12-DEC-07
Q3         DETAILS              100     .01         0           1 12-DEC-07
Q3         AMOUNT                 0       0       100           1 12-DEC-07
Q4         ORDID                  0       0       100           1 12-DEC-07
Q4         PARTCOL                1       1         0           1 12-DEC-07
Q4         DETAILS              100     .01         0           1 12-DEC-07
Q4         AMOUNT                 0       0       100           1 12-DEC-07

The statistics loaded for subpartitions of the PARTTAB table are displayed below:
SQL> select subpartition_name "Subpartition", COLUMN_NAME, NUM_DISTINCT, DENSITY, NUM_NULLS,
  2  NUM_BUCKETS from dba_subpart_col_statistics where table_name = 'PARTTAB'
  3  order by column_name
  4  /

Subpartition  COLUMN_NAME NUM_DISTINCT DENSITY NUM_NULLS NUM_BUCKETS
------------- ----------- ------------ ------- --------- -----------
SYS_SUBP10365 PARTCOL
SYS_SUBP10365 ORDID
SYS_SUBP10365 DETAILS
SYS_SUBP10365 AMOUNT
SYS_SUBP10366 PARTCOL
SYS_SUBP10366 ORDID
SYS_SUBP10366 DETAILS
SYS_SUBP10366 AMOUNT
SYS_SUBP10367 PARTCOL
SYS_SUBP10367 ORDID
SYS_SUBP10367 DETAILS
SYS_SUBP10367 AMOUNT
SYS_SUBP10368 PARTCOL
SYS_SUBP10368 ORDID
SYS_SUBP10368 DETAILS
SYS_SUBP10368 AMOUNT
SYS_SUBP10369 PARTCOL
SYS_SUBP10369 ORDID
SYS_SUBP10369 DETAILS
SYS_SUBP10369 AMOUNT
SYS_SUBP10370 PARTCOL
SYS_SUBP10370 ORDID
SYS_SUBP10370 DETAILS
SYS_SUBP10370 AMOUNT
SYS_SUBP10371 PARTCOL
SYS_SUBP10371 ORDID
SYS_SUBP10371 DETAILS
SYS_SUBP10371 AMOUNT
SYS_SUBP10372 PARTCOL
SYS_SUBP10372 ORDID
SYS_SUBP10372 DETAILS
SYS_SUBP10372 AMOUNT

No statistics were loaded for the subpartition columns.

Here, partitioned objects contain more than one set of statistics. This is because statistics can be generated for the entire object, for each partition, or for each subpartition.

A Roadmap To Query Tuning


For each SQL statement, there are different approaches that could be used to retrieve the required data. Optimization is the process of choosing the most efficient way to retrieve this data, based upon the evaluation of a number of different criteria. The CBO bases optimization choices on pre-gathered table and index statistics, while the RBO makes its decisions based on a set of rules and does not rely on any statistical information. The CBO's reliance on statistics makes it vastly more flexible than the RBO, since as long as up-to-date statistics are maintained, it will accurately reflect real data volumes. The RBO is desupported in Oracle 10g.
To gather 10046 trace at the session level:
alter session set timed_statistics = true;
alter session set statistics_level = all;
alter session set max_dump_file_size = unlimited;
alter session set events '10046 trace name context forever, level 12';
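The trace can later be switched off in the same session with the matching event syntax:

alter session set events '10046 trace name context off';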

Features of DBMS_SUPPORT Package


This article describes an undocumented feature of Oracle; there is no guarantee that the results will be exactly as described for all releases.
Installing the DBMS_SUPPORT Package
-----------------------------------
[oracle@localhost admin]$ ls -ltr *supp*
-rw-r----- 1 oracle oracle 1546 Feb 27 2001 dbmssupp.sql
-rw-r----- 1 oracle oracle 1198 Sep 19 2005 prvtsupp.plb

SQL> @$ORACLE_HOME/rdbms/admin/dbmssupp

-- run your select(s) --
SQL> exec DBMS_SUPPORT.START_TRACE;
PL/SQL procedure successfully completed.
SQL> /* Execute your query */
SQL> exec DBMS_SUPPORT.STOP_TRACE;
PL/SQL procedure successfully completed.
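DBMS_SUPPORT can also trace another session, identified by its SID and serial number (the values below are illustrative):

SQL> exec DBMS_SUPPORT.START_TRACE_IN_SESSION(158, 3, waits => true, binds => false);
SQL> exec DBMS_SUPPORT.STOP_TRACE_IN_SESSION(158, 3);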


Trace Output:
System name:    Linux
Node name:      localhost.localdomain
Release:        2.6.18-53.el5xen
Version:        #1 SMP Sat Nov 10 19:46:12 EST 2007
Machine:        x86_64
Instance name: orcl
Redo thread mounted by this instance: 1
Oracle process number: 16
Unix process pid: 4947, image: oracle@localhost.localdomain (TNS V1-V3)
*** 2008-01-21 12:00:25.204
*** SERVICE NAME:(SYS$USERS) 2008-01-21 12:00:25.204
*** SESSION ID:(158.3) 2008-01-21 12:00:25.204
=====================
PARSING IN CURSOR #6 len=198 dep=1 uid=0 oct=3 lid=0 tim=1172745727738352 hv=4125641360 ad='6c2b8cc0'
select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, spare2 from obj$ where owner#=:1 and name=:2
and namespace=:3 and remoteowner is null
and linkname is null and subname is null
END OF STMT
PARSE #6:c=0,e=620,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=1172745727738344
=====================
PARSING IN CURSOR #3 len=121 dep=2 uid=0 oct=3 lid=0 tim=1172745727740552 hv=3150898423 ad='6c1ddee0'
select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by
bucket
END OF STMT
PARSE #3:c=0,e=587,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=3,tim=1172745727740544
EXEC #3:c=0,e=2148,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=3,tim=1172745727742876
=====================

How to interpret the internal trace output


STAT  - Lines report explain plan statistics for the numbered <CURSOR>.
PARSE - Parse a statement.
EXEC  - Execute a pre-parsed statement.
FETCH - Fetch rows from a cursor.

This is a very brief explanation for interpreting and investigating query parsing and wait events.
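The raw trace file is normally summarized with the tkprof utility before reading; a typical invocation (file names are illustrative):

tkprof orcl_ora_4947.trc orcl_ora_4947.txt sys=no sort=exeela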

Understanding the SCN


In order to understand how Oracle performs recovery, it is first necessary to understand Oracle's SCN in terms of the various places where it can be stored and how it is used for instance and media recovery.
The SCN is an internal number maintained by the database management system (DBMS) to log changes made to a database. The SCN increases over time as changes are made to the database by Structured Query Language (SQL). By understanding how the SCN is used, you can understand how Oracle recovery works. Oracle9i enables you to examine the current SCN using the following SQL:
SQL> select dbms_flashback.get_system_change_number from dual;
Whenever an application commits a transaction, the log writer process (LGWR) writes records from the redo log
buffers in the System Global Area (SGA) to the online redo logs on disk. LGWR also writes the transaction's SCN to
the online redo log file. The success of this atomic write event determines whether your transaction succeeds, and
it requires a synchronous (wait-until-completed) write to disk.
Note: The need for a synchronous write upon commit is one of the reasons why the online redo log can become a
bottleneck for applications and why you should commit as infrequently as is practical. In general, Oracle writes
asynchronously to the database datafiles for performance reasons, but commits require a synchronous write
because they must be guaranteed at the time they occur.
SCN and Checkpoints:
A checkpoint occurs when all modified database buffers in the Oracle SGA are written out to datafiles by the
database writer (DBWn) process. The checkpoint process (CKPT) updates all datafiles and control files with the SCN
at the time of the checkpoint and signals DBWn to write out the blocks. A successful checkpoint guarantees that all
database changes up to the checkpoint SCN have been recorded in the datafiles. As a result, only those changes
made after the checkpoint need to be applied during recovery. Checkpoints occur automatically as follows:

Whenever a redo log switch takes place

Whenever the time set by the LOG_CHECKPOINT_TIMEOUT initialization parameter is reached

Whenever the amount of redo written reaches the number of bytes associated with the
LOG_CHECKPOINT_INTERVAL

Typically, LOG_CHECKPOINT_INTERVAL is chosen so that checkpoints only occur on log switches. Oracle stores the
SCN associated with the checkpoint in four places: three of them in the control file and one in the datafile header
for each datafile.
The System Checkpoint SCN:
After a checkpoint completes, Oracle stores the system checkpoint SCN in the control file. You can access the
checkpoint SCN using the following SQL:
SQL> select checkpoint_change# from v$database;
CHECKPOINT_CHANGE#
------------------
            292767
The Datafile Checkpoint SCN:
After a checkpoint completes, Oracle stores the SCN individually in the control file for each datafile. The following
SQL shows the datafile checkpoint SCN for a single datafile in the control file:
SQL> select name,checkpoint_change# from v$datafile where name like '%users01%';
NAME                               CHECKPOINT_CHANGE#
---------------------------------- ------------------
/u02/oradata/OMFD1/users01.dbf                 292767
The Start SCN:
Oracle stores the checkpoint SCN value in the header of each datafile. This is referred to as the start SCN because
it is used at instance startup time to check if recovery is required. The following SQL shows the checkpoint SCN in
the datafile header for a single datafile:
SQL> select name,checkpoint_change# from v$datafile_header where name like '%users01%';
NAME                               CHECKPOINT_CHANGE#
---------------------------------- ------------------
/u02/oradata/OMFD1/users01.dbf                 292767

The Stop SCN:


The stop SCN is held in the control file for each datafile. The following SQL shows the stop SCN for a single datafile
when the database is open for normal use:
SQL> select name,last_change# from v$datafile where name like '%users01%';
NAME                               LAST_CHANGE#
---------------------------------- ------------
/u02/oradata/OMFD1/users01.dbf
During normal database operation, the stop SCN is NULL for all datafiles that are online in read-write mode.
SCN Values While the Database Is Up:
Following a checkpoint while the database is up and open for use, the system checkpoint in the control file, the datafile checkpoint SCN in the control file, and the start SCN in each datafile header all match. The stop SCN for each datafile in the control file is NULL.
SCN Values After a Clean Shutdown:
After a clean database shutdown resulting from a SHUTDOWN IMMEDIATE or SHUTDOWN NORMAL of the database, followed by STARTUP MOUNT, the previous queries on v$database and v$datafile return the following:
SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
------------------
            293184

SQL> select name,checkpoint_change#,last_change# from v$datafile where name like '%user%';

NAME                               CHECKPOINT_CHANGE# LAST_CHANGE#
---------------------------------- ------------------ ------------
/u02/oradata/OMFD1/users01.dbf                 293184       293184

SQL> select name,checkpoint_change# from v$datafile_header where name like '%users01%';

NAME                               CHECKPOINT_CHANGE#
---------------------------------- ------------------
/u02/oradata/OMFD1/users01.dbf                 293184
During a clean shutdown, a checkpoint is performed and the stop SCN for each datafile is set to the start SCN from
the datafile header. Upon startup, Oracle checks the start SCN in the file header with the datafile checkpoint SCN.
If they match, Oracle checks the start SCN in the datafile header with the datafile stop SCN in the control file. If
they match, the database can be opened because all block changes have been applied, no changes were lost on
shutdown, and therefore no recovery is required on startup. After the database is opened, the datafile stop SCN in
the control file once again changes to NULL to indicate that the datafile is open for normal use.

SCN after an Instance Crash


The previous example showed the behavior of the SCN after a clean shutdown. To demonstrate the behavior of the
checkpoints after an instance crash, the following SQL creates a table (which performs an implicit commit) and
inserts a row of data into it without a commit:
create table x(x number) tablespace users;
insert into x values(100);
If the instance is crashed by using SHUTDOWN ABORT, the previous queries on v$database and v$datafile return
the following after the database is started up in mount mode:
SQL> select checkpoint_change# from v$database;
CHECKPOINT_CHANGE#
------------------
            293185
SQL> select name,checkpoint_change#,last_change# from v$datafile where name like '%users01%';
NAME                               CHECKPOINT_CHANGE# LAST_CHANGE#
---------------------------------- ------------------ ------------
/u02/oradata/OMFD1/users01.dbf                 293185

SQL> select name,checkpoint_change# from v$datafile_header where name like '%users01%';

NAME                               CHECKPOINT_CHANGE#
---------------------------------- ------------------
/u02/oradata/OMFD1/users01.dbf                 293185
In this case, the stop SCN is not set, which is indicated by the NULL value in the LAST_CHANGE# column. This
information enables Oracle, at the time of the next startup, to determine that the instance crashed because the
checkpoint on shutdown was not performed. If it had been performed, the LAST_CHANGE# and
CHECKPOINT_CHANGE# values would match for each datafile as they did during a clean shutdown. If an instance
crashes at shutdown, then instance crash recovery is required the next time the instance starts up.
Recovery from an Instance Crash
Upon the next instance startup that takes place after SHUTDOWN ABORT or a DBMS crash, the Oracle DBMS
detects that the stop SCN for datafiles is not set in the control file during startup. Oracle then performs crash
recovery. During crash recovery, Oracle applies redo log records from the online redo logs in a process referred to
as roll forward to ensure that all transactions committed before the crash are applied to the datafiles. Following roll
forward, active transactions that did not commit are identified from the rollback segments and are undone before
the blocks involved in the active transactions can be accessed. This process is referred to as roll back. In our
example, the following transaction was active but not committed at the time of the SHUTDOWN ABORT, so it needs
to be rolled back:
SQL> insert into x values(100);
After instance startup, the X table exists but remains empty. Instance recovery happens automatically at database startup without database administrator (DBA) intervention. It may take a while because of the need to apply large amounts of outstanding redo changes to data blocks, both for transactions that completed and for those that didn't complete and require roll back.
Recovery from a Media Failure
Up until this point, the checkpoint start SCN in the datafile header has always matched the datafile checkpoint SCN
number held in the control file. This is reasonable because during a checkpoint, the datafile checkpoint SCN in the
control file and the start SCN in the datafile header are both updated, along with the system checkpoint SCN. The
following SQL shows the start SCN from the datafile header and datafile checkpoint SCN from the control file for
the same file:
SQL> select 'controlfile' "SCN location",name,checkpoint_change# from v$datafile where name like '%users01%'
  2  union
  3  select 'file header',name,checkpoint_change# from v$datafile_header where name like '%users01%';

SCN location NAME                               CHECKPOINT_CHANGE#
------------ ---------------------------------- ------------------
controlfile  /u02/oradata/OMFD1/users01.dbf                 293188
file header  /u02/oradata/OMFD1/users01.dbf                 293188
Unlike the v$datafile view, there is no stop SCN column in the v$datafile_header view, because v$datafile_header is not used at instance startup time to indicate that an instance crash occurred. However, v$datafile_header does provide the Oracle DBMS with the information it requires to perform media recovery. At instance startup, the datafile checkpoint SCN in the control file and the start SCN in the datafile header are checked for equality. If they don't match, it is a signal that media recovery is required.
For example, media recovery is required if a media failure has occurred and the original datafile has been replaced with a backup copy. In this case, the start SCN in the backup copy is less than the checkpoint SCN value in the control file, and Oracle requests the archived redo logs (generated at the time of previous log switches) in order to reapply the changes required to bring the datafile up to the current point in time.
In order to recover the database from a media failure, you must run the database in ARCHIVELOG mode to ensure that all database changes from the online redo logs are stored permanently in archived redo log files. In order to enable ARCHIVELOG mode, you must run the command ALTER DATABASE ARCHIVELOG when the database is in a mounted state.
You can identify files that need recovery after you have replaced a datafile with an older version by starting the
instance in mount mode and running the following SQL:
SQL> select file#,change# from v$recover_file;
     FILE#    CHANGE#
---------- ----------
         4     313401
In this example, file 4 is the datafile in the USERS tablespace. By reexecuting the previous SQL to display the
datafile checkpoint SCN in the control file and the start SCN in the datafile header, you can see that the start SCN
is older due to the restore of the backup datafile that has taken place:
SQL> select 'controlfile' "SCN location",name,checkpoint_change#
  2  from v$datafile where name like '%users01%'
  3  union
  4  select 'file header',name,checkpoint_change#
  5  from v$datafile_header where name like '%users01%';

SCN location NAME                               CHECKPOINT_CHANGE#
------------ ---------------------------------- ------------------
controlfile  /u02/oradata/OMFD1/users01.dbf                 313551
file header  /u02/oradata/OMFD1/users01.dbf                 313401
If you were to attempt to open the database, you would receive errors like the following:
ORA-01113: file 4 needs media recovery
ORA-01110: datafile 4: '/u02/oradata/OMFD1/users01.dbf'
You can recover the database by issuing RECOVER DATABASE from SQL*Plus while the database is in a mounted
state. If the changes needed to recover the database to the point in time before the crash are in an archived redo
log, then you will be prompted to accept the suggested name:
ORA-00279: change 313401 generated at 11/10/2001 18:50:23 needed for thread
ORA-00289: suggestion : /u02/oradata/OMFD1/arch/T0001S0000000072.ARC
ORA-00280: change 313401 for thread 1 is in sequence #72
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
If you respond to the prompt using AUTO, Oracle applies any archived redo logs it needs, followed by any
necessary changes in the online redo logs, to bring the database right up to the last committed transaction before
the media failure that caused the requirement for the restore.
So far, we've considered recovery scenarios where the goal is to recover the database to the most recent transaction. This is known as complete recovery. The RECOVER DATABASE command has several other options that enable you to recover from a backup to a point in time before the most recent transaction, by rolling forward and then stopping the application of the redo log changes at a specified point. This is known as incomplete recovery. You can specify a time or an SCN as the recovery point. For example:
recover database until time '2001-11-10:18:52:00';
recover database until change 313459;
Before you perform incomplete recovery, it's recommended that you restore a complete database backup first. After incomplete recovery, you must open the mounted database with ALTER DATABASE OPEN RESETLOGS. This creates a new incarnation of the database and clears the contents of the existing redo logs to make sure they can't be applied.
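A minimal end-to-end sketch of such an incomplete recovery session, reusing the SCN from the example above:

SQL> startup mount
SQL> recover database until change 313459;
SQL> alter database open resetlogs;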
Recovery from a Media Failure Using a Backup Control File


In the previous example, we had access to a current control file at the time of the media failure. This means that
none of the start SCN values in the datafile headers exceeded the system checkpoint SCN number in the control
file. To recap, the system checkpoint number is given by the following:
SQL> select checkpoint_change# from v$database;
You might be wondering why Oracle needs to maintain the last system checkpoint value in the control file as well
as checkpoint SCNs in the control file for each datafile (as used in the previous example). There are two reasons
for this. The first is that you might have read-only tablespaces in your database. In this case, the database
checkpoint SCN increases, and the checkpoint SCN for the datafiles in the read-only tablespace remains frozen in
the control file.
The following SQL report output shows a database with a read-write tablespace (USERS) and read-only tablespace
(TEST). The start SCN in the file header and the checkpoint SCN in the control file for TEST are less than the
system checkpoint value. Once a tablespace is read only, checkpoints have no effect on the files in it. The other
read-write tablespace has checkpoint values that match the system checkpoint:
SCN location        NAME                           CHECKPOINT_CHANGE#
------------------- ------------------------------ ------------------
controlfile         SYSTEM checkpoint                          355390
file header         /u02/oradata/OD2/users01.dbf               355390
file in controlfile /u02/oradata/OD2/users01.dbf               355390
file header         /u02/oradata/OD2/test01.dbf                355383
file in controlfile /u02/oradata/OD2/test01.dbf                355383
The second reason for the maintenance of multiple checkpoint SCNs in the control file is that you might not have a
current control file available at recovery time. In this case, you need to restore an earlier control file before you can
perform a recovery. The system checkpoint in the control file may indicate an earlier change than the start SCN in
the datafile headers.
The following SQL shows an example where the system checkpoint SCN and datafile checkpoint SCN indicate an
earlier change than the start SCN in the datafile header:
SQL> select 'controlfile' "SCN location",'SYSTEM checkpoint' name,checkpoint_change#
  2  from v$database
  3  union
  4  select 'file in controlfile',name,checkpoint_change#
  5  from v$datafile where name like 'users01%'
  6  union
  7  select 'file header',name,checkpoint_change#
  8  from v$datafile_header where name like '%users01%';

SCN location        NAME                           CHECKPOINT_CHANGE#
------------------- ------------------------------ ------------------
controlfile         SYSTEM checkpoint                          333765
file header         /u02/oradata/OD2/users01.dbf               355253
file in controlfile /u02/oradata/OD2/users01.dbf               333765
If you try to recover a database in the usual way in this situation, Oracle detects that the control file is older than some of the datafiles, as indicated by the checkpoint SCN values in the datafile headers, and reports the following message:
SQL> recover database
ORA-00283: recovery session canceled due to errors
ORA-01610: recovery using the BACKUP CONTROLFILE option must be done
If you want to proceed with recovery in this situation, you need to indicate to Oracle that a noncurrent control file (possibly containing mismatches in the SCN values identified by the previous error messages) is about to be specified for recovery, by using the following command:
recover database using BACKUP CONTROLFILE;
Overview : Table Fragmentation
Audience : DBAs
Date: 29-July-07


What is Table fragmentation?


When rows are not stored contiguously, or when rows are split onto more than one block, performance decreases because these rows require additional block accesses. Table fragmentation is distinct from file fragmentation: when lots of DML operations are applied to a table, the table becomes fragmented, because DML does not release free space in the table below the HWM.
Hint: The HWM is an indicator of USED BLOCKS in the database. Blocks below the high water mark (used blocks) have at least once contained data; this data might since have been deleted. Since Oracle knows that blocks beyond the high water mark don't have data, it only reads blocks up to the high water mark in a full table scan.
DDL statements always reset the HWM.
How to find table fragmentation?
SQL> select count(*) from big1;

  COUNT(*)
----------
   1000000
SQL> delete from big1 where rownum <= 300000;
300000 rows deleted.
SQL> commit;
Commit complete.
SQL> update big1 set object_id = 0 where rownum <=350000;
342226 rows updated.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
PL/SQL procedure successfully completed.
Table size (with fragmentation):
SQL> select table_name,round((blocks*8),2)||'kb' "size"
  2  from user_tables
  3  where table_name = 'BIG1';

TABLE_NAME size
---------- -------
BIG1       72952kb

Actual data in the table:
SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
  2  from user_tables
  3  where table_name = 'BIG1';

TABLE_NAME size
---------- ---------
BIG1       30604.2kb

Note: 72952 - 30604 = 42348 KB of wasted space in the table.


The difference between the two values is about 60%, while PCTFREE is only 10% (the default), so roughly half of the table is wasted space that holds no data.
How to reset the HWM / remove fragmentation?
For that we need to reorganize the fragmented table. We have four options, demonstrated below:
1. alter table move + rebuild indexes
2. create table as select (CTAS)
3. export / truncate / import
4. dbms_redefinition
Option: 1 alter table move + rebuild indexes
SQL> alter table BIG1 move;
Table altered.
SQL> select status,index_name from user_indexes
2 where table_name = 'BIG1';
STATUS   INDEX_NAME
-------- ----------
UNUSABLE BIGIDX
SQL> alter index bigidx rebuild;
Index altered.
SQL> select status,index_name from user_indexes
2 where table_name = 'BIG1';
STATUS   INDEX_NAME
-------- ----------
VALID    BIGIDX
SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
PL/SQL procedure successfully completed.
SQL> select table_name,round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';
TABLE_NAME size
---------- -------
BIG1       38224kb
SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
  2  from user_tables
  3  where table_name = 'BIG1';

TABLE_NAME size
---------- ----------
BIG1       30727.37kb
Option: 2 Create table as select
SQL> create table big2 as select * from big1;
Table created.
SQL> drop table big1 purge;
Table dropped.
SQL> rename big2 to big1;
Table renamed.
SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
PL/SQL procedure successfully completed.
SQL> select table_name,round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';
TABLE_NAME size
---------- -------
BIG1       85536kb
SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';
TABLE_NAME size
---------- ----------
BIG1       68986.97kb
SQL> select status from user_indexes
2 where table_name = 'BIG1';
no rows selected
SQL> -- Note: we need to re-create all indexes.
Option: 3 export / truncate / import
SQL> select table_name, round((blocks*8),2)||'kb' "size"
  2  from user_tables
  3  where table_name = 'BIG1';

TABLE_NAME size
---------- -------
BIG1       85536kb
SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';
TABLE_NAME size
---------- ----------
BIG1       42535.54kb
SQL> select status from user_indexes where table_name = 'BIG1';
STATUS
--------
VALID
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options
C:\>exp scott/tiger@Orcl file=c:\big1.dmp tables=big1
Export: Release 10.1.0.5.0 - Production on Sat Jul 28 16:30:44 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table BIG1 468904 rows exported
Export terminated successfully without warnings.
C:\>sqlplus scott/tiger@orcl
SQL*Plus: Release 10.1.0.5.0 - Production on Sat Jul 28 16:31:12 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production

With the Partitioning, OLAP and Data Mining options
SQL> truncate table big1;
Table truncated.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options
C:\>imp scott/tiger@Orcl file=c:\big1.dmp ignore=y
Import: Release 10.1.0.5.0 - Production on Sat Jul 28 16:31:54 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.01.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table "BIG1" 468904 rows imported
Import terminated successfully without warnings.
C:\>sqlplus scott/tiger@orcl
SQL*Plus: Release 10.1.0.5.0 - Production on Sat Jul 28 16:32:21 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';
TABLE_NAME size
---------- -------
BIG1       85536kb
SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
  2  from user_tables
  3  where table_name = 'BIG1';

TABLE_NAME size
---------- ----------
BIG1       42535.54kb
SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
PL/SQL procedure successfully completed.
SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';
TABLE_NAME size
---------- -------
BIG1       51840kb
SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';
TABLE_NAME size
---------- ----------
BIG1       42542.27kb
SQL> select status from user_indexes where table_name = 'BIG1';
STATUS
--------
VALID
Option: 4 dbms_redefinition
SQL> create table TABLE1 (
2 no number,
3 name varchar2(20) default 'NONE',
4 ddate date default SYSDATE);
Table created.
SQL> alter table table1 add constraint pk_no primary key(no);
Table altered.
SQL> begin

26

PERFORMANCE TUNING
2 for x in 1..100000 loop
3 insert into table1 ( no , name, ddate)
4 values ( x , default, default);
5 end loop;
6 end;
7 /
PL/SQL procedure successfully completed.
SQL> create or replace trigger tri_table1
2 after insert on table1
3 begin
4 null;
5 end;
6 /
Trigger created.
SQL> select count(*) from table1;
  COUNT(*)
----------
    100000
SQL> delete table1 where rownum <= 50000;
50000 rows deleted.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats('SCOTT','TABLE1');
PL/SQL procedure successfully completed.
SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'TABLE1';
TABLE_NAME                     size
------------------------------ ------------------
TABLE1                         2960kb
SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'TABLE1';
TABLE_NAME                     size
------------------------------ ------------------
TABLE1                         822.69kb
SQL> --Minimum privs required: the "DBA" role, or "EXECUTE" on the dbms_redefinition pkg
SQL> --First check that the table is a candidate for redefinition.
SQL>
SQL> exec sys.dbms_redefinition.can_redef_table('SCOTT', 'TABLE1', sys.dbms_redefinition.cons_use_pk);
PL/SQL procedure successfully completed.
SQL> --After verifying that the table can be redefined online, you manually create an empty interim table (in the same schema as the table to be redefined)
SQL>
SQL> create table TABLE2 as select * from table1 WHERE 1 = 2;
Table created.
SQL> exec sys.dbms_redefinition.start_redef_table('SCOTT', 'TABLE1', 'TABLE2');
PL/SQL procedure successfully completed.
SQL> --This procedure keeps the interim table synchronized with the original table.
SQL>
SQL> exec sys.dbms_redefinition.sync_interim_table('SCOTT', 'TABLE1', 'TABLE2');
PL/SQL procedure successfully completed.
SQL> --Create PRIMARY KEY on interim table(TABLE2)
SQL> alter table TABLE2
2 add constraint pk_no1 primary key (no);
Table altered.
SQL> create trigger tri_table2
2 after insert on table2
3 begin
4 null;
5 end;
6 /
Trigger created.
SQL> --Disable any foreign key on the original table, if one exists, before finishing this process.
SQL>
SQL> exec sys.dbms_redefinition.finish_redef_table('SCOTT', 'TABLE1', 'TABLE2');
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats('SCOTT','TABLE1');
PL/SQL procedure successfully completed.
SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'TABLE1';
TABLE_NAME                     size
------------------------------ ------------------
TABLE1                         1376kb
SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'TABLE1';
TABLE_NAME                     size
------------------------------ ------------------
TABLE1                         841.4kb
SQL> select status,constraint_name
2 from user_constraints
3 where table_name = 'TABLE1';
STATUS   CONSTRAINT_NAME
-------- ------------------------------
ENABLED  PK_NO1
SQL> select status ,trigger_name
2 from user_triggers
3 where table_name = 'TABLE1';
STATUS   TRIGGER_NAME
-------- ------------------------------
ENABLED  TRI_TABLE2
SQL> drop table TABLE2 PURGE;
Table dropped.
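Note: in 10g the dependent objects (indexes, triggers, constraints) can also be copied onto the interim table in one call, made before finish_redef_table, instead of being created by hand as above. A hedged sketch using the same names (num_errors is an OUT count of copy failures):
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname            => 'SCOTT',
    orig_table       => 'TABLE1',
    int_table        => 'TABLE2',
    copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    num_errors       => num_errors);
END;
/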
Slow Running SQL results in Oracle performance degradation
ChunPei Feng & R. Wang
Environment
Oracle 9iR2 and Unix, production database and standby database
Circumstance
In the morning, routine daily database checking shows that the database has an unusually heavy load. As the DBA, the first step is definitely to monitor the top OS processes with the TOP or PRSTAT command, which offers an ongoing, real-time look at processor activity. In this case, however, the list of the most CPU-intensive processes on the system does not reveal anything special that might be causing the database performance degradation. Next, fetching information about top SQL and long-running SQL also fails to turn up a possible reason for the performance problem.
Also, the application development team confirms that no change has been made at the application level, and the application log doesn't show exceptions such as heavy jobs or excessive user logons.
From the information above, it can be concluded that the degraded database performance is caused by issues relating to the database server itself.
Steps to diagnose:
1. Check and Compare Historical Statspack Reports
So far, no one can tell which job is responsible for the performance degradation, because hundreds of processes residing on tens of Unix servers make it difficult for DBAs to track them process by process. The more feasible action here is to turn to Statspack, which provides a great deal of performance information about an Oracle database. Keeping historical Statspack reports makes it possible to compare the current Statspack report to last week's. The report generated at the peak period (9:00AM - 10:00AM) is sampled and compared to last week's report for the same period.
Upon comparison, the immediate finding is that CPU time increased by about 1,200 seconds (2,341 vs. 1,175). A significant increase in CPU time is usually attributable to one of the following two scenarios:
 More jobs loaded
 The execution plan of some SQLs has changed
Top 5 Timed Events in Statspack Reports
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current Statspack Report

Event                            Waits    Time(s)  % Total Ela Time
----------------------------- --------  ---------  ----------------
CPU time                                     2,341             42.60
db file sequential read        387,534       2,255             41.04
global cache cr request        745,170         231              4.21
log file sync                   98,041         229              4.17
log file parallel write         96,264         158              2.88

Statspack Report in Last Week

Event                            Waits    Time(s)  % Total Ela Time
----------------------------- --------  ---------  ----------------
db file sequential read        346,851       1,606             47.60
CPU time                                     1,175             34.83
global cache cr request        731,368         206              6.10
log file sync                   90,556          91              2.71
db file scattered read          37,746          90              2.66
2. Narrow down by examining SQL Part of Statspack Reports
Next, we examine the SQL part of the Statspack report and find the following SQL statement (Query 1) listed at the very top of the Buffer Gets section. It tells us that this statement consumed 1,161.27 seconds of CPU time. In last week's report, this statement did not appear near the top at all, and it took only 7.39 seconds to finish. It's obvious that this SQL statement must be one of the contributors to the performance degradation.
SELECT login_id, to_char(gmt_create, 'YYYY-MM-DD HH24:MI:SS')
from IM_BlackList where black_id = :b1
Query 1: Query on table IM_BlackList with bind variable
SQL Part of Statspack Report
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current Statspack Report

  Buffer Gets   Executions  Gets per Exec  %Total  CPU Time (s)  Elapsd Time (s)  Hash Value
------------- ------------ -------------- ------- ------------- ---------------- -----------
   17,899,606       47,667          375.5    55.6       1161.27          1170.22  3481369999
Module: /home/oracle/AlitalkSrv/config/../../AlitalkSrv/
SELECT login_id, to_char(gmt_create, 'YYYY-MM-DD HH24:MI:SS')
from IM_BlackList where black_id = :b1

Statspack Report in Last Week

  Buffer Gets   Executions  Gets per Exec  %Total  CPU Time (s)  Elapsd Time (s)  Hash Value
------------- ------------ -------------- ------- ------------- ---------------- -----------
      107,937       47,128            2.3     0.8          7.39             6.94  3481369999
Module: /home/oracle/AlitalkSrv/config/../../AlitalkSrv/
SELECT login_id, to_char(gmt_create, 'YYYY-MM-DD HH24:MI:SS')
from IM_BlackList where black_id = :b1
Now, our investigation has been significantly narrowed down to a single SQL statement:
SELECT login_id, to_char(gmt_create, 'YYYY-MM-DD HH24:MI:SS')
from IM_BlackList where black_id = :b1
This is a typical SQL query with a bind variable, and it should benefit from the B-tree index that was created. The statistics, however, show that it seems to perform a full table scan rather than using the proper index.
The following check on the indexes covering column BLACK_ID of table IM_BLACKLIST clearly demonstrates that an index is available:
SQL> select index_name, column_name from user_ind_columns where table_name = 'IM_BLACKLIST';

INDEX_NAME                     COLUMN_NAME
------------------------------ ---------------
IM_BLACKLIST_PK                LOGIN_ID
IM_BLACKLIST_PK                BLACK_ID
IM_BLACKLIST_LID_IND           BLACK_ID

The question now is: how did a full table scan come to replace index access for this SQL statement? To test our supposition, we simply execute the statement against the production database, and it is clear that a full table scan is performed rather than index access.
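One simple way to confirm this from SQL*Plus is autotrace, a standard SQL*Plus feature (the literal value below is hypothetical, used only to force a fresh parse):
SQL> set autotrace traceonly explain
SQL> SELECT login_id, to_char(gmt_create, 'YYYY-MM-DD HH24:MI:SS')
  2  from IM_BlackList where black_id = 'some_value';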
3. Check the Histograms Generated by Analyzing the Objects
To figure out the problem, we then check the histograms on column BLACK_ID against the standby database. This is a comparison between the production and standby databases: statistics gathering happens on the production database but not on the standby, so we hope to find differences between the histograms on column BLACK_ID and thereby measure the impact of statistics collection. We choose histograms as the criterion because histograms are a cost-based optimizer (CBO) feature that allows Oracle to see the distribution of values of a particular column (data skew, as it is known), and the tracked number of occurrences of particular data values feeds into the CBO's decision about what type of index to use, or even whether to use an index at all.
To gather histogram information from the standby database, we run:
SQL> select column_name, endpoint_number, endpoint_value
  2  from dba_histograms
  3  where table_name = 'IM_BLACKLIST' and column_name = 'BLACK_ID';

Query 2: gather histogram information from dba_histograms

Then, we get:

COLUMN_NAME   ENDPOINT_NUMBER  ENDPOINT_VALUE
-----------   ---------------  --------------
BLACK_ID                    0      2.5031E+35
BLACK_ID                    1      2.5558E+35
BLACK_ID                    2      2.8661E+35
BLACK_ID                    3      5.0579E+35
BLACK_ID                    4      5.0585E+35
BLACK_ID                    5      5.0585E+35
BLACK_ID                    6      5.0589E+35
BLACK_ID                    7      5.0601E+35
BLACK_ID                    8      5.1082E+35
BLACK_ID                    9      5.1119E+35
BLACK_ID                   10      5.1615E+35
BLACK_ID                   11      5.1616E+35
BLACK_ID                   12      5.1628E+35
BLACK_ID                   13      5.1646E+35
BLACK_ID                   14      5.2121E+35
BLACK_ID                   15      5.2133E+35
BLACK_ID                   16      5.2155E+35
BLACK_ID                   17      5.2662E+35
BLACK_ID                   18      5.3169E+35
BLACK_ID                   19      5.3193E+35
BLACK_ID                   20      5.3686E+35
BLACK_ID                   21      5.3719E+35
BLACK_ID                   22      5.4198E+35
BLACK_ID                   23      5.4206E+35
BLACK_ID                   24      5.4214E+35
BLACK_ID                   25      5.4224E+35
BLACK_ID                   26      5.4238E+35
BLACK_ID                   27      5.4246E+35
BLACK_ID                   28      5.4743E+35
BLACK_ID                   29      5.5244E+35
BLACK_ID                   30      5.5252E+35
BLACK_ID                   31      5.5252E+35
BLACK_ID                   32      5.5272E+35
BLACK_ID                   33      5.5277E+35
BLACK_ID                   34      5.5285E+35
BLACK_ID                   35      5.5763E+35
BLACK_ID                   36      5.6274E+35
BLACK_ID                   37      5.6291E+35
BLACK_ID                   38      5.6291E+35
BLACK_ID                   39      5.6291E+35
BLACK_ID                   40      5.6291E+35
BLACK_ID                   41      5.6305E+35
BLACK_ID                   42      5.6311E+35
BLACK_ID                   43      5.6794E+35
BLACK_ID                   44      5.6810E+35
BLACK_ID                   45      5.6842E+35
BLACK_ID                   46      5.7351E+35
BLACK_ID                   47      5.8359E+35
BLACK_ID                   48      5.8887E+35
BLACK_ID                   49      5.8921E+35
BLACK_ID                   50      5.9430E+35
BLACK_ID                   51      5.9913E+35
BLACK_ID                   52      5.9923E+35
BLACK_ID                   53      5.9923E+35
BLACK_ID                   54      5.9931E+35
BLACK_ID                   55      5.9947E+35
BLACK_ID                   56      5.9959E+35
BLACK_ID                   57      6.0428E+35
BLACK_ID                   58      6.0457E+35
BLACK_ID                   59      6.0477E+35
BLACK_ID                   60      6.0479E+35
BLACK_ID                   61      6.1986E+35
BLACK_ID                   62      6.1986E+35
BLACK_ID                   63      6.1994E+35
BLACK_ID                   64      6.2024E+35
BLACK_ID                   65      6.2037E+35
BLACK_ID                   66      6.2521E+35
BLACK_ID                   67      6.2546E+35
BLACK_ID                   68      6.3033E+35
BLACK_ID                   69      6.3053E+35
BLACK_ID                   70      6.3069E+35
BLACK_ID                   71      6.3553E+35
BLACK_ID                   72      6.3558E+35
BLACK_ID                   73      6.3562E+35
BLACK_ID                   74      6.3580E+35
BLACK_ID                   75      1.1051E+36
Output 1: Histograms data on standby database


Subsequently, the same command is executed against the production database. The output looks like the following:
COLUMN_NAME   ENDPOINT_NUMBER  ENDPOINT_VALUE
-----------   ---------------  --------------
BLACK_ID                    0      1.6715E+35
BLACK_ID                    1      2.5558E+35
BLACK_ID                    2      2.7619E+35
BLACK_ID                    3      2.9185E+35
BLACK_ID                    4      5.0579E+35
BLACK_ID                    5      5.0589E+35
BLACK_ID                    6      5.0601E+35
BLACK_ID                    7      5.1100E+35
BLACK_ID                    8      5.1601E+35
BLACK_ID                    9      5.1615E+35
BLACK_ID                   10      5.1624E+35
BLACK_ID                   11      5.1628E+35
BLACK_ID                   12      5.1642E+35
BLACK_ID                   13      5.2121E+35
BLACK_ID                   14      5.2131E+35
BLACK_ID                   15      5.2155E+35
BLACK_ID                   16      5.2676E+35
BLACK_ID                   17      5.3175E+35
BLACK_ID                   18      5.3684E+35
BLACK_ID                   19      5.3727E+35
BLACK_ID                   20      5.4197E+35
BLACK_ID                   21      5.4200E+35
BLACK_ID                   22      5.4217E+35
BLACK_ID                   23      5.4238E+35
BLACK_ID                   24      5.4244E+35
BLACK_ID                   25      5.4755E+35
BLACK_ID                   26      5.5252E+35
BLACK_ID                   27      5.5252E+35
BLACK_ID                   28      5.5252E+35
BLACK_ID                   29      5.5283E+35
BLACK_ID                   30      5.5771E+35
BLACK_ID                   31      5.6282E+35
BLACK_ID                   32      5.6291E+35
BLACK_ID                   33      5.6291E+35
BLACK_ID                   34      5.6291E+35
BLACK_ID                   35      5.6299E+35
BLACK_ID                   36      5.6315E+35
BLACK_ID                   37      5.6794E+35
BLACK_ID                   38      5.6798E+35
BLACK_ID                   39      5.6816E+35
BLACK_ID                   40      5.6842E+35
BLACK_ID                   41      5.7838E+35
BLACK_ID                   42      5.8877E+35
BLACK_ID                   43      5.8917E+35
BLACK_ID                   44      5.9406E+35
BLACK_ID                   45      5.9909E+35
BLACK_ID                   46      5.9923E+35
BLACK_ID                   47      5.9923E+35
BLACK_ID                   48      5.9946E+35
BLACK_ID                   49      5.9950E+35
BLACK_ID                   50      5.9960E+35
BLACK_ID                   51      5.9960E+35
BLACK_ID                   52      5.9960E+35
BLACK_ID                   53      5.9960E+35
BLACK_ID                   54      5.9960E+35
BLACK_ID                   55      5.9960E+35
BLACK_ID                   56      5.9960E+35
BLACK_ID                   57      6.0436E+35
BLACK_ID                   58      6.0451E+35
BLACK_ID                   59      6.0471E+35
BLACK_ID                   60      6.1986E+35
BLACK_ID                   61      6.1998E+35
BLACK_ID                   62      6.2014E+35
BLACK_ID                   63      6.2037E+35
BLACK_ID                   64      6.2521E+35
BLACK_ID                   65      6.2544E+35
BLACK_ID                   66      6.3024E+35
BLACK_ID                   67      6.3041E+35
BLACK_ID                   68      6.3053E+35
BLACK_ID                   69      6.3073E+35
BLACK_ID                   70      6.3558E+35
BLACK_ID                   71      6.3558E+35
BLACK_ID                   72      6.3558E+35
BLACK_ID                   73      6.3558E+35
BLACK_ID                   74      6.3580E+35
BLACK_ID                   75      1.1160E+36

Output 2: Histograms data on production database


Compared to the histogram values derived from the standby database, we find that the histogram values on the production database are not distributed as evenly as those on the standby. The exceptions occur in the ranges of endpoints 50-56 and 70-73. That's an important finding, because histograms are used to predict cardinality, and cardinality is the key measure in choosing between a B-tree index and a bitmap index. The difference in histograms may be the most direct cause of the performance problem we're facing.
4. Trace with event 10053
We then analyze the event 10053 trace to get more information about the problem. This operation is done against both the standby database and the production database.
To enable trace with event 10053, we run:
alter session set events '10053 trace name context forever';
And then, we rerun Query 1.
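To switch the trace off afterwards, the matching event syntax is:
alter session set events '10053 trace name context off';
The resulting trace file is written to the directory pointed to by user_dump_dest.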
Comparing the two 10053 trace files (highlighted in Output 3 and Output 4 below) shows that the cost of the full table scan is 38 in both. The difference is that the index access cost jumped from 4 to 65 after gathering optimizer statistics. So it is now very clear why this SQL statement is executed via a full table scan rather than index access.
Event 10053 Trace files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Against Standby Database

Table stats   Table: IM_BLACKLIST   Alias: IM_BLACKLIST
  TOTAL ::  CDN: 57477  NBLKS: 374  AVG_ROW_LEN: 38
-- Index stats
INDEX NAME: IM_BLACKLIST_LID_IND COL#: 2
TOTAL :: LVLS: 1 #LB: 219 #DK: 17181 LB/K: 1 DB/K: 2 CLUF: 44331
INDEX NAME: IM_BLACKLIST_PK COL#: 1 2
TOTAL :: LVLS: 1 #LB: 304 #DK: 57477 LB/K: 1 DB/K: 1 CLUF: 55141
_OPTIMIZER_PERCENT_PARALLEL = 0
***************************************
SINGLE TABLE ACCESS PATH
Column: BLACK_ID Col#: 2 Table: IM_BLACKLIST Alias: IM_BLACKLIST
NDV: 17181 NULLS: 0 DENS: 5.8204e-05
NO HISTOGRAM: #BKT: 1 #VAL: 2
TABLE: IM_BLACKLIST ORIG CDN: 57477 ROUNDED CDN: 3 CMPTD CDN: 3
Access path: tsc Resc: 38 Resp: 38
Access path: index (equal)
Index: IM_BLACKLIST_LID_IND
TABLE: IM_BLACKLIST
RSC_CPU: 0 RSC_IO: 4
IX_SEL: 0.0000e+00 TB_SEL: 5.8204e-05
Skip scan: ss-sel 0 andv 27259
ss cost 27259
table io scan cost 38
Access path: index (no sta/stp keys)
Index: IM_BLACKLIST_PK
TABLE: IM_BLACKLIST
RSC_CPU: 0 RSC_IO: 309
IX_SEL: 1.0000e+00 TB_SEL: 5.8204e-05
BEST_CST: 4.00 PATH: 4 Degree: 1
***************************************
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***********************
Join order[1]: IM_BLACKLIST [IM_BLACKLIST]
Best so far: TABLE#: 0 CST: 4 CDN: 3 BYTES: 75
Final:
CST: 4 CDN: 3 RSC: 4 RSP: 4 BYTES: 75
IO-RSC: 4 IO-RSP: 4 CPU-RSC: 0 CPU-RSP: 0
Output 3: Event 10053 Trace file on Standby database

Against Production Database

SINGLE TABLE ACCESS PATH


Column: BLACK_ID Col#: 2 Table: IM_BLACKLIST Alias: IM_BLACKLIST
NDV: 17069 NULLS: 0 DENS: 1.4470e-03
HEIGHT BALANCED HISTOGRAM: #BKT: 75 #VAL: 75
TABLE: IM_BLACKLIST ORIG CDN: 57267 ROUNDED CDN: 83 CMPTD CDN: 83
Access path: tsc Resc: 38 Resp: 38
Access path: index (equal)
Index: IM_BLACKLIST_LID_IND
TABLE: IM_BLACKLIST
RSC_CPU: 0 RSC_IO: 65
IX_SEL: 0.0000e+00 TB_SEL: 1.4470e-03
Skip scan: ss-sel 0 andv 27151
ss cost 27151
table io scan cost 38
Access path: index (no sta/stp keys)
Index: IM_BLACKLIST_PK
TABLE: IM_BLACKLIST
RSC_CPU: 0 RSC_IO: 384
IX_SEL: 1.0000e+00 TB_SEL: 1.4470e-03
BEST_CST: 38.00 PATH: 2 Degree: 1
***************************************
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***********************
Join order[1]: IM_BLACKLIST [IM_BLACKLIST]
Best so far: TABLE#: 0 CST: 38 CDN: 83 BYTES: 2407
Final:
CST: 38 CDN: 83 RSC: 38 RSP: 38 BYTES: 2407
IO-RSC: 38 IO-RSP: 38 CPU-RSC: 0 CPU-RSP: 0
Output 4:Event 10053 Trace file on Production database
5. Gather Statistics Again without Analyzing the Index Field
Our diagnosis demonstrates that the somewhat skewed values on column BLACK_ID impact the CBO in determining the optimal execution plan. Thus, the next thing we'd like to do is eliminate or overwrite the histogram information on column BLACK_ID.
We run:
analyze table im_blacklist compute statistics;

And then, re-running Query 2 produces the following output:
COLUMN_NAME     ENDPOINT_NUMBER  ENDPOINT_VALUE
-------------   ---------------  --------------
GMT_CREATE                    0      2452842.68
GMT_MODIFIED                  0      2452842.68
LOGIN_ID                      0      2.5021E+35
BLACK_ID                      0      1.6715E+35
GMT_CREATE                    1      2453269.44
GMT_MODIFIED                  1      2453269.44
LOGIN_ID                      1      6.3594E+35
BLACK_ID                      1      1.1160E+36
Now, column BLACK_ID no longer displays skewed values, just like the other columns. This statement analyzes the table along with basic column statistics; the existing histogram information on the columns has been overwritten, and we therefore expect the CBO to make the right decision when determining the execution plan.
We rerun SQL Query 1 and are happy to see that the query is executed via index access instead of a full table scan. The problem is eventually solved.
Reviewing the complete troubleshooting process, we realize that the cost of index access increased dramatically, becoming even more costly than a full table scan. The question now is: why?
The quick answer is that the statistics made it so.
Deeper Discussion
1. CBO: What? How?
First introduced in Oracle 7, the Cost-Based Optimizer (CBO) evaluates several possible execution plans and selects the one with the lowest cost. It's an extremely sophisticated component of Oracle, and it governs the execution of every Oracle query. The CBO is initialized by setting the init parameter OPTIMIZER_MODE, but this parameter behaves differently between Oracle 9i and 10g. In Oracle 9i, it is still possible to have the database engine act upon the RBO (Rule-Based Optimizer); in Oracle 10g, we can only choose how we benefit from the CBO, because the RBO is desupported from Oracle 10g onwards. Whatever Oracle version we are on, the most important key is always properly preparing and presenting statistics on database objects. The built-in Oracle package DBMS_STATS, which is recommended by Oracle Corp. over the ANALYZE command, can help us gather statistics in a predefined way.
Keep in mind that statistics had been collected prior to the problem. The produced statistics may be improper for Query 1 and thus mislead the determination of the execution plan.
2. SQL Query Internals
To give a deeper understanding of the circumstance we are in, we'd like to examine this specific SQL query more internally.
Query 1 is a quite typical SQL statement with a bind variable, and it should naturally come with index access as long as an index is available. But the facts disappoint us. The only convincing explanation is that the hard parse of Query 1 analyzed the histogram distribution of column BLACK_ID and decided to go with a full table scan rather than index access, because at that point the full table scan appeared less costly than index access (indeed, it should have appeared the least costly of all). The chosen full table scan will then dominate the execution of this SQL statement as long as Query 1 does not age out of the shared pool or get flushed, which may happen if SHARED_POOL_SIZE is too small or non-reusable SQL (i.e. SQL with literals such as where black_id = 321) is introduced in the source.
3. How Do Histograms Impact the CBO in Determining the Execution Plan?
Next, it is time to understand how histograms impact the selection of an execution plan for a SQL statement.
In Oracle, the cost-based optimizer (CBO) can use histograms to get accurate estimates of the distribution of column data. A histogram partitions the values in the column into bands, so that all column values in a band fall within the same range. Histograms provide improved selectivity estimates in the presence of data skew, resulting in optimal plans for non-uniform data distributions.
In turn, histograms are used to predict cardinality, i.e. the number of rows returned by a query. And the cardinality of the values of an individual table column is also a key measure in deciding which index mechanism benefits database performance.
In mathematics, the cardinality of a set is a measure of the "number of elements of the set". In Oracle, table columns with very few unique values, called low-cardinality columns, are good candidates for bitmap indexes rather than the B-tree indexes we are mostly familiar with. Let's assume that we have a computer_type index and that 80 percent of the values are of the DESKTOP type. Whenever a query with the clause where computer_type = 'DESKTOP' is specified, a full table scan would be the fastest execution plan, while a query with the clause where computer_type = 'LAPTOP' would be faster using access via the index. That is, if the data values are heavily skewed so that most of the values fall in a very small range, the optimizer may avoid using the index for values in that range while still using the index for values outside that range.
Histograms, like all other Oracle optimizer statistics, are static. They are useful only when they reflect the current data distribution of a given column. (The data in the column can change as long as the distribution remains constant.) If the data distribution of a column changes frequently, we must recompute its histograms frequently.
Histograms are used to determine the execution plan and thus affect performance. They undoubtedly incur additional overhead during the parsing phase of a SQL statement, and generally they can be used effectively only when:
 A table column is referenced in one or more SQL statement queries. Yes, a column may hold skewed values, but if it is never referenced in a SQL statement there is no need to analyze it; doing so mistakenly creates histograms on a skewed column.
 A column's values cause the CBO to make an incorrect guess. For a heavily skewed column, it is necessary to gather histograms to help the CBO choose the best plan.
Histograms are not useful for columns with the following characteristics:
 All predicates on the column use bind variables -- that's the circumstance we are in.
 The column data is uniformly distributed. (Ideally, the SIZE AUTO clause of the DBMS_STATS package determines whether histograms are created.)
 The column is unique and is used only with equality predicates.

4. Analyze vs. DBMS_STATS


In this case, the statistics were collected by executing the ANALYZE command. It looked like:
ANALYZE TABLE im_blacklist COMPUTE STATISTICS
FOR TABLE
FOR ALL INDEXES
FOR ALL INDEXED COLUMNS;
The command above analyzes the table, its indexes, and all indexed columns via the ANALYZE command rather than the DBMS_STATS package, which Oracle Corp. highly recommends for gathering statistics information. (Specifying the SIZE AUTO clause when calling DBMS_STATS makes the database decide automatically which columns need histograms.) The ANALYZE statement above creates histograms for every indexed column of table IM_BLACKLIST, and ideally those histograms would appropriately represent the distribution of column values. The facts, shown in Output 2, are that the distribution of values of column BLACK_ID appears somewhat skewed (endpoints 50-56 and 70-73), and thus the optimizer internally chooses a full table scan instead of index access, because at that point the full table scan is considered the least costly execution plan.
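For comparison, a minimal DBMS_STATS sketch that lets Oracle decide where histograms are warranted (the parameter values shown are one reasonable choice, not the only one):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => user,                         -- owner of IM_BLACKLIST
    tabname    => 'IM_BLACKLIST',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',  -- histograms only where useful
    cascade    => TRUE);                        -- gather index statistics too
END;
/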
Is the full table scan really the fastest execution plan among the possible options in this case? No, it's definitely not. That means the optimizer doesn't choose the optimal execution plan and mistakenly picks an improper one. What does that mean? Yes, the Oracle optimizer is not perfect. It is a piece of software; it was written by humans, and humans make mistakes. In this case, it's very clear that statistics were not gathered properly for column BLACK_ID because we used the ANALYZE command instead of DBMS_STATS. The values of column BLACK_ID are very likely not skewed, or at least not highly skewed. It may be an Oracle bug in creating histograms when the ANALYZE command is issued; it is also possible that the CBO simply fails to choose the most optimal execution plan (something that may need to be enhanced in future versions of Oracle).

Reproducing the Problem Internally

Now, we'd like to reproduce this case step by step from the very beginning and depict what happens internally.
1. Analyzing the table generates non-uniform histograms on column BLACK_ID
In this case, we used the ANALYZE command rather than the DBMS_STATS package to gather statistics on database objects. As recommended, DBMS_STATS lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine-tune your statistics collection in other ways. The remaining reasons to use the ANALYZE command exclusively are the following purposes:
 To validate the structure of an object by using the VALIDATE clause
 To list migrated or chained rows by using the LIST CHAINED ROWS clause
 To collect statistics not used by the optimizer
 To collect information on freelist blocks
Note: Using the DBMS_STATS package to gather statistics for this specific table (and its columns) was not tested, because the circumstance is very likely an occasional event that cannot easily be reproduced.
After issuing the ANALYZE command, the histogram distribution that is created is somewhat non-uniform, as shown in Output 2. It should not be arbitrarily concluded that the values of column BLACK_ID are quite skewed, because no significant data manipulation happened in the production database. But it is certainly possible that the values are somewhat skewed, unless the analysis did not correctly generate the histograms (which can happen). The non-uniform histograms of column BLACK_ID may correctly represent the value distribution, or they may have been improperly created and fail to represent the correct data value ranges. We can't easily tell what really happened; but here we can expect the CBO to make the right decision in picking the optimal execution plan. Unfortunately, the CBO fails to do so.
2. Parse in the shared pool
Query 1 in this case is a repeatedly run SQL statement with a bind variable. We don't know what kind of execution plan was created the first time Query 1 ran after database startup. But at least it can be concluded that the former execution plan of Query 1 was optimal (via index access) and that this execution plan was kept in the SQL area of the shared pool for reuse. There is no way to know how long it stays there, because that heavily depends on database activity and the effect of that activity on the contents of the SQL area. The following events may happen to the shared pool:
 The shared pool is flushed
 The execution plan is aged out
 Heavy competition occurs for a limited shared pool
Whichever of the events depicted above happens, it likely eliminates the execution plan of this SQL statement from the shared pool; in this case, that is indeed what happened. Therefore, the first run of Query 1 right after collecting statistics causes the SQL statement's source to be loaded into the shared pool and subsequently parsed. During the parse, the optimizer checks the histograms of column BLACK_ID and computes the costs of the possible execution plans. Unfortunately, the optimizer eventually chooses a full table scan rather than index access, due to the somewhat skewed histograms presented for column BLACK_ID. Subsequently, we experience performance degradation and heavy load.
The scenario described above is only an assumption, but it is the most straightforward explanation of the circumstance we are experiencing.
3. Impact of the bind variable
Another possibility, which could also make the CBO choose a sub-optimal execution plan, is the presence of a bind variable in Query 1. As discussed in the previous section, histograms are not useful when all predicates on the column use bind variables. Therefore, Query 1 is absolutely not a candidate for using histograms to help the CBO determine the execution plan.
Here it's necessary to mention the init parameter _optim_peek_user_binds, an undocumented session-level parameter. When set to TRUE (the default), the CBO peeks at the values of the bind variables at the time the query is compiled and proceeds as though the query had constants instead of bind variables. From 9i onwards, Oracle picks the values of the bind variables in the FIRST PARSE phase and generates execution plans according to the values seen in that first parse. If subsequent bind values are skewed, the execution plan may not be optimal for those subsequent values.
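A quick SQL*Plus illustration of this peeking behavior (the bind values here are hypothetical):
SQL> variable b1 varchar2(32)
SQL> exec :b1 := 'rare_value'
SQL> -- the plan is fixed at this first hard parse, based on the peeked value
SQL> SELECT login_id FROM IM_BlackList WHERE black_id = :b1;
SQL> exec :b1 := 'popular_value'
SQL> -- the cached plan is reused for this value too, optimal or not
SQL> SELECT login_id FROM IM_BlackList WHERE black_id = :b1;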
Therefore, can we say the Oracle optimizer acted incorrectly? No, we can't. At the first run, the full table scan may well have been the fastest, least costly execution plan; it depends one hundred percent on the value of bind variable b1. For repeated calls of Query 1 with a bind variable, the bind values do not stay constant, so the choice of execution plan heavily depends on how the bind values change and on whether re-parsing happens, given the availability of an identical SQL statement in the shared pool.
This can arguably be identified as a bug in Oracle 9iR2; similar problems have been reported on Metalink with Doc IDs 3668224 and 3765484.
4. The new execution plan is sub-optimal
No matter what happens in step 2, re-running Query 1 keeps using this execution plan regardless of the changing values of the bind variable. The performance degradation then occurs because of the expensive execution plan.
5. Re-analyzing the table overwrites the histograms of the columns
Once we figured out the problem, we issued a table-only ANALYZE command:
Analyze table im_blacklist compute statistics;
When we analyze a table with the COMPUTE STATISTICS clause, both table and column statistics are collected, and the previous non-uniform histograms of column BLACK_ID in the data dictionary view dba_histograms are overwritten. Even though the values of column BLACK_ID may still be skewed (or somewhat skewed), the histogram of column BLACK_ID is no longer shown as in Output 2 with the default 75 buckets; instead, it shows only two buckets (0 and 1) in dba_histograms. In effect, this table-only ANALYZE acts the same as specifying the SIZE 1 clause in the ANALYZE command used previously:
ANALYZE TABLE im_blacklist COMPUTE STATISTICS
FOR TABLE
FOR ALL INDEXES
FOR ALL INDEXED COLUMNS
SIZE 1;
Afterward, our next manual run of Query 1 (issued separately in SQL*Plus with a constant rather than a bind variable) causes a hard parse and generates a new execution plan according to the current histograms of column BLACK_ID. This time the histograms no longer appear skewed, and the optimizer correctly chooses the execution plan via index access.
Furthermore, it is expected that the next real call of Query 1 with a bind variable will also cause a hard parse, and the CBO will choose the correct execution plan via the index access path, because the histograms of column BLACK_ID are no longer shown as skewed.
More on the CBO
As we already know, the CBO heavily depends on statistics to choose the right execution plan for every SQL statement; in other words, the CBO wouldn't work well without statistics. Does that mean we need to collect statistics on Oracle objects regularly? It's very hard to say. Generally, regular statistics collection is necessary, but sometimes, as in the case we are discussing, statistics collection may actually harm database performance. As we said, the Oracle optimizer is not perfect.
In order to make effective use of the CBO you should:
 Analyze all tables regularly
 Set the required OPTIMIZER_GOAL (FIRST_ROWS or ALL_ROWS)
 Use hints to help direct the CBO where required
 Use hints in PL/SQL to ensure the expected optimizer is used
 Be careful with the use of bind variables
Basically, the CBO works well for ad-hoc queries. Hard-coded, repeated SQL statements and queries with bind variables should be tuned to obtain a repeatable optimal execution plan.
Here we'd like to revisit the methods for gathering statistics in an Oracle database. Rather than the ANALYZE command, Oracle strongly suggests using the PL/SQL package DBMS_STATS. An innovative feature of DBMS_STATS, compared to ANALYZE, is that it can automatically decide which columns need histograms; this is done by specifying the SIZE AUTO clause. For this case, the skewed histogram on column BLACK_ID might never have been generated had we used DBMS_STATS with SIZE AUTO, and the sub-optimal execution plan would not have been chosen at all. That is a very important point in favor of DBMS_STATS, although it describes ideal behavior and cannot be guaranteed.
The next issue we'd like to discuss is when statistics should be collected. The following are good candidates for re-gathering statistics:
 After large amounts of data changes (loads, purges, and bulk updates)
 After a database upgrade or creation of a new database
 Newly created database objects, such as tables
 After migration from RBO to CBO
 New high/low values for keys generated
 After upgrading CPUs or the I/O subsystem (system statistics)

Besides appropriate statistics gathering, you should always monitor database performance over time. To achieve that, regularly creating and keeping Statspack reports is good practice for DBAs. Historical Statspack reports offer DBAs a useful reference for how the database performs; if an exceptional issue happens, DBAs can easily compare the Statspack reports and figure out the problem.
TK Prof in Oracle
TK Prof is an Oracle tool used to display the statistics generated during a trace. When an Oracle session is traced (by SQL*Trace, Oracle Trace, or Database Trace), a trace file is generated. This trace file is barely human-readable; TK Prof collates and formats the data into a more meaningful form.
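As a reminder, such a trace file is typically produced by enabling SQL*Trace for the session; a minimal sketch:
alter session set timed_statistics = true;  -- record CPU and elapsed times
alter session set sql_trace = true;
-- ... run the SQL of interest ...
alter session set sql_trace = false;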
Finding the trace file
All trace files are written to the same location: a directory that is defined when the database is booted. To find out
the location of this directory, run the following SQL.
SELECT value
FROM sys.v_$parameter
WHERE name = 'user_dump_dest'
If this returns a 'Table or view does not exist' error, then have the DBA grant select privileges on sys.v_$parameter
to everybody. Go to the directory shown, and list the files in date order; on Unix, this is ls -ltr. If the trace files are
not readable, ask the DBA to change the privileges. There is a database initialisation parameter that the DBA can
set so that all future trace files are created readable.
Running TK Prof
Running TK Prof is simple:
tkprof trace_file output_file [ explain=userid/password@database ]
trace_file is the name of the trace file you found in the previous step, and output_file is the file to which TK Prof
will send the output. The optional explain argument will display an Explain Plan for all SQLs in the trace file. There
are other optional arguments to tkprof, see the Oracle Utilities manual for more detail.
TK Prof output
The output of TK Prof is very well described in the Oracle Utilities manual, so it will not be described again here.
The sort of things you should be looking for are:
 For each SQL, check the Elapsed statistic. This shows the elapsed time for each SQL. High values obviously indicate long-running SQL.

Note the Disk and Query columns. These indicate data retrieval from disk and data retrieval from memory
respectively. If the Disk column is relatively low compared to the Query column, then it could mean that
the SQL has been run several times and the data has been cached. This might not give a true indication of
the performance when the data is not cached. Either have the database bounced by the DBA, or try the
trace again another day.

The first row of statistics for each SQL is for the Parse step. If a SQL is run many times, it usually does not
need to be re-parsed unless Oracle needs the memory it is taking up, and swaps it out of the shared pool.
If you have SQLs parsed more than once, get the DBA to check whether the database can be tuned to
reduce this.

A special feature of the Explain Plan used in TK Prof is that it shows the number of rows read for each step
of the execution plan. This can be useful to track down Range Scan problems where thousands of rows are
read from an index and table, but only a few are returned after the bulk are filtered out.

In order to run SQL statements, Oracle must perform its own SQL statements to query the data dictionary,
looking at indexes, statistics etc. This is called Recursive SQL. The last two entries in the TK Prof output
are summaries of the Recursive and Non-Recursive (ie. "normal") SQL. If the recursive SQL is taking up
more than a few seconds, then it is a likely sign that the Shared Pool is too small. Show the TK Prof output
to the DBA to see if the database can be tuned.

If your Explain Plan in the TK Prof output shows 0 rows for every line, check the following:

Make sure you turn tracing off or exit your traced session before running TK Prof. Some statistics are only
written at the end.

Have you run any ALTER SESSION commands that affect the optimizer? If so, then the plan shown may
differ from the real plan. Note that the real plan is not shown: TK Prof re-evaluates the plan when you run
TK Prof. Make sure that you turn SQL_TRACE on before you ALTER SESSION. TK Prof is clever enough to
see the ALTER SESSION command in the trace file and evaluate plans accordingly. It will probably display
two plans: the default plan, and the new plan taking the ALTER SESSION into account.

SQL Statement Parsing in Oracle


Parsing of a SQL statement involves several steps. The steps below show how this works; they also explain the difference between a soft parse and a hard parse.
Step 1: The statement is submitted.
Step 2: An initial syntactic check is made. If there is an error, the statement is returned to the client.
Step 3: Oracle checks whether there is already an open cursor for the statement. If yes, the statement is executed; if not, step 4 is performed.
Step 4: Oracle checks whether the SESSION_CACHED_CURSORS initialization parameter is set and the cursor is in the session cursor cache. If yes, the statement is executed; if not, step 5 is performed.
Step 5: Oracle checks whether HOLD_CURSOR is set to Y. HOLD_CURSOR is a precompiler parameter that specifies that an individual cursor should be held open. If the cursor is in the held cursor cache, the statement is executed; if not, step 6 is performed.
Step 6: A cursor is opened. The statement is hashed and the hash value is compared with the hash values in the SQL area. If it is found in the SQL area, the statement is executed; this is a SOFT PARSE. If the statement is not found, it is parsed and then executed; this is called a HARD PARSE.
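A rough way to watch the soft/hard parse mix for your own session is to query the standard parse statistics (view and statistic names as documented; this sketch is not part of the flow above):
SQL> select n.name, s.value
  2  from v$statname n, v$mystat s
  3  where n.statistic# = s.statistic#
  4  and n.name in ('parse count (total)', 'parse count (hard)');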

SQL Advisor in Oracle 10g


Another great feature of Oracle 10g allows you to tune SQL: now you don't need to tune SQL statements manually, because this new feature does it for you.
The SQL Tuning Advisor uses the DBMS_SQLTUNE package and is very simple to use.
The example below shows how to use the SQL advisor.
1. Grant the following privileges to the user that is going to run this new tool. In the example below, SCOTT is the owner of the schema.
GRANT ADVISOR TO SCOTT;
GRANT SELECT_CATALOG_ROLE TO SCOTT;
GRANT EXECUTE ON DBMS_SQLTUNE TO SCOTT;
2. Create the tuning task

DECLARE
task_name_var VARCHAR2(30);
sqltext_var CLOB;
BEGIN
sqltext_var := 'SELECT * from EMP where empno = 1200';
task_name_var := DBMS_SQLTUNE.CREATE_TUNING_TASK(
sql_text => sqltext_var,
user_name => 'SCOTT',
scope => 'COMPREHENSIVE',
time_limit => 60,
task_name => 'sql_tuning_task_test1',
description => 'This is a test tuning task on EMP table');
END;
/
Sometimes you may have queries that take longer than the time specified in the "time_limit" parameter; if so, remove this parameter.
NOTE: You cannot create more than one task with the same name. If this happens, drop the existing task or use a different name.
2.1 To view the existing tasks for the user, run the following statement.

select task_name from dba_advisor_log where owner = 'SCOTT';


3. Execute the tuning task

Execute dbms_sqltune.Execute_tuning_task (task_name => 'sql_tuning_task_test1');


3.1 You can check the status of the task using the following query.
select status from dba_advisor_log where task_name='sql_tuning_task_test1';
4. Now view the Recommendation

set linesize 100


set long 1000
set longchunksize 1000
SQL> select dbms_sqltune.report_tuning_task('sql_tuning_task_test1') from dual;
DBMS_SQLTUNE.REPORT_TUNING_TASK('SQL_TUNING_TASK_TEST1')
--------------------------------------------------------------------------
GENERAL INFORMATION SECTION
--------------------------------------------------------------------------
Tuning Task Name    : sql_tuning_task_test1
Scope               : COMPREHENSIVE
Time Limit(seconds) : 60
Completion Status   : COMPLETED
Started at          : 06/22/2006 15:33:13
Completed at        : 06/22/2006 15:33:14
--------------------------------------------------------------------------
SQL ID  : ad1437c24nqpn
SQL Text: SELECT * from EMP where empno = 1200
--------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
--------------------------------------------------------------------------
1- Statistics Finding
---------------------
  Table "SCOTT"."EMP" was not analyzed.

  Recommendation
  --------------
  Consider collecting optimizer statistics for this table.
    execute dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname =>
    'EMP', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,

Based on this information, you can decide what actions are necessary to tune the SQL.
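When a task is no longer needed, it can be removed with the corresponding DBMS_SQLTUNE procedure; for example:
execute dbms_sqltune.drop_tuning_task(task_name => 'sql_tuning_task_test1');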

Automatic Workload Repository (AWR)

One of the nice features of Oracle 10g that I really like is the Automatic Workload Repository (AWR). This new tool is a kind of replacement for STATSPACK. AWR takes snapshots of the system every 60 minutes, and you can also create manual snapshots as in Statspack. At the end, this tool lets you generate a text or HTML (I like it) report.
The article below explains how to create manual snapshots.
You can manually create snapshots with the CREATE_SNAPSHOT procedure if you want to capture statistics at times different from those of the automatically generated snapshots.

BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/

You can also drop a range of snapshots using the DROP_SNAPSHOT_RANGE procedure. To view a list of the snapshot Ids
along with database Ids, check the DBA_HIST_SNAPSHOT view. For example, you can drop the following range of snapshots:

BEGIN
DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (low_snap_id => 1,
high_snap_id => 10, dbid => 131045908);
END;
/

If you like, you can also adjust the interval and retention of snapshot generation for a specified database id, but note that this
can affect the precision of the Oracle diagnostic tools.
The INTERVAL setting affects how often in minutes that snapshots are automatically generated. The RETENTION setting
affects how long in minutes that snapshots are stored in the workload repository. To adjust the settings, use the
MODIFY_SNAPSHOT_SETTINGS procedure. For example:

BEGIN
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( retention => 14400,
interval => 15, dbid => 131045908);
END;
/
In this example, the retention period is specified as 14400 minutes (10 days) and the interval between snapshots is specified as 15 minutes. You can also specify NULL to preserve an existing value. If you do not specify a value for dbid, the local database identifier is used as the default value. You can check the current settings for your database instance with the DBA_HIST_WR_CONTROL view.

SQL> col RETENTION format a20
SQL> col SNAP_INTERVAL format a30
SQL> set linesize 120
SQL> select * from DBA_HIST_WR_CONTROL;

      DBID SNAP_INTERVAL                  RETENTION
---------- ------------------------------ --------------------
 131045908 +00000 01:00:00.0              +00007 00:00:00.0

In the above example, the snapshot interval is 1 hour and the retention period is 7 days.


Running the awrrpt.sql Report
To run an AWR report, a user must be granted the DBA role. You can view the AWR reports with Oracle Enterprise Manager or
by running the following SQL scripts:
- The awrrpt.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot ids.
- The awrrpti.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot ids for a specified database and instance.
For example:
To generate a text report for a range of snapshot ids, run the awrrpt.sql script at the SQL prompt:

@$ORACLE_HOME/rdbms/admin/awrrpt.sql
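The script prompts for a begin and an end snapshot id; a quick way to list the available snapshots is the DBA_HIST_SNAPSHOT view mentioned earlier:
select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
order by snap_id;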

Reading Statspack
In Oracle, Performance Tuning is based on the following formula:
Response Time = Service Time + Wait Time
Where
 Service Time is time spent on the CPU
 Wait Time is the sum of time spent on Wait Events, i.e. non-idle time spent waiting for an event to complete or for a resource to become available.
Service Time is comprised of time spent on the CPU for Parsing, Recursive CPU usage (for PLSQL and recursive
SQL) and CPU used for execution of SQL statements (CPU Other).
Service Time = CPU Parse + CPU Recursive + CPU Other
The above components of Service Time can be found from the following statistics:
 Service Time from 'CPU used by this session'
 CPU Parse from 'parse time cpu'
 CPU Recursive from 'recursive cpu usage'
From these, CPU Other can be calculated as follows:
CPU Other = CPU used by this session - parse time cpu - recursive cpu usage
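These three statistics can be read directly from V$SYSSTAT (statistic names as documented); sampling at two points in time and taking the difference gives the interval values Statspack reports:
select name, value
from v$sysstat
where name in ('CPU used by this session', 'parse time cpu', 'recursive cpu usage');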
Many performance-tuning tools (including Statspack) produce a list of the top wait events. For example, Statspack's report contains the "Top 5 Wait Events" section (pre-Oracle9i Release 2).
It is a common mistake to start dealing with Wait Events first without taking into consideration the corresponding response time. So always compare the time consumed by the top wait events to 'CPU used by this session' and identify the biggest consumers.
Here is an example where CPU Other was found to be a significant component of total Response Time even though the report shows direct path read as the top wait event:

Top 5 Wait Events

Events                          Waits  Wait Time (cs)  % Total Wt Time
---------------------------- -------- --------------- ---------------
direct path read                4,232          10,827            52.01
db file scattered read          6,105           6,264            30.09
direct path write               1,992           3,268            15.70
control file parallel write       893             198              .95
db file parallel write             40             131              .63

Statistic                       Total   Per Second   Per Trans
--------------------------- ---------  -----------  ----------
CPU used by this session      358,806        130.5    12,372.6
parse time cpu                     38          0.0         1.3
recursive cpu usage           186,636         67.9     6,435.7

From these figures we can obtain:

 Wait Time = 10,827 x 100% / 52.01% = 20,817 cs
 Service Time = 358,806 cs
 Response Time = 358,806 + 20,817 = 379,623 cs
 CPU Other = 358,806 - 38 - 186,636 = 172,132 cs

If we now calculate percentages for the top Response Time components:

 CPU Other = 45.34%
 CPU Recursive = 49.16%
 direct path read = 2.85%
 etc. etc.

So we can see that the I/O-related Wait Events are actually not a significant component of the overall Response Time, and it makes sense to concentrate our tuning effort on the Service Time component.
CPU Other is a significant component of Response Time, so a possible next step is to look at the CPU-intensive SQL, not at the direct path read wait event.
Starting with Oracle9i Release 2, Statspack presents Service Time (obtained from the statistic 'CPU used by this session') together with the top Wait Events in a section called "Top 5 Timed Events", which replaces the "Top 5 Wait Events" section of previous releases.
Here is an example:

Top 5 Timed Events

Events                          Waits  Time(s)  % Total Ela Time
---------------------------- -------- -------- ----------------
library cache lock                141      424            76.52
db file scattered read          3,367       96             17.4
CPU time                                    32             5.79
db file sequential read           161                       .18
control file parallel write        40                       .05

Statistic                       Total   Per Second   Per Trans
--------------------------- ---------  -----------  ----------
CPU used by this session        3,211          4.3     1,605.5
parse time cpu                     59          0.1        29.5
recursive cpu usage               232          0.3       116.0

These figures directly give us the percentages of the Wait Events against the total Response Time, so no further calculations are necessary to assess their impact. Service Time is presented as CPU time in this section and corresponds to the total CPU utilisation. We can drill down to the various components of Service Time as follows:

 CPU Other = 3,211 - 59 - 232 = 2,920 cs
 CPU Other = 2,920 / 3,211 x 5.79% = 5.26%
 CPU Parse = 59 / 3,211 x 5.79% = 0.11%
 CPU Recursive = 232 / 3,211 x 5.79% = 0.42%
In this example, the main performance problem was an issue related to the Library Cache.
The second most important time consumer was waiting for physical I/O due to multiblock reads (db file scattered
read).

Identifying problematic SQLs from Statspack


From the calculations above you get the significant components that caused the performance problem. Based on these components, let's look at the various Statspack sections to identify the problematic SQL statements.

CPU Other
If this shows CPU Other as being significant, the next step is to look at the SQL performing the most block accesses, in the "SQL ordered by Gets" section of the Statspack report. A better execution plan for such a statement, resulting in fewer Gets/Exec, will reduce its CPU consumption.

CPU Parse
If CPU Parse time is a significant component of Response Time, it can be because cursors are repeatedly opened and closed every time they are executed, instead of being opened once, kept open for multiple executions, and closed only when they are no longer required. The "SQL ordered by Parse Calls" section can help find such cursors.

Disk I/O related waits
Identifying the SQL statements responsible for the most physical reads from the "SQL ordered by Reads" section of Statspack follows the same concepts as "SQL ordered by Gets".
% Total can be used to evaluate the impact of each statement. Reads per Exec together with Executions can be used as a hint of whether the statement has a suboptimal execution plan causing many physical reads, or whether it appears simply because it is executed often. Possible reasons for high Reads per Exec are the use of unselective indexes that require large numbers of blocks to be fetched and are not cached well in the buffer cache, index fragmentation, a large index clustering factor, etc.

Latch related waits
Statspack has two sections to help find unsharable statements: "SQL ordered by Sharable Memory" and "SQL ordered by Version Count". These can help with Shared Pool and Library Cache/Shared Pool latch tuning.
Statements with many versions (multiple child cursors under the same parent cursor, i.e. identical SQL text but different properties such as owning schema of objects, optimizer session settings, types and lengths of bind variables, etc.) are unsharable. This means they can consume excessive memory in the Shared Pool and cause parsing-related performance problems, e.g. Library Cache and Shared Pool latch contention, or lookup-time problems, e.g. Library Cache latch contention.
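Outside of a Statspack report, the same unsharable statements can be spotted online with a query like the following sketch against V$SQLAREA (the threshold of 20 versions is an arbitrary example):
select sql_text, version_count, sharable_mem
from v$sqlarea
where version_count > 20
order by version_count desc;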

How does a Deadlock situation occur?

A deadlock occurs when two or more users are waiting for data locked by each other. Oracle automatically detects a deadlock and resolves it by rolling back one of the statements involved in the deadlock.
The example below demonstrates how a deadlock occurs.
Suppose there is a table test with two rows.
create table test (
  row_num number,
  txt varchar2(10) );
insert into test values ( 1, 'First' );
insert into test values ( 2, 'Second' );
commit;
SQL> select * from test;

   ROW_NUM TXT
---------- ----------
         1 First
         2 Second

Ses#1: Issue the following command:
SQL> update test set txt='ses1' where row_num=1;
Ses#2: Issue the following commands:
SQL> update test set txt='ses2' where row_num=2;
SQL> update test set txt='ses2' where row_num=1;
Ses#2 is now waiting for the lock held by Ses#1.
Ses#1: Issue the following command:
SQL> update test set txt='ses1' where row_num=2;
This update would cause Ses#1 to wait on the lock held by Ses#2, but Ses#2 is already waiting on this session. This causes a deadlock.
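At that moment Oracle detects the deadlock and rolls back the statement of the session that hit it; here Ses#1 receives:
ORA-00060: deadlock detected while waiting for resource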
When do you want to re-tune your database?
It is one of the most important responsibilities of a DBA to tune the database after certain changes have been made by the application or users. Though you may need to tune certain things depending on user feedback, there are a few things you can do before anybody starts complaining about the performance.
The following is a list of changes to your system that may affect the performance of your database:
 Migrating to a new or upgraded Operating System
 Applying an OS or database level patch
 Upgrading/Migrating a database
 Adding a new application/features
 Running a new batch job
 Adding a significant number of new users

Oracle recommends that you gather a database performance report before making any changes. Statspack is one of the best database performance reports, and I recommend gathering a Statspack report on a monthly basis to know the performance of your databases.
When to Rebuild an Index?
It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt.
When an index is skewed, parts of the index are accessed more frequently than others. As a result, disk contention may occur, creating a performance bottleneck. Here is a sample procedure for identifying such indexes:
1. Gather statistics on your indexes. For large indexes (over one hundred thousand records in the underlying table), use ESTIMATE STATISTICS instead of COMPUTE STATISTICS.

For example:
SQL> analyze index emp_empno_pk compute statistics;
Index analyzed.
2. Run the query below to find out how skewed each index is. This query checks all indexes on the EMP table.
SQL> select index_name, blevel,
  2  decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',2,
  3  'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
  4  from user_indexes where table_name='EMP';

INDEX_NAME                         BLEVEL OK
------------------------------ ---------- -----------
EMP_EMPNO_PK                            0 OK BLEVEL

3. The BLEVEL (or branch level) is part of the B-tree index format and relates to the number of times Oracle has to narrow its search on the index while searching for a particular record. In some cases, a separate disk hit is required for each BLEVEL. If the BLEVEL is more than 4, it is recommended to rebuild the index.
Note: If you do not analyze the index, the index_check.sql script will show "BLEVEL HIGH" for such an index.
4. Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table.

SQL> analyze index emp_empno_pk validate structure;


Index analyzed.
5. Run the following query to find out PCT_DELETED ratio.
SQL> select DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
2 (LF_ROWS-DISTINCT_KEYS)*100/ decode(LF_ROWS,0,1,LF_ROWS) DISTINCTIVENESS
3 from index_stats
4 where NAME='EMP_EMPNO_PK';
PCT_DELETED DISTINCTIVENESS
----------- ---------------
          0               0

The PCT_DELETED column shows the percentage of leaf entries (i.e. index entries) that have been deleted and remain
unfilled. The more deleted entries an index has, the more unbalanced it becomes. If PCT_DELETED is 20% or higher,
the index is a candidate for rebuilding. If you can afford to rebuild indexes more frequently, then do so if the value
is higher than 10%. Leaving indexes with high PCT_DELETED unrebuilt might cause excessive redo allocation on
some systems.
The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average. For
example, if a table has 10000 records and 9000 distinct SSN values, the formula would result in (10000-9000) x
100 / 10000 = 10. This shows a good distribution of values. If, however, the table has 10000 records and only 2
distinct SSN values, the formula would result in (10000-2) x 100 / 10000 = 99.98. This shows that there are very
few distinct values as a percentage of total records in the column. Such columns are not candidates for a rebuild
but are good candidates for bitmapped indexes.
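If either check flags an index, you can rebuild it in place. A minimal sketch using the index from the example above
(the ONLINE keyword, where supported, lets DML continue against the table during the rebuild):

ALTER INDEX emp_empno_pk REBUILD;
-- or, to keep the table available for DML while rebuilding:
ALTER INDEX emp_empno_pk REBUILD ONLINE;

After the rebuild, re-run the VALIDATE STRUCTURE check: PCT_DELETED should drop back to near zero.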
Top 10 init.ora parameters
Some of the init.ora parameters are really critical to the performance of a database. Here we discuss the top 10
init.ora parameters. These are important parameters and should be taken care of when creating or working on
databases.

DB_NAME

DB_DOMAIN

CONTROL_FILES

DB_BLOCK_SIZE

DB_BLOCK_BUFFERS

LOG_BUFFER

SHARED_POOL_SIZE

SORT_AREA_SIZE

PROCESSES

ROLLBACK_SEGMENTS

DB_NAME: This parameter specifies the local name of the database. It is an optional parameter, but Oracle
recommends setting it before you create the database. It must be a text string of up to eight characters. The value
provided for this parameter is recorded in the control files, datafiles and redo log files during database creation.
The default value for this parameter is NULL.
For Example:
DB_NAME= prod
prod is the name of the database.
DB_DOMAIN: Specifies the logical location (domain) of the database within the network. The combination of the
DB_NAME and DB_DOMAIN parameters should be unique within the network. This parameter is important when
you are using a distributed database system.
For Example:
DB_DOMAIN=test.com
test.com is the domain name, and the global database name becomes prod.test.com, where prod is the
database name.
CONTROL_FILES: This parameter specifies the names of the control files. When the database is created, Oracle
creates the control files at the paths specified in the init.ora file. If no value is assigned to this parameter, Oracle
creates the control file in a default location. Up to eight different files can be assigned to this parameter, but it is
recommended to have three different control files on different disks.

DB_BLOCK_SIZE: This parameter specifies the data block size. The block size should be a multiple of the OS block
size. For example, it can be 2K, 4K and so on up to 32K in Oracle 8i, but the maximum value is OS-dependent.
DB_BLOCK_BUFFERS: This is a very critical performance parameter that determines the number of buffers in the
buffer cache in the System Global Area. This parameter is all the more important because the data block size cannot
be changed after the database is created; this parameter can then be used to tune the size of the data buffer cache.
The buffer cache size can be calculated with the following formula:
Data buffer cache size = DB_BLOCK_SIZE x DB_BLOCK_BUFFERS
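For example, with DB_BLOCK_SIZE=8192 and DB_BLOCK_BUFFERS=25600 (illustrative values), the buffer cache is
8192 x 25600 = 209,715,200 bytes, i.e. 200 MB.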
LOG_BUFFER: This parameter specifies the size of the redo log buffer, which holds redo entries in memory before
they are written to the online redo log files. The default setting for this parameter is four times the maximum data
block size for the host operating system.
SHARED_POOL_SIZE: This parameter specifies the size of the shared pool for the instance. It is an important
parameter for memory tuning and can be altered after database creation.
SORT_AREA_SIZE: This specifies the size of the memory used for sorting and merging data. It represents the
area that each user process can use to perform sorts and merges.
PROCESSES: This parameter determines the maximum number of OS processes that can connect to the database
at the same time. The value for this parameter must include about 5 for the background processes; i.e., if you want
to support 20 users, you must set it to at least 25.
ROLLBACK_SEGMENTS: This parameter specifies the list of rollback segments for an Oracle instance. Performance
is also affected by the size of the rollback segments, which should be large enough to hold the rollback entries of
your transactions.
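Put together, a minimal init.ora covering these ten parameters might look like the sketch below. All names and
sizes are illustrative only, not recommendations:

db_name = prod
db_domain = test.com
control_files = (/u01/oradata/prod/control01.ctl, /u02/oradata/prod/control02.ctl, /u03/oradata/prod/control03.ctl)
db_block_size = 8192
db_block_buffers = 25600       # 25600 x 8 KB = 200 MB buffer cache
log_buffer = 1048576           # 1 MB redo log buffer
shared_pool_size = 104857600   # 100 MB shared pool
sort_area_size = 1048576       # 1 MB per-process sort area
processes = 155                # ~150 user connections plus background processes
rollback_segments = (rbs01, rbs02, rbs03, rbs04)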
How to find Oracle Hidden Parameters?
Oracle has many hidden parameters. You will not find them in V$PARAMETER or see them with the SHOW
PARAMETERS command, as they are hidden. All these parameters start with an underscore (_), like
_system_trig_enabled.
These parameters are undocumented; you won't find them in the Oracle documentation. They exist for specific
purposes only: some are OS-specific, some are used in unusual recovery situations, and some enable or disable new
features. You should be very careful while using them. Please check with Oracle Support before using them.
Here is a query that you can use to find these parameters.
SELECT X.KSPPINM NAME,
       DECODE(BITAND(KSPPIFLG/256, 1), 1, 'TRUE', 'FALSE') SESMOD,
       DECODE(BITAND(KSPPIFLG/65536, 3), 1, 'IMMEDIATE', 2, 'DEFERRED',
              3, 'IMMEDIATE', 'FALSE') SYSMOD,
       KSPPDESC DESCRIPTION
FROM   SYS.X_$KSPPI X
WHERE  X.INST_ID = USERENV('INSTANCE')
AND    TRANSLATE(KSPPINM, '_', '#') LIKE '#%'
ORDER  BY 1;
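To look up a single hidden parameter together with its current value, you can join the parameter fixed table with its
value table. A sketch, to be run as SYS (the X$ fixed tables are visible only to SYS unless views such as X_$KSPPI
above have been created):

SELECT x.ksppinm  name,
       y.ksppstvl value,
       x.ksppdesc description
FROM   sys.x$ksppi  x,
       sys.x$ksppcv y
WHERE  x.indx = y.indx
AND    x.ksppinm = '_system_trig_enabled';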
STEPS TO CREATE DATABASE MANUALLY ON LINUX
This article shows you steps to create a database manually on Linux.
Step 1:
First create all the necessary directories. The following are my directories:
testdb1]$ ls
admin backup archive
admin]$ ls
adump bdump cdump udump

Step 2:
Next prepare the database creation script. Following is my script "testdb1.sql"
CREATE DATABASE "testdb1"
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/d02/monish/testdb1/redo1.log' SIZE 10M,
GROUP 2 '/d02/monish/testdb1/redo2.log' SIZE 10M,
GROUP 3 '/d02/monish/testdb1/redo3.log' SIZE 10M
DATAFILE
'/d02/monish/testdb1/system.dbf' size 100m,
'/d02/monish/testdb1/usr04.dbf' size 10m
sysaux datafile '/d02/monish/testdb1/sysaux.dbf' size 100m
undo tablespace undotbs
datafile '/d02/monish/testdb1/undo.dbf' size 50m
CHARACTER SET US7ASCII
;
Step 3:
Prepare the init file. Like this one [inittestdb1.ora]
*.audit_file_dest='/d02/monish/testdb1/admin/adump'
*.background_dump_dest='/d02/monish/testdb1/admin/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/d02/monish/testdb1/control01.ctl',
'/d02/monish/testdb1/control02.ctl','/d02/monish/testdb1/control03.ctl'
*.core_dump_dest='/d02/monish/testdb1/admin/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='testdb1'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=indiaXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=/d02/monish/testdb1/archive'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=200278016
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=601882624
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/d02/monish/testdb1/admin/udump'
*.db_recovery_file_dest='/d02/monish/testdb1/backup'
*.db_recovery_file_dest_size=2147483648
Step 4:
Now perform the following steps:
$ export ORACLE_SID=testdb1


$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.3.0 - Production on Thu May 22 17:35:28 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/inittestdb1.ora nomount
ORACLE instance started.
Total System Global Area 603979776 bytes
Fixed Size 1263176 bytes
Variable Size 167774648 bytes
Database Buffers 427819008 bytes
Redo Buffers 7122944 bytes
SQL> @testdb1.sql
Database created.
Step 5:
So your database is created. Now just run the catalog.sql and catproc.sql scripts.
You will find them in $ORACLE_HOME/rdbms/admin:
SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/catalog.sql
SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/catproc.sql
SQL> select name from v$database;
NAME
---------
TESTDB1
ORACLE SILENT MODE INSTALLATION
You can automate the installation and configuration of Oracle software, either fully or partially, by specifying a
response file when you start the Oracle Universal Installer. The Installer uses the values contained in the response
file to provide answers to some or all of the Installer prompts:
If you include responses for all of the prompts in the response file and specify the -silent option when starting the
Installer, then the Installer runs in silent mode. During a silent-mode installation, the Installer does not display any
screens. Instead, it displays progress information in the terminal that you used to start it.
PREREQUISITES
Adding dba group
[root@tritya root]# groupadd -g 200 dba
oracle User creation
[root@tritya root]# useradd -g dba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" -m -u 300 oracle
Kernel Settings
vi /etc/sysctl.conf

SET THE ORACLE-RECOMMENDED KERNEL PARAMETERS IN THIS FILE
OraInst file
[root@tritya root]# mkdir -p /var/opt/oracle
[root@tritya root]# cd /var/opt/oracle
[root@tritya oracle]# vi oraInst.loc
and enter the values
inventory_loc=/home/oracle/oraInventory
inst_group=
(save and exit)
[root@tritya root]# cat /var/opt/oracle/oraInst.loc
inventory_loc=/home/oracle/oraInventory
inst_group=
[root@tritya oracle]# chown oracle:dba oraInst.loc
[root@tritya oracle]# chmod 664 oraInst.loc
[root@tritya oracle]# su - oracle
[oracle@tritya oracle]$ cd database/response/
[oracle@tritya response]$ vi enterprise.rsp
Modify the Below Three Values for SOFTWARE ONLY INSTALLATION
ORACLE_HOME=/home/oracle/product/10.2.0.1
ORACLE_HOME_NAME=orcl
n_configurationOption=3
[oracle@tritya database]$ ./runInstaller -silent -responsefile /home/oracle/database/response/enterprise.rsp
Starting Oracle Universal Installer...
...............
skipped ....
Installation in progress (Thu May 15 23:54:45 IST 2008)
............................................................... 18% Done.
............................................................... 36% Done.
............................................................... 54% Done.
............................................................... 73% Done.
............ 76% Done.
Install successful
Linking in progress (Thu May 15 23:59:36 IST 2008)
Link successful

Setup in progress (Fri May 16 00:06:30 IST 2008)
.............. 100% Done.
Setup successful
The following configuration scripts
/home/oracle/product/10.2.0.1/root.sh
need to be executed as root for configuring the system. If you skip the execution of the configuration tools, the
configuration will not be complete and the product won't function properly. In order to get the product to function
properly, you will be required to execute the scripts and the configuration tools after exiting the OUI.
Open A new Window with root user and execute the below script
[root@tritya ]# sh /home/oracle/product/10.2.0.1/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /home/oracle/product/10.2.0.1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
[root@tritya ]#
Test Oracle Installation From Oracle User
[oracle@tritya oracle]$ source .bash_profile
[oracle@tritya oracle]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Fri May 16 00:10:19 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL>
Steps to generate the AWR report
An AWR report is very similar to the STATSPACK report from Oracle9i, and it contains vital elapsed-time information on what
happened during a particular snapshot range.
Step 1: Go to the following location

$ cd $ORACLE_HOME/rdbms/admin

Step 2: Run this command


$ ls -al awr*
-rwxrwxr-x 1 oracle oinstall 20892 May 23 2005 awrddinp.sql
-rwxrwxr-x 1 oracle oinstall  7252 May 27 2005 awrddrpi.sql
-rwxrwxr-x 1 oracle oinstall  2005 May 27 2005 awrddrpt.sql
-rwxrwxr-x 1 oracle oinstall 11286 Apr 18 2005 awrextr.sql
-rwxrwxr-x 1 oracle oinstall 49166 Sep  1 2004 awrinfo.sql
-rwxrwxr-x 1 oracle oinstall  2462 Jan  5 2005 awrinpnm.sql
-rwxrwxr-x 1 oracle oinstall  8495 May 23 2005 awrinput.sql
-rwxrwxr-x 1 oracle oinstall 10324 Apr 18 2005 awrload.sql
-rwxrwxr-x 1 oracle oinstall  7575 Apr 18 2005 awrrpti.sql
-rwxrwxr-x 1 oracle oinstall  1999 Oct 24 2003 awrrpt.sql
-rwxrwxr-x 1 oracle oinstall  6676 Jan  5 2005 awrsqrpi.sql
-rwxrwxr-x 1 oracle oinstall  1469 Jan  5 2005 awrsqrpt.sql

Step 3: Connect to Oracle

$ sqlplus /nolog
SQL*Plus: Release 10.2.0.3.0 - Production on Wed May 21 08:51:52 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
SQL> conn / as sysdba
Connected.
SQL>

Step 4: Now run awrrpt.sql. Select the format for the report as either HTML or TEXT.

SQL> @awrrpt
Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 2339164857 MSB                 1 msb
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type:

Step 5: Select the number of days you want to go back, or just hit enter to list all completed snapshots.

Instances in this Workload Repository schema


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
* 2339164857        1 MSB          msb          abcd-dba-xxx.abcd.com
* 2339164857        1 MSB          msb          abcddb
Using 2339164857 for database Id
Using 1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days:

Step 6: Then specify the Begin and End snapshot IDs.

Listing all Completed Snapshots

                                                      Snap
Instance     DB Name        Snap Id Snap Started       Level
------------ ------------ --------- ------------------ -----
msb          MSB                102 17 May 2008 04:30      1
                                103 17 May 2008 05:30      1
                                104 17 May 2008 06:30      1
                                105 17 May 2008 07:30      1
                                106 17 May 2008 08:31      1
                                107 17 May 2008 09:30      1
                                108 17 May 2008 10:30      1
                                109 17 May 2008 11:30      1
                                110 17 May 2008 12:30      1
                                111 17 May 2008 13:30      1
                                112 17 May 2008 14:30      1
                                113 17 May 2008 15:30      1
                                114 17 May 2008 16:30      1
                                115 17 May 2008 17:30      1
                                116 17 May 2008 18:30      1
                                117 17 May 2008 19:30      1
                                118 20 May 2008 15:46      1
                                119 20 May 2008 16:30      1
                                120 20 May 2008 17:30      1
                                121 20 May 2008 18:30      1
                                122 20 May 2008 19:30      1
                                123 20 May 2008 20:30      1
                                124 20 May 2008 21:30      1
                                125 20 May 2008 22:30      1
                                126 20 May 2008 23:30      1
                                127 21 May 2008 05:50      1
                                128 21 May 2008 06:30      1
                                129 21 May 2008 07:30      1
                                130 21 May 2008 08:30      1

Specify the Begin and End Snapshot Ids


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 102
Begin Snapshot Id specified: 102
Enter value for end_snap: 117
End Snapshot Id specified: 117

Step 7: Here you specify the name of the report, or accept the default name.

Specify the Report Name


~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_102_117.html. To use this name,
press <return> to continue, otherwise enter an alternative.
Enter value for report_name:

Step 8: The report gets generated.


End of Report
</BODY></HTML>
Report written to awrrpt_1_102_117.html
SQL>

Step 9: Exit from SQL*Plus.

SQL> exit

Step 10:
Run the command ls -ltr to show the new file created under $ORACLE_HOME/rdbms/admin:

-rw-r--r-- 1 oracle oinstall 186 May 21 09:06 awrrpt_1_102_117.html

Report is ready to view.

Operating System: Redhat Linux


Database: Oracle 10g
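As an alternative to the interactive awrrpt.sql dialogue, the same report can be produced with the
DBMS_WORKLOAD_REPOSITORY package. A sketch using the DBID and snapshot range from the walkthrough above
(2339164857, snapshots 102-117):

-- take an extra snapshot on demand, outside the regular schedule
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

-- generate the text version of the report for a snapshot range
SET LONG 1000000 PAGESIZE 0
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_text(
               l_dbid     => 2339164857,
               l_inst_num => 1,
               l_bid      => 102,
               l_eid      => 117));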

Oracle Execution Plan and Optimizers


A. Execution Plan
When a user issues a SQL statement (read, write or delete), Oracle builds an execution plan that defines how Oracle will find or
write the data. Oracle provides the EXPLAIN PLAN command for the user to explore the way Oracle will run the issued SQL
statement. The general syntax for EXPLAIN PLAN is:
1. explain plan for your-precious-sql-statement; /* the default plan table name is PLAN_TABLE */
2. explain plan into table_name for your-precious-sql-statement;
Table A.1: Writing the Execution Plan of a sql statement into a table
Reading an execution plan from the plan table is done as below.

explain plan for select /*+ rule */ * from test_for_ep where a = 5;   /* fill the plan table */

select substr(lpad(' ', level-1) || operation || ' (' || options || ')', 1, 30) "Operation",
       object_name "Object"
from   plan_table
start  with id = 0
connect by prior id = parent_id;   /* read from the plan table */

OR

@$ORACLE_HOME/rdbms/admin/utlxpls.sql   /* script that reads the plan table */

SELECT STATEMENT ()
  TABLE ACCESS (BY INDEX ROWID) TEST_FOR_EP
    INDEX (RANGE SCAN) TEST_FOR_EP_IX   /* output of the previous select */

Table A.2: Reading execution plan of a sql statement from the plan table
In an explain plan output, the more indented an operation is, the earlier it is executed. The result of an operation (or
operations, if more than one are equally indented AND have the same parent) is then fed to the parent operation. In our
case, it is obvious that the index (TEST_FOR_EP_IX) is used first (most indented), then used for a TABLE ACCESS (second most
indented), and then the result is returned.
Hint #1
If we only query fields of a table that are already in an index, Oracle doesn't have to read the data blocks because it can get the
relevant data from the index:
create table test_for_ep (a number, b varchar2(100), constraint uq_tp unique(a));
delete plan_table;
1st case:
explain plan for select /*+ rule */ * from test_for_ep where a = 5;

SELECT STATEMENT ()
TABLE ACCESS (BY INDEX ROWID) TEST_FOR_EP
INDEX (RANGE SCAN) UQ_TP
2nd case:
explain plan for select /*+ rule */ a from test_for_ep where a > 5 and a < 50;
SELECT STATEMENT ()
INDEX (RANGE SCAN) UQ_TP

B. Optimizers
I. Rule Based Optimizer (RBO)
The rule-based optimizer is the oldest and most stable of the optimizers. The rule-based optimizer is very simple and uses
information in the data dictionary to make decisions about using an index. Also, unlike the cost-based optimizer, the order of tables
in the FROM clause and the order of Booleans in the WHERE clause affect the execution plan for the query.
In Oracle's rule-based optimizer, the ordering of the table names in the FROM clause determines the driving table. The driving table
is important because it is retrieved first, and the rows from the second table are then merged into the result set from the first table.
Therefore, it is essential that the second table return the least amount of rows based on the WHERE clause.
The RBO decides which execution plan will be chosen to execute the query according to the RBO rule table below. Rules are
ordered sequentially according to their rankings:
1. Single Row by Rowid
2. Single Row by Cluster Join
3. Single Row by Hash Cluster Key with Unique or Primary Key
4. Single Row by Unique or Primary Key
5. Clustered Join
6. Hash Cluster Key
7. Indexed Cluster Key
8. Composite Index
9. Single-Column Indexes
10. Bounded Range Search on Indexed Columns
11. Unbounded Range Search on Indexed Columns
12. Sort Merge Join
13. MAX or MIN of Indexed Column
14. ORDER BY on Indexed Column
15. Full Table Scan
Table B.1: Rule Based Optimizer Rule Table (Access paths and their ranking)
We can make some general observations about the characteristics of the rule-based optimizer:

Always use the Index - If an index can be used to access a table, choose the index. Indexes are always preferred over a
full-table scan or a sort merge join (a sort merge join does not require an index).

Always starts with the driving table - The last table in the FROM clause will be the driving table. For the RBO, this should
be the table that returns the least number of rows. The RBO uses this driving table as the first table when performing
nested loop join operations.

Full-table scans as a last resort - The RBO is not aware of Oracle parallel query and multi-block reads, and does not
consider the size of the table. Hence, the RBO dislikes full-table scans and will only use them when no index exists.

Any index will do - The RBO will sometimes choose a less than ideal index to service a query. This is because the RBO
does not have access to statistics that show the selectivity of indexed columns.

Simple is sometimes better - Prior to Oracle8i, the RBO often provided a better overall execution plan for some databases.
II. Cost Based Optimizer (CBO)
Analyzing is required to gather information about the following:

Number of rows in a table

How the data is distributed across blocks

Index information

Row lengths

There are two methods to analyze a table:
1. ANALYZE
The ANALYZE command is used to gather statistics about a table, an index or a cluster; the user can also specify the number of
rows or the percentage of the table to be analyzed. Example usages of the ANALYZE command are:
1. ANALYZE TABLE employees COMPUTE STATISTICS;
2. ANALYZE INDEX employees_pk COMPUTE STATISTICS;
3. ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 100 ROWS;
4. ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 15 PERCENT;

Table B.2: ANALYZE command usages


2. DBMS_STATS
The DBMS_STATS package is an analyzer with the ability to execute in parallel, copy statistics from one database to another, and
delete gathered statistics from the database. Copying statistics from one server to another is a great feature that gives you the
chance to prepare statistics of your database on a copy database and then carry that information to your live database. Example
usages of DBMS_STATS are:

1. EXEC DBMS_STATS.gather_database_stats;
2. EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);
3. EXEC DBMS_STATS.gather_schema_stats('SCOTT');
4. EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);
5. EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
6. EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);
7. EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
8. EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK', estimate_percent => 15);
9. EXEC DBMS_STATS.delete_database_stats;
10. EXEC DBMS_STATS.delete_schema_stats('SCOTT');
11. EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMPLOYEES');
12. EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMPLOYEES_PK');

Table B.3: DBMS_STATS command usages
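The statistics-copying feature mentioned above works by staging statistics in an ordinary table. A sketch of the round trip
(STATS_STAGE is an illustrative table name):

-- on the source database: create a staging table and export the schema statistics into it
EXEC DBMS_STATS.create_stat_table('SCOTT', 'STATS_STAGE');
EXEC DBMS_STATS.export_schema_stats('SCOTT', 'STATS_STAGE');

-- transport SCOTT.STATS_STAGE to the target database (e.g. with export/import),
-- then load the staged statistics into the target dictionary:
EXEC DBMS_STATS.import_schema_stats('SCOTT', 'STATS_STAGE');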


The table order still makes a difference in execution time, even when using the cost-based optimizer. The driving table is the table
that will initiate the query and should be the table with the smallest number of rows. Ordering the tables in the FROM clause can
make a huge difference in execution time.
Hint #2
Cost-based optimization: the driving table comes first after the FROM clause - place the smallest table first after FROM, and list
tables from smallest to largest.
Rule-based optimization: the driving table comes last in the FROM clause - place the smallest table last in the FROM clause, and
list tables from largest to smallest.
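With the CBO you can also state the join order explicitly instead of relying on FROM-clause placement. A minimal sketch with the
ORDERED hint, which joins tables in the order they appear in the FROM clause (shown here against the standard DEPT/EMP demo
tables):

SELECT /*+ ORDERED */ d.dname, e.ename
FROM   dept d, emp e        -- dept is listed first, so it drives the join
WHERE  e.deptno = d.deptno;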
SOURCE
1. Execution Plan, Optimizer and its types, http://www.ceturk.com/makaleoku.asp?id=224
2. Oracle's explain plan, http://www.adp-gmbh.ch/ora/explainplan.html
3. EXPLAIN PLAN Usage, http://www.oracle-base.com/articles/8i/ExplainPlanUsage.php
4. Tuning with Rule-Based Optimization, http://www.remote-dba.net/t_tuning_rule_based_optimization.htm

Introduction to EXPLAIN PLAN


An EXPLAIN PLAN is a tool that you can use to have Oracle explain how it plans to execute your query.
This is useful in tuning queries to the database to get them to perform better. Once you know how Oracle plans to
execute your query, you can change your environment to run the query faster. The components of the execution
plan include:
* An ordering of the tables referenced by the statement.
* An access method for each table mentioned in the statement.
* A join method for tables affected by join operations in the statement.
EXPLAIN PLAN output shows how Oracle executes SQL statements.


Creating the Output Table


Before issuing an EXPLAIN PLAN statement, create a table to hold its output. Use one of the following approaches:
* Run the SQL script UTLXPLAN.SQL to create a sample output table called PLAN_TABLE in your schema. The exact
name and location of this script depend on your operating system. PLAN_TABLE is the default table into which the
EXPLAIN PLAN statement inserts rows describing execution plans.
* Issue a CREATE TABLE statement to create an output table with any name you choose. When you issue an
EXPLAIN PLAN statement, you can direct its output to this table.
Any table used to store the output of the EXPLAIN PLAN statement must have the same columns and datatypes as
the PLAN_TABLE:

CREATE TABLE plan_table (
  statement_id    VARCHAR2(30),
  timestamp       DATE,
  remarks         VARCHAR2(80),
  operation       VARCHAR2(30),
  options         VARCHAR2(30),
  object_node     VARCHAR2(128),
  object_owner    VARCHAR2(30),
  object_name     VARCHAR2(30),
  object_instance NUMERIC,
  object_type     VARCHAR2(30),
  optimizer       VARCHAR2(255),
  search_columns  NUMERIC,
  id              NUMERIC,
  parent_id       NUMERIC,
  position        NUMERIC,
  cost            NUMERIC,
  cardinality     NUMERIC,
  bytes           NUMERIC,
  other_tag       VARCHAR2(255),
  other           LONG);

How to use EXPLAIN PLAN?


Issue an EXPLAIN PLAN for the query you are interested in tuning. The command is of the form:
EXPLAIN PLAN SET STATEMENT_ID='X' FOR some SQL statement;
You need to set a statement_id and then give your SQL:

SQL> explain plan set statement_id = 'T1' for
  2  select object_name from test where object_name like 'T%';

Explained.

I used T1 for my statement_id, but you can use anything you want. My SQL statement is on the second line. Now I
query the PLAN_TABLE to see how this statement is executed. This can be done in the following ways:
* A simple select to see the contents of the PLAN_TABLE (SELECT * FROM PLAN_TABLE), or
* Selecting the PLAN_TABLE output in the nested format.

SQL> SELECT LPAD(' ',2*(level-1)) || operation || ' ' || options || ' ' ||
  2         object_name || ' ' || DECODE(id,0,'Cost = ' || position) AS "Query Plan", other
  3  FROM plan_table
  4  START WITH id = 0
  5    AND statement_id = 'T1'
  6  CONNECT BY PRIOR id = parent_id
  7    AND statement_id = 'T1';

Query Plan                                         OTHER
--------------------------------------------------
SELECT STATEMENT   Cost =
  TABLE ACCESS FULL TEST

This tells me that my SQL statement will perform a FULL table scan on the TEST table (TABLE ACCESS FULL TEST).
Now let's add an index on that table and see how things differ:
SQL> create index test_name_idx on test(object_name);

Index created.

SQL> truncate table plan_table;

Table truncated.

SQL> explain plan set statement_id = 'T1' for

2 select object_name from test where object_name like 'T%';

Explained.

SQL> SELECT LPAD(' ',2*(level-1)) || operation || ' ' || options || ' ' ||
  2         object_name || ' ' || DECODE(id,0,'Cost = ' || position) AS "Query Plan", other
  3  FROM plan_table
  4  START WITH id = 0
  5    AND statement_id = 'T1'
  6  CONNECT BY PRIOR id = parent_id
  7*   AND statement_id = 'T1'

Query Plan                                         OTHER
--------------------------------------------------
SELECT STATEMENT   Cost =
  INDEX RANGE SCAN TEST_NAME_IDX


As I added an index to the table. Before I issue another EXPLAIN PLAN, I truncate the contents of my PLAN_TABLE
to prepare for the new plan. Then I query the PLAN_TABLE to prepare for the new plan. Notice that this time I'm
using an index (TEST_TIME_IDX) that was created on the table. Hopefully the query faster now that it has an index
to use instead of FULL TABLE SCAN the earlier.
The row source count values in EXPLAIN PLAN output identify the number of rows processed by each step in the
plan. This helps us to identify inefficiencies in the query.
Note: When evaluating a plan, always examine the statement's actual resource consumption. For better results,
use the Oracle Trace or SQL trace facility and TKPROF to examine individual SQL statement performance.
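A typical trace-and-format session is sketched below; the trace file name is illustrative (the actual file appears in the
USER_DUMP_DEST directory):

SQL> ALTER SESSION SET sql_trace = TRUE;
SQL> select object_name from test where object_name like 'T%';
SQL> ALTER SESSION SET sql_trace = FALSE;

$ tkprof prod_ora_12345.trc tkprof_report.txt sys=no explain=scott/tiger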
EXPLAIN PLAN Restrictions
* Oracle does not support EXPLAIN PLAN for statements performing implicit type conversion of date bind variables.
With bind variables in general, the EXPLAIN PLAN output may not represent the real execution plan.
* From the text of a SQL statement, TKPROF cannot determine the types of the bind variables. It assumes that the
type is CHARACTER, and gives an error message if this is not the case. We can avoid this limitation by putting
appropriate type conversions in the SQL statement, as sketched below.
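A hedged illustration of that workaround; the table, column and bind names are hypothetical:

-- TKPROF/EXPLAIN PLAN treat :dt as CHARACTER, so make the date conversion explicit:
SELECT * FROM emp WHERE hiredate = TO_DATE(:dt, 'DD-MON-YYYY');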
