
Basic Tasks of Oracle DBA

Adding users in Oracle


Backup and restore
Import and export data
Job scheduling

and much more. So, here it is:

DISCLAIMER: All programming examples within the articles are meant for illustration
purposes only, and The Database Design Resource Center holds no warrant for
correctness if used.

A very basic task is to add users in Oracle.


One of the most important tasks is to establish and perform an Oracle Backup. Without it, you
are leading a dangerous life...
As a professional Oracle DBA, after having established sound backup procedures, let us hope
you never will need to perform an Oracle Recovery. Chances are, however, that you may have
to some day...
Indexes are an important performance enhancement facility. In this article, we look at some
options and tools for creating Oracle Indexes.
If you need to, you can export parts of, or the complete database, to a flat file using Oracle
Export. Nice feature, but very old-fashioned...
After you have performed an Oracle Export, you can use the Oracle Import function to load
data into another database. Very useful feature, but as with Oracle Export: A bit old-fashioned.
However, it does the job.
Some database tables may contain millions of rows, and performance may slow down. A good
solution for the Oracle DBA may be to split such tables into Oracle Partitions.
Instead of manually starting different database tasks, the Oracle DBA can automate the whole
process by using the Oracle Scheduler.
In order to perform Oracle backup and recovery, you need to have a redo log up and running.
This article looks at aspects when dealing with Oracle log files.
The last article in the Oracle DBA section is about performing real-time changes to tables and
other database objects: Redefining Oracle Objects.
Adding users in Oracle databases

Managing users in Oracle databases is an important area of database administration. Without


users, there can be no database change, and thus no need for a database.

Creating new users in Oracle, or adding users to an existing database, involves many steps, the
most important of which is specifying values for several database parameters. The questions
are: which steps should the DBA take to perform this task, and what different types of users
exist in a database?

In a database, there are various types of users with different responsibilities and rights. Two
user accounts are automatically created with the database and granted the DBA role. These
two user accounts are:

SYS (initial password: CHANGE_ON_INSTALL)


SYSTEM (initial password: MANAGER)

When new users in Oracle are added, some rights are assigned to that user so that actions are
performed on the database either directly or through roles. There are two types of privileges
given to a user:

System privileges, which allow the user to perform database-wide actions.
Object privileges, which allow access to objects, i.e. tables, table columns, indexes, synonyms,
procedures, etc.

Various methods to add new users in a database are:

CREATE USER user_name IDENTIFIED BY password;

CREATE USER uwclass IDENTIFIED BY uwclass;

CREATE USER user IDENTIFIED {BY password | EXTERNALLY}
  [DEFAULT TABLESPACE tablespace]
  [TEMPORARY TABLESPACE tablespace]
  [QUOTA {n [K|M] | UNLIMITED} ON tablespace [, ... ]]
  [PROFILE profile];

user - user name.


IDENTIFIED BY password | EXTERNALLY - EXTERNALLY is identified by the operating
system outside of the database. The OS_AUTHENT_PREFIX prefix in the parameter file must
be set for this option.
DEFAULT TABLESPACE tablespace_name - all objects created by this user are placed into
this tablespace unless the user specifies otherwise. The SYSTEM tablespace is the default
if not specified.
TEMPORARY TABLESPACE tablespace_name - storage of intermediate results. The
SYSTEM tablespace is the default if not specified.
QUOTA n [K|M] | UNLIMITED ON tablespace_name - gives the user permission to create
objects in the named tablespace, up to the specified quota of space. The role RESOURCE
automatically grants unlimited space in a tablespace.

To provide system privileges to the user, the DBA will perform the following:

GRANT {system privilege [, ... ] } TO { { user | role | PUBLIC }


[, ... ] } [WITH ADMIN OPTION];

All users in Oracle are required to have the CREATE SESSION privilege in order to access the
database. Each user must be granted the CREATE SESSION privilege either directly or through
a role.

System privileges can be granted by one user to other users when the user granting the
privilege has the WITH ADMIN OPTION.

Object privileges allow a user to perform a specified action on a specific object. Other users can
access user-owned objects by preceding the object name with the user name
(username.object). Object privileges extend down to table columns.
GRANT {object privilege [, ... ] | ALL [PRIVILEGES] } ON [schema.] object
TO { { user | role | PUBLIC } [, ... ] }
[WITH GRANT OPTION];

GRANT {object privilege [, ... ] | ALL [PRIVILEGES] } [(column [, ... ])] ON [schema.] object
TO { { user | role | PUBLIC } [, ... ] }
[WITH GRANT OPTION];

Only INSERT, UPDATE and REFERENCES privileges can be granted at the column level.

To create users in Oracle whose authentication is done by the operating system or by password
files, the DBA will use:

Method 1:

Step 1. Set the initSID.ora parameters as:

remote_os_authent = TRUE
os_authent_prefix = "OPS$"

Step 2. Generate a new spfile

CREATE spfile FROM pfile='initorabase.ora';

Step 3. Add the following to the sqlnet.ora:

sqlnet.authentication_services = (NTS)

Method 2:

Step 1: Connect as system/manager in SQL*Plus and create the Oracle user:

CREATE USER ops$oracle IDENTIFIED EXTERNALLY;

GRANT create session TO ops$oracle;

Step 2: Create a user in the operating system named oracle if one does not already exist.

Step 3: Go to the command line (terminal window in UNIX, cmd in Windows) and type 'sqlplus'
(without the single quotes).

Method 3:

Step 1: Connect as system/manager in SQL*Plus and create the Oracle user:

CREATE USER "PC100USER" IDENTIFIED EXTERNALLY;

where PC100 is the name of the client computer. Then


GRANT CREATE SESSION TO "PC100USER";

Step 2: Create a user in Windows named USER.

Step 3: Log on to Windows as USER and go to the C: command line.

The following methods for authenticating database administrators replace the CONNECT
INTERNAL syntax provided with earlier versions of Oracle:

operating system authentication


password file

Depending on whether the database is to be administered locally on the machine where it
resides, or many different databases are to be administered from a single remote client, the
DBA can choose between operating system authentication and password files to authenticate
database administrators.

On most operating systems, OS authentication for database administrators involves placing the
OS username of the database administrator in a special group or giving that OS username a
special process right.

The database uses password files to keep track of database usernames that have been granted
administrator privileges.

When the DBA grants SYSDBA or SYSOPER privileges to a user in Oracle, that user's name
and privilege information are added to a password file. If the server does not have an
EXCLUSIVE password file, that is, if the initialization parameter
REMOTE_LOGIN_PASSWORDFILE is NONE or SHARED, the DBA receives an error
message when attempting to grant these privileges.

A user's name remains in the password file only while that user has at least one of these two
privileges. When the DBA revokes the last of these privileges from a user, that user is removed
from the password file. To create a password file and add new users in Oracle to it:

1. Follow the instructions for creating a password file.
2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE.
3. Connect with SYSDBA privileges as shown in the following example:
   CONNECT SYS/change_on_install AS SYSDBA
4. Start up the instance and create the database if necessary, or mount and open an
   existing database.
5. Create users as necessary. Grant SYSOPER or SYSDBA privileges to DBAs and other
   users as appropriate.
These users in Oracle are now added to the password file and can connect to the
database as SYSOPER or SYSDBA with a username and password (instead of using
SYS). The use of a password file does not prevent OS-authenticated users in Oracle
from connecting if they meet the criteria for OS authentication.
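Step 1 above refers to Oracle's orapwd utility, which creates the password file; a minimal sketch of its use follows, with the file name, password, and entry count chosen for illustration. The query against V$PWFILE_USERS then lists the users currently held in the password file.

```
$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=secret entries=5

SQL: SELECT username, sysdba, sysoper FROM v$pwfile_users;
```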
Oracle Backup: An introduction.

When performing an Oracle backup, you create a representative copy of the present original
data. If/when the original data is lost, the DBA can use the backup to reconstruct lost
information.

This database copy includes important parts of the database, such as the control file, archive
logs and datafiles.

In the event of a media failure, the database backup is the key to successfully recovering data.
A few common questions which are related to database backup in general are:

The frequency of the backup


Choosing a strategy for the backup
Type of backup

Frequent and regular whole database or tablespace backups are essential for any recovery
scheme.

The frequency of backups should be based on the rate or frequency of changes to database
data such as insertions, updates, and deletions of rows in existing tables, and addition of new
tables.

If a database's data is changed at a high rate, the database backup frequency should be
proportionally high.

When the Oracle database is created, the DBA has to plan beforehand for the protection of the
database against potential failures.

There are two modes of handling an Oracle backup according to which the DBA can choose an
appropriate strategy:

NOARCHIVELOG mode: If it is acceptable to lose a limited amount of data if there is a disk


failure, you can operate the database in NOARCHIVELOG mode and avoid the extra work
required to archive filled online redo log files.

ARCHIVELOG mode: If it is not acceptable to lose any data, the database must be operated in
ARCHIVELOG mode, ideally with a multiplexed online redo log. If it is needed to recover to a
past point in time to correct a major operational or programmatic change to the database, be
sure to run in ARCHIVELOG mode and perform control file backups whenever making structural
changes.

Recovery to a past point in time is facilitated by having a backup control file that reflects the
database structure at the desired point in time. In that case, do not operate the database in
NOARCHIVELOG mode, because the required whole database backups, taken while the
database is shut down, cannot be made frequently. High-availability databases therefore
always operate in ARCHIVELOG mode to take advantage of open data file backups.
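Switching a database into ARCHIVELOG mode is done while the database is mounted but not open. A minimal sketch of the sequence (standard SQL*Plus commands) is:

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST
```

ARCHIVE LOG LIST at the end confirms the new log mode of the database.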

Backup Strategies in NOARCHIVELOG Mode

If a database is operated in NOARCHIVELOG mode, filled groups of online redo log files are not
archived.

Therefore, the only protection against a disk failure is the most recent whole backup of the
database.

Whenever you alter the physical structure of a database operating in NOARCHIVELOG mode,
immediately take a consistent whole database backup. A whole database backup fully reflects
the new structure of the database.

Backup Strategies in ARCHIVELOG Mode

If a database is operating in ARCHIVELOG mode, filled groups of online redo log files are being
archived.

Therefore, the archived redo log coupled with the online redo log and data file backups can
protect the database from a disk failure, providing for complete recovery from a disk failure to
the instant that the failure occurred (or, to the desired past point-in-time).

Following are common backup strategies for a database operating in ARCHIVELOG mode:

When the database is initially created, perform a whole database, closed backup of the entire
database. This initial whole database backup is the foundation of backups because it provides
backups of all data files and the control file of the associated database.
Subsequent whole database backups are not required, and if a database must remain open at
all times, whole database, closed backups are not feasible. Instead, the DBA can take open
database or tablespace backups to keep database backups up-to-date.
Every time a structural change is made to the database, take a control file backup. If operating
in ARCHIVELOG mode and the database is open, use either Recovery Manager or the
ALTER DATABASE command with the BACKUP CONTROLFILE option.
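The control file backup mentioned above can be taken in two forms with the ALTER DATABASE command; the destination file name below is chosen for illustration:

```sql
-- Binary copy of the current control file.
ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control.bkp';

-- Trace-file script that can recreate the control file.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```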

The following methods are valid for backing up an Oracle database:

Export/Import - Oracle exports are "logical" database backups (not physical) as they extract
data and logical definitions from the database into a file.
Other Oracle backup strategies normally back up the physical data files. With exports one can
selectively re-import tables but cannot roll forward from a restored export file.
To completely restore a database from an export file one practically needs to recreate the
entire database. Full exports include more information about the database in the export file as
compared to user level exports.

1. Shut down the database from sqlplus or server manager.


2. Backup all files to secondary storage (eg. tapes). Ensure that you backup all data files,
all control files and all log files.
3. When completed, restart your database.

The Oracle Export utility creates an Oracle backup by writing data from an Oracle database to
operating system files in an Oracle database format.

Export files store information about schema objects created for a database. Database exports
are not a substitute for a whole Oracle backup and don't provide the same recovery advantages
that the built-in functionality of Oracle offers.

Cold or Off-line Oracle backup - Shut the database down and back up ALL data, log, and
control files. A cold backup is a backup performed while the database is off-line and
unavailable to its users.
Hot or On-line Oracle Backup - A hot backup is a backup performed while the database is
online and available for read/write.
If the database is available and in ARCHIVELOG mode, set the tablespaces into backup mode
and backup their files. Also remember to backup the control files and archived redo log files.
Except for Oracle exports, one can only do on-line Oracle backup when running in
ARCHIVELOG mode.
RMAN Backup - While the database is off-line or on-line, use the "rman" utility to backup the
database.
The Recovery Manager utility manages the Oracle backup, restore and recovery operations of
Oracle databases. Recovery Manager uses information about the database to automatically
locate, then back up, restore and recover datafiles, control files and archived redo logs.
Recovery Manager gets the required information from either the databases' control file, or via
a central repository of information called a recovery catalog, which is maintained by Recovery
Manager.
You can perform Recovery Manager Backups using Oracle Enterprise Manager. Oracle
Enterprise Manager-Backup Manager is a GUI interface to Recovery Manager that enables
you to perform backup and recovery via a point-and-click method.
Recovery Manager is a command line interface (CLI) that directs an Oracle server process to
back up, restore or recover the database it is connected to. The Recovery Manager program
issues commands to an Oracle server process. The Oracle server process reads the datafile,
control file or archived redo log being backed up, or writes the datafile, control file or archived
redo log being restored or recovered.

Do the following queries to get a list of all files that need to be backed up:

select member from sys.v_$datafile;

select member from sys.v_$logfile;

select name from sys.v_$controlfile;

Sometimes Oracle takes forever to shut down with the "immediate" option. As a workaround to
this problem, shut down using these commands:

alter system checkpoint;


shutdown abort
startup restrict
shutdown immediate

Each tablespace that needs to be backed-up must be switched into backup mode before
copying the files out to secondary storage.

This can be done as shown below:

ALTER TABLESPACE xyz BEGIN BACKUP;


! cp xyfFile1 /backupDir/
ALTER TABLESPACE xyz END BACKUP;

Recovery Manager command:

run {
  allocate channel t1 type 'SBT_TAPE';
  backup
    format 'df_%s_%t'
    (datafile 10);
}

When Recovery Manager executes the above command, it sends the Oracle backup request to
the Oracle server performing the backup.

The Oracle server process identifies the output channel as the type 'SBT_TAPE', and requests
the Media Management Library to load a tape and write the output specified.
Oracle Recovery : Restoring the database

Oracle recovery makes it possible to restore a physical backup, reconstruct it, and make it
available to the Oracle server.

To recover a restored datafile is to update it using redo records, i.e., records of changes made
to the database after the backup was taken.

If you use Oracle Recovery Manager (RMAN), you can also recover restored datafiles with an
incremental backup, which is a backup of a datafile that contains only changed data blocks.

Oracle performs crash recovery and instance recovery automatically after an instance failure.

Instance recovery is an automatic procedure that involves two distinct operations: rolling forward
the backup to a more current time by applying online redo records and rolling back all changes
made in uncommitted transactions to their original state.

The question is: What are the various methods to perform an Oracle recovery that can be used
by the DBA?

There are three basic types of Oracle recovery:

Instance recovery
Crash recovery
Media recovery.

Oracle performs the first two types of recovery automatically at instance startup and only media
recovery requires you to issue commands.

Instance Recovery:

Instance recovery, which is only possible in an OPS configuration, occurs in an open database
when one instance discovers that another instance has crashed.

A surviving instance automatically uses the redo log to recover the committed data in the
database buffers that was lost when the instance failed. Further, Oracle undoes any
transactions that were in progress on the failed instance when it crashed and then clears any
locks held by the crashed instance after the Oracle recovery is complete.

Crash Recovery:

Crash recovery occurs when either a single-instance database crashes or all instances of a
multi-instance database crash.
In crash recovery, an instance must first open the database and then execute recovery
operations. In general, the first instance to open the database after a crash or SHUTDOWN
ABORT automatically performs crash recovery.

Media Recovery:

Unlike crash and instance recovery, media recovery is executed on your command.

In media recovery, you use online and archived redo logs and (if using RMAN) incremental
backups to make a restored backup current or to update it to a specific time. It is called media
recovery because you usually perform it in response to media failure.

As we know, recovery is the process of applying redo logs to the database to roll it forward. One
can roll-forward until a specific point-in-time, which is before the disaster occurred, or roll-
forward until the last transaction recorded in the log files, so the basic command used for
recovery is:

sql: connect SYS as SYSDBA


sql: RECOVER DATABASE UNTIL TIME '2001-03-06:16:00:00' USING BACKUP
CONTROLFILE;

The main tool used for Oracle recovery is Recovery Manager, which is a command line
interface (CLI) that directs an Oracle server process to back up, restore or recover the database
it is connected to.

The Recovery Manager program issues commands to an Oracle server process. The Oracle
server process reads the datafile, control file or archived redo log being backed up, or writes the
datafile, control file or archived redo log being restored or recovered.

When an Oracle server process reads datafiles, it detects any split blocks and re-reads them to
get a consistent block.

Hence, you should not put tablespaces in hot backup mode when using Recovery Manager to
perform open backups.

Recovery Manager provides a way to:

Configure frequently executed backup operations


Generate a printable log of all backup and recovery actions
Use the recovery catalog to automate both media restore and recovery operations
Perform parallel and automatic backups and restores
Find datafiles that require a backup based on user-specified limits on the amount of redo that
must be applied
Back up the database, individual tablespaces or datafiles
To use RMAN, a recovery catalog is not necessary. RMAN will always use the control file of the
target database to store backup and recovery operations. To use an Oracle recovery catalog,
you will first need to create a recovery catalog database and create a schema for it.

The catalog (database objects) will be located in the default tablespace of the schema owner.
The owner of the catalog cannot be the SYS user.

The Oracle recovery catalog database should be created on a different host, on different disks,
and in a different database from the target database on which the backup is taken.

The first step is to create a database for the Oracle recovery catalog. Before proceeding,
ensure the following prerequisites are met:

You have access to the SYS password for the database.


A normal tablespace named TOOLS exists and will be used to store the Oracle recovery
catalog.
The database is configured in the same way as all normal databases; for example, catalog.sql
and catproc.sql have been successfully run.

The DBA will start by creating a database schema usually called rman. Assign an appropriate
tablespace to it and grant it the recovery_catalog_owner role. The commands which are used
for this procedure are:

sqlplus sys
SQL: CREATE USER rman IDENTIFIED BY rman;
SQL: ALTER USER rman DEFAULT TABLESPACE tools TEMPORARY TABLESPACE temp;
SQL: ALTER USER rman QUOTA UNLIMITED ON tools;
SQL: GRANT CONNECT,RESOURCE,RECOVERY_CATALOG_OWNER TO rman;
SQL: exit;

Next, log in to rman and create the catalog schema.

rman catalog rman/rman


RMAN: create catalog tablespace tools;
RMAN: exit;

The DBA will now continue by registering the databases in the catalog:
rman catalog rman/rman target backdba/backdba
RMAN: register database;

Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides the
user with privileges to maintain and query the recovery catalog:

SQL: GRANT RECOVERY_CATALOG_OWNER TO rman;

Grant other desired privileges to the RMAN user:

SQL: GRANT CONNECT, RESOURCE TO rman;

After creating the catalog owner, now create the catalog itself by using the CREATE CATALOG
command within the RMAN interface.

This command will create the catalog in the default tablespace of the catalog owner.

rman catalog rman/rman@catdb

RMAN: create catalog;

Before letting RMAN use a recovery catalog, register the target database(s) in the recovery
catalog.

RMAN will obtain all information it needs to register the target database from the database itself.
As long as each target database has a distinct DBID, it is possible to register more than one
target database in the same recovery catalog.

Each database registered in a given catalog must have a unique database identifier (DBID), but
not necessarily a unique database name.
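The DBID of a target database can be checked before registration with a simple query against the V$DATABASE view:

```sql
SELECT dbid, name FROM v$database;
```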

It is also possible to remove or unregister a target database from the recovery catalog. This can
be done by running the following procedure while logged into the recovery catalog:

SQL: execute dbms_rcvcat.unregisterdatabase(db_key, db_id)

OR

To unregister a database, do the following:

1. Identify the database that you want to unregister. Run the following query from the
recovery catalog using Server Manager or SQL*Plus (connected as the RMAN user):
2. SQL: SELECT * FROM rc_database;
3. Remove the backup sets that belong to the database that you want to unregister:
o Find the backup sets of the database that you want to unregister: RMAN: list backup
set of database;
o Remove the backup sets that belongs only to the database you want to unregister.
o RMAN: allocate channel for delete type disk; RMAN: change backup set XXX delete;
Oracle indexes : Adding database performance

Oracle indexes : Proper database indexing is a crucial factor for your database performance.

Most Oracle databases have hundreds or even thousands of indexes. This large number of
indexes and their complexity make index tuning and monitoring a difficult task for the DBA.

As time goes, even originally efficient indexes may become inefficient due to various index
distortions caused by data changes in the indexed tables.

The question is: How to manage Oracle indexes and what different options are available to use
them?

Indexes are logically and physically independent of the data in the associated table. The DBA
can create or drop an index at anytime without affecting the base tables or other indexes. If the
DBA drops an index, all applications continue to work.

However, access to previously indexed data might be slower.

Indexes, being independent structures, require storage space.

Oracle automatically maintains and uses indexes after they are created. Oracle automatically
reflects changes to data, such as adding new rows, updating rows, or deleting rows, in all
relevant indexes with no additional action by users.

Oracle Text supports the creation of three types of Oracle indexes depending on Oracle
application and text source. The DBA uses the CREATE INDEX statement to create all Oracle
Text index types.

Context index:

Use this index to build a text retrieval application when the database consists of large coherent
documents. The DBA can index documents of different formats such as MSWord, HTML, XML,
or plain text.

With a context index, it is possible to customize the index in a variety of ways.

Ctxcat index:

Use this index type to improve mixed query performance. Suitable for querying small text
fragments with structured criteria like dates, item names, and prices that are stored across
columns.

Ctxrule Index:
Use a CTXRULE index to build a document classification application. The CTXRULE index is an
index created on a table of queries, where each query has a classification. Single documents
(plain text, HTML, or XML) can be classified using the MATCHES operator.
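As a sketch of the CTXRULE approach (table, column, and index names are hypothetical), a table of queries is indexed and an incoming document is then classified with the MATCHES operator:

```sql
-- Hypothetical query table; each row pairs a query with a category.
CREATE TABLE queries (
  query_id     NUMBER,
  category     VARCHAR2(30),
  query_string VARCHAR2(80)
);

INSERT INTO queries VALUES (1, 'US Politics', 'president OR congress');

CREATE INDEX queryx ON queries (query_string)
  INDEXTYPE IS CTXSYS.CTXRULE;

-- Classify a document: return the categories whose queries match it.
SELECT category FROM queries
 WHERE MATCHES(query_string,
               'The president addressed congress today.') > 0;
```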

Setting the environment for indexing:

1. Create a user to be used for the following examples:

CREATE USER ctx_demo


IDENTIFIED BY ctx_demo
DEFAULT TABLESPACE ctx_demod
TEMPORARY TABLESPACE temp;

GRANT CONNECT,RESOURCE,CTXAPP,DBA TO ctx_demo;

2. Set the instance parameter 'text_enable = FALSE'. Ensure that the parameter now is set
to FALSE.

3. Include $ORACLE_HOME/ctx/bin in your PATH variable. Set the LD_LIBRARY_PATH


environment variable to:
<oracle_home>/lib:<oracle_home>/ctx/lib
where <oracle_home> is the explicit full path of the Oracle home. Do not use the
$ORACLE_HOME environment variable here. The variable can also be set in the ENVS
section of the listener.ora file (note: version-specific paths):

SID_LIST_listener=
(SID_LIST=
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/8.1.7)
(ENVS=LD_LIBRARY_PATH =
/u01/app/oracle/product/8.1.7/ctx/lib:/u01/app/oracle/product/8.1.7/lib)
(PROGRAM = extproc)
)
(SID_DESC=
(SID_NAME=OEM1DB)
(ORACLE_HOME=/u01/app/oracle/product/8.1.7)
)
)

Managing Oracle Indexes:

The DBA creates Oracle indexes for a table after inserting or loading data into the table. When
an index is created on a table that already has data, Oracle must use sort space.
Oracle uses the sort space in memory allocated for the creator of the index (the amount per
user is determined by the initialization parameter SORT_AREA_SIZE). To create an Index :

CREATE INDEX emp_ename ON emp(ename)


TABLESPACE users
STORAGE (INITIAL 20K
NEXT 20k
PCTINCREASE 75)
PCTFREE 0;

The DBA can also create an index using an existing index as the data source. Re-creating
Oracle indexes based on an existing data source also removes intra-block fragmentation.

In fact, compared to dropping the index and using the CREATE INDEX command, re-creating
an existing index offers better performance. Issue the following statement to re-create an
existing index:

ALTER INDEX index_name REBUILD;

To alter an index, the database schema must contain the index or it must have the ALTER ANY
INDEX system privilege.

The DBA can alter Oracle indexes only to change the transaction entry parameters or to change
the storage parameters but it is not possible to change its column structure.

ALTER INDEX emp_ename


INITRANS 5
MAXTRANS 10
STORAGE (PCTINCREASE 50);

Altering an index while maintaining constraints:

ALTER TABLE emp


ENABLE PRIMARY KEY USING INDEX
PCTFREE 5;

The DBA can monitor an index's efficiency of space usage at regular intervals by first analyzing
the index's structure and then querying the INDEX_STATS view:

SELECT pct_used FROM sys.index_stats WHERE name = 'indexname';
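Note that the INDEX_STATS view is populated only by a preceding ANALYZE ... VALIDATE STRUCTURE of the index in the same session; a sketch of the full sequence, reusing the emp_ename index from the earlier example, is:

```sql
ANALYZE INDEX emp_ename VALIDATE STRUCTURE;

SELECT name, height, pct_used, del_lf_rows
  FROM index_stats;
```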

To drop an index, the index must be contained in the database schema and the DBA will follow
the procedure as given below:

DROP INDEX emp_ename;

Detecting / Viewing Errors during Index Creation


There are times when index creation operations fail. Whenever the system encounters an error
indexing a row, it logs the error into an Oracle Text view.

The DBA should be connected to the database as the user who created the index and query the
view CTX_USER_INDEX_ERRORS. The DBA may also view errors on ALL indexes in the
database by connecting as CTXSYS and querying the view

CTX_INDEX_ERRORS:

SELECT err_timestamp, err_text


FROM ctx_user_index_errors
ORDER BY err_timestamp DESC;

DML indexing:

DML operations on the base table occur when documents are inserted into, updated in, or
deleted from it.

When documents in the base table are inserted, updated, or deleted, their ROWIDs are held in
a DML queue until you synchronize the index. You can view this queue with
the CTX_USER_PENDING view.

For example, to view pending DML on all your indexes, issue the following statement:

SELECT pnd_index_name, pnd_rowid,


TO_CHAR(pnd_timestamp, 'dd-mon-yyyy hh24:mi:ss') timestamp
FROM ctx_user_pending;

Synchronizing the Index:

Synchronizing the index involves processing all pending updates, inserts, and deletes to the
base table.

You can do this in PL/SQL with the CTX_DDL.SYNC_INDEX procedure.

The following example synchronizes the index with 2 megabytes of memory:

BEGIN
ctx_ddl.sync_index('myindex', '2M');
END;

Setting Background DML:

You can set CTX_DDL.SYNC_INDEX to run automatically at regular intervals using the
DBMS_JOB.SUBMIT procedure. Oracle Text includes a SQL script you can use to do this. The
location of this script is:

$ORACLE_HOME/ctx/sample/script/drjobdml.sql
To use this script, the DBA has to be the index owner and must have execute privileges on
the CTX_DDL package. He/she should also set the job_queue_processes parameter in the
Oracle initialization file.

For example, to set the index synchronization to run every 360 minutes on myindex, the DBA
can issue the following in SQL*Plus:

SQL: @drjobdml myindex 360

Function indexes:

Function-based indexes let the DBA and the programmer index functions easily and efficiently.
This capability allows case-insensitive searches or sorts, searches on complex equations, and
efficient extension of the SQL language by implementing your own functions and operators and
then searching on them.

The following is a list of what needs to be done to use function based Oracle indexes:

The DBA must have the QUERY REWRITE system privilege to create function-based indexes on
tables in his or her own schema.
The DBA must have the GLOBAL QUERY REWRITE system privilege to create function-based
Oracle indexes on tables in other schemas.
For the optimizer to use function based Oracle indexes, the following session or system
variables must be set:

QUERY_REWRITE_ENABLED=TRUE
QUERY_REWRITE_INTEGRITY=TRUSTED

The DBA may enable these at either the session level with ALTER SESSION or at the system
level via ALTER SYSTEM or by setting them in the init.ora parameter file.

The meaning of QUERY_REWRITE_ENABLED is to allow the optimizer to rewrite the query so that it
can use the function-based index. The meaning of QUERY_REWRITE_INTEGRITY=TRUSTED is to tell
the optimizer to trust that the code marked deterministic by the programmer is in fact
deterministic.

If the code is in fact not deterministic, the resulting rows from the index may be incorrect.

Use the Cost Based Optimizer. Function-based indexes are visible only to the Cost Based
Optimizer and will never be used by the Rule Based Optimizer.
Use SUBSTR() to constrain return values from user-written functions that return VARCHAR2
or RAW types.

Once the above list has been satisfied, it is as easy as CREATE INDEX from there on in. The
optimizer will find and use the Oracle indexes at runtime for the user.
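As a sketch, the steps above might look as follows; the table and column names are
assumptions for illustration:

```sql
-- Enable query rewrite for this session (could also be set
-- system-wide or in the init.ora file).
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

-- Create a function-based index for case-insensitive searches
-- on a hypothetical EMPLOYEES.LAST_NAME column.
CREATE INDEX emp_upper_name_idx
  ON employees (UPPER(last_name));

-- The optimizer can now use the index for queries such as:
SELECT * FROM employees WHERE UPPER(last_name) = 'SMITH';
```

Without the two session settings, the optimizer would ignore the index and fall back to a
full table scan for the query above.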
Oracle Export : Extracting data.

Oracle Export provides a simple way for you to transfer data objects between Oracle databases,
even if they reside on platforms with different hardware and software configurations.

Oracle Export extracts the object definitions and table data from an Oracle database and stores
them in an Oracle binary-format Export dump file located on disk or tape.

Such files can then be FTPed or physically transported to a different site and used, with the
Import utility, to transfer data between databases that are on machines not connected via a
network or as backups in addition to normal backup procedures.

There are a few points regarding the enforcement of Oracle security policies.

For any tables protected by an Oracle security policy, only rows with labels authorized for read
access will be exported; unauthorized rows will not be included in the export file.
Consequently, to perform an Oracle export of all the data in protected tables, you must have a
privilege (such as FULL or READ) which gives you complete access.
SQL statements to reapply policies are exported along with tables and schemas that are
exported. These statements are executed during import to reapply policies with the same
enforcement options as in the original database.
The HIDE property is not exported. When protected tables are exported, the label columns in
those tables are also exported (as numeric values). However, if a label column is hidden, it is
exported as a normal, unhidden column.
The LBACSYS schema cannot be exported due to the use of opaque types in Oracle Label
Security. To export an entire database, you must individually specify all of the schemas and/or
tables (except for the LBACSYS schema). Use standard Oracle backup techniques to back up
the LBACSYS schema.

If a user attempts to access rows containing invalid numeric labels, the operation will fail.

To use Oracle Export, you must run the script CATEXP.SQL or CATALOG.SQL (which runs
CATEXP.SQL) after the database has been created.

CATEXP.SQL or CATALOG.SQL needs to be run only once on a database. You do not need to
run it again before you perform the export. The script performs the following tasks to prepare the
database for Export:

Creates the necessary export views
Assigns all necessary privileges to the EXP_FULL_DATABASE role
Assigns the EXP_FULL_DATABASE role to the DBA role
Before you run Oracle Export, ensure that there is sufficient disk or tape storage space to write
the export file.

If there is not enough space, Oracle Export terminates with a write-failure error.

You can use table sizes to estimate the maximum space needed. Table sizes can be found in
the USER_SEGMENTS view of the Oracle data dictionary. The following query displays disk
usage for all tables:

SELECT SUM(bytes) FROM user_segments
WHERE segment_type='TABLE';

The result of the query does not include disk space used for data stored in LOB (large object) or
VARRAY columns or partitions.
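If LOB columns are in use, one possible (illustrative) variation sums the LOB segment types
as well; the segment type names below are those reported in USER_SEGMENTS:

```sql
-- Sketch: include LOB data and LOB index segments in the estimate.
SELECT segment_type, SUM(bytes)
  FROM user_segments
 WHERE segment_type IN ('TABLE', 'LOBSEGMENT', 'LOBINDEX')
 GROUP BY segment_type;
```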

You can invoke Oracle Export in one of the following ways:

Enter the following command: exp username/password PARFILE=filename. PARFILE is a
file containing the export parameters you typically use. If you use different parameters for
different databases, you can have multiple parameter files. This is the recommended method.
Enter the command exp username/password followed by the parameters you need. Note:
The number of parameters cannot exceed the maximum length of a command line on the
system... (Which shows that some parts of Oracle are still VERY old...)
Enter only the command exp username/password to begin an interactive session and let
Export prompt you for the information it needs. The interactive method provides less
functionality than the parameter-driven method. It exists for backward compatibility.

You can use a combination of the first and second options. That is, you can list parameters both
in the parameters file and on the command line.

In fact, you can specify the same parameter in both places. The position of the PARFILE
parameter and other parameters on the command line determines what parameters override
others.

For example, assume the parameters file params.dat contains the parameter INDEXES=Y and
Oracle Export is invoked with the following line:

exp system/manager PARFILE=params.dat INDEXES=N

In this case, because INDEXES=N occurs after PARFILE=params.dat, INDEXES=N overrides
the value of the INDEXES parameter in the PARFILE.

You can specify the username and password in the parameter file, although, for security
reasons, this is not recommended. If you omit the username/password combination, Oracle
Export prompts you for it.
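A parameter file is a plain-text file of KEYWORD=value lines (lines starting with # are
comments). The following sketch shows what such a file might contain; the file name, table
names, and dump file name are all assumptions for illustration:

```
# params.dat - illustrative Export parameter file
TABLES=(employees, departments)
FILE=hr_tables.dmp
GRANTS=Y
INDEXES=Y
ROWS=Y
LOG=hr_export.log
```

It would then be invoked as: exp username/password PARFILE=params.dat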
Oracle Import : Loading data into the database

Oracle Import inserts the data objects extracted from one Oracle database by the Oracle
Export utility into another Oracle database. Oracle Export dump files can only be read by Import.

Oracle Import reads the object definitions and table data that the Oracle Export utility extracted
from an Oracle database and stored in an Oracle binary-format Oracle Export dump file located
typically on disk or tape.

Such files are transported to a different site and used, with the Import utility, to transfer data
between databases that are on machines not connected via a network or as backups in addition
to normal backup procedures.

The question which comes to mind is: How can we perform import and export functions?

To use the Oracle Import utility, prepare the import database and ensure that the import user
has the proper authorizations.

Before you can use the Import utility, you must prepare the import database, as follows:

1. Create any security policies, which protect the data to be imported. The policies must
use the same column names as in the export database.
2. Define in the import database all of the label components and individual labels used in
tables being imported. Tag values assigned to the policy labels in each database must
be the same.

Verifying Import User Authorizations:

To successfully import data under Oracle security, the user running the Oracle Import operation
must be authorized for all of the labels required to insert the data and labels contained in the
export file.

Errors will be raised upon import if the following requirements are not met:

Requirement 1:

To assure that all rows can be imported, the user must have the policy_DBA role for all policies
with data being imported.

After each schema or table is imported, any policies from the export database are reapplied to
the imported objects.

Requirement 2:

The user must also have the ability to write all rows that have been exported. This can be
accomplished by one of the following methods:
The user can be granted the FULL privilege.
A user-defined labeling function can be applied to the table.
The user can be given sufficient authorization to write all labels contained in the import file.

Defining Data Labels for Import:

The label definitions at the time of import must include all of the policy labels used in the export
file.

You can use the views

DBA_SA_LEVELS,
DBA_SA_COMPARTMENTS,
DBA_SA_GROUPS, and
DBA_SA_LABELS

in the export database to design SQL scripts that re-create the label components and labels for
each policy in the import database.

The following example (in SQL*Plus) shows how to generate a PL/SQL block that re-creates the
individual labels for the HR policy:

set serveroutput on

BEGIN
  dbms_output.put_line('BEGIN');
  FOR l IN
    (SELECT label_tag, label
       FROM dba_sa_labels
      WHERE policy_name = 'HR'
      ORDER BY label_tag) LOOP
    dbms_output.put_line
      ('  SA_LABEL_ADMIN.CREATE_LABEL(''HR'', ' || l.label_tag || ', ''' || l.label || ''');');
  END LOOP;
  dbms_output.put_line('END;');
  dbms_output.put_line('/');
END;
/

If the individual labels do not exist in the import database with the same numeric values and the
same character string representations as in the export database, then the label values in the
imported tables will be meaningless.

The numeric label value in the table may refer to a different character string representation, or it
may be a label value that has not been defined at all in the import database.

One of the most important issues for an Oracle administrator is tracking the execution of an
Oracle import.
For very large tables, the Oracle Import utility can take many hours, and the DBA needs to know
the rate at which the utility is adding rows to the table. To monitor how fast rows are imported
from a running import job, try the following method.

SELECT SUBSTR(sql_text, INSTR(sql_text,'INTO "'), 30) table_name
     , rows_processed
     , ROUND((SYSDATE - TO_DATE(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60, 1) minutes
     , TRUNC(rows_processed /
             ((SYSDATE - TO_DATE(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_minute
  FROM sys.v_$sqlarea
 WHERE sql_text LIKE 'INSERT %INTO "%'
   AND command_type = 2
   AND open_versions > 0;
Oracle Partitions : Divide and increase performance!

Oracle partitioning addresses the key problem of supporting very large tables and indexes by
allowing you to decompose them into smaller and more manageable pieces called partitions.

Once partitions are defined, SQL statements can access and manipulate the partitions rather
than entire tables or indexes.

Partitions are especially useful in data warehouse applications, which commonly store and
analyze large amounts of historical data.

How do we create and manage partitions?

Oracle partitions using DML:

The following DML statements contain an optional partition specification for non-remote
partitioned tables:

INSERT
UPDATE
DELETE
LOCK TABLE
SELECT

For example:

SELECT * FROM schema.table PARTITION(part_name);

This syntax provides a simple way of viewing individual partitions as tables: A view can be
created which selects from just one partition using the partition-extended table name, and this
view can be used in lieu of a table.

With such views you can also build partition-level access control mechanisms by granting
(revoking) privileges on these views to (from) other users or roles.
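A partition-level access-control sketch might look like the following; the table, partition,
and grantee names are assumptions for illustration:

```sql
-- Expose a single partition through a view...
CREATE VIEW sales_q1 AS
  SELECT * FROM sales PARTITION (q1_2004);

-- ...and grant access to that partition only.
GRANT SELECT ON sales_q1 TO reporting_user;
```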

The use of partition-extended table names has the following restrictions:

A partition-extended table name cannot refer to a remote schema object.
The partition-extended table name syntax is not supported by PL/SQL.
A partition extension must be specified with a base table. No synonyms, views, or any other
schema objects are allowed.

In order to provide partition independence for DDL and utility operations, Oracle supports DML
partition locks.
Partition independence allows you to perform DDL and utility operations on selected partitions
without disturbing activities on other partitions.

The purpose of a partition lock is to protect the data in an individual partition while multiple users
are accessing that partition or other partitions in the table concurrently.

Managing Oracle partitions:

Create a partitioned table:

Creating Oracle partitions is very similar to creating a table or index. You must use the CREATE
TABLE statement with the PARTITION BY clause.

The first step to create a partitioned table would be to identify the column(s) to partition on and
the range of values which go to each partition. Then you determine the tablespaces where each
partition should go.

Here is a script to create a simple partitioned table:

CREATE TABLE AA_GENERAL_ATTENDANCE
(GL_MARKS_MONTH NUMBER(4),
GL_BATCH VARCHAR2(4),
GL_JIB VARCHAR2(1),
... ... ... ... GLR_OVER_UNDER_IND VARCHAR2(1))
PCTFREE 0 PCTUSED 40 INITRANS 1
STORAGE (INITIAL 250M NEXT 10M MINEXTENTS 1
MAXEXTENTS 1000 PCTINCREASE 0)
PARTITION BY RANGE (GL_MARKS_MONTH)
(PARTITION SSTN7912 VALUES LESS THAN (8000)
TABLESPACE SSTN7912
STORAGE (INITIAL 100M NEXT 10M PCTINCREASE 0)
, PARTITION SSTN8012 VALUES LESS THAN (8100)
TABLESPACE SSTN8012,
PARTITION SSTN8112 VALUES LESS THAN (8200)
TABLESPACE SSTN8112,
PARTITION SSTN8212 VALUES LESS THAN (8300)
TABLESPACE SSTN8212,
... ... ... ...
PARTITION SSTN9712 VALUES LESS THAN (9800)
TABLESPACE SSTN9712,
PARTITION SSTN9801 VALUES LESS THAN (MAXVALUE)
TABLESPACE SSTN9801
STORAGE (INITIAL 50M NEXT 5M PCTINCREASE 0)
);

Moving Oracle partitions:

You can use the MOVE PARTITION clause to move a partition. For example, a DBA wishes to
move the most active partition to a tablespace that resides on its own disk (in order to balance
I/O).
The DBA can issue the following statement:

ALTER TABLE aaa MOVE PARTITION bbb
TABLESPACE rrr NOLOGGING;

This statement always drops the partition's old segment and creates a new segment, even if
you don't specify a new tablespace.

When the partition you are moving contains data, MOVE PARTITION marks the matching
partition in each local index, and all global index partitions as unusable. You must rebuild these
index partitions after issuing MOVE PARTITION.

Adding Oracle partitions:

You can use the ALTER TABLE ADD PARTITION statement to add a new partition to the "high"
end.

If you wish to add a partition at the beginning or in the middle of a table, or if the partition bound
on the highest partition is MAXVALUE, you should instead use the SPLIT PARTITION
statement.

When the partition bound on the highest partition is anything other than MAXVALUE, you can
add a partition using the ALTER TABLE ADD PARTITION statement.

ALTER TABLE edu
ADD PARTITION jan99 VALUES LESS THAN ('990201')
TABLESPACE tsjan99;

When there are local indexes defined on the table and you issue the ALTER TABLE ... ADD
PARTITION statement, a matching partition is also added to each local index.

Since Oracle assigns names and default physical storage attributes to the new index partitions,
you may wish to rename or alter them after the ADD operation is complete.

Dropping Oracle partitions:

You can use the ALTER TABLE DROP PARTITION statement to drop Oracle partitions.

If there are local indexes defined for the table, ALTER TABLE DROP PARTITION also drops
the matching partition from each local index.

You cannot explicitly drop a partition for a local index. Instead, local index partitions are dropped
only when you drop a partition from the underlying table.

If, however, the partition contains data and global indexes, and you leave the global indexes in
place during the ALTER TABLE DROP PARTITION statement which marks all global index
partitions unusable, you must rebuild them afterwards.
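In more recent Oracle releases you can instead let the DROP maintain the global indexes in
the same operation, avoiding the separate rebuild. A sketch, with illustrative table and
partition names:

```sql
-- Drop a partition and keep global indexes usable in one statement.
ALTER TABLE sales DROP PARTITION q1_2001
  UPDATE GLOBAL INDEXES;
```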

Truncating Partitioned Tables:


You can use the ALTER TABLE TRUNCATE PARTITION statement to remove all rows from a
table partition with or without reclaiming space.

If there are local indexes defined for this table, ALTER TABLE TRUNCATE PARTITION also
truncates the matching partition from each local index.
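An illustrative sketch (table and partition names assumed) showing both space-handling
variants:

```sql
-- Remove all rows from one partition and return its space
-- to the tablespace:
ALTER TABLE sales TRUNCATE PARTITION q1_2001 DROP STORAGE;

-- Or keep the allocated space for future inserts:
ALTER TABLE sales TRUNCATE PARTITION q1_2001 REUSE STORAGE;
```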

Splitting Oracle partitions:

You can split a table partition by issuing the ALTER TABLE SPLIT PARTITION statement.

If there are local indexes defined on the table, this statement also splits the matching partition in
each local index.

Because Oracle assigns system-generated names and default storage attributes to the new
index partitions, you may wish to rename or alter these index partitions after splitting them.

If the partition you are splitting contains data, the ALTER TABLE SPLIT PARTITION statement
marks the matching partitions (there are two) in each local index, as well as all global index
partitions, as unusable.

You must rebuild these index partitions after issuing the ALTER TABLE SPLIT PARTITION
statement.
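A split of a range partition can be sketched as follows; the table, partition names, and
boundary value are assumptions for illustration:

```sql
-- Split one range partition into two at the boundary value 8050.
-- Rows below 8050 go to p_low, the rest to p_high.
ALTER TABLE sales SPLIT PARTITION p_2004 AT (8050)
  INTO (PARTITION p_low, PARTITION p_high);
```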

Exchanging Table Partitions:

You can convert a partition into a non-partitioned table, and a table into a partition of a
partitioned table by exchanging their data and index segments.

Exchanging table partitions is most useful when you have an application using non-partitioned
tables which you want to convert to partitions of a partitioned table.

Converting a Partition View into a Partitioned Table:

This part describes how to convert a partition view into a partitioned table. The partition view is
defined as follows:

CREATE VIEW students AS
SELECT * FROM students_jan95
UNION ALL
SELECT * FROM students_feb95
UNION ALL
...
SELECT * FROM students_dec95;

Initially, only the two most recent partitions, students_NOV95 and students_DEC95, will be
migrated from the view to the table by creating the partition table.

Each partition gets a temporary segment of 2 blocks (as a placeholder).


CREATE TABLE students_new (...)
TABLESPACE ts_temp STORAGE (INITIAL 2)
PARTITION BY RANGE (opening_date)
(PARTITION jan95 VALUES LESS THAN ('950201'),
...
PARTITION dec95 VALUES LESS THAN ('960101'));

Use the EXCHANGE command to migrate the tables to the corresponding partitions.

ALTER TABLE students_new
EXCHANGE PARTITION nov95
WITH TABLE students_nov95
WITH VALIDATION;

ALTER TABLE students_new
EXCHANGE PARTITION dec95 WITH TABLE students_dec95
WITH VALIDATION;

So now the placeholder data segments associated with the NOV95 and DEC95 partitions have
been exchanged with the data segments associated with the students_NOV95 and
students_DEC95 tables.

Redefine the students view:

CREATE OR REPLACE VIEW students AS
SELECT * FROM students_jan95
UNION ALL
SELECT * FROM students_feb95
UNION ALL
...
UNION ALL
SELECT * FROM students_new PARTITION (nov95)
UNION ALL
SELECT * FROM students_new PARTITION (dec95);

Drop the students_NOV95 and students_DEC95 tables, which own the placeholder segments
that were originally attached to the NOV95 and DEC95 partitions.

After all the tables in the UNION ALL view are converted into partitions, drop the view and
rename the partitioned table to the name of the view.

DROP VIEW students;
RENAME students_new TO students;

Rebuilding Index Partitions:

Some operations, such as ALTER TABLE DROP PARTITION, mark all Oracle partitions of a
global index unusable. You can rebuild global index partitions in two ways:
1. Rebuild each partition by issuing the ALTER INDEX REBUILD PARTITION statement
(you can run the rebuilds concurrently).
2. Drop the index and re-create it (probably the easiest method).

Merging Oracle partitions:

Partition-level Export and Import provide a way to merge Oracle partitions in the same table,
even though SQL does not explicitly support merging partitions.

A DBA can use partition-level Import to merge a table partition into the next highest partition on
the same table. To merge partitions, do an export of the partition you would like to merge, delete
the partition and do an import.
Oracle Scheduler : Putting tasks on autopilot

The Oracle Scheduler enables database administrators and application developers to control
when and where various tasks take place.

The Scheduler uses three main components:

A schedule specifies when and how many times a job is executed. Similar to programs,
schedules are database entities and can be saved in the database. The same schedule can be
used by multiple jobs.

A program is a collection of metadata about what will be run by the scheduler. This includes
information such as the program name, the type of program, and information about arguments
passed to the program.

A job specifies what needs to be executed and when. For example, the "what" could be a PL/SQL
procedure, an executable C program, a Java application, a shell script, or client-side PL/SQL.
You can specify the program (what) and schedule (when) as part of the job definition, or you
can use an existing program or schedule instead.
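A sketch of the three components wired together; all object names and the statistics call
are assumptions for illustration:

```sql
BEGIN
  -- A program: metadata about what to run.
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name   => 'GATHER_HR_STATS',
    program_type   => 'PLSQL_BLOCK',
    program_action => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''HR''); END;',
    enabled        => TRUE);

  -- A schedule: when and how often.
  DBMS_SCHEDULER.CREATE_SCHEDULE(
    schedule_name   => 'NIGHTLY_9PM',
    repeat_interval => 'FREQ=DAILY;BYHOUR=21');

  -- A job: binds the program to the schedule.
  DBMS_SCHEDULER.CREATE_JOB(
    job_name      => 'HR_STATS_JOB',
    program_name  => 'GATHER_HR_STATS',
    schedule_name => 'NIGHTLY_9PM',
    enabled       => TRUE);
END;
/
```

Because the program and schedule are stored as separate objects, either can be reused by
other jobs.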

The Oracle Scheduler provides complex enterprise scheduling functionality that enables an
organization to easily and effectively manage database maintenance and other routine tasks.

The Oracle Scheduler enables limited computing resources to be allocated appropriately among
competing jobs, thus aligning job processing with your business needs.

It leverages the reliability and scalability of the Oracle database to provide a robust environment
for running jobs.

BENEFITS:

The Oracle Scheduler provides a number of benefits:

Easy to use
Minimum development time is required since jobs can be easily defined and scheduled using
simple mouse operations.
Scheduler objects are modular and can be shared with other users thus reducing the
development time for new jobs.
The graphical interface makes it easy for users to manipulate existing Oracle Scheduler
objects. Object properties can be modified to create new objects.
The same operation can be performed on multiple jobs. For example, multiple jobs can be
stopped in one call.
Easy to manage
Jobs can be easily moved from one system to another, for example from a development
environment to production, by using the EXPORT or IMPORT utility in the database.
Exception based management enables administrators to quickly focus on jobs with errors
without having to wade through all the jobs.
Jobs can be managed as a group.
All Oracle Scheduler activities can be logged, providing an audit trail of all scheduler activities.
There is support for time zones, which makes it easy to manage jobs in any time zone.
The Scheduler can be accessed and controlled from anywhere, providing the utmost flexibility.
All Scheduler activity can be carried out from the same graphical interface.
Jobs can be filtered and sorted for easy viewing.
Existing database knowledge can be leveraged, therefore eliminating the need to learn a new
system and syntax.
Since the Oracle Scheduler is part of the database, it is platform independent, therefore jobs
can be managed similarly on all platforms.
There is no extra licensing or cost that is required for the Scheduler because it is a feature of
the database.
The Scheduler can immediately exploit new database features.
The Scheduler inherits all the database features: high security, high availability, and high
scalability.

The Oracle Scheduler uses the supplied PL/SQL package DBMS_SCHEDULER to handle
almost all scheduling. For comparison, here is an example of scheduling a task with the older
DBMS_JOB package:

VARIABLE jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT (
    job       => :jobno
   ,what      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''HR''); END;'
   ,next_date => TO_DATE('09/01/2004 21:00:00','MM/DD/YYYY HH24:MI:SS')
   ,interval  => 'TRUNC(SYSDATE) + 1 + 21/24');
  COMMIT;
END;
/
Monitoring and managing is a key activity in a job system. Jobs can be managed at a group
level, making it easy to manage a large number of jobs.

The GUI provides a central overview of all scheduler objects, enabling administrators to easily
monitor the progress of jobs. It enables them to quickly identify and rectify the malfunctions in
Scheduler activities.

Jobs can be filtered and sorted by any attribute of the job, making it easy to identify jobs that are
in an error state. Jobs can be viewed, altered, stopped or killed without having to go to another
system or screen, by simply clicking on the job name, making it easy to resolve problems.

Here is an example of scheduling that same task with DBMS_SCHEDULER. The parameters
now make sense when compared to DBMS_JOB. And gone at last is that wacky INTERVAL
parameter.

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'HR_STATS_REFRESH'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''HR''); END;'
   ,start_date      => TO_TIMESTAMP_TZ('09/01/2004 09:00 PM','MM/DD/YYYY HH:MI PM')
   ,repeat_interval => 'FREQ=DAILY'
   ,enabled         => TRUE
   ,comments        => 'Refreshes the HR schema every night at 9 PM'
  );
END;
/

To create a simple, self-contained job where attributes are specified in the job itself, perform the
following:

1. Select the Jobs link.
2. Click Create to create a new job.
3. Enter the following information, then click OK.
Name: ALTER_INDX001
Owner: HR
Enabled: Yes
Description: This job will coalesce index HR.EMP_NAME_IX on the EMPLOYEES table.
Logging Level: log job runs only (RUNS)
Job Class: DEFAULT_JOB_CLASS
Auto Drop: FALSE
Restartable: TRUE
Command: EXECUTE IMMEDIATE 'alter index HR.EMP_NAME_IX coalesce';

To create a saved schedule:

1. Select the Schedules link.
2. Click Create to create a new schedule.
3. Enter the following information, then click OK.
Name: SCHED001
Owner: HR
Description: Run at 11:00 PM every night for the next year
Start: Later
Date: Today's date
Time: 11:00 PM
Frequency: 1 Days
Repeat Until: Custom
Date: One year from today's date
Time: 11:00 PM
Creating a job that uses the program:

1. Click Jobs and then Create.
2. Enter the following information, then click Change Command Type.
Name: LOADDATA_JOB1
Schema: HR
Enabled: Yes
Description: This job uses the program loaddata
Logging Level: log job runs only (RUNS)
Job Class: DEFAULT_JOB_CLASS
Auto Drop: FALSE
Restartable: TRUE
3. Select Program Name and click the search light.
4. Select LOADDATA from the list and click Select.
5. Scroll down to the Arguments heading.
6. Select User defined from the drop-down list for the Option column. Enter /(wkdir
path)/loaddata1.dat in the Value column and click OK.
Oracle log files : An introduction

The Oracle server maintains the redo Oracle log files to minimize the loss of data in the
Database in case of an uncontrolled shutdown.

Online redo Oracle log files are filled with redo records. A redo record, also called a redo entry,
is made up of a group of change vectors, each of which is a description of a change made to a
single block in the database.

For example, if you change a salary value in an employee table, you generate a redo record
containing change vectors that describe changes to the data segment block for the table, the
rollback segment data block, and the transaction table of the rollback segments.

The question here is how are the Oracle log files maintained, and what information do we
have?

A couple of interesting Oracle views:

a) To view information on log files:

SELECT * FROM v$log;

b) To view information on log file history:

SELECT thread#, first_change#,
       TO_CHAR(first_time,'MM-DD-YY HH12:MIPM'),
       next_change#
FROM v$log_history;
The above shows you what log state your system is in. Read more about ARCHIVELOG in the
article on Oracle Backup.
Consider the parameters that can limit the number of online redo Oracle log files before setting
up or altering the configuration of an instance's online redo log.

The following parameters limit the number of online redo Oracle log files that you can add to a
database:

1. The MAXLOGFILES parameter used in the CREATE DATABASE statement determines
the maximum number of groups of online redo Oracle log files for each database.

Group values can range from 1 to MAXLOGFILES.

The only way to override this upper limit is to re-create the database or its control file.
Thus, it is important to consider this limit before creating a database.

If MAXLOGFILES is not specified for the CREATE DATABASE statement, Oracle uses
an operating system specific default value.

2. The MAXLOGMEMBERS parameter used in the CREATE DATABASE statement
determines the maximum number of members for each group.

As with MAXLOGFILES, the only way to override this upper limit is to re-create the
database or control file. Thus, it is important to consider this limit before creating a
database.
If no MAXLOGMEMBERS parameter is specified for the CREATE DATABASE
statement, Oracle uses an operating system default value.

At any given time, Oracle uses only one of the online redo log files to store redo records written
from the redo log buffer.

The online redo log file that Log Writer (LGWR) is actively writing to is called the current online
redo log file. Online redo Oracle log files that are required for instance recovery are called active
online redo log files. Online redo log files that are not required for instance recovery are called
inactive.

If you have enabled archiving (ARCHIVELOG mode), Oracle cannot reuse or overwrite an
active online log file until ARCn has archived its contents.

If archiving is disabled (NOARCHIVELOG mode), then when the last online redo log file fills,
writing continues by overwriting the first available active file. The best way to determine the
appropriate number of online redo log files for a database instance is to test different
configurations.

The optimum configuration has the fewest groups possible without hampering LGWR's ability to
write redo log information.

In some cases, a database instance may require only two groups. In other situations, a
database instance may require additional groups to guarantee that a recycled group is always
available to LGWR.

During testing, the easiest way to determine if the current online redo log configuration is
satisfactory is to examine the contents of the LGWR trace file and the database's alert log.

If messages indicate that LGWR frequently has to wait for a group because a checkpoint has
not completed or a group has not been archived, add groups.

LGWR writes to online redo log files in a circular fashion. When the current online redo log file
fills, LGWR begins writing to the next available online redo log file.

When the last available online redo log file is filled, LGWR returns to the first online redo log file
and writes to it, starting the cycle again. The numbers next to each line indicate the sequence in
which LGWR writes to each online redo log file.

Filled online redo log files are available to LGWR for reuse depending on whether archiving is
enabled or disabled:

If archiving is disabled (NOARCHIVELOG mode), a filled online redo log file is available once
the changes recorded in it have been written to the datafiles.
If archiving is enabled (ARCHIVELOG mode), a filled online redo log file is available to LGWR
once the changes recorded in it have been written to the datafiles and once the file has been
archived.

Operations on Oracle log files :


1. Forcing log file switches:
ALTER SYSTEM switch logfile;
or
ALTER SYSTEM checkpoint;
2. Clear A Log File If It Has Become Corrupt:
ALTER DATABASE CLEAR LOGFILE GROUP group_number;
3. This statement overcomes two situations where dropping redo logs is not possible: If
there are only two log groups and if the corrupt redo log file belongs to the current group:
ALTER DATABASE CLEAR LOGFILE GROUP 4;
4. Clear A Log File If It Has Become Corrupt And Avoid Archiving:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP group_number;
5. Use this version of clearing a log file if the corrupt log file has not been archived:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
6. Privileges Related To Managing Log Files:
ALTER DATABASE
ALTER SYSTEM
7. Init File Parameters Related To Log Files:
log_checkpoint_timeout ... set to 0
8. Managing Log File Members:
ALTER DATABASE
ADD LOGFILE MEMBER 'log_member_path_and_name'
TO GROUP group_number;
9. Adding log file group members:
ALTER DATABASE
ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;
10. Dropping log file group members:
ALTER DATABASE
DROP LOGFILE MEMBER 'log_member_path_and_name';
ALTER DATABASE
DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';
11. To create a new group of online redo log files, use the SQL statement ALTER
DATABASE with the ADD LOGFILE clause:

The following statement adds a new group of redo Oracle log files to the database:

ALTER DATABASE ADD LOGFILE
('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;
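Putting the statements above together, a hypothetical maintenance session might look like this (paths, sizes and group numbers are illustrative only):

```sql
-- Add a new group with two members, then force a switch into it.
ALTER DATABASE ADD LOGFILE GROUP 5
  ('/oracle/dbs/log5a.rdo', '/oracle/dbs/log5b.rdo') SIZE 500K;

ALTER SYSTEM SWITCH LOGFILE;

-- List every member of every group to verify the change:
SELECT group#, member FROM v$logfile ORDER BY group#;

-- A member can be dropped only while its group is not the CURRENT group:
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log5b.rdo';
```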
Redefining Oracle objects : Changing database objects in
real-time.

Redefining Oracle objects : Background:

Chris Lawson and Roger Schrag of Database Specialists, Inc., write in their paper "Don't Shut
Down That Database! Use Oracle 9i Online Object Redefinition Instead" that:

"The concept of dynamic instance parameters (the init.ora file) allowed DBAs to adjust certain
instance parameters such as sort_area_size without having to boot all users off the system and
restart the instance.

Oracle 8i introduced online index rebuilds and online table moves. However, the ability to make
substantial changes by redefining Oracle objects without restricting user access was still
lacking.

With the first release of Oracle 9i, a new concept called "online redefinition" was introduced. By
calling a supplied PL/SQL package called DBMS_REDEFINITION, it is now possible to perform
many types of table maintenance without taking away read or write access from users.

By redefining Oracle objects online, you can move a table to a new tablespace, change its
storage parameters, add columns, remove columns, rename columns, change data types,
change index and constraint definitions... and the list goes on."

The question is: How to do it?

The key to redefining Oracle objects online in Oracle 9i (and later versions) is a supplied
PL/SQL package called DBMS_REDEFINITION.

DBMS_REDEFINITION is used to redefine tables online, including their columns and column
names. Tables that cannot be redefined are:

Tables that have materialized views and materialized view logs defined on them cannot be
redefined online
Tables that are materialized view container tables and AQ tables cannot be redefined online
The overflow table of an IOT table cannot be redefined online

Procedures and the privileges defined in DBMS_REDEFINITION package:

DBMS_REDEFINITION : System privileges required for redefining Oracle objects:

GRANT create session TO uwclass;


GRANT create materialized view TO uwclass;
GRANT create table TO uwclass;
GRANT create trigger TO uwclass;
GRANT create view TO uwclass;

GRANT execute ON dbms_redefinition TO uwclass;

CAN_REDEF_TABLE: Determines if a given table can be redefined online:

dbms_redefinition.can_redef_table (
uname IN VARCHAR2,
tname IN VARCHAR2,
options_flag IN BINARY_INTEGER := 1);

exec dbms_redefinition.can_redef_table('REORG', 'EMP', dbms_redefinition.cons_use_pk);

If the procedure completes successfully without raising an exception, then the table is eligible
for online redefinition.

If the table is not eligible, then the procedure will raise an exception describing the problem.
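As a sketch (the REORG schema and EMP table are placeholder names), the eligibility check can be wrapped in an anonymous PL/SQL block so the exception message is printed rather than raised to the client:

```sql
-- Assumes SET SERVEROUTPUT ON in SQL*Plus so dbms_output is visible.
BEGIN
  dbms_redefinition.can_redef_table(
    uname        => 'REORG',
    tname        => 'EMP',
    options_flag => dbms_redefinition.cons_use_pk);
  dbms_output.put_line('EMP is eligible for online redefinition.');
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line('EMP is not eligible: ' || SQLERRM);
END;
/
```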

CREATE AN INTERIM TABLE:

In order to avoid interfering with the production table, the online redefinition process makes use
of an interim or staging table.

Instead of redefining Oracle objects directly on the production table, the changes are made to
the interim table and data is copied from the production table into the interim table.

At the end of the redefinition process when all data has been loaded into the interim table and
you are satisfied with the results, the production table and interim table will be swapped.

Be sure to give the table the same primary key as the existing production table.

However, do not create any indexes or declare any constraints on the interim table other than
the primary key. The interim table and its primary key should be created with the exact definition
and storage characteristics that are desired in the final, redefined table.

If column definitions will be changed (such as column names or data types), the interim table
should use the final column names and definitions.

If the redefined table is to be index-organized or partitioned, then the interim table should be
created that way.
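A minimal sketch of such an interim table, assuming a hypothetical REORG.EMP table whose ENAME column is being renamed to EMP_NAME and which is being moved to a new tablespace:

```sql
-- Hypothetical interim table: only the primary key is declared at this
-- stage; other constraints and indexes are added later in the process.
CREATE TABLE reorg.int_emp (
  empno    NUMBER(4)     CONSTRAINT int_emp_pk PRIMARY KEY,
  emp_name VARCHAR2(30),
  sal      NUMBER(7,2)
) TABLESPACE users;
```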

SYNC_INTERIM_TABLE:

Maintains synchronization between the original and interim table:

dbms_redefinition.sync_interim_table (
uname IN VARCHAR2, -- schema name
orig_table IN VARCHAR2, -- original table
int_table IN VARCHAR2); -- interim table
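A hypothetical call, using the placeholder REORG schema and the EMP/INT_EMP table pair used elsewhere on this page:

```sql
-- Applies DML performed on REORG.EMP since START_REDEF_TABLE to the
-- interim table, so the final swap stays short.
BEGIN
  dbms_redefinition.sync_interim_table(
    uname      => 'REORG',
    orig_table => 'EMP',
    int_table  => 'INT_EMP');
END;
/
```
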
START_REDEF_TABLE:

Starts the redefinition process.

Once the interim table has been created, the next step is to link the production table to it and
copy the data.

To do this we use the START_REDEF_TABLE procedure in the DBMS_REDEFINITION


package.

When calling this procedure, we simply supply the schema name along with the names of the
two tables. If changes are being made to column mapping as part of the redefinition, then it will
be necessary to supply an additional parameter to explain the column mapping.

dbms_redefinition.start_redef_table (
uname IN VARCHAR2, -- schema name
orig_table IN VARCHAR2, -- table to redefine
int_table IN VARCHAR2, -- interim table
col_mapping IN VARCHAR2 := NULL, -- column mapping
options_flag IN BINARY_INTEGER := 1, -- redefinition type
orderby_cols IN VARCHAR2); -- col list and ASC/DESC
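A hypothetical invocation, again with the placeholder REORG schema and EMP/INT_EMP tables: the col_mapping string pairs each source column with its target name, here renaming ENAME to EMP_NAME:

```sql
-- Links the production table to the interim table and copies the data.
BEGIN
  dbms_redefinition.start_redef_table(
    uname        => 'REORG',
    orig_table   => 'EMP',
    int_table    => 'INT_EMP',
    col_mapping  => 'empno empno, ename emp_name, sal sal',
    options_flag => dbms_redefinition.cons_use_pk);
END;
/
```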

ADD CONSTRAINTS, INDEXES, TRIGGERS, AND GRANTS TO THE INTERIM TABLE:

All of the rows in the production table have been copied to the interim table.

It is now time to add any constraints, indexes, database triggers and grants to the interim table
that you wish to be present on the production table at the conclusion of the process of redefining
Oracle objects.

Note that any foreign keys that you declare on the interim table at this point should be created
with the DISABLE keyword.

The foreign key constraints will be enabled later in the redefinition process. Actually, foreign
keys can be quite tricky.
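For example (table, constraint and column names are hypothetical), a foreign key on the interim table would be declared like this:

```sql
-- Created DISABLED so it does not fire during the copy; it is enabled
-- when the redefinition completes.
ALTER TABLE reorg.int_emp
  ADD CONSTRAINT int_emp_dept_fk
  FOREIGN KEY (deptno) REFERENCES reorg.dept (deptno)
  DISABLE;
```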

COPY_TABLE_DEPENDENTS:

Copies the dependent objects (indexes, triggers, constraints and grants) of the original table to the interim table:

dbms_redefinition.copy_table_dependents (
uname IN VARCHAR2, -- schema name
orig_table IN VARCHAR2, -- original table
int_table IN VARCHAR2, -- interim table
copy_indexes IN PLS_INTEGER := 1, -- copy indexes
copy_triggers IN BOOLEAN := TRUE, -- copy triggers
copy_constraints IN BOOLEAN := TRUE, -- copy constraints
copy_privileges IN BOOLEAN := TRUE, -- copy grants
ignore_errors IN BOOLEAN := FALSE, -- continue past errors
num_errors OUT PLS_INTEGER); -- number of errors logged

FINISH_REDEF_TABLE:

Completes the redefinition process by swapping the interim table with the original table:

dbms_redefinition.finish_redef_table (
uname IN VARCHAR2, -- schema name
orig_table IN VARCHAR2, -- table to redefine
int_table IN VARCHAR2); -- interim table

REGISTER_DEPENDENT_OBJECT:

Registers a dependent object (index, trigger or constraint):

dbms_redefinition.register_dependent_object(
uname IN VARCHAR2, -- schema name
orig_table IN VARCHAR2, -- table to redefine
int_table IN VARCHAR2, -- interim table
dep_type IN PLS_INTEGER, -- type of dependent object
dep_owner IN VARCHAR2, -- owner of dependent object
dep_orig_name IN VARCHAR2, -- name of orig dependent object
dep_int_name IN VARCHAR2); -- name of interim dependent object
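A hypothetical call, registering a manually pre-created index ORIG_EMP_IDX together with its counterpart INT_EMP_IDX on the interim table (all names are placeholders):

```sql
-- Registering the pair tells Oracle to treat the two indexes as the same
-- object at swap time, so the index is not copied a second time.
BEGIN
  dbms_redefinition.register_dependent_object(
    uname         => 'REORG',
    orig_table    => 'EMP',
    int_table     => 'INT_EMP',
    dep_type      => dbms_redefinition.cons_index,
    dep_owner     => 'REORG',
    dep_orig_name => 'ORIG_EMP_IDX',
    dep_int_name  => 'INT_EMP_IDX');
END;
/
```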

ABORT_REDEF_TABLE:

Cleans up errors from the redefinition process:

dbms_redefinition.abort_redef_table (
uname IN VARCHAR2,
orig_table IN VARCHAR2,
int_table IN VARCHAR2);

exec dbms_redefinition.abort_redef_table('REORG', 'EMP', 'INT_EMP');

Limitations when redefining Oracle objects:

Not Fully Online


Renames happen after Redefinition is complete
Old table is dropped before Renames
Enables disabled referencing constraints
Does not preserve original state of constraints
Does not handle invalid triggers in 9i
Does not support LONG/LONG RAWs
Does not support individual partitions/subpartitions
Does not support NOLOGGING mode for interim table
Cannot handle referential constraints to different schema

These limitations make the function highly risky, in my opinion.

Redefining Oracle objects: Error Handling:

Recoverable Errors: These errors can be resolved either by fixing the underlying problem or by
editing the script and restarting it from the point of failure
Unrecoverable Errors:

1. Run ABORT_REDEF_TABLE
2. Turn materialized view back into interim table
3. Manually drop interim objects
4. Restart Redefinition from the beginning
