
Manual Database Upgrade from 9.2.0 to 10.1.0. Filed under: Upgrade from 9.2.0 to 10.1.0

Manual database upgrade from 9.2.0 to 10.1.0 on the same server

Step 1: Prerequisite checks in the 9i database.

SQL> select name from v$database;
NAME
TEST

SQL> select count(*) from dba_objects;
COUNT(*)
29511

SQL> @C:\oracle\ora92\rdbms\admin\utlrp.sql
PL/SQL procedure successfully completed.
Table created.
Table created.
Table created.
Index created.
Table created.
Table created.
View created.
View created.

Package created. No errors.

Package body created. No errors.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects;
COUNT(*)
29511

SQL> select count(*), object_name from dba_objects where status='INVALID' group by object_name;
no rows selected

Spool the output of the query below and make the modifications it recommends, after backing up the database.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101i.sql
Oracle Database 10.1 Upgrade Information Tool    08-22-2009 21:29:58
.
*************************************************************************
Database:
> name: TEST
> version: 9.2.0.1.0
> compatibility: 9.2.0.0.0
.
*************************************************************************

Logfiles: [make adjustments in the current environment] The existing log files are adequate. No changes are required.

. ************************************************************************* Tablespaces: [make adjustments in the current environment]


> SYSTEM tablespace is adequate for the upgrade. . owner: SYS . minimum required size: 577 MB > CWMLITE tablespace is adequate for the upgrade. . owner: OLAPSYS . minimum required size: 9 MB > DRSYS tablespace is adequate for the upgrade. . owner: CTXSYS . minimum required size: 10 MB > ODM tablespace is adequate for the upgrade. . owner: ODM . minimum required size: 9 MB > XDB tablespace is adequate for the upgrade. . owner: XDB . minimum required size: 48 MB . ************************************************************************* Options: [present in existing database] > Partitioning > Spatial

> OLAP > Oracle Data Mining WARNING: Listed option(s) must be installed with Oracle Database 10.1 . ************************************************************************* Update Parameters: [Update Oracle Database 10.1 init.ora or spfile] WARNING: > shared_pool_size needs to be increased to at least 150944944 > pga_aggregate_target is already at 25165824 calculated new value is 25165824 > large_pool_size is already at 8388608 calculated new value is 8388608 WARNING: > java_pool_size needs to be increased to at least 50331648 . ************************************************************************* Deprecated Parameters: [Update Oracle Database 10.1 init.ora or spfile] No deprecated parameters found. No changes are required. . ************************************************************************* Obsolete Parameters: [Update Oracle Database 10.1 init.ora or spfile] > hash_join_enabled > log_archive_start . *************************************************************************

Components: [The following database components will be upgraded or installed] > Oracle Catalog Views > Oracle Packages and Types [upgrade] VALID [upgrade] VALID

> JServer JAVA Virtual Machine [upgrade] VALID The JServer JAVA Virtual Machine JAccelerator (NCOMP) is required to be installed from the 10g Companion CD. > Oracle XDK for Java > Oracle Java Packages > Oracle XML Database > Oracle Workspace Manager > Oracle Data Mining [upgrade] VALID [upgrade] VALID [upgrade] VALID [upgrade] VALID

[upgrade] [upgrade]

> OLAP Analytic Workspace > OLAP Catalog > Oracle OLAP API > Oracle interMedia

[upgrade] [upgrade] [upgrade]

The Oracle interMedia Image Accelerator is required to be installed from the 10g Companion CD. > Spatial > Oracle Text > Oracle Ultra Search . ************************************************************************* [upgrade] [upgrade] VALID [upgrade] VALID

. ************************************************************************* SYSAUX Tablespace: [Create tablespace in Oracle Database 10.1 environment] > New SYSAUX tablespace . minimum required size for database upgrade: 500 MB Please create the new SYSAUX Tablespace AFTER the Oracle Database 10.1 server is started and BEFORE you invoke the upgrade script. . ************************************************************************* Oracle Database 10g: Changes in Default Behavior This page describes some of the changes in the behavior of Oracle Database 10g from that of previous releases. In some cases the default values of some parameters have changed. In other cases new behaviors/requirements have been introduced that may affect current scripts or applications. More detailed information is in the documentation.

SQL OPTIMIZER The Cost Based Optimizer (CBO) is now enabled by default. * Rule-based optimization is not supported in 10g (setting OPTIMIZER_MODE to RULE or CHOOSE is not supported). See Chapter 12, Introduction to the Optimizer, in Oracle Database Performance Tuning Guide. * Collection of optimizer statistics is now performed by default,

automatically for all schemas (including SYS), for pre-existing databases upgraded to 10g, and for newly created 10g databases. Gathering optimizer statistics on stale objects is scheduled by default to occur daily during the maintenance window. See Chapter 15, Managing Optimizer Statistics in Oracle Performance Tuning Guide. * See the Oracle Database Upgrade Guide for changes in behavior for the COMPUTE STATISTICS clause of CREATE INDEX, and for behavior changes in SKIP_UNUSABLE_INDEXES. UPGRADE/DOWNGRADE * After upgrading to 10g, the minimum supported release to downgrade to is Oracle 9i R2 release 9.2.0.3 (or later), and the minimum value for COMPATIBLE is 9.2.0. The only supported downgrade path is for those users who have kept COMPATIBLE=9.2.0 and have an installed 9i R2 (release 9.2.0.3 or later) executable. Users upgrading to 10g from prior releases (such as Oracle 8, Oracle 8i or 9iR1) cannot downgrade to 9i R2 unless they first install 9i R2. When upgrading to10g, by default the database will remain at 9i R2 file format compatibility, so the on disk structures that 10g writes are compatible with 9i R2 structures; this makes it possible to downgrade to 9i R2. Once file format compatibility has been explicitly advanced to 10g (using COMPATIBLE=10.x.x), it is no longer possible to downgrade.

See the Oracle Database Upgrade Guide. * A SYSAUX tablespace is created upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain. MANAGEABILITY * Database performance statistics are now collected by the Automatic Workload Repository (AWR) database component, automatically upon upgrade to 10g and also for newly created 10g databases. This data is stored in the SYSAUX tablespace, and is used by the database for automatic generation of performance recommendations. See Chapter 5, Automatic Performance Statistics in the Oracle Database Performance Tuning Guide. * If you currently use Statspack for performance data gathering, see section 1. of the Statspack readme (spdoc.txt in the RDBMS ADMIN directory) for directions on using Statspack in 10g to avoid conflict with the AWR. MEMORY * Automatic PGA Memory Management is now enabled by default (unless PGA_AGGREGATE_TARGET is explicitly set to 0 or WORKAREA_SIZE_POLICY is explicitly set to MANUAL).

PGA_AGGREGATE_TARGET is defaulted to 20% of the SGA size, unless explicitly set. Oracle recommends tuning the value of PGA_AGGREGATE_TARGET after upgrading. See Chapter 14 of the Oracle Database Performance Tuning Guide. * Previously, the number of SQL cursors cached by PL/SQL was determined by OPEN_CURSORS. In 10g, the number of cursors cached is determined by SESSION_CACHED_CURSORS. See the Oracle Database Reference manual. * SHARED_POOL_SIZE must increase to include the space needed for shared pool overhead. * The default value of DB_BLOCK_SIZE is operating system specific, but is typically 8KB (was typically 2KB in previous releases). TRANSACTION/SPACE * Dropped objects are now moved to the recycle bin, where the space is only reused when it is needed. This allows undropping a table using the FLASHBACK DROP feature. See Chapter 14 of the Oracle Database Administrators Guide. * Auto tuning undo retention is on by default. For more information, see Chapter 10, Managing the Undo Tablespace, in the Oracle Database Administrators Guide. CREATE DATABASE * In addition to the SYSTEM tablespace, a SYSAUX tablespace is

always created at database creation, and upon upgrade to 10g. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle features and products that previously required their own tablespaces, it reduces the number of tablespaces required by Oracle that you, as a DBA, must maintain. See Chapter 2, Creating a Database, in the Oracle Database Administrator's Guide. * In 10g, by default all new databases are created with 10g file format compatibility. This means you can immediately use all the 10g features. Once a database uses 10g compatible file formats, it is not possible to downgrade this database to prior releases. * Minimum and default logfile sizes are larger. Minimum is now 4 MB, default is 50 MB, unless you are using Oracle Managed Files (OMF), in which case it is 100 MB.

PL/SQL procedure successfully completed.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\oracle\oradata\test\archive
Oldest online log sequence     91
Next log sequence to archive   93
Current log sequence           93

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Back up the complete database (cold backup).

Step 2: Check the space needed, then stop the listener and delete the SID.

C:\Documents and Settings\Administrator>set oracle_sid=test
C:\Documents and Settings\Administrator>sqlplus /nolog
SQL*Plus: Release 9.2.0.1.0 Production on Sat Aug 22 21:36:52 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SQL> conn /as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes
Database mounted.
Database opened.

SQL> desc sm$ts_avail
 Name                Null?    Type
 ------------------- -------- ------------
 TABLESPACE_NAME              VARCHAR2(30)
 BYTES                        NUMBER

SQL> select * from sm$ts_avail;

TABLESPACE_NAME      BYTES
---------------      ----------
CWMLITE              20971520
DRSYS                20971520
EXAMPLE              155975680
INDX                 26214400
ODM                  20971520
SYSTEM               419430400
TOOLS                10485760
UNDOTBS1             209715200
USERS                26214400
XDB                  39976960

10 rows selected.

SQL> select * from sm$ts_used;

TABLESPACE_NAME      BYTES
---------------      ----------
CWMLITE              9764864
DRSYS                10092544
EXAMPLE              155779072
ODM                  9699328
SYSTEM               414908416
TOOLS                6291456
UNDOTBS1             9814016
XDB                  39714816

8 rows selected.

SQL> select * from sm$ts_free;

TABLESPACE_NAME      BYTES
---------------      ----------
CWMLITE              11141120
DRSYS                10813440
EXAMPLE              131072
INDX                 26148864
ODM                  11206656
SYSTEM               4456448
TOOLS                4128768
UNDOTBS1             199753728
USERS                26148864
XDB                  196608

10 rows selected.

SQL> ho LSNRCTL

LSNRCTL> start
Starting tnslsnr: please wait
Failed to open service <OracleoracleTNSListener>, error 1060.
TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee6e78e526295)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee6e78e526295)(PORT=1521)))
STATUS of the LISTENER
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 Production
Start Date                22-AUG-2009 22:00:00
Uptime                    0 days 0 hr. 0 min. 16 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee6e78e526295)(PORT=1521)))
Services Summary
Service TEST has 1 instance(s).
  Instance TEST, status UNKNOWN, has 1 handler(s) for this service
The command completed successfully

LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee6e78e526295)(PORT=1521)))
The command completed successfully

LSNRCTL> start
Starting tnslsnr: please wait
TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 Production
System parameter file is C:\oracle\ora92\network\admin\listener.ora
Log messages written to C:\oracle\ora92\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee6e78e526295)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee6e78e526295)(PORT=1521)))
STATUS of the LISTENER
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.2.0.1.0 Production
Start Date                22-AUG-2009 22:00:48
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   C:\oracle\ora92\network\admin\listener.ora
Listener Log File         C:\oracle\ora92\network\log\listener.log
Listening Endpoints Summary
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dee6e78e526295)(PORT=1521)))
Services Summary
Service TEST has 1 instance(s).
  Instance TEST, status UNKNOWN, has 1 handler(s) for this service
The command completed successfully

LSNRCTL> exit

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 Production

C:\Documents and Settings\Administrator>lsnrctl stop
LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 Production on 22-AUG-2009 22:03:14
Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dee6e78e526295)(PORT=1521)))
The command completed successfully

C:\Documents and Settings\Administrator>oradim -delete -sid test
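Before restarting under the new Oracle Home, two housekeeping tasks are usually needed on Windows: adjust the init.ora parameters flagged by utlu101i.sql (increase shared_pool_size and java_pool_size, remove obsolete parameters such as hash_join_enabled and log_archive_start) and recreate the Windows service for the instance with the 10g oradim utility. A hedged sketch only; the service password and the exact sizes are assumptions, not values taken from the transcript:

C:\> set ORACLE_SID=test
C:\> E:\oracle\product\10.1.0\db_1\bin\oradim -new -sid TEST -intpwd oracle -startmode manual

# in the 10g init.ora / pfile (sizes chosen to satisfy the utlu101i.sql warnings above)
shared_pool_size = 160M
java_pool_size   = 64M
# remove obsolete parameters: hash_join_enabled, log_archive_start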

Step 3: Install the Oracle 10g software in a different Oracle Home. Start the database with the 10g instance and begin the upgrade process.

SQL> startup pfile=E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649 nomount
ORACLE instance started.
Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

SQL> create spfile from pfile='E:\oracle\product\10.1.0\admin\test\pfile\init.ora.73200934649';
File created.

SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.

SQL> startup upgrade
ORACLE instance started.
Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

ORA-01990: error opening password file (create a password file for the new home)
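The ORA-01990 above means the 10g home does not yet have a password file; one can be created with the standard orapwd utility before reconnecting. A hedged sketch (the file name and password are illustrative, not taken from the original transcript):

C:\> orapwd file=E:\oracle\product\10.1.0\db_1\database\PWDtest.ora password=oracle entries=5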

SQL> conn /as sysdba
Connected.
SQL> @C:\Documents and Settings\Administrator\Desktop\sys.sql.txt
(sys.sql.txt contains the SYSAUX tablespace creation script shown below)

create tablespace SYSAUX datafile 'sysaux01.dbf'
  size 70M reuse
  extent management local
  segment space management auto
  online;

Tablespace created.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\u0902000.sql
DOC>######################################################################
DOC>######################################################################
DOC>   The following statement will cause an ORA-01722: invalid number
DOC>   error if the database server version is not correct for this script.
DOC>   Shutdown ABORT and use a different script or a different server.
DOC>######################################################################
DOC>######################################################################
DOC>#
no rows selected
DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an ORA-01722: invalid number

DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a SHUTDOWN ABORT and
DOC>   restart using UPGRADE.
DOC>#######################################################################
DOC>#######################################################################
DOC>#
no rows selected
DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statements will cause an ORA-01722: invalid number
DOC>   error if the SYSAUX tablespace does not exist or is not
DOC>   ONLINE for READ WRITE, PERMANENT, EXTENT MANAGEMENT LOCAL, and
DOC>   SEGMENT SPACE MANAGEMENT AUTO.
DOC>
DOC>   The SYSAUX tablespace is used in 10.1 to consolidate data from
DOC>   a number of tablespaces that were separate in prior releases.
DOC>   Consult the Oracle Database Upgrade Guide for sizing estimates.
DOC>
DOC>   Create the SYSAUX tablespace, for example,
DOC>
DOC>   create tablespace SYSAUX datafile 'sysaux01.dbf'
DOC>       size 70M reuse

DOC>       extent management local
DOC>       segment space management auto
DOC>       online;
DOC>

DOC>   Then rerun the u0902000.sql script.
DOC>#######################################################################
DOC>#######################################################################
DOC>#
no rows selected
no rows selected
no rows selected
no rows selected
no rows selected
Session altered.
Session altered.

The script's run time depends on the size of the database. All packages, scripts and synonyms will be upgraded. At the end it will show a summary like the following:

TIMESTAMP
1 row selected.
PL/SQL procedure successfully completed.

COMP_ID    COMP_NAME                            STATUS   VERSION
---------- ------------------------------------ -------- ----------

CATALOG    Oracle Database Catalog Views        VALID    10.1.0.2.0
CATPROC    Oracle Database Packages and Types   VALID    10.1.0.2.0
JAVAVM     JServer JAVA Virtual Machine         VALID    10.1.0.2.0
XML        Oracle XDK                           VALID    10.1.0.2.0
CATJAVA    Oracle Database Java Packages        VALID    10.1.0.2.0
XDB        Oracle XML Database                  VALID    10.1.0.2.0
OWM        Oracle Workspace Manager             VALID    10.1.0.2.0
ODM        Oracle Data Mining                   VALID    10.1.0.2.0
APS        OLAP Analytic Workspace              VALID    10.1.0.2.0
AMD        OLAP Catalog                         VALID    10.1.0.2.0
XOQ        Oracle OLAP API                      VALID    10.1.0.2.0
ORDIM      Oracle interMedia                    VALID    10.1.0.2.0
SDO        Spatial                              VALID    10.1.0.2.0
CONTEXT    Oracle Text                          VALID    10.1.0.2.0
WK         Oracle Ultra Search                  VALID    10.1.0.2.0

15 rows selected.

DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC>   The above query lists the SERVER components in the upgraded
DOC>   database, along with their current version and status.
DOC>
DOC>   Please review the status and version columns and look for
DOC>   any errors in the spool log file. If there are errors in the spool
DOC>   file, or any components are not VALID or not the current version,
DOC>   consult the Oracle Database Upgrade Guide for troubleshooting
DOC>   recommendations.
DOC>
DOC>   Next shutdown immediate, restart for normal operation, and then
DOC>   run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#

PL/SQL procedure successfully completed.

COMP_ID    COMP_NAME                            STATUS   VERSION
---------- ------------------------------------ -------- ----------

CATALOG    Oracle Database Catalog Views        VALID    10.1.0.2.0
CATPROC    Oracle Database Packages and Types   VALID    10.1.0.2.0
JAVAVM     JServer JAVA Virtual Machine         VALID    10.1.0.2.0
XML        Oracle XDK                           VALID    10.1.0.2.0
CATJAVA    Oracle Database Java Packages        VALID    10.1.0.2.0
XDB        Oracle XML Database                  VALID    10.1.0.2.0
OWM        Oracle Workspace Manager             VALID    10.1.0.2.0
ODM        Oracle Data Mining                   VALID    10.1.0.2.0
APS        OLAP Analytic Workspace              VALID    10.1.0.2.0
AMD        OLAP Catalog                         VALID    10.1.0.2.0
XOQ        Oracle OLAP API                      VALID    10.1.0.2.0
ORDIM      Oracle interMedia                    VALID    10.1.0.2.0
SDO        Spatial                              VALID    10.1.0.2.0
CONTEXT    Oracle Text                          VALID    10.1.0.2.0
WK         Oracle Ultra Search                  VALID    10.1.0.2.0

15 rows selected.

DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC>   The above query lists the SERVER components in the upgraded
DOC>   database, along with their current version and status.
DOC>
DOC>   Please review the status and version columns and look for
DOC>   any errors in the spool log file. If there are errors in the spool
DOC>   file, or any components are not VALID or not the current version,
DOC>   consult the Oracle Database Upgrade Guide for troubleshooting
DOC>   recommendations.
DOC>
DOC>   Next shutdown immediate, restart for normal operation, and then
DOC>   run utlrp.sql to recompile any invalid application objects.
DOC>

DOC>#######################################################################
DOC>#######################################################################
DOC>#

TIMESTAMP
COMP_TIMESTAMP DBUPG_END 2009-08-22 22:59:09
1 row selected.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.
Total System Global Area  239075328 bytes
Fixed Size                   788308 bytes
Variable Size             212859052 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes
Database mounted.
Database opened.

SQL> select count(*) from dba_objects where status='INVALID';
COUNT(*)

776
1 row selected.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlu101s.sql
PL/SQL procedure successfully completed.

Oracle Database 10.1 Upgrade Status Tool    22-AUG-2009 11:18:36

> Oracle Database Catalog Views            Normal successful completion
> Oracle Database Packages and Types       Normal successful completion
> JServer JAVA Virtual Machine             Normal successful completion
> Oracle XDK                               Normal successful completion
> Oracle Database Java Packages            Normal successful completion
> Oracle XML Database                      Normal successful completion
> Oracle Workspace Manager                 Normal successful completion
> Oracle Data Mining                       Normal successful completion
> OLAP Analytic Workspace                  Normal successful completion
> OLAP Catalog                             Normal successful completion
> Oracle OLAP API                          Normal successful completion
> Oracle interMedia                        Normal successful completion
> Spatial                                  Normal successful completion
> Oracle Text                              Normal successful completion
> Oracle Ultra Search                      Normal successful completion

No problems detected during upgrade

PL/SQL procedure successfully completed.

SQL> @E:\oracle\product\10.1.0\db_1\RDBMS\ADMIN\utlrp.sql

TIMESTAMP
COMP_TIMESTAMP UTLRP_BGN 2009-08-22 23:19:07
1 row selected.
PL/SQL procedure successfully completed.

TIMESTAMP
COMP_TIMESTAMP UTLRP_END 2009-08-22 23:20:13
1 row selected.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';
COUNT(*)
0
1 row selected.

SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 Prod
PL/SQL Release 10.1.0.2.0 Production
CORE 10.1.0.2.0 Production
TNS for 32-bit Windows: Version 10.1.0.2.0 Production
NLSRTL Version 10.1.0.2.0 Production
5 rows selected.

Check that the database and applications are working as expected.
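One optional final step, hinted at in the "Changes in Default Behavior" notes above: once the upgraded database has been verified, file format compatibility can be advanced to 10g. This is a one-way change (downgrade to 9i R2 is no longer possible afterwards); a hedged sketch, assuming an spfile is in use:

SQL> alter system set compatible='10.1.0' scope=spfile;
SQL> shutdown immediate
SQL> startup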

Duplicate Database With RMAN Without Connecting To Target Database


Filed under: Duplicate database without connecting to target database using backups taken from RMAN on an alternate host. by Deepak 3 Comments February 24, 2010

Duplicate Database With RMAN Without Connecting To Target Database - from Metalink note 732624.1

Hi, just wanted to share this topic: how to duplicate a database without connecting to the target database, using backups taken with RMAN, on an alternate host.

Solution - follow the steps below:

1) Export ORACLE_SID=<SID name as on production>. Create an init.ora file and set db_name=<db name of production> and control_files=<location where you want the controlfile to be restored>.

2) startup nomount pfile=<path of init.ora>;

3) Connect to RMAN and issue the command:
RMAN> restore controlfile from '<backup piece of the controlfile which you took on production>';
The controlfile should be restored.

4) Issue alter database mount.
Make sure that the backup pieces are in the same location as they were on the production database. If you don't have the same location, then make RMAN aware of the changed location using the catalog command.

RMAN> catalog backuppiece '<piece name and path>';

If there are more backup pieces, they can be cataloged using:
RMAN> catalog start with '<path where backup pieces are stored>';

5) After cataloging the backup pieces, issue the restore database command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below (the remaining recovery steps are sketched after this block):

run {
  set newname for datafile 1 to '/newLocation/system.dbf';
  set newname for datafile 2 to '/newLocation/undotbs.dbf';
  restore database;
  switch datafile all;
}
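The steps above end with the restore; to make the duplicated copy usable it still has to be recovered and opened with RESETLOGS. A hedged sketch of the remaining commands (it assumes the needed archived logs are available or have been cataloged, and the point to recover to should be adapted to the end of your backups):

RMAN> recover database;
RMAN> alter database open resetlogs;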

Features introduced in the various Oracle server releases


Filed under: Features of various releases of the Oracle Database by Deepak Leave a comment February 2, 2010

Features introduced in the various server releases


Submitted by admin on Sun, 2005-10-30 14:02

This document summarizes the differences between Oracle Server releases. Most DBAs and developers work with multiple versions of Oracle at any particular time. This document describes the high-level features introduced with each new version of the Oracle database. It is intended to be used as a quick reference as to whether a feature can be implemented, or if an upgrade is required.

Oracle 10g Release 2 (10.2.0) - September 2005

- Transparent Data Encryption
- Async commits
- CONNECT role can now only connect
- Passwords for DB Links are encrypted
- New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)


- Grid computing - an extension of the clustering feature (Real Application Clusters)
- Manageability improvements (self-tuning features)
- Performance and scalability improvements
- Automated Storage Management (ASM)
- Automatic Workload Repository (AWR)
- Automatic Database Diagnostic Monitor (ADDM)
- Flashback operations available on row, transaction, table or database level
- Ability to UNDROP a table from a recycle bin
- Ability to rename tablespaces
- Ability to transport tablespaces across machine types (e.g. Windows to Unix)
- New drop database statement
- New database scheduler - DBMS_SCHEDULER
- DBMS_FILE_TRANSFER package
- Support for bigfile tablespaces of up to 8 Exabytes in size
- Data Pump - faster data movement with expdp and impdp

Oracle 9i Release 2 (9.2.0)


- Locally Managed SYSTEM tablespaces
- Oracle Streams - new data sharing/replication feature (can potentially replace Oracle Advanced Replication and standby databases)
- XML DB (Oracle is now a standards compliant XML database)
- Data segment compression (compress keys in tables - only when loading data)
- Cluster file system for Windows and Linux (raw devices are no longer required)
- Create logical standby databases with Data Guard
- Java JDK 1.3 used inside the database (JVM)
- Oracle Data Guard enhancements (SQL Apply mode - logical copy of primary database, automatic failover)
- Security improvements - default install accounts locked, VPD on synonyms, AES, migrate users to directory

Oracle 9i Release 1 (9.0.1) June 2001


- Traditional rollback segments (RBS) are still available, but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle will create its own rollback segments and size them automatically without any DBA involvement.
- Flashback query (dbms_flashback.enable) - one can query data as it looked at some point in the past. This feature allows users to correct wrongly committed transactions without contacting the DBA to do a database restore.
- Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
- Oracle Nameserver is still available, but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
- Oracle Parallel Server (OPS) scalability was improved - now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don't need to be cluster aware anymore.
- The Oracle Standby DB feature renamed to Oracle Data Guard. New Logical Standby databases replay SQL on the standby site, allowing the database to be used for normal read/write operations. The Data Guard Broker allows single-step fail-over when disaster strikes.
- Scrolling cursor support. Oracle9i allows fetching backwards in a result set.
- Dynamic Memory Management - buffer pools and the shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes were made.
- On-line table and index reorganization.
- VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
- Built-in XML Developer's Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
- Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
- PL/SQL programs can be natively compiled to binaries.
- Deep data protection - fine grained security and auditing. Put security on the DB level; SQL access does not mean unrestricted access.
- Resumable backups and statements - suspend the statement instead of rolling back immediately.
- List partitioning - partitioning on a list of values.
- ETL (Extract, Transformation, Load) operations with external tables and pipelining.
- OLAP - Express functionality included in the DB.
- Data Mining - Oracle Darwin's features included in the DB.

Oracle 8i (8.1.7)
- Static HTTP server included (Apache)
- JVM Accelerator to improve performance of Java code
- Java Server Pages (JSP) engine
- MemStat - a new utility for analyzing Java memory footprints
- OIS - Oracle Integration Server introduced
- PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
- Enterprise Manager enhancements - including new HTML based reporting and Advanced Replication functionality included
- New Database Character Set Migration utility included

Oracle 8i (8.1.6)
- PL/SQL Server Pages (PSPs)
- DBA Studio introduced
- Statspack
- New SQL functions (rank, moving average)
- ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
- Checksums always on for the SYSTEM tablespace, allowing many possible corruptions to be fixed before writing to disk
- XML Parser for Java
- New PL/SQL encrypt/decrypt package introduced
- Users and schemas separated
- Numerous performance enhancements

Oracle 8i (8.1.5)
- Fast Start recovery - checkpoint rate auto-adjusted to meet roll forward criteria
- Reorganize indexes/index only tables while users are accessing data - online index rebuilds
- Log Miner introduced - allows on-line or archived redo logs to be viewed via SQL
- OPS Cache Fusion introduced, avoiding disk I/O during cross-node communication
- Advanced Queueing improvements (security, performance, OO4O support)
- User security improvements - more centralisation, single enterprise user, users/roles across multiple databases
- Virtual Private Database
- Java stored procedures (Oracle Java VM)
- Oracle iFS
- Resource Management using priorities - resource classes
- Hash and composite partitioned table types
- SQL*Loader direct load API
- Copy optimizer statistics across databases to ensure the same access paths across different environments
- Standby Database - auto shipping and application of redo logs; read-only queries on the standby database allowed
- Enterprise Manager v2 delivered
- NLS - Euro symbol supported
- Analyze tables in parallel
- Temporary tables supported
- Net8 support for SSL, HTTP, HOP protocols
- Transportable tablespaces between databases
- Locally managed tablespaces - automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in the tablespace (i.e. moved from the data dictionary), improving tablespace reliability
- Drop column on table (finally!)
- DBMS_DEBUG PL/SQL package; DBMS_SQL replaced by the new EXECUTE IMMEDIATE statement
- Progress Monitor to track long running DML, DDL
- Functional indexes - NLS, case insensitive, descending

Oracle 8.0 June 1997


- Object Relational database
- Object Types (not just date, character, number as in v7) - SQL3 standard
- Call external procedures
- LOBs - more than one per table
- Partitioned tables and indexes - export/import individual partitions, partitions in multiple tablespaces, online/offline, backup/recover individual partitions, merge/balance partitions
- Advanced Queuing for message handling
- Many performance improvements to SQL/PL/SQL/OCI making more efficient use of CPU/memory; V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
- Parallel DML statements
- Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users
- Improved STAR query optimizer
- Integrated Distributed Lock Manager in Oracle PS (as opposed to the operating system DLM in v7)
- Performance improvements in OPS - global V$ views introduced across all instances, transparent failover to a new node
- Data Cartridges introduced on the database (e.g. image, video, context, time, spatial)
- Backup/recovery improvements - tablespace point in time recovery, incremental backups, parallel backup/recovery; Recovery Manager introduced
- Security Server introduced for central user administration; user password expiry, password profiles, custom password schemes allowed; privileged database links (no need for the password to be stored)
- Fast refresh for complex snapshots, parallel replication, PL/SQL replication code moved into the Oracle kernel; Replication Manager introduced
- Index Organized Tables
- Deferred integrity constraint checking (deferred until end of transaction instead of end of statement)
- SQL*Net replaced by Net8
- Reverse key indexes
- Any VIEW updateable
- New ROWID format

Oracle 7.3
- Partitioned views
- Bitmapped indexes
- Asynchronous read-ahead for table scans
- Standby Database
- Deferred transaction recovery on instance startup
- Updatable join views (with restrictions)
- SQLDBA no longer shipped
- Index rebuilds
- db_verify introduced
- Context Option
- Spatial Data Option
- Tablespace changes - coalesce, temporary/permanent
- Trigger compilation, debug
- Unlimited extents on STORAGE clause
- Some init.ora parameters modifiable - TIMED_STATISTICS
- HASH joins, antijoins
- Histograms
- Dependencies
- Oracle Trace
- Advanced Replication - object groups
- PL/SQL - UTL_FILE

Oracle 7.2
- Resizable, autoextend data files
- Shrink rollback segments manually
- Create table, index UNRECOVERABLE
- Subquery in FROM clause
- PL/SQL wrapper
- PL/SQL cursor variables
- Checksums - DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
- Parallel create table
- Job queues - DBMS_JOB
- DBMS_SPACE
- DBMS Application Info
- Sorting improvements - SORT_DIRECT_WRITES

Oracle 7.1
- ANSI/ISO SQL92 Entry Level
- Advanced Replication - symmetric data replication
- Snapshot refresh groups
- Parallel recovery
- Dynamic SQL - DBMS_SQL
- Parallel Query options - query, index creation, data loading
- Server Manager introduced
- Read-only tablespaces

Oracle 7.0 June 1992


- Database integrity constraints (primary and foreign keys, check constraints, default values)
- Stored procedures and functions, procedure packages
- Database triggers
- View compilation
- User defined SQL functions
- Role based security
- Multiple redo members - mirrored online redo log files
- Resource limits - profiles
- Much enhanced auditing
- Enhanced distributed database functionality - INSERTs, UPDATEs, DELETEs, 2PC
- Incomplete database recovery (e.g. to an SCN)
- Cost based optimizer
- TRUNCATE tables
- Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
- SQL*Net v2, MTS
- Checkpoint process
- Data replication - snapshots

Oracle 6.2
- Oracle Parallel Server

Oracle 6 July 1988


- Row-level locking
- On-line database backups
- PL/SQL in the database

Oracle 5.1
- Distributed queries

Oracle 5.0 1986


- Support for the client-server model - PCs can access the DB on a remote host

Oracle 4 1984
- Read consistency

Oracle 3 1981
- Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
- Nonblocking queries (no more read locks)
- Re-written in the C programming language

Oracle 2 1979
- First public release
- Basic SQL functionality, queries and joins

Tags: http://www.orafaq.com/faq/features_introduced_in_the_various_server_releases

Schema Refresh
Filed under: Schema refresh by Deepak 1 Comment December 15, 2009

Steps for schema refresh

Schema refresh in Oracle 9i

Now we are going to refresh the SH schema.

Steps before exporting the schema

Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.

1. SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;
2. Verify the total number of objects from the above query.
3. Write dynamic queries as below:
4. select 'grant '||privilege||' to SH;' from session_privs;
5. select 'grant '||role||' to SH;' from session_roles;
6. Query the default tablespace and its size:
7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH' group by tablespace_name;

Export the SH schema:

exp username/password file=/location/sh_bkp.dmp log=/location/sh_exp.log owner=SH direct=y

Steps to drop and recreate the schema

Drop the SH schema, then (a hedged sketch of this step is shown below):

1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
2. Run the spooled roles and privileges scripts.
3. Connect as SH and verify the tablespace, roles and privileges.
4. Then start importing.
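A hedged sketch of the drop and re-create step referenced above; the default tablespace, temporary tablespace and password are assumptions, so substitute the values spooled from the earlier queries:

SQL> drop user SH cascade;
SQL> create user SH identified by sh default tablespace USERS temporary tablespace TEMP;
SQL> alter user SH quota unlimited on USERS;
-- then run the spooled grant scripts to restore roles and privileges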

Importing the SH schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log \

fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh by dropping or truncating objects

Export the SH schema: take the schema-level export as shown above.

Drop all the objects in the SH schema. To drop all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool drop_tables.sql
SQL> select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL> spool off
SQL> set head off
SQL> spool drop_other_objects.sql
SQL> select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts and all the objects will be dropped.

Importing the SH schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data. To enable constraints, use the query below:

SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS WHERE STATUS='DISABLED';

Truncate all the objects in the SH schema. To truncate all the objects in the schema, connect as the schema owner and spool the output:

SQL> set head off
SQL> spool truncate_tables.sql
SQL> select 'truncate table '||table_name||';' from user_tables;
SQL> spool off
SQL> set head off
SQL> spool truncate_other_objects.sql
SQL> select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL> spool off

Now run the spooled scripts and all the objects will be truncated.

Disabling the reference constraints

If there is any constraint violation while truncating, use the query below to find the referencing (foreign key) constraints and disable them. Spool the output of the query and run the generated script (the table name is a placeholder):

SELECT constraint_name, constraint_type, table_name FROM all_constraints WHERE constraint_type='R' AND r_constraint_name IN (SELECT constraint_name FROM all_constraints WHERE table_name='<TABLE_NAME>');

Importing the SH schema:

imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH

SQL> SELECT object_type, count(*) FROM dba_objects WHERE owner='SHTEST' GROUP BY object_type;

Compiling and analyzing the SH schema:

exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);

Now connect as the SH user and check the imported data.

Schema refresh in Oracle 10g

Here we can use Data Pump.

Exporting the SH schema through Data Pump:

expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

Dropping the SH user: query the default tablespace, verify the space in the tablespace, and drop the user.

SQL> drop user SH cascade;

Importing the SH schema through Data Pump:

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh

If you are importing into a different schema, use the remap_schema option (see the sketch below). Check the imported objects and compile any invalid objects.
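If the target schema name differs from SH, a hedged impdp sketch using the remap_schema option mentioned above (the target schema name SH_NEW is illustrative):

impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir remap_schema=SH:SH_NEW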


JOB SCHEDULING
Filed under: JOB SCHEDULING by Deepak Leave a comment December 15, 2009 CRON JOB SCHEDULING IN UNIX
- To run system jobs on a daily/weekly/monthly basis
- To allow users to set up their own schedules

The system schedules are set up when the package is installed, via the creation of some special directories:
/etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency which its name suggests. For example, if you place a script inside /etc/cron.daily it will be executed once per day, every day. The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly. The normal manner in which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them. To display your file you run the following command:

crontab -l

root can view any user's crontab file by adding -u username, for example:

crontab -u skx -l    # List skx's crontab file

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces. The fields are:

1. The number of minutes after the hour (0 to 59)
2. The hour in military time (24 hour) format (0 to 23)
3. The day of the month (1 to 31)
4. The month (1 to 12)
5. The day of the week (0 or 7 is Sun, or use name)
6. The command to run

More graphically they would look like this:


* * * * *  Command to be executed
| | | | |
| | | | +----- Day of week (0 - 7)
| | | +------- Month (1 - 12)
| | +--------- Day of month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers, however they can be left as * characters to signify that any value is acceptable.) Now that we've seen the structure, we should try to run a couple of examples. To edit your crontab file run:
crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor it will be installed into the system unless it is found to contain errors. If you wish to change the editor used to edit the file set the EDITOR environmental variable like this:
export EDITOR=/usr/bin/emacs crontab -e

Now enter the following:

0 * * * * /bin/ls

When you've saved the file and quit your editor you will see a message such as:
crontab: installing new crontab

You can verify that the file contains what you expect with :
crontab -l

Here we've told the cron system to execute the command /bin/ls every time the minute equals 0, i.e. we're running the command on the hour, every hour. Any output of the command you run will be sent to you by email; if you wish to stop this then you should cause it to be redirected, as follows:
0 * * * * /bin/ls >/dev/null 2>&1

This causes all output to be redirected to /dev/null, meaning you won't see it. Now we'll finish with some more examples:
# Run the `something` command every hour on the hour
0 * * * * /sbin/something
# Run the `nightly` command at ten minutes past midnight every day
10 0 * * * /bin/nightly
# Run the `monday` command every monday at 2 AM
0 2 * * 1 /usr/local/bin/monday

One last tip: if you want to run something very regularly you can use an alternate syntax. Instead of using only single numbers you can use ranges or sets. A range of numbers indicates that every item in that range will be matched; if you use the following line you'll run a command at 1AM, 2AM, 3AM, and 4AM:
# Use a range of hours matching 1, 2, 3 and 4AM
* 1-4 * * * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:
# Use a set of hours matching 1, 2, 3 and 4AM
* 1,2,3,4 * * * /bin/some-hourly

JOB SCHEDULING IN WINDOWS

Cold backup scheduling in a Windows environment

Create a batch file, cold_bkp.bat:

@echo off
net stop OracleServiceDBNAME
net stop OracleOraHome92TNSListener
xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_\coldbackup\hrms
xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp \registry\database
net start OracleServiceDBNAME
net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

1. Click on Add a scheduled task.
2. Click Next and browse to your cold_bkp.bat file.
3. Give a name for the backup and schedule the timings.
4. It will ask for the OS user name and password.
5. Click Next and finish the scheduling.
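Where the Scheduled Tasks wizard is not convenient, the same schedule can also be created from the command line with the standard schtasks utility; a hedged sketch (task name, script path and start time are illustrative):

schtasks /create /tn "ColdBackup" /tr "D:\scripts\cold_bkp.bat" /sc daily /st 02:00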

Note: whenever the OS user name and password are changed, reschedule the scheduled tasks. If you don't reschedule them the job won't run, so edit the scheduled tasks and enter the new password.

Steps to switchover standby to primary


Filed under: Switchover primary to standby in 10g by Deepak 1 Comment December 15, 2009

SWITCHOVER PRIMARY TO STANDBY DATABASE

Primary = PRIM

Standby = STAN

I. Before Switchover:

1. As I always recommend, test the switchover first on your testing systems before working on production.
2. Verify the primary database instance is open and the standby database instance is mounted.
3. Verify there are no active users connected to the databases.
4. Make sure the last redo data transmitted from the primary database was applied on the standby database. Issue the following command on the primary and the standby database to find out:

SQL> select sequence#, applied from v$archived_log;

Perform a SWITCH LOGFILE if necessary. In order to apply redo data to the standby database as soon as it is received, use Real-Time Apply.

II. Quick Switchover Steps

1. Initiate the switchover on the primary database PRIM:
SQL> connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, switch the original physical standby database STAN to the primary role. Open another prompt and connect to SQL*Plus:
SQL> connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing the command in step 2, shut down and restart the former primary instance PRIM:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
- If you are using Oracle Database 10g release 2, you can open the new primary database STAN:
SQL> ALTER DATABASE OPEN;

STAN is now transitioned to the primary database role.

5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM:
SQL> ALTER SYSTEM SWITCH LOGFILE;
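After step 5, the role change can be verified from both sides with a standard Data Guard query; a hedged sketch (these are standard columns of v$database):

SQL> select name, database_role, switchover_status from v$database;

On the new primary (STAN) the role should show PRIMARY, and on the new standby (PRIM) it should show PHYSICAL STANDBY.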

Encryption with Oracle Data Pump


Filed under: Encryption with Oracle Datapump by Deepak Leave a comment December 14, 2009 Encryption with Oracle Data Pump - from Oracle White paper

Introduction The security and compliance requirements in todays business world present manifold challenges. As incidences of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliances, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this paper does not apply to the Original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that that was released with Oracle Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper. The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT key word. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format. Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database and so this

protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a userprovided password, before it is written to the export dump file set.. Column data encrypted using Oracle Data Pump encrypted column feature now remains protected outside of the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer. The steps involved in exporting a table with encrypted columns are as follows: 1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database. 2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key. 3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password. The steps involved in importing a table with encrypted columns are as follows: 1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key. 2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column. 3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database. Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks. Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet. Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";
SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary NUMBER(8,2) ENCRYPT IDENTIFIED BY column_pwd);

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued. Although ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
  TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp \ ENCRYPTION_PASSWORD=dump_pwd Export: Release 10.2.0.4.0 Production on Monday, 09 July, 2009 8:21:23 Copyright (c) 2003, 2007, Oracle. All rights reserved. Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 Production With the Partitioning, Data Mining and Real Application Testing options ORA-39001: invalid argument value ORA-39180: unable to encrypt ENCRYPTION_PASSWORD ORA-28365: wallet is not open Restriction with Transportable Tablespace Export Mode Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter. There is, however, one exception; transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error: $ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp Export: Release 10.2.0.4.0 Production on Wednesday, 09 July, 2009 8:48:43 Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing options
Starting DP.SYS_EXPORT_TABLE_01 : dp/******** directory=dpump_dir dumpfile=emp tables=emp
Estimate in progress using BLOCKS method
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported DP.EMP 6.25 KB 3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table DP.SYS_EXPORT_TABLE_01 successfully loaded/unloaded
*********************************************************************
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job DP.SYS_EXPORT_TABLE_01 completed with 1 error(s) at 08:48:57

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter. There is, however, one exception: transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp \
TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 Production on Thursday, 09 July, 2009 8:55:07
Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing options
Starting SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 : system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 stopped due to fatal error at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error:

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp \
TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 Production on Thursday, 09 July, 2009 9:09:00
Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 Production
With the Partitioning, Data Mining and Real Application Testing options
Starting SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 : system/******** directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported.
Job SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 stopped due to fatal error at 09:09:21

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an ORA-28365: wallet not open error is returned. Note that the wallet on the target database does not require the same master key as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the case of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp \
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 Production on Thursday, 09 July, 2009 10:55:40
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing options
Master table DP.SYS_IMPORT_TABLE_01 successfully loaded/unloaded
Starting DP.SYS_IMPORT_TABLE_01 : dp/******** directory=dpump_dir dumpfile=emp.dmp tables=emp encryption_password=******** table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table DP.EMP exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object DP.EMP failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column EMP.SALARY encryption properties differ for source or target table
Job DP.SYS_IMPORT_TABLE_01 completed with 2 error(s) at 10:55:48

Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote \
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 Production on Friday, 09 July, 2009 11:00:57
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing options
ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

When importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

- A full metadata-only import
- A schema-mode import in which the referenced schemas do not include tables with encrypted columns
- A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ... ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause. The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data. Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table.

The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (
       empid,
       empname,
       salary ENCRYPT IDENTIFIED BY "column_pwd")
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dpump_dir
       LOCATION ('xemp.dmp')
     )
     REJECT LIMIT UNLIMITED
     AS SELECT * FROM DP.EMP;

The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file.

The data in an external table can be written only once, when the CREATE TABLE ... ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file and in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, it applies only to TDE encrypted columns (if there are no such columns being exported, the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced and, with it, several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So, the 10g example previously shown becomes the following in 11g:
$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \ TABLES=emp ENCRYPTION_PASSWORD=dump_pwd \ ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
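In 11g the ENCRYPTION parameter also accepts keywords that encrypt the dump file itself rather than just the TDE columns. As a hedged sketch (keyword and parameter names as documented for 11g Data Pump; the dump file name emp_enc.dmp is illustrative, and encrypting whole dump files generally requires the Oracle Advanced Security Option), an export that encrypts everything written to the dump file set with a password-derived AES256 key might look like:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp_enc.dmp TABLES=emp \
ENCRYPTION=ALL ENCRYPTION_PASSWORD=dump_pwd \
ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=PASSWORD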


DATAPUMP
Filed under: DATAPUMP, Oracle 10g by Deepak, December 14, 2009

DATAPUMP IN ORACLE

For using DATAPUMP through DB CONSOLE, see http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects
Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode
Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects
In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA. The following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory:

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

1. SQL> GRANT READ,WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with a command such as:

1. >expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp
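Before running the export, it can help to confirm that the directory object exists and points where you expect. A quick check against the standard dictionary view (the directory name is the one created above):

SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DPUMP_DIR1';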

Comparison of command-line parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:
> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES, and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which objects (and their dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job. Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file; a brief example follows.
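For instance (a hedged sketch: the hr schema and dump file name are illustrative), a schema-mode export that leaves out grants and optimizer statistics can be written by repeating the EXCLUDE parameter, in the same style as the INCLUDE example above:

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=hr_no_grants.dmp SCHEMAS=hr EXCLUDE=GRANT EXCLUDE=STATISTICS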

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:
> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export. Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can now take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database). The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:
- Make sure your system is well balanced across CPU, memory, and I/O.
- Have at least one dump file for each degree of parallelism. If there aren't enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
- Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
- For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:
> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

REMAP_TABLESPACE
This allows you to easily import a table into a different tablespace from the one from which it was originally exported. The databases have to be 10.1 or later.

Example:
> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

REMAP_DATAFILE
This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:
The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:
> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode
You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode (a short example of attaching to a job follows the list):
- See the status of the job. All of the information needed to monitor the job's execution is available.

- Add more dump files if there is insufficient disk space for an export file.
- Change the default size of the dump files.
- Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
- Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
- Increase or decrease the number of active worker processes for the job. (Enterprise Edition only.)
- Attach to a job from a remote site (such as from home) to monitor status.
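As a short sketch of this mode (assuming the export was started with JOB_NAME=hr, as in the parallelism example above), you can reattach to the running job and issue commands at the Export> prompt:

$ expdp username/password ATTACH=hr
Export> STATUS
Export> PARALLEL=8
Export> STOP_JOB=IMMEDIATE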

Network Mode
Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you're moving data between databases, such as from data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.

Generating SQLFILES
In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants, and other metadata, you would issue a command as follows:

> impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Clone Database using RMAN


Filed under: Clone database using RMAN by Deepak, December 10, 2009

Clone database using RMAN

Target db : test
Clone db : clone

In the target database:

1. Take a full backup using RMAN.

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            c:\oracle\ora92\RDBMS
Oldest online log sequence     14

Next log sequence to archive   16
Current log sequence           16

SQL> ho rman

Recovery Manager: Release 9.2.0.1.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

RMAN> connect target
connected to target database: TEST (DBID=1972233550)

RMAN> show all;
using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFTEST.ORA'; # default

RMAN> backup database plus archivelog;

Starting backup at 23-DEC-08
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=17 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=674240935
input archive log thread=1 sequence=15 recid=2 stamp=674240997
input archive log thread=1 sequence=16 recid=3 stamp=674242208
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE4K307L0_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08

using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\ORADATA\TEST\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF
input datafile fno=00005 name=C:\ORACLE\ORADATA\TEST\EXAMPLE01.DBF
input datafile fno=00010 name=C:\ORACLE\ORADATA\TEST\XDB01.DBF
input datafile fno=00006 name=C:\ORACLE\ORADATA\TEST\INDX01.DBF
input datafile fno=00009 name=C:\ORACLE\ORADATA\TEST\USERS01.DBF
input datafile fno=00003 name=C:\ORACLE\ORADATA\TEST\CWMLITE01.DBF
input datafile fno=00004 name=C:\ORACLE\ORADATA\TEST\DRSYS01.DBF
input datafile fno=00007 name=C:\ORACLE\ORADATA\TEST\ODM01.DBF
input datafile fno=00008 name=C:\ORACLE\ORADATA\TEST\TOOLS01.DBF
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE5K307L5_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56
Finished backup at 23-DEC-08

Starting backup at 23-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set

input archive log thread=1 sequence=17 recid=4 stamp=674242270
channel ORA_DISK_1: starting piece 1 at 23-DEC-08
channel ORA_DISK_1: finished piece 1 at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE6K307MU_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 23-DEC-08

Starting Control File and SPFILE Autobackup at 23-DEC-08
piece handle=C:\ORACLE\ORA92\DATABASE\C-1972233550-20081223-00 comment=NONE
Finished Control File and SPFILE Autobackup at 23-DEC-08

RMAN> exit
Recovery Manager complete.

SQL> select name from v$database;
NAME
TEST

SQL> select dbid from v$database;
DBID
1972233550

In the clone database:

1. Create the service and the password file, put entries in the tnsnames.ora and listener.ora files, and create all the folders needed for the database.

2. Edit the pfile and add the following parameters:

Db_file_name_convert=('<target db oradata path>','<clone db oradata path>')
Log_file_name_convert=('<target db oradata path>','<clone db oradata path>')
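For this particular target/clone pair, the two parameters might look like the following (a hedged sketch: the clone path C:\oracle\oradata\clone is an assumption; the test path matches the datafile locations shown in the backup output above):

db_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')
log_file_name_convert=('C:\oracle\oradata\test','C:\oracle\oradata\clone')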

3. Start the listener using the lsnrctl command, and then start the clone database in NOMOUNT using the pfile.

SQL> conn /as sysdba
Connected to an idle instance.
SQL> startup pfile=C:\oracle\admin\clone\pfile\initclone.ora nomount
ORACLE instance started.
Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes

SQL> ho lsnrctl status
SQL> ho lsnrctl stop
SQL> ho lsnrctl start

4. Connect to RMAN.
5. RMAN> connect target sys/sys@test (the TARGET DB)
6. RMAN> connect auxiliary sys/sys
7. RMAN> duplicate target database to clone; (CLONE is the clone DB name)

SQL> ho rman
RMAN> connect target sys/sys@test
connected to target database: TEST (DBID=1972233550)
RMAN> connect auxiliary sys/sys

connected to auxiliary database: CLONE (not mounted)

RMAN> duplicate target database to clone;

The duplicate scripts will then run. While they are running:

SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> ho rman
SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01100: database already mounted

8. The duplicate will run for a while; then exit from RMAN and open the database using RESETLOGS.

SQL> alter database open resetlogs;
Database altered.

9. Check the DBID.

10. Create a temporary tablespace (a short sketch follows the DBID check below).

SQL> select name from v$database;
NAME
CLONE

SQL> select dbid from v$database;
DBID
1972233550
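A minimal sketch for step 10 above (the tempfile path, tablespace name, and size are assumptions; adjust them to your own layout):

SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE 'C:\oracle\oradata\clone\temp01.dbf' SIZE 100M AUTOEXTEND ON;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp1;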

step by step standby database configuration in 10g


Filed under: Dataguard - creation of standby database in 10g by Deepak, December 9, 2009

Oracle 10g Manual Creation of Physical STANDBY Database Using Data Guard

Step-by-step instructions on how to create a Physical STANDBY Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/STANDBY databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical STANDBY Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The PRIMARY database and STANDBY database are located on different machines at different sites. The PRIMARY database is called PRIMARY and the STANDBY database is called STANDBY. I use the Flash Recovery Area and OMF.

I. Before you get started:

1. Make sure the operating system and platform architecture on the PRIMARY and STANDBY systems are the same.
2. Install the Oracle database software without the starter database on the STANDBY server and patch it if necessary. Make sure the same Oracle software release is used on the PRIMARY and STANDBY databases, and that the Oracle home paths are identical.
3. Test the STANDBY database creation in a test environment first before working on the production database.

II. On the PRIMARY Database Side:

1. Enable forced logging on your PRIMARY database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;
2) If it doesn't exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)
- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIMARY.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a STANDBY redo log.
1) The size of the STANDBY redo log files should match the size of the current PRIMARY database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;
BYTES
52428800
52428800
52428800
2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;
3) Create STANDBY redo log groups. My PRIMARY database had 3 log file groups originally, so I created 3 STANDBY redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
4) To verify the results of the STANDBY redo log groups creation, run the following query:
SQL> select * from v$standby_log;

4. Enable archiving on PRIMARY. If your PRIMARY database is not already in archive log mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;
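To confirm that both force logging and archiving are now in effect (a quick sanity check; both columns exist in the 10g v$database view):

SQL> select force_logging, log_mode from v$database;

The expected result is YES for FORCE_LOGGING and ARCHIVELOG for LOG_MODE.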

5. Set PRIMARY database initialization parameters.
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new PRIMARY role parameters.
1) Create a pfile from the spfile for the PRIMARY database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora' from spfile;
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)
2) Edit pfilePRIMARY.ora to add the new PRIMARY and STANDBY role parameters (the file paths here are from a Windows system; for a UNIX system, specify the paths accordingly):

db_name=PRIMARY
db_unique_name=PRIMARY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIMARY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile=EXCLUSIVE
FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the STANDBY DB datafiles followed by the PRIMARY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE'
# Specify the location of the STANDBY DB online redo log files followed by the PRIMARY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG'

6. Create an spfile from the pfile, and restart the PRIMARY database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIMARY.ora';
Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)
- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIMARY.ora';
Restart the PRIMARY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)

III. On the STANDBY Database Site:

1. Create a copy of the PRIMARY database data files on the STANDBY server.
On the PRIMARY DB:
SQL> shutdown immediate;
On the STANDBY server (while the PRIMARY database is shut down):
1) Create a directory for the data files, for example, on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE. On UNIX, create the directory accordingly.
2) Copy the data files and temp files over.
3) Create directories (multiplexing) for the online logs, for example, on Windows, E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG and F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG. On UNIX, create the directories accordingly.
4) Copy the online logs over.

2. Create a control file for the STANDBY database.
On the PRIMARY DB, create a control file for the STANDBY to use:
SQL> startup mount;

SQL> alter database create standby controlfile as 'STANDBY.ctl';
SQL> alter database open;

3. Copy the PRIMARY DB pfile to the STANDBY server and rename/edit the file.
1) Copy pfilePRIMARY.ora from the PRIMARY server to the STANDBY server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.
2) Rename it to pfileSTANDBY.ora, and modify the file as follows (the file paths here are from a Windows system; for a UNIX system, specify the paths accordingly):

*.audit_file_dest='E:\oracle\product\10.2.0\admin\STANDBY\adump'
*.background_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\bdump'
*.core_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\cdump'
*.user_dump_dest='E:\oracle\product\10.2.0\admin\STANDBY\udump'
*.compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STANDBY\CONTROLFILE\STANDBY.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STANDBY\CONTROLFILE\STANDBY.CTL'
db_name=PRIMARY
db_unique_name=STANDBY
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STANDBY\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
LOG_ARCHIVE_DEST_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
remote_login_passwordfile=EXCLUSIVE
# Specify the location of the PRIMARY DB datafiles followed by the STANDBY location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\DATAFILE','E:\oracle\product\10.2.0\oradata\STANDBY\DATAFILE'
# Specify the location of the PRIMARY DB online redo log files followed by the STANDBY location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIMARY\ONLINELOG','E:\oracle\product\10.2.0\oradata\STANDBY\ONLINELOG','F:\Oracle\flash_recovery_area\PRIMARY\ONLINELOG','F:\Oracle\flash_recovery_area\STANDBY\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: Not all the parameter entries are listed here.)

4. On the STANDBY server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump, and udump directories and the archived log destinations for the STANDBY database.

5. Copy the STANDBY control file STANDBY.ctl from the PRIMARY to the STANDBY control file destinations.

6. Copy the PRIMARY password file to the STANDBY server and rename it to pwdSTANDBY.ora. On Windows copy it to the \database folder, and on UNIX copy it to the /dbs directory, then rename the password file.

7. For Windows, create a Windows-based service (optional):
$ oradim -NEW -SID STANDBY -STARTMODE manual

8. Configure listeners for the PRIMARY and STANDBY databases.
1) On the PRIMARY system: use Oracle Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener.
$ lsnrctl stop
$ lsnrctl start
2) On the STANDBY server: use Net Manager to configure a listener for PRIMARY and STANDBY. Then restart the listener.
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the PRIMARY system: use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY
2) On the STANDBY system: use Oracle Net Manager to create network service names for PRIMARY and STANDBY. Check tnsping to both services:
$ tnsping PRIMARY
$ tnsping STANDBY

10. On the STANDBY server, set up the environment variables to point to the STANDBY database: set ORACLE_HOME and ORACLE_SID (a small example follows).
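For example (a hedged sketch: the UNIX ORACLE_HOME path is an assumption, since this guide's examples use Windows paths such as E:\oracle\product\10.2.0):

- On Windows:
C:\> set ORACLE_SID=STANDBY
- On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STANDBY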

11. Start up the STANDBY database in NOMOUNT and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTANDBY.ora';
Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTANDBY.ora';
Restart the STANDBY database using the newly created SPFILE:
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace <ORACLE_HOME>.)

12. Start Redo Apply.
1) On the STANDBY database, to start redo apply:
SQL> alter database recover managed standby database disconnect from session;
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the STANDBY database is performing properly:
1) On STANDBY, perform a query:
SQL> select sequence#, first_time, next_time from v$archived_log;
2) On PRIMARY, force a logfile switch:
SQL> alter system switch logfile;
3) On STANDBY, verify the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current STANDBY redo log file to be archived, enable real-time apply. To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple STANDBY databases, repeat this procedure.

IV. Maintenance:

1. Check the alert log files of the PRIMARY and STANDBY databases frequently to monitor the database operations in a Data Guard environment.

2. Clean up the archive logs on the PRIMARY and STANDBY servers.

I scheduled a weekly hot whole-database backup against my PRIMARY database that also backs up and deletes the archived logs on PRIMARY. For the STANDBY database, I run RMAN to back up and delete the archive logs once per week:
$ rman target /@STANDBY
RMAN> backup archivelog all delete input;
To delete the archivelog backup files on the STANDBY server, I run the following once a month:
RMAN> delete backupset;

3. Password management.
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on the PRIMARY database, you will have to update the password file for the STANDBY database accordingly; otherwise the logs won't be shipped to the STANDBY server. Refer to section II.2, step 2 to update/recreate the password file for the STANDBY database.
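In addition to reading the alert logs, a quick health check I run on the STANDBY (a hedged sketch; v$managed_standby and the APPLIED column of v$archived_log are standard 10g views) is:

SQL> select process, status, sequence# from v$managed_standby;
SQL> select max(sequence#) from v$archived_log where applied='YES';

The MRP0 process should normally show an APPLYING_LOG or WAIT_FOR_LOG status, and the highest applied sequence should keep pace with the log sequence being generated on the PRIMARY.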
