


Xmanager is used to work with GUI applications of a Linux server from a client node.
Steps to work with Xmanager:
1. On the client desktop [or from the menu] we can find Xmanager 2.0 
Xmanager Passive. Double-click on it.
2. In the Telnet or PuTTY window, log in to Linux and export or set the DISPLAY
variable. Ex: export DISPLAY=<client_ip>:0.0 [<client_ip> is the IP address of the
client machine]
3. Test whether an X window opens using the 'xclock' command in the telnet/PuTTY window
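The export in step 2 can be sketched as below; the IP address and display number are placeholders (assumptions), so substitute the client PC's real address:

```shell
# Point X11 clients on the Linux server at the X server that
# Xmanager provides on the client PC (hypothetical address 192.168.1.50).
export DISPLAY=192.168.1.50:0.0

# Confirm the variable is set before launching any GUI tool.
echo "DISPLAY is $DISPLAY"

# Step 3: verify an X window actually opens (needs a reachable X server,
# so it is left commented here):
# xclock &
```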


Prerequisites for installing Oracle

• We must have an O/S user
• The O/S user must belong to the dba group
• We must have a staging area [where the Oracle S/W dump exists]
• We need adequate free space [3 to 4 GB]

Steps to install Oracle

• Start Xmanager Passive
• Log in to the OS account
• Go to the staging area (i.e. open the location of the software)
• Export or set DISPLAY
• To perform the installation, we need to invoke the OUI by executing the setup file [OUI –
Oracle Universal Installer]
• Run the executable file './runInstaller'

For installing Oracle, we have to specify the Oracle home directory, which can be created using mkdir.
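A minimal sketch of preparing the Oracle home and invoking the OUI; the directory path is a demo placeholder (an assumption), and the runInstaller step is shown commented because it needs the real software dump:

```shell
# Create the Oracle home directory before the install (demo path).
ORACLE_HOME=/tmp/demo_orahome
mkdir -p "$ORACLE_HOME"

# From the staging area the installer would then be launched:
# cd /stage/database && ./runInstaller

# Verify the home directory now exists.
ls -d "$ORACLE_HOME"
```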

While performing the installation we may face an error like:

OUI-10094: problem in saving inventory. Installation cannot continue:
/home/oracle/product/10.2.0/dbs/inventory/ContentsXML/oraclehome/props (permission denied)
This happens because, while installing Oracle 10g for the first time, some information is stored in
/etc/orainventory/contents/inventory.xml. In this XML file the links for some files are stored.
To access these linked files while installing the next time, we have to give permission for
those files by using the chmod command:
chmod 777 /etc/orainventory/contents/inventory.xml
It is better to give file permissions at the mount-point level because there are so many files in the
XML file; whether an installation succeeds or not, the path of the Oracle home is stored in that file.
We need to access these files because, while installing, the installer follows the file links in the
XML file.
While installing, it asks us to execute root.sh; this is mandatory because if we execute this
file it copies files like oraenv.

Page 1 of 102

This file (root.sh) must be executed as the root user only:

[root@linux ~]# cd /oraDB/kittu/ohome/
[root@linux ohome]# ./root.sh

Then it asks questions like:

The following environment variables are set as:
ORACLE_OWNER=kittu, ORACLE_HOME=/oraDB/kittu/ohome
Enter the full pathname of the local bin directory [/usr/local/bin]:
The files "dbhome, oraenv, coraenv" already exist in /usr/local/bin. Overwrite them? (y/n)
All files are extracted from a source file called products.jar (jar: Java archive)

script: this command will capture all the activities done in the terminal by the user.
Syn:- script filename
Ex: script abc
All the activities done by the user after running script will be copied into the file called abc.

To exit from script (stop capturing), type exit.

To see the captured activities, use the more command.
All these activities are stored in a file on the server.
The default name for the script file is typescript.
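The script workflow above can be demonstrated non-interactively with the util-linux `-c` option (an assumption: interactively you would just run `script abc` and later type exit):

```shell
# Record one command's session into the file /tmp/abc instead of typing
# interactively; `script abc` followed by `exit` is the interactive form.
script -c 'echo captured activity' /tmp/abc >/dev/null

# The session, including the command output, is now in the file.
grep 'captured activity' /tmp/abc
```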
The OUI then shows the 'Specific prerequisite checks' and 'Summary' screens for Oracle DB 10g.

In a real-time environment, we need to send all the activities performed by us to the client. What
we need to send to the client must be given by the client in documentation. This is also called
a ticket. These files are called log files.

The naming convention for the log file is


For sending this report, we need to make a clear view of the activities in a file on the local PC.

Steps to place the logfile on the local PC:

• Right-click on the session and select Change Settings
• Select Session > Logging and choose 'Log all session output', browse a location to
store the information, and then click Apply. From then on, whatever we fire on that
session will be stored into that file, including the output generated by the commands. After
finishing executing all commands and completing the work, follow the next step
• Again right-click on the session and select Change Settings
• Select Session > Logging, then choose 'None', and click Apply


2nd Method of Installation: using VNC [Virtual Network Computing]

The VNC server is present on the server and the VNC viewer is present on the client.

We need to start the VNC server on the server through our session and must open the VNC viewer on
the client.

To start the VNC server:

Syn:- vncserver
Then the VNC server is started on the server. When we start the VNC server it asks for a password
the first time. Enter whatever password you like. After entering the password it creates a
hidden directory called .vnc. This directory is created under the home directory of the user. This
directory consists of files like the password file, startup files, the log file and the pid (process id) file.

On the server, every VNC connection is created with a port number; it starts from '1' and the next
connection is '2'.
We identify this port number from the line printed where VNC is started, e.g.:
New 'linux6:1 (kittu)' desktop is linux6:1

To check whether VNC has started or not:

ps -ef | grep vnc

To kill the process:
For this we must have the process id of the VNC server. This pid is stored in a file under .vnc;
view it with more.
ps -ef | grep vnc
kill -9 6470
Then the VNC process is stopped (killed).
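The find-and-kill sequence can be sketched on a harmless stand-in process (an assumption: `sleep` replaces the real VNC process so the example runs anywhere):

```shell
# Start a stand-in background process in place of vncserver.
sleep 60 &
pid=$!

# In the real case you would find the pid with: ps -ef | grep vnc
ps -ef | grep "[s]leep 60" > /dev/null && echo "process running"

# Kill it by pid, exactly as with `kill -9 6470` in the notes.
kill -9 "$pid"
echo "killed $pid"
```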

How to start the VNC server on the server and how to access the VNC server from the client?
 log in to the o/s user
type vncserver and press Enter
enter the password if we are opening the VNC server for the first time
identify the port number of the VNC server
open the VNC viewer and type the IP address of the server along with the port number
Enter the password and press Enter
Then we connect to the X server through the VNC viewer.


What is the difference between VNC server and Xmanager?

In Xmanager, we have to set DISPLAY. In this case, if we close the session in the middle of an
installation or the system is shut down, then the installation stops in the middle. But with the VNC server
the installation is being done on the server. So if we close the session in the middle of an
installation or the system (PC) is shut down, the process of installation does not stop, because
it is done at the server level. We can get back the old session by opening the VNC viewer in
shared mode, entering the IP address with the port number and the password. We can open this session
on another PC also.

Screening is the concept of keeping a session's data available even when we close the session.
For example, when we are working with a file in the 'vi' editor, we modified 100 lines and
closed the session without saving the file. Normally the file would not be modified in this
situation. In this case, if we use screening, we can retrieve the modifications exactly as we
had made them.
This is possible by following the below steps:
type the screen command
do the modifications to the file, whatever we want
close the session
the first screen is given the name screen 0
each and every screen is identified by a socket number
we list the screens by using the command
syn: screen -ls
9501.pts-4.linux6  socket number
=> the local screen is attached to the session by using the following command
Syn: screen -x socketnumber
This screen retains the session and we can continue the activities we were doing previously.
Actually VNC is used for GUI mode and screen is used for CUI mode.
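Extracting the socket number from a `screen -ls` line can be sketched as follows; the sample line is hardcoded (an assumption) so the example runs even where screen is not installed:

```shell
# A sample line in the style printed by `screen -ls`.
line="9501.pts-4.linux6 (Detached)"

# The socket number is the part before the first dot.
socket=${line%%.*}
echo "attach with: screen -x $socket"
```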

Oracle 9i software occupies 1.6 GB

Oracle 10g software occupies 1.26 GB



The installation of database is done in 2 ways

 Manual process
 DBCA (Database Configuration Assistant)

The requirements for installing database:

 We must have oracle software installed

 We must have user account
 We must have adequate free space

Installation of database using manual process:-

1. We must set the environment for the database, i.e., we must set values for environment
variables like ORACLE_SID, ORACLE_HOME, PATH.
Follow the below steps to set the environment:

$export ORACLE_SID=dkittu

$export ORACLE_HOME=/u001/kitty/mysore (software installed location)

ORACLE_SID will be the database name. So the SID must be the same as the database name we
want to create.

2. Create the Oracle initialization file init<SID>.ora in the $ORACLE_HOME/dbs directory. In

this file we define some parameters required to create and manage a database. The file
name should be in the format init$ORACLE_SID.ora. Ex: initdkittu.ora
The file should be created with the following parameters:
 db_name = dkittu
 db_cache_size = 500m or 50000000
 shared_pool_size = 50m
 log_buffer = 10000
 undo_tablespace = undots01
 control_files = /oraDB/kittu/kittudb/c1.ctl
 undo_management = auto
 compatible =

It is better to save and create the database files in some particular location.
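Step 2 can be sketched as a small shell snippet that writes the parameter file; the ORACLE_HOME path here is a demo placeholder (an assumption), and the parameter values follow the notes:

```shell
# Demo location standing in for the real $ORACLE_HOME.
ORACLE_HOME=/tmp/demo_home
ORACLE_SID=dkittu
mkdir -p "$ORACLE_HOME/dbs"

# Write init<SID>.ora with the parameters listed above.
cat > "$ORACLE_HOME/dbs/init${ORACLE_SID}.ora" <<'EOF'
db_name = dkittu
db_cache_size = 500m
shared_pool_size = 50m
log_buffer = 10000
undo_tablespace = undots01
control_files = /oraDB/kittu/kittudb/c1.ctl
undo_management = auto
EOF

grep '^db_name' "$ORACLE_HOME/dbs/initdkittu.ora"
```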


3. After completion of the above process,

 connect to sqlplus as sysdba and issue the command startup nomount.

$ sqlplus
Enter user-name: sys as sysdba
SQL> startup nomount

4. Now create the database using the following command:

SQL> create database dkittu

datafile '/oraDB/kittu/kittu_db/system01.dbf' size 500m
logfile group 1 '/oraDB/kittu/kittu_db/redo01.rdo' size 5m,
        group 2 '/oraDB/kittu/kittu_db/redo02.rdo' size 5m
undo tablespace undots01
datafile '/oraDB/kittu/kittu_db/undo01.dbf' size 50m;

After executing the above statement, 'Database created' is displayed. Then execute the below
post scripts.

5. Post steps: (scripts)

SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
Instead of writing $ORACLE_HOME we may use '?'
This script creates all the data dictionary views.
After the completion of the above script, run the below script:

SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
This script creates the objects for the procedural option (PL/SQL packages).

 Now connect as system/manager. Type the below one:

sql>connect system/manager
SQL> @$ORACLE_HOME/sqlplus/admin/pupbld.sql
This script creates the product user profile (security) tables.

 sql> conn sys as sysdba

 sql> shut immediate
Database closed
 sql> exit

Now we exited from SQL PLUS



The same process is required to create another database; some modifications are required in the
initialization file and the create database statement.

1) $export ORACLE_SID= dbcherry

$export ORACLE_HOME=/u001/kitty/myhoome (software
installed locally)
Then create initialization file

2) initdbcherry.ora
db_name = dbcherry
db_cache_size=500m or 50000000
shared_pool_size = 50m
log_buffer =10000
undo_tablespace = undotso1
control_files= /oraDB/dittu/database/cs.ctl
undo_management = auto

Then, after executing point 3 above (startup nomount), execute the below statement:

create database dbcherry

datafile '/oraDB/dittu/database/system.dbf' size 350m
sysaux datafile '/oraDB/kittu/databases/sysaux.dbf' size
logfile group 1 '/oraDB/kittu/database/redo01.rdo' size 5m,
        group 2 '/oraDB/kittu/database/redo02.rdo' size 5m
undo tablespace undots01
datafile '/oraDB/kittu/database/undo01.dbf' size 50m;

Then execute the post scripts

Note: - Database name and Oracle SID must be same


Creating Database Using DBCA( Database Configuration Assistant)

1) For this we have to run Xmanager Passive and export DISPLAY

2) Set environment variables like ORACLE_HOME, PATH.
Don't set ORACLE_SID, because we give this SID in the creation itself.
3) Don't create the initialization file. It is also created during database creation.
4) After exporting the ENV variables, type dbca and press Enter. Then the below steps
take place.

Welcome screen
1) Operation
Select the operation you want to perform:
Create a database
Configure database options in a database
Delete a database
Manage templates
2) Database templates
Select a template from the following list to create the database:
General purpose
Transaction processing
New database
3) Global database name: ramu
It is considered as
SID: ramu
4) Database connection options
Select the mode in which you want your database to operate by default:
dedicated server mode (one user per server process)
shared server mode (many users per server process)
5) Initialization parameters
By default it takes some values; if needed we can modify those values.

o Typical / Custom
o Character sets: use the default / use Unicode / choose from a list
o Memory sizes: shared pool 5000000, buffer cache, large pool, PGA 1500000,
  sort area size 524288

File locations
1) Create server parameter file
Trace file destinations:
User processes: admin/udump
Background processes: admin/bdump
Core dumps: admin/cdump


Database storage
System files

7) Create options
create database
save as a template


Then a summary screen is displayed which shows the complete information of the database.

Note: for 10g we have some modifications in creation

To identify whether the database is down or up:

'Connected to an idle instance' means the database is down.
These environment variable values last only for that session; to make the values
permanent, save them in .bash_profile.

How do we maintain multiple databases in one O/S user on a server?

This is possible by using functions in .bash_profile; each function contains the
environment values for one database.
Syn: functionname() {
export ORACLE_SID=dbname
export ORACLE_HOME=myhome
}
Define the functions in the .bash_profile file.
When the user logs in to the account, these functions are loaded into memory.
To execute a particular function, type the below command at the shell prompt:
Syn: $ functionname
Ex:- $ database
Then the environment variables defined in 'database' are set.
To switch to another function, just type that name and press Enter.
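The per-database functions can be sketched like this (the function names and paths are examples, not real ones); defining them in .bash_profile makes them available at login:

```shell
# One function per database; calling it sets that database's environment.
dkittu() {
  export ORACLE_SID=dkittu
  export ORACLE_HOME=/oraDB/kittu/ohome
}
dbcherry() {
  export ORACLE_SID=dbcherry
  export ORACLE_HOME=/oraDB/cherry/ohome
}

dkittu            # work on the dkittu database
echo "$ORACLE_SID"
dbcherry          # switch to the dbcherry database
echo "$ORACLE_SID"
```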

 To check whether database process is started or not use below command.

$ps –ef | grep smon
When the database is started the process smon is started.

Home directory: the location where the files related to that particular user are stored.

Database: the location where the database files are kept.


Generally in oracle there are 2 main users

1) sys as sysdba
2) system.

Sys as sysdba:- This is the root login account; through this user we perform major
activities like:

 Starting a database
 Shutdown the database
 Removal of database
 Monitoring the database
 Taking backups

It is not recommended to use this user for minor activities like:

 Creating tables
 Creating users
 Altering tables, data

There is no password check for sys as sysdba. It has the highest privileges in the database.

System :- Through this user we perform the low-level activities mentioned above.
The default password for the system user is manager.

To connect to system:

Sql> connect system or sql>conn system

To run a script in SQL*Plus use the below command:

SQL> @<script name>

Generally when we are creating a database we have to perform 2 phases:

1) Configuring the Instance
2) Configuring the Database

1) Configuring the Instance: an instance is nothing but memory. This activity is done by creating

init<ORACLE_SID>.ora

When the instance is started, in the background two things are started:

a) SGA
b) Background processes


When we issue startup nomount, it reads init<db>.ora, allocates memory to the SGA, and the
instance is started.
The SGA is space reserved in memory (RAM) for the database.
2) Configuring the database:
When we execute the create database statement, the database is created.
After this, 3 kinds of files are created. They are:
datafiles, redolog files (redo) and control files.
After this we have to perform the post steps.
Sizes of the software:
Oracle 9i: 1.6 GB
Oracle 10g: 1.26 GB
Q) How to change an account's shell from ksh to bash?
This can be done in the /etc/passwd file.
Change ksh to bash for the user which we want to change, save the file and exit. Before
doing any modification in the passwd file it is better to maintain a copy of that file.
 In bash the auto-executed file is .bash_profile; in ksh the auto-executed file is .profile.
When we change the shell, the data and files in the former shell are available to the later one.

Oracle memory

Oracle memory is of 2 types:

. SGA (System Global Area or Shared Global Area)
. PGA (Process Global Area)

The SGA is space reserved for the Oracle database.

It is shared memory, where the users can share the resources of the SGA.
The SGA basically contains three parts in the fundamental method (shared pool, db buffer
cache and log buffer); later ones are the java pool and large pool.

So the SGA contains five components, like:



db buffer cache: recently used data is stored in the buffer cache. If the data a user requests is
present in the buffer cache, it is sent to the user.

shared pool: it contains parsed SQL information;

it translates the SQL statement into an Oracle-understandable (executable) form.
log buffer: it contains transactional data.

java pool: it is used for JVM operations.

The SGA should be more than 100 MB (a minimum of 100-200 MB);
the size of the SGA can't be more than 1/3rd the size of the RAM.

How to change the SGA size:

The SGA size can be changed by changing the parameter values in init.ora.

The total size of the SGA is determined by the parameter sga_max_size:

SQL> show parameter sga_max_size

We can also see the SGA size at startup, when some data is displayed showing the sizes:

Total System Global Area 165007897 bytes

Fixed Size ----------------------
Variable Size -----------------------
Database Buffers -----------------------
Redo Buffers ------------------------

Bouncing the database

Shutting down the database and starting it again is said to be bouncing the database.



An Instance is the combination of the SGA and the Oracle background processes.

• When we do startup, Oracle will allocate the SGA and the background processes are started
which are mandatory to run Oracle. This is called the Instance.
• The Instance opens the database files. Each and every thing is performed by the Instance.
• All the logical manipulations like creating, reading, writing, etc., are done by the instance.
• All the files are managed by the instance.
• A user cannot have access to the files without the instance.
• A user connects only to the instance, not to the files.
• A user is able to view and manipulate data through the instance only.
• Making things available to the user is done by the instance.
• The instance is nothing but the ORACLE_SID.
• We can create each database only with a unique SID.

Three Phases When We Are Starting Databases:-

1. Instance (nomount): at this phase the Instance allocates the SGA

2. Mount: at this phase the Instance opens the way to the database files
3. Open: finally the database is opened

 When we shut down the database, the Instance is closed, all the memory (SGA) is
deallocated, and the database files (datafiles, logfiles, control files) are closed.

There are many ways to start the database:

1. Step by step. We use this when we are creating or altering a database. This stage is used for
maintenance of the database, i.e., if we want to increase the size of datafiles, change the
locations of files, or if any issues occurred in the database.

SQL> startup nomount;

SQL> alter database mount;
SQL> alter database open;


2. Startup mount;
Alter database open;

3. Startup: it is used to open the database directly

PGA (Process Global Area)

It is memory reserved for each user process connecting to the Oracle DB. The PGA holds a
session's private information. It is external to the SGA. Every connection has one PGA, which is
private to that connection. The memory is allocated when the process is created and deallocated
when the process is terminated.


There are 3 types of processes. They are

 Server Processes
 Client Processes
 Background Processes

Server Process: when a session is established, a server process is created. It connects to the
Oracle instance and is started when a user establishes a session. To handle the requests of the
client (user) process connected to the instance, a server process is created on behalf of
each user's application and can perform one or more of the following:

• Parse and execute the SQL statements issued through the application (client process)
• Return results in such a way that the application can process the information

Client Process: started at the time a database user requests a connection to the Oracle server.
• A client process is a process which is created when the client software is started.
• When we execute sqlplus from the $ prompt, sqlplus becomes the client process.
• The client process is a process that sends messages to a server process, requesting the server
to perform a task (service). Client programs usually manage the user-interface
portion of the application, validate the data entered by the user, dispatch requests to
server programs and sometimes execute business logic. The client process is the
front-end app that the user sees and interacts with.

Handshake: when we start the client process (i.e., when we fire sqlplus at the shell prompt),
before this process interacts with the instance, the server process interacts with the client process.
This is called the handshake.

Parent process: for every process there is a parent process. When we execute sqlplus
from the shell, the shell's PID becomes the parent process ID of sqlplus, i.e., the shell spawned
the process. lsnrctl is an executable located in ORACLE_HOME/bin.



Local Process: when the connection is established on the server through the shell prompt,
it is said to be a local process.

Non-Local Process: if the connection is established from outside the server, then it is said to be a
non-local process.

The DBA provides a user/password to the app developer to connect to the database; we have to

configure the database which we wish to connect to on the local PC. In this case the client process is
running on the PC.
For each client process there is a server process. We can identify the client process on the
server when we start sqlplus as a local client; it is not identifiable when we start as a
non-local client.

We identify the server process by oracle (it is a keyword) + {ORACLE_SID}

Ex: ps -ef | grep oracledbsidnu
Output will be:
kittu 17148 17147 0 15:20 ? 00:00:00 oracledbsidnu (DESCRIPTION=
Here 17148 is the process ID (PID)
and 17147 is the parent process ID (PPID).
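Reading the PID/PPID columns can be sketched on a shell-spawned stand-in process (an assumption, since no Oracle server process is available here):

```shell
# Spawn a stand-in child process; its PPID is this shell's own PID.
sleep 30 &
child=$!

# Print just the PID and PPID columns, as read from the ps output above.
ps -o pid= -o ppid= -p "$child"

kill "$child"
```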

The parent process ID for a non-local client process is 1 (init). init is helpful to establish the
connection. These are client sessions (or remote sessions). oracle is a keyword; the executable
lives in $ORACLE_HOME/bin. It is the Oracle engine.

* We run exe files from .profile by setting the path in the profile as PATH=$PATH:.


Background processes start when the Oracle instance is started; they are used to run the Oracle database.
There are 2 types:
Mandatory  SMON, PMON, CKPT, LGWR, DBWR
Optional  ARCH, Pnnn, Jnnn, LCK

The mandatory processes are mandatory to run Oracle. They are started automatically
under the SGA when we start the Oracle database. They must be running in the background as long
as the database is up.
Naming convention for these processes: ora_<process>_<ORACLE_SID>
Ex: ora_smon_dbsidnu
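The naming convention can be sketched by building the process name and grepping for it; the ps listing here is sample text (an assumption) so the example is self-contained:

```shell
# Build the background-process name: ora_<process>_<ORACLE_SID>.
ORACLE_SID=dbsidnu
name="ora_smon_${ORACLE_SID}"

# Grep a sample ps -ef style listing for the SMON process.
printf 'kittu 2211 1 0 ora_pmon_dbsidnu\nkittu 2213 1 0 ora_smon_dbsidnu\n' \
  | grep "$name"
```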



These processes start automatically when the Oracle Instance is started. Their lifetime lasts
as long as the instance is open. ORACLE_SID is the instance name.
*** There is a way to configure the database name and the instance name differently.

SMON [ System Monitor]

It is used in crash recovery and temporary segment cleanup. It recovers after an
instance failure and monitors temporary segments and extents. It wakes up about every 5
minutes to perform housekeeping activities. SMON must always be running for an instance.

PMON [Process Monitor]

PMON recovers the resources of a failed process. If MTS (shared server
architecture) is being utilised, PMON monitors and restarts any failed dispatchers or shared servers.

CKPT [Check Point]

It updates the headers of the datafiles and control files with the latest SCN (System Change
Number). It writes the checkpoint information to the control files and datafile headers.

LGWR: [Log Writer]

Flushes the data from the log buffer to the redolog files; it writes the log buffer out to the redologs.

DBWR [DB Writer]

Flushes the data from the db buffer cache to the data files; we can have multiple
DB writers. It writes dirty buffers from the database block cache to the database data
files. It writes blocks back to the datafiles at checkpoints or when the cache is full and space has
to be made for more blocks. We can create multiple DB writers by defining a parameter:
Parameter: db_writer_processes=2
Data reading from the datafiles is done by the server process. Data writing is done by the
background process.


When we issue a select statement, the shared pool converts (parses) the statement into an
Oracle-understandable form; then it is checked against the db_cache. If the information related
to the SQL statement is available in the db_cache, it is sent to the user. If not, the server
process searches for the information in the datafiles and then sends it to the user.

When we issue transactional statements like insert, delete, update, etc., then

a copy of the data is maintained in 2 locations:


It stores the data in these 2 locations for 2 reasons:

 db buffer: for future access (faster access)
 log buffer: for safety (recovery)

When we say commit, the data is sent to the redolog files from the log buffer by
LGWR. The data in the db_cache is also sent to the datafiles by DBWR, but a copy is maintained
in the db_cache for future access.


ARCH: for archiving

The archive process writes filled redologs to the archive log locations. In RAC, the
various ARCH processes can be utilised to ensure that copies of the archived redologs of each
instance are available to the other instances in the RAC setup. They are needed for recovery.
We start optional processes by defining parameters in init.ora:
log_archive_start=true - it starts the process
log_archive_max_processes=1 [now it takes only one process; by default it takes 2]

Pnnn: these are parallel slave processes that perform parallel DML activities. They can be used
for parallel execution of SQL statements or recovery. The maximum number of parallel processes
that can be invoked is specified by an initialization parameter.

To stop these slaves: parallel_min_servers=0

We provide hints to Oracle to use the parallel mechanism.

Jnnn: Job queue processes

To run scheduled jobs in the database
Maximum processes=100

Lck: lock
This is available only in RAC instances. Meant for parallel server setups: the
instance locks that are used to share resources between instances are held by the lock process.


In Oracle there are 2 server technologies:

 shared server process or MTS (multi-threaded server architecture)
 dedicated server process

Dedicated server process:

One client process connected to one unique server process is said to be a dedicated
server process. The server process is dedicated to one client. The server process exists as long as
the client process exists. It may be idle or working. If 100 members are connected to the
database, about 3 MB of memory is needed per connection. The architecture we are using is dedicated server.

Shared server process:

Multiple clients connected to a few shared server processes is said to be the shared server
process model. It is the older one. The client process connects to the server process through a
bridge process called the dispatcher process. The dispatcher is a mediator between the client process
and the shared server process. We can reduce the burden and save some resources on the server.
This is configured in init.ora. The advantage of the shared server process is saving the
resources of the system.


A single database can be accessed by multiple instances. We have multiple servers for the
instances, but only one database for all the instances. This is called RAC.


Maintaining a copy of the instance and database is a standby database.

The difference is in time lag only.

There are 2 types of database architectures:
 Physical
 Logical
Physical architecture is nothing but O/S-level architecture. Files that exist at the O/S level
are said to be the physical architecture:

1. datafiles, min (1)
2. redolog files, min (2)
3. controlfiles, min (1)

To view the set of datafiles, logfiles,controlfiles.


Select name from v$datafile;

Select member from v$logfile;
Select name from v$controlfile;

Datafile: it stores the actual data.

Logfile: it stores transactions. The purpose of the redolog files is recovery in case of failure.
Controlfile: stores information and the status of the datafiles and redolog files; the size of the
control file is automatically taken care of by the system.
 When we are inserting data, if a redolog is filled then Oracle switches to the next redolog;
after filling that, it again goes back to the first redolog. This cycle goes on repeatedly.
If we enable archive log mode, then before overwriting the data in the previous
redolog files, it takes a backup to a different location.

How to identify whether a database is available or not?

This is done through the oratab file in the /etc folder. Ex: vi /etc/oratab
It stores database names and Oracle homes. Only databases created through
dbca are loaded; manually created databases are to be added manually to oratab. Its
location changes from one O/S to another.
In Solaris: /var/opt/oracle
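oratab entries have the form SID:ORACLE_HOME:Y|N; parsing them can be sketched on a sample file (an assumption, so the example does not touch the real /etc/oratab):

```shell
# Write a sample oratab; real entries live in /etc/oratab
# (or /var/opt/oracle/oratab on Solaris).
cat > /tmp/oratab.sample <<'EOF'
# comments start with #
dkittu:/oraDB/kittu/ohome:Y
dbcherry:/oraDB/cherry/ohome:N
EOF

# Print SID and home for every non-comment entry.
awk -F: '!/^#/ && NF >= 2 { print $1, $2 }' /tmp/oratab.sample
```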


 Schema objects
 Non-schema objects
Schema is nothing but a user.
Seeded databases:- default databases.
Ex: sys, system

The objects which reside in a schema are said to be schema objects. Ex:
table, view, index, synonym, procedure, package, function, database link,
sequence, etc. The objects which are not associated with a schema are said to be
non-schema objects.
Ex: tablespace, roles

How to refer to an object of a schema?

1. Either we log in to the schema and access the object, or
2. From a different login user. First the user must have the permission to access that
object; follow the below syntax:
Syn: schema.object
Ex: scott.emp

To start the database:
sqlplus "/ as sysdba"


1. Oracle maintains the entire system data in the data dictionary or catalog.
2. System data is the data which is required for the functionality of the database.
3. The data dictionary or catalog is a set of tables, views and synonyms.
4. When we create a database, some files and objects are created both physically
& logically.

Physically: control files, redolog files, data files.

Logically: base tables such as tab$, fet$, obj$,
uet$, ts$, cluster$,
file$, idx$ and the v$ tables.

These tables are created when we run create database. We cannot access these tables
directly; it is very difficult to understand the data in these tables. There are some
views to access these tables. These are created when we run catalog.sql.

Tables: dba_tables, all_tables, user_tables
Indexes: dba_indexes, all_indexes, user_indexes
Synonyms: dba_synonyms
Views: dba_views
Sequences: dba_sequences
Clusters: dba_clusters
Database links: dba_db_links
Datafiles: dba_data_files

 Oracle updates the base tables with whatever DDL activities are done
by us. The database engine takes this responsibility. The whole of Oracle works based
on these base tables.

 We should use only select statements on them.

 Only system and sys have DBA privileges.

dba_ :- it will display everything in the database (all users' info)  everything

user_ :- it will display only the logged-in user's information  only own

all_ :- it will display the logged-in user's info and the objects to which that user has
access  own + access
The Oracle engine gives users access to 9 base tables by default. All these base tables
are objects; use them to know about the objects and the DBA objects.



dba_objects: to know the objects in a database

dba_tables: to know about table info
dba_indexes: to know about index info
dba_ind_columns: to know the info of indexes applied on columns
dba_synonyms: to know about synonym info
dba_views: to know about view info
dba_sequences: to know about sequence info
dba_source: to know the source code of functions,
procedures, packages, etc.
dba_constraints: to know info about constraints
dba_cons_columns: to know about constraint column info
dba_tab_columns: to know about table column info
session_privs: to see session privileges

desc session_privs  only one column

When we grant connect, only one privilege is given  create session
Resource: create table, unlimited tablespace, cluster, sequence, procedure, trigger,
types, operator, indextype.

user_tab_privs: to see user table privileges

user_sys_privs: to see user system privileges
dba_free_space: to see tablespace size and free space
dba_tablespaces: to see tablespace info
dba_data_files: to see datafile info

The views starting with v$ are said to be dynamic performance views.

V$database Database info

V$datafile Datafile info
V$controlfile Controlfile info
V$logfile Log file info
V$version Version
V$session Session info

SQL: Structured Query Language

SQL*PLUS: these commands work only in Oracle. They are used to edit and format output.
i :- To insert a statement in buffer ie., adds new line to the sql statement
a :- Appends new words to sql statement
c :- Change the string and replace it with the required string


syn:- c/<search string>/<replace string>

cl buff: It will clear sql buffer (Clear Buffer)

save: by using this, we can save the SQL statement to a file. We refer to these saved files as SQL scripts.

To run SQL scripts from any location, we have to mention the location of the SQL scripts, i.e.,
set ORACLE_PATH in .bash_profile:
export ORACLE_PATH=/tmp:/oraAPP:/oraAPP/kittu

Usually these SQL scripts are saved in the location from where we fire sqlplus.

How can we capture some output in SQL?

In Unix, it is possible with the script command.
In SQL*Plus, it is achieved by using the spool command.
Syn:- sql> spool <filename>
sql> statements - - -
sql> spool off
Ex:- sql> spool a.out
sql> select * from tab;
 the o/p along with the statements is stored in a.out.
sql> desc abc
sql> spool off

How to format column data in sql

To know the line size :- show linesize
To set the line size :- set linesize 200
To format a numeric column :- column empno format 9999
i.e., it displays column empno with 4 digits
To format a character column :- column ename format a15
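These format commands are just fixed-width output rules. A hypothetical Python sketch of the same idea (an analogy only, not SQL*Plus itself): `format 9999` behaves like a 4-wide right-justified number, `format a15` like a 15-wide left-justified, truncated string.

```python
def fmt_num(value, width):
    # like: column empno format 9999 (right-justified, fixed width)
    return f"{value:>{width}d}"

def fmt_str(value, width):
    # like: column ename format a15 (left-justified, padded/truncated to width)
    return f"{value:<{width}.{width}s}"

# One formatted "row" built from the two rules
row = fmt_num(7369, 4) + " " + fmt_str("SMITH", 15)
print(repr(row))
```

Long values are clipped to the column width, just as an `a15` column clips or wraps a longer string.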

Development :- The designing side is said to be development. When we work on a new

project, we are said to be working on the development side.
Production :- After everything is designed and tested, it is deployed into production.

Query to create user:-

Syn:- create user <username> identified by <password>
Ex:- create user xyz identified by xyz;

Granting privileges to user:-

Syn:- grant connect, resource to xyz;

Dropping user:-
Syn:- drop user xyz;


To view the source code of views:-

Syn:- select text from dba_views where view_name='XYZ';

Granting dba to user:-

Syn:- grant dba to <username>;

Revoking a privilege from user:-

Syn:- revoke <privilege> from <username>;

Views:- To see view information

Select view_name,owner,text from dba_views;

Sequences:- To see sequence information

Select sequence_owner,sequence_name,last_number from dba_sequences;

Synonym:- To see synonym information

Select owner,synonym_name,table_owner,table_name from dba_synonyms;

Indexes:- To see index information

Select owner,index_name,table_name from dba_indexes;
To see index column info:-
Select index_name,table_name,column_name from dba_ind_columns;
To see row id:-
Select rowid from <tablename>;

Select owner,constraint_name,constraint_type,table_name from dba_constraints;
To see constraint columns:-
Select owner,constraint_name,table_name,column_name from dba_cons_columns;

Select owner,table_name,tablespace_name,status from dba_tables;

Select username,default_tablespace from dba_users;

Select tablespace_name,status from dba_tablespaces;

Select name,dbid,created,open_mode from v$database;

V$version has only one column  banner
Select * from v$version


Select file_name,tablespace_name,bytes/1024/1024,online_status,autoextensible from dba_data_files;
To see a datafile in mount stage based on its tablespace index:-
Select name,ts#,status,bytes/1024/1024 from v$datafile where ts# = 0;

Select owner,procedure_name from dba_procedures;
To see source code:-
Select text from dba_source where name=’name’;

Select owner,object_name,object_type from dba_objects;

Select owner,object_name,object_type from dba_objects where object_type =
Source code:-
Select text from dba_source where name = ‘FUNCTIONS‘;

Select owner,object_name,object_type from dba_objects where object_type =
Source code:-
Select text from dba_source where name = ‘---‘;

Select owner,trigger_name,trigger_type,table_name,column_name from dba_triggers;
Source code
Select text from dba_source where name= ‘---‘;

Control file:-
Select name,status from v$controlfile;

Log file:-
Select member,group#,status from v$logfile;

To see table privileges:-
Select grantee,owner,table_name,grantor,privilege from user_tab_privs;
To see user privileges:-
Select username,privilege,admin_option from user_sys_privs;



A tablespace is a logical structure which binds the objects. A tablespace is a container

for data files, i.e., a tablespace is a collection of one or more data files. One database has a
minimum of one tablespace. A tablespace is always associated with one or more data files.
A database is a collection of tablespaces. It is a one-to-many relationship.
- The size of a database is the sum of the sizes of its tablespaces.
- The size of a tablespace is the sum of the sizes of its data files.
- As we add tablespaces, the database size increases.
- We can't create a tablespace without a data file.
- Each tablespace has its own data files.
- Redo logs and control files never grow in size, so we never count these
files in the size of tablespaces. These are key structures.
- A data file cannot be shared across tablespaces.

Syntax to create tablespace:-

Create tablespace <tablespacename> datafile '<location>'
size <size>;
Ex:- create tablespace ts01 datafile
'/oraAPP/kittu/ts01.dbf' size 10m;

How do we make a tablespace offline?

Syn: - alter tablespace ts01 offline;
This means we can't access the data in the tablespace; we can't even perform a select on it.

How do we make tablespace online?

Syn: - alter tablespace ts01 online;

To see tablespace name, filename, size of data file:

Select tablespace_name, file_name, bytes/1024/1024 from dba_data_files;

To see size of database:

Select sum (bytes/1024/1024) from dba_data_files;
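The sum this query computes is plain megabyte arithmetic over the datafile sizes. A small Python sketch with hypothetical datafile sizes (the real values come from dba_data_files.bytes):

```python
# Hypothetical datafile sizes in bytes, one entry per datafile
datafile_bytes = [10 * 1024 * 1024, 50 * 1024 * 1024, 100 * 1024 * 1024]

# Equivalent of: select sum(bytes/1024/1024) from dba_data_files;
db_size_mb = sum(b / 1024 / 1024 for b in datafile_bytes)
print(db_size_mb)
```

With a 10 MB, a 50 MB, and a 100 MB file, the database size comes out as 160 MB.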

Dropping a tablespace:
Before dropping a tablespace, it is better to make the tablespace offline.
Syn:- drop tablespace ts01;
In this case only the tablespace is deleted, but the data files are retained at the o/s level.
To delete the files at the o/s level, i.e., contents and datafiles (contents means objects
such as tables, views, etc.) in the TS:
Syn:- drop tablespace ts01 including contents and datafiles;


Increasing the size of tablespace:-

We can increase the size of tablespace in two ways.
1) we can increase the size of data file
Syn:- alter database datafile ‘/oraAPP/kittu/ts01.dbf’
resize 50m;
2) we can add a datafile
Syn:- alter tablespace ts01 add datafile ‘----‘ size 10m;

How can we assign tablespace to new user?

Syn:- create user <username> identified by <password> default
tablespace <ts name>
Ex:- create user msb identified by msb default tablespace ts01;

How can we assign tablespace to existing user?

Syn:- alter user <username> default tablespace <ts name>
Ex:- alter user msb default tablespace chinni;

Note:- Important points:-

- If a datafile is very big, oracle encounters some issues. So we use datafiles of max. 5GB.
- If we don't mention the datafile location, oracle saves the file in $ORACLE_HOME/dbs.
- When we insert more data than a datafile can hold, it shows the below error.
Error:- unable to extend table msb.emp by 128 in tablespace chinni

To see free space of a tablespace:-

Syn:- Select sum(bytes/1024/1024) from dba_free_space where tablespace_name = '<TS name>';

NFS mount point:- (Network File System)

All the mount points that are local to our server are said to be local mount points.
Ex:- /oraDB, /oraAPP, /stage, /u001, etc.
A mount point which is placed on another server is said to be an NFS mount point.
Syn:- mount -t nfs <server>:<path> <mountpoint>
This NFS creation is done by the unix admin.

Maintenance of datafiles:-
- The physical architecture is maintained and managed by the ORACLE engine.
There are two ways in work nature:
1) Proactive:- the solution before the problem occurs.
2) Reactive :- the solution after the problem occurs.
- We can increase the datafile size automatically.
- This is possible by turning autoextend on.
Syn:- alter database datafile '----' autoextend on;


To see a datafile and its size dynamically:-

Syn:- select file_name, bytes/1024/1024 from dba_data_files
where tablespace_name ='<TS>';

How can we specify that a datafile autoextends up to some size:-

Syn:- alter database datafile '---' autoextend on maxsize <size>;

To see the maximum size of a datafile:- (autoextend)

Select file_name,autoextensible,maxbytes/1024/1024 from dba_data_files;

Increase the size of a datafile by a certain increment:-

Alter database datafile '---' autoextend on next 10m maxsize <size>;

Renaming a tablespace:-
It is possible only in 10G
Syn:- alter tablespace chinni rename to babu;
In 9i, it is not possible to rename a tablespace. To achieve this we have to follow the
below steps:
- create new tablespace
- move all tables from old TS to new TS
Syn:- alter table <tname> move tablespace <new TS>;
Ex:- alter table emp move tablespace venki;

To see table and tablespace name:-

Select table_name,tablespace_name from dba_tables;

To see how many datafiles are in a tablespace:-

Select count(*) from dba_data_files where tablespace_name =
'<TS name>';

-- > When we move a table from one TS to another TS, the table is still owned by the same user.

Bigfile tablespace:-
It is new in 10g. With this we can create a very big tablespace of terabyte size.
It can contain up to 4 billion (2^32) blocks, e.g., 32 terabytes with an 8K block size.
Syn:- create bigfile tablespace ts01 datafile '---' size 10g;

Renaming a datafile:-
To rename a datafile we have to follow the below steps.
- make the tablespace offline
- rename the datafile at the o/s level
syn:- mv <old filename> <new filename>
- rename the file at the sql level
syn:- alter tablespace venki rename datafile '<oldname>' to '<newname>';


- make tablespace online

We rename the file at the sql level to update it in the data dictionary.

To see user and tablespace:-

select username,default_tablespace from dba_users;


There are different phases in startup:-
1) Instance allocation:-
Memory for the SGA is allocated and the background processes start. Memory is
allocated by reading parameters from init.ora. This is the instance.
Instance started
2) Mount stage:-
The instance opens the control file.
Database mounted
3) Database open stage:-
The database is opened, i.e., the instance opens the datafiles and redo logs through the
control file, because the control file contains the info of the datafiles and redolog files.

Three methods to start database:-

I method:-
sql> startup
In this method all the three phases are executed at a time.

II method:-
sql> startup nomount
In this stage, only instance is allocated.
Sql> alter database mount;
In this stage the database is mounted. i.e second phase.
Sql> alter database open;
In this stage the database is opened. i.e third phase.
We issue startup nomount to perform 2 things:-
1) creation of database
2) creation of control file

III method:-
sql> startup mount
In this stage, phase1 and 2 are executed.
Sql> alter database open;
Database is opened.

Generally we open the database in Mount stage to perform maintenance activities like
renaming datafiles, default tablespaces, redologs etc. In this stage we can’t access dba_
views. We access only v$ views.


We can't make the SYSTEM tablespace or the default undo tablespace offline.

How can we rename datafile of system TS:
We can’t make system TS offline in DB open mode, because if we make it offline
we can’t access dictionary views i.e. we can’t access users, tables etc. so we rename the
datafile of system TS in mount stage only.
 open database in mount stage
startup mount
 move or rename system TS datafile in o/s level
 move or rename system TS datafile in sql level
Syn:- alter database rename file ‘<old>’ to ‘<new>’;
 we can also rename undo TS in mount stage only, because we can’t make it offline.

To know tablespace datafile names in mount stage:

Select ts#,name from v$tablespace where name='SYSTEM';
Select name from v$datafile where ts#=0;

We can rename datafile in 2 levels:-

1) Database level
 mount stage
2) Tablespace level
 open stage
i.e. making it offline and rename it.

Undo tablespace:-
It is for undo operations. It will maintain old data till we issue commit.

There are 4 methods to shut down the database:-
1) shutdown (or) shutdown normal
2) shutdown immediate
3) shutdown abort
4) shutdown transactional

Shutdown Immediate: It is the reverse of startup.

Phase 1  Database closed
All the connections (sessions) connected to the instance are killed. All the pending
transactions are rolled back.
A checkpoint happens and dirty buffers are flushed to the datafiles.
Datafiles and redolog files are closed.

Phase 2  Database dismounted

The control file is closed.
Phase 3  Instance deallocation (Instance closed)
Background processes are killed and the memory for the SGA is deallocated.
It does not wait for the users to disconnect from the DB.


Difference B/W Shutdown and Shutdown Immediate:-

- Shutdown Immediate kills all existing user sessions.
- Shutdown waits for the users to disconnect themselves.

Alert Log File:-

All the startup and shutdown activities are captured into a file called the alert log
file. The file name is alert_<dbname>.log.
By default it is stored in $ORACLE_HOME/rdbms/log.
Use tail -f <filename> to display the alert file content as it grows.

Shutdown normal:- (Shutdown)

When we use this option, all the steps which occur in Shutdown Immediate happen, except
the first 2 steps.
 It waits for the logged-in users of the database to quit.
 It is the default option, which waits for users to disconnect from the database.
 Further connections are prohibited.
 The database is closed and dismounted.
 The Instance is shut down and no Instance recovery is needed for the next DB startup.

Shutdown abort:- (Shut abort)

 The fastest possible shutdown of the DB, without waiting for calls to complete or
users to disconnect
 Uncommitted transactions are not rolled back
 SQL statements currently being processed are terminated
 All users currently connected to the DB are implicitly disconnected, and the next DB
startup will require Instance recovery
 We use this option if a background process terminates abnormally or when a power
problem occurs
 It just deallocates the Instance

Shutdown transactional:-
 This option is used to allow active transactions to complete first, i.e., it lets the
current transactions finish
 It doesn't allow clients to start new transactions
 Attempting to start a new transaction results in disconnection
 After completion of all transactions, any client still connected to the Instance is
disconnected
 Now the Instance shuts down
 The next startup of the database will not require any Instance recovery
 It will kill users who are idle
 In real time, we use shutdown immediate and shutdown abort


Startup Restrict:-
We use this option to allow only oracle users with the RESTRICTED SESSION system
privilege to connect to the database, i.e., only the DBA can have access to the DB. We can use
the alter command to disable this restricted session feature.
Syn:- alter system disable restricted session;
Actually we use this when we are in maintenance, so we can't give access to the
database to other users. We can enable the restricted session feature after logging in to the
database as the sys user.
Syn:- alter system enable restricted session;

Startup Force:- (Shutdown abort + startup)

Shuts down the current oracle Instance in shutdown abort mode before restarting it.
Force is useful while debugging and under abnormal circumstances. It should not normally
be used.

Read-only tablespace:-

A read-only tablespace allows users only to read from the tables within it. No
data manipulation is allowed. The read-only option causes the database not to write to
these files once their tablespace is altered to read only. This allows users to take advantage
of media that allow only read operations. The major purpose of making a tablespace
read only is to eliminate the need to perform backup and recovery of large, static portions of
a database. We can drop objects, such as tables and indexes, from a read-only tablespace, but we
can't create (or) alter objects in a read-only tablespace.


SYSAUX tablespace:-
It is new in Oracle 10g. It is used to store database components that were stored in
the SYSTEM tablespace in prior releases of the database. It is installed as an auxiliary TS to
the SYSTEM TS. When we create the database, some database components that formerly
created and used separate tablespaces now occupy the SYSAUX TS.
If the SYSAUX TS becomes unavailable, core database functionality remains
operational, but the database features that use the SYSAUX TS could fail or function with
limited capacity.



Control file:-

 Every oracle database has a control file

 A control file is a small binary file that records the physical structure of database.
It includes..
 Database name
 Names and location of associated datafiles and online rdo’s (SCN),
tablespace info, archive log info
 Timestamp of database creation
 Current log sequence number
 Check point information
 The control file must be available for writing by the oracle database server
whenever the database is open
 Without control file, the database cannot be mounted and recovery is difficult
 It is created at the time of creation of database
 We can create maximum of 8 control files
 We cannot directly control the control file; its size is determined at the
time of database creation
 It is very small in size; it is static and can't be edited manually
 Actually we need more than one control file to survive the failure of the first control file.
 If we have ‘n’ number of control files, all the files are same in size and contain
same info

How can we add one more control file to database?

Shut down the database using shutdown immediate
Make a copy of the existing control file in the same location or in another location
Syn:- cp c1.ctl c2.ctl
cp c1.ctl /oraDB/kittu/c2.ctl
Add the newly created control file location to the control_files parameter in init.ora
Ex:- control_files = '/oraDB/kittu/c1.ctl','/oraDB/kittu/c2.ctl'
Start the database

Note:- If we add the control file entry in init.ora without actually copying the file, oracle
searches for that file when we start the database and cannot read it, so the startup fails.

How can we remove control file?

Remove the control file location in init.ora file (parameter file)
After that remove the file in o/s level.
The dictionary view to get the control file size information:
Select block_size,file_size_blks from v$controlfile;
e.g., block_size=16384 and file_size_blks=370


((block_size * file_size_blks)/1024)/1024 -- > gives the control file size in MB

By firing the ls -l command in unix, we get the size in bytes.
In practice we keep 2 to 4 control files.
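Plugging the example values from the v$controlfile query above into the formula:

```python
# Values from the v$controlfile query above
block_size = 16384      # bytes per control file block
file_size_blks = 370    # number of blocks in the control file

# ((block_size * file_size_blks)/1024)/1024 -> control file size in MB
size_mb = (block_size * file_size_blks) / 1024 / 1024
print(size_mb)
```

So a 370-block control file with a 16K block size is about 5.78 MB, which should match what `ls -l` reports in bytes.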

V$parameter:- Lists the status and value of all parameters.

V$controlfile_record_section:- Provides information about the control file record sections.

show parameter control_files:- Lists the names, status, and location of the control files.



Each oracle has redo log files. These redo log files contains all changes made in
The purpose of RDO is if something happens to one of the datafiles, a copy of datafile
is maintained in RDO’s which brings the datafile to the state it had before it
became unavalible. i.e. it is used for recovery of data.
The size of RDO is static. We determine its size in the creation of database. We can’t
change its size unless in the maintenance.
The idea is first to store the transactional data in log buffer to reduce i/o retention.
When a transaction commits (or) check point occurs, the data in log buffer must
be flushed into disk for the recovery. It is done by LGWR
The redolog of database contains one or more redolog files. The database requires a
minimum of two files to guarantee that one is always for writing while the other is
being archived ( if the database is in archivelog mode)

LGWR writes to the redolog files in a circular fashion. When the current redolog file fills,
LGWR begins writing to the next available redolog file. When the last available
redolog file is filled, LGWR returns to the first redolog file and writes to it, starting the
cycle again. In this case, when the first RDO is overwritten, its data is lost. This happens
when the database is in NOARCHIVELOG mode.

This switching of writing to another redolog file after filling the former one is said to be
the log switch process.

Log switch:-
It is the point at which the database stops writing to one redolog file and begins writing
to another. The oracle DB assigns each redolog file a new log sequence number (LSN)
every time a log switch occurs and LGWR begins writing to it. When the
database archives redolog files, the archived log retains its LSN. A redolog file that is
cycled back for use is given the next available LSN.
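The circular reuse of groups and the ever-increasing log sequence number can be sketched with a toy loop (an illustration of the bookkeeping only, not of LGWR itself):

```python
# Toy model: redo log groups are reused in a circle; every log switch hands
# the next group the next log sequence number (LSN), which never repeats.
def simulate_switches(num_groups, num_switches):
    history = []
    for lsn in range(1, num_switches + 1):
        group = (lsn - 1) % num_groups + 1  # circular: 1, 2, 3, 1, 2, ...
        history.append((group, lsn))
    return history

print(simulate_switches(3, 5))
```

With 3 groups and 5 switches, group 1 is reused on the 4th switch but receives a brand-new LSN, which is exactly why an overwritten (unarchived) group means lost redo in NOARCHIVELOG mode.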

Archive Log Mode:

If the database is in archivelog mode, the database makes sure that the online redologs
are not overwritten. The filled redologs are archived (or) saved to another location (we
specify the location in the init.ora file).
We set this location in init.ora using the log_archive_dest parameter.

Noarchive Log Mode:

If the database is in noarchivelog mode, the online redologs can be overwritten without
making sure whether they are saved or not. This implies that the database cannot be recovered
up to the point of failure even if backups were made.


How can we know the database is in archive log (or) noarchive log mode:
- select log_mode from v$database;
- archive log list

ARCH background process:

ARCH is the archiver. Its task is to automatically archive the online redologs so as to
prevent them from being overwritten.
The archiver background process starts if the database is in archivelog mode and
automatic archiving is enabled, i.e., copying the data in the RDOs to another location is said
to be archiving.

How can we convert the database to archivelog mode:

To start archivelog mode
• Shutdown the database
• Define the parameters in the init.ora parameter file
• The parameters are --
• log_archive_start=true -- > Necessary in 9i only. Deprecated in 10g.
• log_archive_dest = '---' -- > location where we need to archive the log files
• log_archive_format = 'arch_%t_%s_%r.arc'
• The archived files are saved in the format we define in this
parameter. Syn:- alter system set log_archive_format = ----
• Start the database in mount stage
• Now fire the command to switch to archivelog mode.
• Ex:- alter database archivelog;

Archivelog to Noarchivelog mode:-

• shutdown immediate
• startup mount
• alter database noarchivelog;
• alter database open;
• We can switch between archivelog and noarchivelog only when the database is in the
mount stage, because the change must be recorded in the control file, and this activity is
done at the database level.
• We usually refer to a redolog file group rather than an individual redolog file.

Why we need more than one redolog file in a group:-

This is to safeguard against damage to any single file. When we create
multiple redolog members, LGWR concurrently writes the same redolog information to all of
them, thereby eliminating a single point of redolog failure. The files in a group are said to
be members. All are the same in size and contain the same data.


V$log_history  Contains the log history

V$log  Contains group information
V$logfile  Contains logfile (member) information

Select members, group# from v$log;

It gives the group number and the no. of files in the group.

select member, group# from v$logfile;

It returns the group number and the logfile names.

How can we create redo log groups:-

Alter database add
logfile '/oraAPP/redo1.rdo' size 5m;
Alter database add logfile group 1 ('/oraAPP/redo1.rdo',
'/oraAPP/redo2.rdo') size 10m;

How can we drop redo log groups:-

Alter database drop logfile group 1;

How can we add a member to an existing group:-

Alter database add
logfile member '/oraAPP/redo3.rdo' to group 1;

How can we drop a member:-

Alter database drop
Logfile member ‘/oraAPP/redo3.rdo’;

Why we need to drop log groups and members:

To reduce the no. of groups in the instance's redo log; in the case of disk failures; or
when a file is located in an improper location.

Before dropping a redolog group (or) member we have to perform the below steps:-
The group must be in inactive (or) active status, because we can't drop the
currently running (current) group.
We get the status of a group by
select group#, archived, status from v$log;
So we force oracle to switch from the current group to another group.
This is possible by using a command like
alter system switch logfile;
Now the current group changes.
*We can do this activity when the database is completely open.
*We can add (or) remove a group and its members in the mount stage and open stage also.


How can we convert binary data to text format:

strings -a c1.ctl > a
***When we are in the mount stage with archive log enabled, "archive log list" still shows
automatic archival as disabled.

Clearing a redo log file:-

alter database clear logfile group 3;
It will reinitialize the damaged group.

Clearing an unarchived log file:-

 Alter database clear unarchived logfile group 3;
This can be done without shutting down the database.

How can we rename (or) relocate redo log file members:-

1) shut down the database
2) copy the redo log files to the new location (or) rename them
mv <old> <new>
3) startup mount
4) alter database rename file '---------' to '------------';
5) alter database open;
5) alter database open;

Current, active, and inactive redo log files (or) groups:

Redo log files that are currently capturing data are the current redo logs.
A redo log group that is not current but is still needed for crash recovery is active.
A redo log group that has never been written to is said to be an unused RDO.
Redo log files that are no longer needed after being active are said to be inactive RDOs.

Adjusting the no.of archiver process:

 alter system set log_archive_max_processes=3;

Q) Why do we need more than 2 log groups?

Assume we have 2 log groups. One log group is in the current state and the other
group is being archived. When the first log group also fills while the second group is still
archiving, LGWR has no group to write to, and generally archiving is
slow. So, in order to prevent this, we need one more group.

To stop archive log process:-




P-File (parameter file):

It is the init<ORACLE_SID>.ora file. It is present in $ORACLE_HOME/dbs. This file is a
text file and is editable.
From 9i, oracle introduced a new file, the SPFILE.

SP-file (server parameter file):

It is a binary file and is not editable. If the database contains both a pfile and an spfile,
oracle starts using the spfile only, because it provides dynamic allocation.

What is the solution if the init file name is not in the proper form?

If the init file is x.ora, then start the database using the following command.
Syn:- startup pfile=$ORACLE_HOME/dbs/x.ora

How to create an spfile?

We create the spfile from the pfile using the below command.
Syn:- create spfile from pfile;

How to check whether the spfile is in use?

Show parameter pfile (or) show parameter spfile
If the value column of this parameter shows a file path, it indicates that the database
started using the spfile.

How to create pfile from spfile ?

Syn:- create pfile from spfile;

How to edit the spfile?

Oracle does not support editing the spfile directly; to edit it follow the below steps
1) Drop the existing spfile
2) Edit init.ora
3) Create a new spfile from the pfile
4) Start the database

How can we delete a database?

In 9i, to drop a database we delete the datafiles, RDOs, control files, etc. manually.
But in 10g, a new feature is introduced; to drop a database, follow the below steps.
1) Open the database in restricted mount mode
SQL> startup restrict mount
2) Drop the database
SQL> drop database;
Then automatically all the database-related files are deleted except init.ora.

ALTER SYSTEM SET <parameter> = <value> SCOPE = [spfile|memory|both]
spfile  the new parameter value is updated only in the spfile


memory  the new parameter value is updated only in the running instance

both  the new parameter value is updated in both the spfile and the running instance

Audit files:-
These files contain information logged when we start sqlplus as 'sys as sysdba'. A file is also
written when we connect from another user to 'sys as sysdba'.
Who logged in and started the database is stored. It contains the o/s user name, database
name, system name, oracle_home, database user, privilege, time, etc.
Whenever we connect to the sys user, oracle creates an audit file.
Ex:- ora_3702.aud


Types of tablespaces:-

1) Permanent tablespaces
2) Undo tablespaces
3) Temporary tablespaces
In 10g, we cannot have more than 65,536 tablespaces.

Permanent tablespaces:
The tablespaces which are used to store data permanently are said to be
permanent tablespaces.
Ex:- system, sysaux, users

Undo tablespaces:
Every oracle database must have a method of maintaining information that is
used to roll back (or) undo changes to the database. Such information consists of records of
the actions of transactions, primarily before they are committed. Such records are collectively
referred to as undo.
The undo tablespace is used to store the undo records of the database, i.e., uncommitted
transactions (pending data). We create the undo tablespace at the time of database creation.
If there is no undo tablespace available, the instance starts but uses the SYSTEM
tablespace as the default undo tablespace. This is not a recommended option, so create the undo
tablespace at the time of database creation (or) after that by setting the undo_tablespace parameter.

Creating an undo tablespace:

 Create undo tablespace undots01
datafile '/oraAPP/kittu/db1/undo1.dbf' size 50m;
We can create multiple undo tablespaces, but there is no use, because the instance uses only
one undo tablespace at a time.


How is the data stored in undo tablespaces?

[Diagram: a row with salary 1000 is updated to 5000; the old value 1000 goes to the
undo TS, and a rollback restores it to the user TS.]

If a table contains a salary of 1000 for some employees and we update the salary from
1000 to 5000, then the records which contain salary 1000 are stored in the undo
tablespace and salary 5000 is updated in the table. If we commit, the new values remain;
otherwise 1000 comes back to the table.
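The commit/rollback behaviour described above can be modelled in a few lines (a toy model of before-image storage for illustration, not Oracle's actual undo mechanism):

```python
# Toy model: the "undo tablespace" keeps the before-image until commit/rollback.
class ToyRow:
    def __init__(self, salary):
        self.salary = salary
        self.undo = None             # before-image held in "undo"

    def update(self, new_salary):
        self.undo = self.salary      # old value goes to undo
        self.salary = new_salary     # new value goes to the table

    def rollback(self):
        if self.undo is not None:
            self.salary = self.undo  # old value comes back from undo
            self.undo = None

    def commit(self):
        self.undo = None             # before-image no longer needed

row = ToyRow(1000)
row.update(5000)   # table shows 5000, undo holds 1000
row.rollback()     # 1000 comes back from undo
print(row.salary)
```

Running the sketch ends with the salary back at 1000; calling `commit()` instead of `rollback()` would discard the before-image and keep 5000.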
We can view the tablespace type from dba_tablespaces:
Select tablespace_name,contents from dba_tablespaces;
To know which undo tablespace is assigned to the database:
show parameter undo
or query the dictionary view database_properties.
How to set the undo tablespace from the sql prompt?
Alter system set undo_tablespace='UNDOTS01';
If there is an spfile, this change is permanent for the database, because the spfile
allows dynamic allocation; without an spfile the change lasts only until the instance
restarts, so we need to set undo_tablespace in init.ora as well.

Dropping undo tablespace:

Drop tablespace undots01;

Resizing the undo tablespace:

We can resize the undo tablespace in 3 ways:
- adding a new datafile
- extending the size of an existing datafile
- turning autoextend on
Same process as we did for permanent tablespaces.

Renaming undotablespaces:
Similar to permanent tablespaces.

Temporary Tablespaces:
Temporary tablespaces are used to manage space for database sort operations
and for storing global temporary tables.


If we join 2 large tables and oracle cannot do the sort in memory (see the SORT_AREA_SIZE
initialization parameter), space is allocated in a temporary tablespace for doing
the sort operation. Other sql operations that might require disk sorting are:
create index,
select distinct,
order by,
group by
The DBA should assign a temporary tablespace to each user in the database to
prevent them from allocating sort space in the SYSTEM tablespace.

Unlike normal datafiles, tempfiles are not fully initialized. When you create a tempfile,
oracle only writes to the header and last block of the file.
This is why it is much quicker to create a tempfile than a normal database file.
Tempfiles are not recorded in the database's control file. This implies that one can just
recreate them whenever we restore the database (or) after deleting them by accident.
One cannot remove datafiles from a tablespace until we drop the entire tablespace.
However, one can remove a tempfile:
View:- dba_temp_files
Syn:- alter database tempfile
'/oraAPP/temp1.dbf' drop including datafiles;
If we remove all tempfiles from a temporary tablespace, we may encounter
Error ORA-25153: Temporary Tablespace is Empty
 Use the below syntax to add a tempfile to a temporary tablespace
Syn:- alter tablespace temp
add tempfile '/oraAPP/temp02.dbf' size 100m;

How can we create a temporary tablespace ?

Create temporary tablespace temp
Tempfile ‘/oraAPP/t1.dbf’ size 50m;

How can we assign a default temporary tablespace to the database?

Alter database default temporary tablespace temp;

How can we assign a temporary tablespace to a user?

create user x identified by y temporary tablespace temp;
alter user x temporary tablespace temp;

How can we know which temporary tablespace is assigned to which user?

 From dba_users
 select username,temporary_tablespace from dba_users;
It is better to assign a default temporary tablespace to the database to avoid assigning it
to each user every time.



We connect to the database only as a user.

Creating user:
Syn: Create user username identified by password;
Ex:-create user kittu identified by kittu;

Changing password:
Ex:-alter user kittu identified by ramu;

Lock the user:

Syn:-alter user username account lock;

Unlock the user:

Syn:- alter user username account unlock;

Password expire:
Syn:- alter user username password expire;

Assigning default tablespace to user:-

Syn:- alter user kittu Default tablespace chinni;

Assigning temporary tablespace to user:

Syn:- alter user kittu Temporary tablespace chinni;

Dropping user:
Syn:- drop user kittu cascade;


A privilege is a right to execute a particular type of sql statement (or) to access
another user’s objects.
A privilege is a right to perform a specific activity. A privilege can be assigned to a user
(or) a role.

Privileges are of two types:-

1)System privileges
2)Object privileges
The fundamental privileges are create session and create table.


System privilege:
A system privilege is a right to perform a particular action (or) to perform an action
on any schema object of a particular type. For example, the privileges to create a tablespace
and to delete the rows of any table in a database are system privileges. To perform DDL
activities, we need system privileges.
 who can grant and revoke system privileges ?
• users who have been granted a specific system privilege with the admin option.
• Users with the grant any privilege system privilege
i.e., a DBA can grant system privileges
granting and revoking system privileges:
Syn: Grant create session to kittu;
Grant create table to kittu;
Syn: revoke create session from kittu;
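The admin option described above can be sketched as follows (kittu and ramu are the hypothetical users used in this section):

```sql
-- Grant CREATE SESSION with ADMIN OPTION: kittu can now pass it on.
GRANT CREATE SESSION TO kittu WITH ADMIN OPTION;

-- Connected as kittu, the privilege can be re-granted:
GRANT CREATE SESSION TO ramu;

-- Revoking a SYSTEM privilege from kittu does NOT cascade to ramu.
REVOKE CREATE SESSION FROM kittu;
```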

Object privilege:
An object privilege is the permission to perform a particular action on a specific
schema object. To perform DML activities on another user’s objects, we need object
privileges. Some schema objects, such as clusters, indexes, triggers, and database links,
do not have associated object privileges. Their use is controlled with system privileges.
For example, to alter a cluster, a user must own the cluster (or) have the alter any cluster
system privilege.

Who can grant object privileges ?

Owner of the object
A user with the grant any object privilege can grant (or) revoke any specified object
privilege to another user, with (or) without the grant option of the grant statement

Granting and revoking object privileges

Grant:- Grant select on emp to kittu; (owner)
Grant select on scott.emp to kittu; (dba)
Revoke:- revoke select on emp from kittu; (owner)
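A sketch of the grant option for object privileges, using scott’s emp table and user kittu from the examples above:

```sql
-- As the owner (scott): allow kittu to query emp and to re-grant that right.
GRANT SELECT ON emp TO kittu WITH GRANT OPTION;

-- Revoking an OBJECT privilege cascades: anyone kittu granted it to loses it too.
REVOKE SELECT ON emp FROM kittu;
```

Note the contrast with system privileges, where revoking does not cascade.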

How to know the privileges of a particular user ?

Select * from session_privs;
dba_sys_privs view is used to see system privileges
all_sys_privs, user_sys_privs
dba_tab_privs view is used to see object privileges.

Administrative privileges:
Administrative privileges that are required for an administrator to perform basic
database operations are granted through two special system privileges, sysdba and sysoper.

SYSDBA can perform the following operations:

 Perform startup and shutdown operations
 Alter database open, mount, backup (or) change character set


 Create database, spfile

 Archive log and recovery
 Includes the restricted session privilege

Sysoper can perform following operations:

 Perform startup, shutdown
 Create spfile
 Alter database mount/open/backup
 Archive log and recovery
 Includes the restricted session privilege
DBA_COL_PRIVS is used to view column privileges.

Other dictionary views:

Column_privileges, table_privileges, all_tab_privs_made, user_tab_privs_made
Dba_tab_privs_made doesn’t exist

A role is a set of privileges.
Managing and controlling privileges is made easier by roles, which are named
groups of related privileges that you grant, as a group, to users (or) other roles. Within a
database, a role name must be unique, different from usernames and all other role names.
Unlike schema objects, roles are not contained in any schema.
 who can grant (or) revoke roles ?
• any user with the grant any role system privilege can grant or revoke any role
• any user granted a role with the admin option can grant (or) revoke that role to (or) from
other users (or) roles of the database.
There are 18 predefined roles.
Ex:- connect, resource, dba, select_catalog_role etc.,

Creating a role:
Syn:- create role rolename;
Ex:- create role abc;

Granting privileges to role:

Syn:- grant create session to abc;

Syn:- revoke create session from abc;

Dictionary views:
dba_roles is used to view total roles information in a database.
dba_role_privs is used to know which roles are assigned to users.
session_roles is used to view the roles for a particular session.
role_role_privs is used to view which roles are assigned to roles.
role_tab_privs is used to view which roles are assigned on tables (or) colums.


role_sys_privs is used to know the privileges of roles.

user_application_roles v$pwfile_users
A privilege takes effect immediately after granting. But a role takes effect only when we
reconnect, i.e. in the next session.
There is another way to activate a role in the current session:
Sql> set role connect;
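The role workflow above can be sketched end to end (abc and kittu are the example names used in this section):

```sql
-- Create a role and load it with privileges.
CREATE ROLE abc;
GRANT CREATE SESSION, CREATE TABLE TO abc;

-- Grant the role to a user; it is active from the user's next session.
GRANT abc TO kittu;

-- Or activate it immediately in the current session.
SET ROLE abc;
```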

A quota is some reserved space on a tablespace. It limits how much space a
user can use on a tablespace.
A quota can be assigned to a user at the time of creation (or) after the creation.
1) create user abc identified by abc quota 10m on system;
2) alter user kittu quota 10m on system;
3) deleting quota: alter user kittu quota 0m on system;

dictionary views:
dba_ts_quotas is used to know how much quota is reserved for a particular
user on a tablespace.
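A quick look at the quota view mentioned above (KITTU is the hypothetical user from the examples; in dba_ts_quotas, max_bytes = -1 means unlimited):

```sql
SELECT username, tablespace_name, bytes, max_bytes
FROM   dba_ts_quotas
WHERE  username = 'KITTU';
```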


A profile is a set of limits on database resources. Profiles are used to manage the
resources of a database.
By default, a profile named default is available in the database.
If we assign a profile to a user, that user cannot exceed its limits.
To enable resource limits dynamically, we need to set the resource_limit parameter to true:
 Alter system set resource_limit=true;
To see this parameter:
Show parameter resource_limit
To view profile information:
Select * from DBA_PROFILES;
Columns: profile, resource_name, resource_type, limit
Actually a profile has 2 types of parameters:
1) resource parameters
Can be viewed by the user_resource_limits view
2) password parameters
Can be viewed by the user_password_limits view
 To create profile,we must have create profile system privilege
Syn:- create profile abc limit sessions_per_user 2
Idle_time 30
Connect_time 10
Failed_login_attempts 2;
 How to alter a profile
Syn:- alter profile abc limit idle_time 10;
 How to drop a profile


Syn:- drop profile abc cascade;

 How to assign profile to user
Syn:-alter user kittu profile abc;

Resource parameters          Password parameters

Composite_limit              failed_login_attempts
Sessions_per_user            password_life_time
Idle_time                    password_reuse_max
Connect_time                 password_verify_function
Cpu_per_session              password_lock_time
Cpu_per_call                 password_grace_time

UNLIMITED: When a resource parameter is specified with this, it indicates that a user
assigned this profile can use an unlimited amount of that resource. When specified with a
password parameter, unlimited indicates that no limit has been set for the parameter.
SESSIONS_PER_USER: specifies the no. of concurrent sessions allowed per user.
CONNECT_TIME: specifies the allowable connect time per session, in minutes.
IDLE_TIME: specifies the allowed continuous idle time, in minutes, before the user is disconnected.
FAILED_LOGIN_ATTEMPTS: the no. of failed attempts to log into the user account before
the account is locked.
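A sketch of a profile that uses both kinds of parameters (the profile name and limit values are illustrative):

```sql
CREATE PROFILE app_user_profile LIMIT
  sessions_per_user      2     -- resource: concurrent sessions
  idle_time              30    -- resource: minutes of idle time
  connect_time           60    -- resource: minutes per session
  failed_login_attempts  3     -- password: lock after 3 failures
  password_life_time     90    -- password: days before expiry
  password_lock_time     1;    -- password: days the account stays locked

ALTER USER kittu PROFILE app_user_profile;
```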


When oracle itself creates the files, those files are said to be oracle managed files (OMF).
There is no need to define the locations and names of CRD (control, redo, data) files.
The only problem with OMF is the naming convention.
Creating a database using OMF:-
For this we need to add 2 parameters in init.ora

In init.ora:-


Db_create_file_dest= ‘/oraAPP/kittu/database1’
Db_create_online_log_dest_1= ‘/oraAPP/kittu/database1’
connect sys as sysdba  startup nomount  create database kittu

Then the database is created, with a directory created with the database name. In that, 3
directories are created. They are:
Datafile  in this, all the data files are stored
Online log  in this, the log files are created
Control file  in this, the control files are created
A SYSTEM tablespace with a datafile is created, with 200mb size, and is autoextensible.
A SYSAUX tablespace with a datafile is created, with 100mb size, and is autoextensible.
An UNDO tablespace named SYS_UNDOTS is created, with 120mb size, and is
autoextensible.
2 redo log groups are created, each one with size 100mb; each one contains
only one member.
It creates one control file.
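The OMF setup above can be sketched as an init.ora fragment (the paths are the hypothetical ones used in this section):

```
# Datafiles and tempfiles (and, if no log/control destination is set,
# redo logs and control files too) are created here:
db_create_file_dest = /oraAPP/kittu/database1

# Optional separate destinations for redo logs and control files (multiplexing):
db_create_online_log_dest_1 = /oraAPP/kittu/db1
db_create_online_log_dest_2 = /oraAPP/kittu/db2
```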

*In 10g, we can mention more than one destination parameter for redo logs.
In this case we mention in init.ora the parameters
db_create_online_log_dest_1 and db_create_online_log_dest_2.
Now the datafile is created in db1.
Two control files are created, in db1 and db2.
 Two redolog groups are created, in db1 and db2.
Each group contains 2 members.
The control file in the 1st dest location is the primary one.
The ‘*’ prefix in the pfile indicates the parameter applies to all instances; this matters when
we use RAC instances.
After creation of the database, we need to specify the control files location in init.ora,
i.e., control_files=/oraAPP/db1/kittu/controlfile/a1_ctr.ctl
Then only the control file is opened.

If we don’t mention undo_management, it is in manual mode, and we can’t assign an
undo_tablespace to the database.

Drop database:
Sql> startup mount
Sql>alter system enable restricted session;
Sql>drop database;
When we drop a database, all the physical structures of the database are removed.


When we create any tablespace, datafile, or logfiles, a directory with the database name as
its name is created in db1/kittu/kittu. In this, again three directories are created:
1) datafile
2) online log
3) control file
All the files we create after the creation of the database will be stored in these locations.
The users’ related datafiles and redos created after creation of the database will be stored
in these directories.
 Note that the OMF default size is 100mb, and the file size can be overridden at any
time. To specify the file size, you can also bypass OMF and specify the filename and
location in the datafile clause.
 Oracle enhanced the oracle 9i alert log to display messages about tablespace creation
and data file creation. To see the alert log, you must go to the background_dump_dest
directory.
 show parameter background_dump
 The parameter db_recovery_file_dest defines the location of the flash recovery area,
which is a default file system directory (or) ASM disk group where the database creates
RMAN backups (when no format option is used), archived logs (when no other local
destination is configured), and flashback logs.

Create tablespace:-
Create tablespace ts01;
With OMF, this alone creates a datafile of size 100mb.

Adding a file to tablespace:-

Alter tablespace add datafile;
Alter tablespace ts01 add datafile ‘--path--‘ size 10m;

 Drop a tablespace:-
Drop tablespace ts01;

Creating a log group:-

Alter database add logfile group 3;
Alter database add logfile member ‘------‘ to group 3;
Dropping a log group:-
Alter database drop logfile group 3;
Benefits of OMF:
1) They make the administration of the database easier.
2) They reduce corruption caused by administrators specifying the wrong file.
3) They reduce wasted disk space consumed by obsolete files.
4) They simplify the creation of test and development databases.



When we want to change the database architecture, we use the alter database command.
When we want to change a parameter for a specific session (user), we use alter session.
When we want to change parameters for the entire database, we use the alter system command.

Parameters are of 2 types:

Dynamic parameters
The parameters whose values can be modified dynamically at run time.
Static parameters
The parameters whose values cannot be modified at run time.

We can change in init.ora

Static parameters are of 2 types:
1)tunable parameters:-
Parameters whose values can be modified in init.ora. For
these values to take effect, we need to bounce the database.
Ex:- db_files
2)non_tunable parameters:
Parameters whose values cannot be modified at any time.
To know parameter information:- v$parameter
To know whether a parameter is modifiable for only a session or for the entire database:
Select name, isses_modifiable, issys_modifiable from v$parameter;

Alter command:
Alter system/session set parameter_name=value scope=[spfile/memory/both]

Spfile  it stores only in the spfile

Memory  it stores only in memory (the running instance);
it is erased when we bounce the database
Both  both in memory and spfile (the default)
Ex:-alter system set db_cache_size=100m scope=both;
 strings –a spfilekittu.ora | grep log_archive
This shows the log_archive parameters in text format.



Inventory means storage.

 The oracle inventory is the repository (directory) which stores/records oracle software
products and their oracle_home locations on a machine. This inventory nowadays is in
XML format and is called the XML inventory, whereas in the past it used to be in binary
format, called the binary inventory.
 There are basically 2 types of inventories:
1) local inventory:- it is also called the oracle home inventory. The inventory inside each
oracle home is called the oracle_home inventory (or) local inventory. This inventory
holds information related to that oracle home only.
Location: $ORACLE_HOME/inventory
2) global inventory:- it is also called the central inventory. It holds the information
about the oracle homes installed on that server. It contains home entries like
<HOME NAME=”ramu” LOC=”/oraAPP/kittu” TYPE=”O” IDX=”1”/>
The global inventory location is determined by the file oraInst.loc.
It is in /etc [linux] and /var/opt/oracle [solaris]. It contains the pointer to the
central inventory location.

If we want to see the list of oracle products on a machine, check the file inventory.xml:-
Location:- /etc/oraInventory/ContentsXML/

Can we have multiple global inventories on machine?

Yes, you can have multiple global inventory but if you are upgrading (or) applying
patch then change inventory pointer oraInst.loc to respective location.

What to do if my global inventory is corrupted?

No need to worry; we can recreate the global inventory on the machine using OUI,
attaching the already installed oracle home with the attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc



Actually, in a real time environment, we provide the database to the client as a
non-local connection.
A non-local connection means connecting to the database server through software
like sqlplus, OEM, VB, .NET, or JAVA from a client system.
There is some software, called net8 / oracle net, for configuring the tns entry in the
client system. It comes with the oracle installation.
For java, jdbc is used to connect to oracle.
For .net, odbc is used to connect to oracle.
Oracle connectivity components are toad, sqlplus, oem etc.
sqlplus is software which comes with oracle to connect to the database (isqlplus is the
web-based version).

To access the database from a client system, we have to follow the following steps:-
Step1:- server side
We need to configure a listener on the server. A listener is a utility which listens
for database connections.
It is an executable file.
Its configuration file is $ORACLE_HOME/network/admin/listener.ora

In one server we may have more than one listener, depending on the load (no. of clients
communicating with the server).
 Next open the listener.ora file. It is a readable text file.
 $ vi listener.ora



While defining listener, we provide


 Listener name
 List of sid’s (databases)
 Protocol (tcp/ip)
 Port number (default 1521). We must have different port numbers for different listeners.
 Host name (ip address)
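The entries above can be sketched as a minimal listener.ora (the listener name KITTU, the host address, and the paths are hypothetical):

```
KITTU =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.10)(PORT = 1521))))

SID_LIST_KITTU =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = kittu)
      (ORACLE_HOME = /oraAPP/kittu/product/10.2.0)))
```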

After configuring the listener in listener.ora, open the database and exit, then follow the below:

$lsnrctl start -- it will start all the listener in listener.ora

$lsnrctl start kittu -- it will start listener kittu

Lsnrctl is an executable which is in ORACLE_HOME/bin

Step2: client side

 Install oracle network, protocol adapter for tcp/ip (client side components)
 We need to configure tns entry to connect to database through a listener
 We use sqlplus, toad, oem , isqlplus as the front end to connect to database.
 For configuring the tns entry, we have to know the ORACLE SOFTWARE home on the client machine.
 We need to configure the tns entry in the tnsnames.ora file present in that home.
 For sqlplus, we find this by right clicking, selecting properties, and finding the home.
 In Toad there may be multiple homes; we know the home by opening toad. We can set the
toad home by clicking sql*net configuration help, selecting the corresponding home, and
clicking on the set as toad home button. Then find the location of tnsnames.ora and configure it.
 We need to define an alias in that home.
 While defining alias provide
1) Alias name(tns name (or) entry)
2) Target server name (or ip address)
3) Target sid (database_name)
4) Protocol (tcp)
5) Target port number(defined in listener.ora)
 After defining the alias, we check using
C:\> tnsping <aliasname>
(tns ping utility for oracle, where as ping is utility for tcp/ip)


Tnsentryname =
(description =
(address = (protocol = tcp)(host = <server ip>)(port = 1521))
(connect_data = (sid = <database_name>)))


Step 3:

Now we can connect to database through sqlplus (or) oem or toad

 open sqlplus or toad
Sqlplus  username: kittu [username in the database]
Password: xxxx
Host: xhni [tnsname]
(or) Username: kittu/xxxx@xhni

Toad  Database name: chinni (tnsname)
Schema: kittu
Password: ram

Then click on ok button

Now we can access the database.

To know whether the listener is started or not:-

$ps –ef | grep tns

Stop listener:

$lsnrctl stop kittu -- command completed successfully

When the listener knows about an instance, lsnrctl status shows output like:

Service “dba” has 1 instance(s).
Instance “dba”, status UNKNOWN, has 1 handler(s) for this service…

The command completed successfully

 One listener can have multiple service handlers for one or more instances,
i.e. one listener for multiple databases. In tnsnames.ora we define 2 tns entries with the
same port but with different sids:
dba1, dba2



 In case of listeners with different port numbers:

 we can define multiple listeners for one database


 In tnsnames.ora we define multiple tns entries with different port numbers and the same
sid.
Everything will be the same; only we have to create another listener with a different port
number.

 Using the lsnrctl command we can:
1) Start
2) Stop
3) Services  to know the service handlers of the server (dedicated or mts, local or remote)
4) Debug
5) Status
6) Help
7) Reload  it will restart the listener
 The listener can be started regardless of the status of the instance.
 If we want to keep listener.ora in a non-default location,
we need to define the TNS_ADMIN environment variable in .bash_profile:
export TNS_ADMIN=/home
Then oracle will look for listener.ora in the /home directory.
 The tnsnames.ora file can have ‘n’ number of tns entries.
 There is no significance to the tns entry name; we can give any name.
 Listener tracing can be enabled with a trace level such as admin.


 We can know each session’s information from the v$session view.

 The commonly used columns in v$session are:
sid, serial#, username, logon_time, status
 In v$session there is no username for background processes.
 By using sid and serial# we can kill a session.

To know your own session id:

Select sid from v$mystat where rownum=1;

To kill a session:
Syntax: - alter system kill session ‘sid,serial#’;
Ex: alter system kill session ‘1,20’;
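Putting the two steps above together (KITTU is a hypothetical username; the sid/serial# values come from the query):

```sql
-- Find the target session.
SELECT sid, serial#, username, status
FROM   v$session
WHERE  username = 'KITTU';

-- Kill it using the values returned, e.g. sid=1, serial#=20.
ALTER SYSTEM KILL SESSION '1,20';
```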


We know that data is stored in datafiles.

At the finest level of granularity, oracle database stores data in data blocks (also called
logical blocks, oracle blocks, or pages).


A segment is a set of extents that contains all the data for a specific logical
storage structure within a tablespace.
For example, for each table, oracle database allocates one or more extents to form the
table’s data segment.

An extent is a specific number of contiguous data blocks that are allocated
for storing a specific type of information.

Oracle Block:
A block is the smallest unit of storage in Oracle. The size of a data block is fixed when
the database is created and cannot be changed except by rebuilding the database from scratch.
This is the primary block size. The sizes of data blocks are 2k, 4k, 8k, 16k, 32k.
The size of the block is determined by the parameter db_block_size in the init.ora file.

 In the o/s also, data is stored in blocks; the o/s file block size is 512
bytes or 1k.
 When we try to read some data, oracle uses db blocks. Oracle translates oracle blocks
to o/s blocks while reading data.
 In 10g, the default blocksize is 8k.
In 9i, the default blocksize is 2k.

 When we define a datafile of size 1m with a block size of 8k, it contains

128 blocks.



Block header
Table directory
Row directory

Free space

Row data


Used space

Block header: It contains general block info, such as the block address and the type of
segment (table or index).

Table directory:- This portion of the data block contains information about the tables having
rows in the block.

Row directory:- This portion of the data block contains info about the actual rows in the block
(including the address of each row piece in the row data area). After the space has been
allocated in the row directory of a data block’s overhead, this space is not reclaimed when a
row is deleted. Therefore, a block that is currently empty but had up to 50 rows at one
time continues to have 100 bytes allocated in the header for the row directory. Oracle
database reuses this space only when new rows are inserted in the block.

Overhead: The data block header, table directory, and row directory are referred to
collectively as overhead. Some block overhead is fixed in size; the total block overhead
size is variable. On average, the fixed and variable portions of data block overhead total 84
to 107 bytes.

Rowdata: This portion of the data block contains table or index data. Rows can span blocks.

FreeSpace: Free space is allocated for the insertion of new rows, and for updates to rows
that require additional space.

Pctused: This parameter sets the minimum percentage of a block that can be used for row
data plus overhead before new rows are added to that block. After a block is filled to the
limit determined by pctfree, oracle database considers the block unavailable for the insertion
of new rows until the percentage of used space in that block falls beneath the parameter
pctused. Until that value is achieved, oracle database uses the free space of the data block
only for updates to rows already contained in the data block.


For example:- pctused 40

In this case, a data block used for this table’s data segment is considered unavailable
for the insertion of any new rows until the amount of used space in the block falls to
39% or less.

Init trans: This parameter specifies the initial number of concurrent transaction entries
allocated within each data block.

Freelists: this parameter is used in RAC.

Tip: once the primary block size is set, you can still create a new tablespace with an
alternate block size and create tables in it with their own parameters.

Syntax:- create table abc(a number)

Pctfree 10 , pctused 30, initrans 10;

To obtain object level parameter information:-

Select table_name, pct_free, pct_used, ini_trans, max_trans from

dba_tables where table_name=’ABC’;

By default:- pctfree 10, pctused 40, initrans 1, maxtrans 255

If we specify parameters: pctfree 10, pctused 30, initrans 30, maxtrans 255

Extent Management

An extent is an uninterrupted or contiguous allocation of blocks within a segment.

Extents are assigned to a segment automatically by oracle. Oracle will allocate anything in
the form of extents.
An extent must be a set of contiguous blocks within a single datafile, so an extent cannot
span multiple datafiles. Oracle will allocate the size of extents based on the type of tablespace.
When we create a table, oracle database allocates a segment and an initial extent of a
specified number of data blocks. The size of an extent is determined by the storage parameters.
These parameters are also called object level parameters.

Storage Parameters:

INITIAL: This parameter specifies the size of the first extent.

NEXT: If the data blocks of a segment’s initial extent become full and more space is
required to hold new data, oracle database automatically allocates an incremental extent


for that segment. The size of the incremental extent is the same as or greater than the
previously allocated extent, i.e., we specify the extent size after the initial extent through
this parameter.

MINEXTENTS: This parameter specifies the total number of extents to be allocated

when we create a table (segment) or index.

MAXEXTENTS: This parameter specifies up to how many extents a segment can grow.

PCTINCREASE: This parameter specifies the incremental percentage of extent size that is

to be applied after the NEXT extent. By default its value is 50%.

For example consider

Initial 1m
Next 1m
Minextents 1
Maxextents 4
Pctincrease 50%

1m 1m 1.5M 2.25M 3.375M

Initial 1m
Next 1m
Minextents 2
Maxextents 5
Pctincrease 20%

1m 1m 1m 1.2m 1.44m
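The growth sequences above follow next*(1+pctincrease/100) once pctincrease kicks in. A quick sketch query for the first example (it runs against the dummy dual table; real allocations are additionally rounded to block multiples):

```sql
-- Extent sizes for initial=1m, next=1m, pctincrease=50:
-- extents 1 and 2 are 1m; from extent 3 on, each is 1.5x the previous.
SELECT level AS extent_no,
       CASE WHEN level <= 2 THEN 1
            ELSE POWER(1.5, level - 2)
       END AS size_mb
FROM dual
CONNECT BY level <= 5;
```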

If we don’t specify storage parameters for an extent, oracle itself allocates the default
storage parameters.
A segment is a collection of extents.
The segment name is nothing but the object name; when we create a table or index, a
segment is created.
By default each extent contains a max of 5 blocks and a min of 2 blocks.


Create a tablespace and a segment, and find the storage parameters without specifying any:

Sql> create tablespace tbs datafile ‘--path--’ size 10m;

Sql> alter user kittu identified by kittu;
Sql> conn kittu/kittu
Sql> create table ram(a number);

We can retrieve the extents information from dba_extents and dba_segments.

Sql> select initial_extent, next_extent, max_extents, min_extents,
pct_increase, blocks, bytes from dba_segments where segment_name=’RAM’;
Sql> save sess.sql

All the parameters, blocks and their size for extents are allocated as per operating system.

Create a table in the tablespace with some parameters and check the parameters:

Sql> create table chinni(a number)

storage (initial 1m next 1m minextents 1 maxextents 5
pctincrease 100);
Sql> @sess.sql

Now also, maxextents will be taken as o/s specific.

When the database has db_block_size as 8k,
if we define the extent size as 1m,
then each extent holds 128 blocks.

Q) How can we determine a table or index size from dba_extents or dba_segments?

 Select segment_name, bytes from dba_extents where segment_name=’<table_name>’;
Select segment_name, bytes from dba_segments where segment_name=’<table_name>’;

Extent Management

A tablespace is a logical storage unit.

We say a tablespace is logical because it is not visible in the file system; oracle stores
data physically in datafiles.

How to create tablespace?

Create tablespace ts_name
Datafile ‘………….’ Size 2m
Minimum extent (this ensures that every used extent size in the tbs is a multiple of the
specified size)


Logging: By default, tablespaces have all changes written to redo.

Nologging: The tablespace does not have changes written to redo.
Online: The tablespace is online, i.e. available.
Offline: The tablespace is unavailable immediately after creation.
Permanent: The tablespace can be used to hold permanent objects.
Temporary: The tablespace can hold temp data.

Extent Management is of two types:

a) Dictionary extent management b) Local extent management

A tablespace maintained with dictionary extent management is called dictionary managed.

A tablespace maintained with local extent management is called locally managed.

Locally managed tablespace: The extents are managed within the tablespace. In locally
managed tablespaces, all the tablespace information and extent information is stored in
the datafile header of that tablespace, and data dictionary tables are not used for storing
extent information.
The advantage of LMTS is that no DML is generated against the data dictionary, which
reduces contention on data dictionary tables, and no undo is generated when space
allocation or deallocation occurs.


DEFAULT STORAGE clauses are not valid for segments stored in LMTS.

To create a locally managed tablespace,you specify local in extent management clause of

create tablespace statement.

We have 2 options for lmts: -

1) system or auto allocate
2) uniform

q) how to create lmts

create tablespace tbs datafile ‘star.dbf’ size 10m extent management local;

SYSTEM (or) AUTOALLOCATE: Autoallocate specifies that extent sizes are system
managed. Oracle will choose “optimal” next extent sizes, starting with 64kb; as the segment
grows larger, the extent size will increase to 1mb, 8mb, and eventually to 64mb. This is
recommended only for a low-maintenance or unmanaged environment.
The default is autoallocate, i.e. it takes the database default storage.
Syntax:- create tablespace tbs
datafile ‘star.dbf’ size 10m
extent management local autoallocate;


UNIFORM: It specifies that the tbs is managed with uniform extents of the given size.
The default size is 1m. The uniform extent size of an lmts cannot be overridden when a
schema object such as a table or index is created.

Syntax: - create tablespace tbs

Datafile ‘/oraapps/star.dbf’ size 10m
Extent management local uniform size 128k;

Dictionary Managed Tablespace:

When we declare a tablespace as a dictionary managed tablespace, the data
dictionary manages the extents. The oracle server updates the appropriate tables (sys.fet$
and sys.uet$) in the data dictionary whenever an extent is allocated or deallocated.

Syntax: - create tablespace tbs

Datafile ‘har1.dbf’ size 10m
Extent management dictionary
Default storage(initial 1m next 1m minextents 2
maxextents 121 pctincrease 0);

We can alter all parameters except initial and minextents in a dmts, i.e. if we create a dmts,
the extent info is stored in the dictionary and the real data is stored in the datafiles of that
tablespace. In that case we need more I/O, i.e. oracle has to search for extents in the
dictionary, which degrades performance.

In oracle 8i  only dmt is available.
From 9i  both dmt and lmt (lmt is the default).
From 10g  both dmt and lmt (lmt is the default).


Segments are the storage objects within the oracle database. A segment might be a table, an
index, a cluster etc.
The level of logical database storage above an extent is called a segment.
A segment is a set of extents that contains all the data for a specific logical storage
structure within a tablespace.
For example, for each table, oracle database allocates one or more extents to form that
table’s data segment, and for each index, oracle database allocates one or more extents to
form its index segment.

There are 11 types of segments in oracle:

• table
• table partition
• index
• index partition


• rollback
• deferred rollback
• lobindex
• temporary
• cache
• permanent

These types can be grouped into four segment classes:

• data segment
• index segment
• rollback segment
• temporary data segment

Data Segments:
A single data segment in an oracle database holds all of the data for one of the following:
• A table that is not partitioned or clustered
• A partition of a partitioned table
• A cluster of tables

Oracle database creates the data segment when you create the table or cluster with the
create statement.
The storage parameters for a table or cluster determine how its segment’s extents are
allocated. You can set these storage parameters directly with the appropriate create or alter
statement; they affect the efficiency of data retrieval and storage for the data segment
associated with the object.

Index Segment:
Oracle database creates the index segment for an index or an index partition when you
issue the create index statement. In this statement we can specify storage parameters for
the creation of the index.
The segments of a table and an index allocated with it do not have to occupy the same
tablespace. Setting the storage parameters directly affects the efficiency of data retrieval
and storage.

Temporary segments: When processing queries, oracle database often requires temporary
workspace for intermediate stages of sql statement parsing and execution. Oracle database
automatically allocates this disk space, called a temporary segment. Typically, oracle
database requires a temporary segment as a work area for sorting.

Undo segments: Oracle database maintains information to reverse changes made to the
database. This information consists of records of the actions of transactions, collectively
known as undo. Undo is stored in undo segments in an undo tablespace.


How extents are allocated:

Oracle database uses different algorithms to allocate extents, depending on whether they
are locally managed or dictionary managed.
With LMTS, oracle database looks for free space to allocate to a new extent by first
determining a candidate datafile in the tbs, and then searching the datafile’s bitmap for the
required number of adjacent free blocks.

When extents are deallocated:

In general, the extents of a segment do not return to the tablespace until you drop the
schema object whose data is stored in the segment.
A dba can deallocate the unused extents using the following sql:
Syntax: Alter table table_name deallocate unused;

Periodically, oracle database modifies the bitmap of the datafile (for lmts) or updates the
data dictionary (for dmts) to reflect the regained extents as available space. Any data in the
blocks of freed extents becomes inaccessible.

Periodically oracle database deallocates one or more extents of a rollback segment if it has
optimal size specified.

If the rollback segment is larger than optimal (i.e. it has too many extents), the oracle
database automatically deallocates one or more extents from the rollback segment.

How temporary segments are allocated?

Oracle database allocates temporary segments differently for queries and temporary tables.


Oracle allocates space for segments in extents. When the existing extents of a segment are full,
Oracle allocates another extent for that segment. Because extents are allocated as
needed, the extents of a segment may or may not be contiguous on disk, and may or may not
span files.

The segment header is stored in the first block of the first extent.
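The extent layout of a segment can be inspected from the data dictionary; a minimal sketch (the segment name 'A' is an illustrative example — non-adjacent block_ids or different file_ids indicate non-contiguous extents):

```sql
-- List each extent of a segment: extents with gaps between block_ids,
-- or spread over several file_ids, are not contiguous on disk.
select extent_id, file_id, block_id, blocks
from dba_extents
where segment_name = 'A'
order by extent_id;
```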

There are 2 choices for segment space management: a) manual b) auto

Manual: This option uses free lists for managing free space within segments.

Create tablespace tbs

Datafile 'kit.dbf' size 10m
Segment space management manual;

Auto: This option uses bitmaps for managing free space within segments. This is typically
called automatic segment space management; it is the default.


Freelists: Freelists are lists of data blocks that have space available for inserting rows.

• Every datafile must consist of one or more o/s blocks. Each o/s block may belong
to one and only one datafile.
• Every tablespace may contain one or more segments. Each segment must exist in
one and only one tablespace.
• Every segment must consist of one or more extents. Each extent must belong to one
and only one segment.
• Every extent must consist of one or more oracle blocks. Each oracle block may
belong to one and only one extent.
• Every extent must be located in one and only one datafile. The space in a datafile
may be allocated as one or more extents.
• Every oracle block must consist of one or more o/s blocks. Every o/s block may be
part of one and only one oracle block.

How to convert between LMT and DMT?

The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily convert a tablespace
between local and dictionary extent management:

Sql> exec dbms_space_admin.tablespace_migrate_to_local('ts');
Sql> exec dbms_space_admin.tablespace_migrate_from_local('ts');

Create a tablespace without any extent management option and create a table in it:

Create tablespace tbs datafile 'aa.dbs' size 4m;

Observed defaults:
Initial = 65536
Max_extent = 2147483645
Bytes = 65536

The extent sizes are o/s specific; by default it takes extent management as local.

 grant dba to hari identified by hari;
 alter user hari default tablespace tbs;
 conn hari/hari

create table a (a number);


select segment_name, initial_extent, next_extent, min_extents,
max_extents, pct_increase, allocation_type, extents,
bytes from dba_segments where segment_name='A';

bytes will display the size of the extents allocated

extents will display how many extents are allocated

 create a table with storage parameters:

create table abd(a number)
storage (initial 1m, next 1m, minextents 1, maxextents 3, pctincrease 20);

it takes all the values as default; only the minextents size differs

 create LMTS with autoallocate and check:

create tablespace tbs
datafile '/oradb/………..' size 5m
extent management local autoallocate;

 default system allocation_type

to check allocation_type:
select tablespace_name, allocation_type from dba_tablespaces;
allocation_type is SYSTEM

Create LMTS with uniform and check:

Create tablespace ts04 datafile 'ts04.dbf' size 3m extent management local uniform;

- create a table mentioning storage parameters:

create table a (a number) storage(initial 1m next 1m
minextents 2 maxextents 4);

initial takes 2m
But while storing, it takes the extent sizes as uniform
Allocation type – Uniform

Create DMTS with no parameters and create a table:

Create tablespace ts05 datafile 'ts05.dbf' size 5m extent
management dictionary;

Obs:- initial – 40960, next – 40960, min – 1, max – 505, E.M – Dictionary, pct – 50,
S.S.M – Manual, Allocation type – User

- When we create a table without parameters, the same values take effect.
- create table a2 (a number)
storage(initial 1m next 1m minextents 1 maxextents 5 pctincrease 50);

- All the values mentioned as parameters take effect.
- we can alter the storage parameter values except initial and minextents.
- we can alter storage param values except initial and minextents.

Create DMTS with storage parameters:

Create tablespace ts07 datafile 'ts07.dbf' size 10m
extent management dictionary
default storage(initial 1m next 2m minextents 2 maxextents

Note – all the above values are assigned to the parameters.

Segment space management

Create DMTS without mentioning S.S.M:

Create tablespace ts08 datafile 'ts08.dbf' size 1m extent
management dictionary;
The default S.S.M is manual for DMTS.
We can change DMTS to LMTS if S.S.M is manual:
exec dbms_space_admin.tablespace_migrate_to_local('ts08');
Auto S.S.M is not valid with dictionary extent management.
Create LMTS without mentioning S.S.M:
Create tablespace loc datafile 'loc.dbf' size 3m;
The default S.S.M is auto for LMTS.
In this case we cannot change the LMTS to DMTS: converting S.S.M from auto to
manual is not possible. So create the tablespace with S.S.M as manual:
Create tablespace s01 datafile 's01.dbf' size 5m segment space management manual;

 exec dbms_space_admin.tablespace_migrate_from_local('s01');
Now it will be migrated from local to dictionary,
i.e., we can change LMTS to DMTS when S.S.M is manual.
• the allocation type for DM is user
• to know the source code of a tablespace:
select dbms_metadata.get_ddl('TABLESPACE','LOC') from dual;
dbms_metadata and dbms_space_admin are packages


If you notice poor performance in your oracle database, row chaining and migration
may be one of several reasons, but we can prevent some of them by properly designing
and/or diagnosing the database.
Row migration and row chaining are two potential problems that can be prevented by
suitable diagnosing, and by doing so we can improve database performance.
The main considerations are:
 what is row chaining & row migration?


 how to identify row migration & chaining?

 how to avoid row migration & row chaining?

Row Migration:
We will migrate a row when an update to that row would cause it to no longer fit
on the block (with all the data that exists there currently in that row).
A migration means that the entire row will move and we just leave behind a
forwarding address. So, the original (old) block has the rowid of the new block and
the entire row is moved. This needs more I/O.

Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4KB
block size for your database and you need to insert a row of 8KB into it, oracle will use 3
blocks and store the row in pieces. Some conditions that will cause row chaining are:

• Tables whose row size exceeds the block size

• Tables with LONG and LONG RAW columns are prone to having chained rows
• Tables with more than 255 columns will have chained rows, as oracle breaks wide
tables up into pieces. So, instead of just having a forwarding address on one block and
the data on another, we have data on two or more blocks.
• Insert and update statements that cause migration and chaining perform poorly,
because they perform additional processing.
• Selects that use an index to select migrated or chained rows must perform
additional I/O.

Migrated and chained rows in a table or cluster can be identified by using the ANALYZE
command with the LIST CHAINED ROWS option. This command collects information about each
migrated or chained row and places this information into a specified output table. To create
the table that holds the chained rows, execute the script utlchain.sql.
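A minimal sketch of the steps above (the table name emp is a hypothetical example; utlchain.sql ships with Oracle and creates the CHAINED_ROWS table):

```sql
-- Create the output table once (? expands to ORACLE_HOME in SQL*Plus):
@?/rdbms/admin/utlchain.sql

-- Collect chained/migrated row information (emp is an assumed table name):
analyze table emp list chained rows into chained_rows;

-- Inspect the result:
select table_name, head_rowid from chained_rows;
```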


SQL> select * from chained_rows;

 In most cases, chaining is unavoidable, especially when it involves tables with
large columns such as LONG, LOBs etc. When you have a lot of chained rows in different
tables and the average row length of the tables is not that large, then you might consider
rebuilding the database with a larger block size.

Ex:- you have a database with a 2k block size, and different tables have multiple large varchar
columns with an average row length of more than 2k. This means that you will have a
lot of chained rows because your block size is too small; rebuilding the db with a larger
block size can give you a significant performance gain.


Migration is caused by PCTFREE being set too low: there is not enough room in the
block for updates. To avoid migration, all tables that are updated should have their
PCTFREE set so that there is enough space within the block for updates. You need to
increase PCTFREE to avoid migrated rows. If you leave more space available in the block
for updates, then the row will have more room to grow.
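For example, PCTFREE can be raised on an existing table (the table name and the value 30 below are illustrative assumptions; the new setting applies only to blocks used from then on):

```sql
-- Reserve 30% of each block for future updates (value chosen for illustration):
alter table emp pctfree 30;

-- Rebuilding the table (e.g. alter table emp move;) makes the new PCTFREE
-- apply to existing rows as well.
```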

1) Update <tablename> set column=value where

2) Alter table <tablename> add column datatype
3) Alter table <tablename> modify column datatype
4) Create view <viewname> as select col1,col2 from <tablename>
5) Create index <indexname> on tablename(column)
6) Create sequence <seqname> increment by 1
7) Drop index <indexname>
8) Drop view <viewname>
9) Drop table <tablename>
10) Drop sequence <seqname>
11) Create synonym <synname> for object
12) Drop synonym <synname>

 At OS level :- vmstat, iostat

SQL level :- v$version
 At OS level :- getconf LONG_BIT
SQL level :- platform_name from v$database
 SQL level :- address from v$sql

 There are 2 built-in commands provided by oracle which are used to start and shut down
the database (dbstart and dbshut).
 We use this activity in emergency maintenance.

dbstart is a script located in $ORACLE_HOME/bin. It is an executable file;
when we execute this file it will start the Oracle database from the system boot
scripts (e.g. /etc/rc.local). It should only be executed as part of the system boot procedure.
This script will start all the databases listed in the ORATAB file whose third
field is 'Y'. This field is also referred to as the monitoring field. There is no need to pass any
arguments. The script will ignore commented-out entries.


Eg:- kittu:/oraDB/kittu:Y

dbshut is an executable file located in $ORACLE_HOME/bin. It
will shut down the databases whose third field in ORATAB is 'Y'.


 When we run these scripts for the first time, they create 2 logfiles in ORACLE_HOME.
 When we start and shut down the database, the startup and shutdown information will be
updated into these files.


Backup and recovery is one of the most important aspects of a DBA's life. If you love your
company's data, you will very well love your job. Hardware and software can always be
replaced, but your data may be irreplaceable.

 Backup is taking a copy of the data to some other location.

 Restoration means copying the backup files from the backup storage area (hard disk, tape,
CDs, pen drive etc.) to the original location.
 Recovery is the process of applying redo logs to the database to roll it forward,
i.e. applying archive log files to the database to recover the data changed after the backup was taken.

Oracle has its own Backup Methods

-Physical -Logical

Physical Backup:- means making the copies of the files related to physical architecture.
Eg: Datafiles, Control files, Redolog files
Logical Backup:- means taking the copies of logical structure of Database.
Eg: Tables, Schemas, Tablespaces, Database

 In real time, most backups are run as root.

 There are also third-party backup technologies available; one of the
fastest is VERITAS.

We will be integrating the VERITAS software & hardware with the database. There must be a
separate admin (veritas admin) to maintain this technology.
It can back up terabytes of data in just one hour!

 In a real-time environment we use the tar command to take the backup onto tape:
$ tar cvf filename *



A whole backup is a backup of all the datafiles, the control file and (if you are using
it) the spfile. Remember that as all multiplexed copies of the control file are identical, it is
necessary to back up only one of them. You do not back up the online redo logs: protection
for the online redo log files is provided by multiplexing and optionally by archiving. Also note that only
datafiles for permanent tablespaces can be backed up. The tempfiles used for your
temporary tablespaces can't be backed up by RMAN, nor can they be put into backup mode
for an OS backup.

A partial backup will include one or more datafiles and the control file. It is a copy of just a part of
the database.

An incremental backup is a backup of just some of the blocks of a datafile. Only the
blocks that have been changed or added since the last full backup will be included. It is
done by RMAN.

Hot backup: a backup which is taken when the database is up and running.

Cold backup: a backup which is taken when the database is shut down.

Traditional  RMAN
Cold  Cold
Hot  Hot


A backup which is taken when the database is down is said to be a cold backup.

Steps to perform the cold backup:

1) List out the datafiles, control files and redo log files by using v$datafile,
v$logfile and v$controlfile.
Sql> select name from v$datafile;
Sql> select member from v$logfile;
Sql> select name from v$controlfile;
2) Shut down the database with the shut immediate option.
Sql> shut immediate;
3) Now copy the crd files to the backup location at OS level.
$ cp /oraAPP/app/* /backup/

How can we check whether the cold backup is working properly or not?


 shut down the database

 copy the crd files
 start the database
 create a user and some tables in it.
 remove the crd files at OS level
 restore from the backup location (files must be restored to exactly the same location)
 start the database and check it for the newly created users and tables [they will
not exist, because you restored the database from the old backup]

How do we automate the cold backup?

By writing a shell script and submitting it to cron, we can automate the cold
backup. Cron should be scheduled as root.
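A hypothetical crontab entry for such a script might look like this (the schedule and script path below are assumptions, not from the notes):

```shell
# Run the cold backup script every Sunday at 02:00 (path and schedule are illustrative)
0 2 * * 0 /home/oracle/scripts/coldbackup.sh >> /tmp/coldbackup.log 2>&1
```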

SHELL SCRIPT for Cold Backup:

#!/bin/bash ------------- this indicates to execute this code in bash shell
## Name : ##
## Description: This file takes cold backups of crd files ##
## Date: 13.6.08 ##
## Author: Ramesh ##

# Set the environment
export ORACLE_SID=dbkittu
export ORACLE_HOME=/oraDB/chinni

# shut down the database
sqlplus <<eof
sys as sysdba
shut immediate;
eof

#copy the crd files to backup location
cp -r /oraAPP/kittu/db1 /u001/chinni/backup/

#start the database
sqlplus <<eof
sys as sysdba
startup;
eof

In the market many scheduling software packages are available.
Ex: Redwood

If we want to copy the crd files to tape:

tar cvf /tape/6102008.tar


$ echo "select * from tab;" | sqlplus system/manager

It will select the tables from the $ prompt by connecting as system and return back to
the $ prompt after displaying the output.

$ echo "select * from tab;" | sqlplus -s system/manager — the -s (silent) option will just display the output.

How can we make the cold backup fast?

By copying the crd files in parallel sessions, i.e., copy some files in one session and the
remaining files in another session.
In a cold backup we take the backup of all the crd files.

Online redo log files:

Redo logs are absolutely necessary for recovery. For example, imagine that a power
outage occurs; it prevents oracle from writing modified data to the datafiles. In this situation the old
data in the datafiles can be combined with the recent changes recorded in the online redo log to
reconstruct what was lost. Every oracle database contains a set of two or more online redo
log files.

Oracle assigns every redo log file a log sequence number to uniquely identify it.
The set of redo files for a database is collectively known as the database's redo log.
Oracle uses the redo log to record all changes made to the database; oracle records every change in
a redo record. An entry in the redo buffer describes what has changed: assume a user updates a
payroll table value from 5 to 7 — oracle records the old value in undo and the new value in redo.
Since the redo log stores every change to the db, the redo record for this transaction contains
three parts:
Changes to the transaction table of the undo segment
Changes to the undo data block
Changes to the payroll table data block

If the user commits, then, to make the change to the permanent table permanent, oracle
generates another redo record.

Archived redo log files:

• if archiving is disabled, a filled online redo log is available for reuse once the changes
recorded in the log have been saved to the datafiles.
• If archiving is enabled, a filled online redo log is available for reuse once the changes have
been saved to the datafiles and the file has been archived.

Archived log files are redo logs that oracle has filled with redo entries (rendered inactive)
and copied to one or more log archive destinations. Oracle can be run in either of 2 modes:

*archivelog:
Oracle archives the filled online redo files before reusing them in the cycle.
*noarchivelog:


Oracle does not archive the filled online redo log files before reusing them in the cycle.
Running the database in archivelog mode has the following benefits:
The database can be completely recovered from both instance and media failure.
The user can perform online backups, i.e. back up a tablespace while the database is open and
available for use.
Archived redo logs can be transmitted to and applied to a standby database.
Oracle supports multiplexed archive logs to avoid any possible single point of failure
on the archive log.
The user has more options, such as the ability to perform tablespace point-in-time recovery.

Running the database in noarchivelog mode has the following consequences:

The user can only back up the database while it is completely closed after a clean shutdown.
Typically, the only media recovery option is to restore the whole database, which
causes the loss of all transactions issued since the last backup.
These archived logs should be hosted on separate physical disks.

A hot backup is a backup that is taken while the database is up and running.

Prerequisites for hot backup:

Database must be up and running

Database must be in archivelog mode

Steps for hot backup:

Check whether the database is running:
Sql> select open_mode from v$database;

Check whether the database is in archivelog mode:

Sql> select log_mode from v$database;

Sql>archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /stage/vijay/10g/dbs/arch
Oldest online log sequence 46
Next log sequence to archive 48
Current log sequence 48
Get the list of tablespaces and datafiles from dba_data_files:
Sql> select file_name, tablespace_name from dba_data_files;


For each tablespace:

Put the tablespace in hot backup mode (begin backup):
Sql> alter tablespace system begin backup;

Copy each datafile of that tablespace to the backup location:

$ cp /oraAPP/kittu/system.dbf /stage/backup/.

Put the tablespace in end backup mode:

Sql> alter tablespace system end backup;

Back up the control file:

Sql> alter database backup controlfile to '/stage/backup/c1.ctl';

Confirm all tablespaces returned to normal mode:

Sql> select file#, status, change#, time from v$backup where
status != 'NOT ACTIVE';

Perform an archive log switch:

Sql>alter system switch logfile;
Sql>alter system archive log current;

Backup archive redo log file:

What happens when we put a tablespace in begin backup mode?

Sql> alter tablespace system begin backup;
The below process goes on when we issue the above command —
oracle has it all under control.
Remember that every oracle datafile has an SCN that is changed every time an
update is made to the datafile.
Also remember that every time oracle makes a change to a datafile, it records that change
in the redo log.

When a tablespace is in begin backup mode, the following steps occur:

Oracle checkpoints the tablespace (i.e. a checkpoint occurs for the tablespace); now all
changes in the db_cache will be flushed to the datafiles.

Anyone who updates the tables (or) indexes in that datafile — all the updates will still be written to the
datafile, but at this time the SCN marks for each datafile in the tablespace are frozen
(blocked) at their current values.
The SCN markers (numbers) will not be updated until the tablespace is taken out of backup mode.

ORACLE switches to logging full images of changed database blocks to the redo logs; this is
why the redo logs grow at a much faster rate while hot backups are going on.


i.e. oracle maintains a full copy of the changed db blocks in the redo logs ('if a log switch occurs they
are archived'). At that point of time, if any user wants to retrieve the updated data, he gets that
data from the redo logs; if the redo logs get archived, then the data is retrieved from
the archived logs.
During hot backup the performance of the system slows down.
When we put the tablespace in end backup mode, the headers of the datafiles get released and
the SCN numbers are updated using the redo log files.

Tablespace checkpoint:
A checkpoint that occurs on only one tablespace is said to be a tablespace checkpoint; only that
tablespace has a different SCN compared to all other tablespaces. This is possible when we perform:
Alter tablespace ts offline;
Alter tablespace ts begin backup;

Database checkpoint:
A checkpoint that occurs for the whole database; all SCNs must be synchronized at this time
(i.e. all datafile headers carry the same checkpoint SCN).

How can we automate hot backup:

We can automate hot backup by writing shell script

Shell script for hot backup:

#set the environment
export ORACLE_SID=sree
export ORACLE_HOME=/stage/10.2.0
export PATH=$PATH:$ORACLE_HOME/bin

#make the dynamic script:
# spool the tablespace begin backup / end backup syntax into backup.sql

sqlplus <<E
sys as sysdba
set pages 0
spool /tmp/backup.sql
select 'alter tablespace '||tablespace_name||' begin backup;'
from dba_tablespaces where contents not in ('TEMPORARY')
union all
select 'alter tablespace '||tablespace_name||' end backup;'
from dba_tablespaces where contents not in ('TEMPORARY');
spool off
E

#running the spooled sql and taking the backup of the control file:
sqlplus <<E
sys as sysdba
@/tmp/backup.sql
alter database backup controlfile to '/stage/hot_back';
E


In hot backup we can put all tablespaces of the database in begin backup mode in one shot:
Sql> alter database begin backup;

Dynamic sql:-
We can generate a bunch of sql statements with a single command:
select 'drop table ' || tname from tab;

SCN (system change number)

The SCN is an ever-increasing number. It can be used to determine the age (state) of the database.

Oracle uses SCNs in control files, datafile headers and redo records.
Every redo log contains both a log sequence number and low/high SCNs: the low SCN records
the lowest SCN recorded in the log file, while the high SCN records the highest SCN in the log file.

The SCN is incremented every 3 seconds.

Every time a user commits a transaction, oracle records a new SCN.

A checkpoint will update the headers of the datafiles with the latest SCNs.

The current SCN can be queried using a package named dbms_flashback:

SQL> select dbms_flashback.get_system_change_number from dual;


The current SCN can also be obtained from v$database.

The SCN of the last checkpoint can be found in v$database:

Sql> select checkpoint_change# from v$database;

We can use these SCNs in a number of ways.

For example, we can perform an incomplete recovery of a database up to SCN 1030.

The SCN number is very useful while recovering the database or instance: all the datafile
headers will have the same SCN number when the instance is shut down normally.

Smon checks the SCN in all datafile headers when the database is started. The database is opened
if the SCN of the control file matches the SCNs of the datafiles and redo logs; if the SCNs don't match,
the database is inconsistent.

select checkpoint_change#, current_scn from v$database;

It shows at which SCN the last checkpoint occurred, and the current SCN.


select current_scn from v$database;

The smon_scn_time table allows us to roughly find out which SCN was current at a specific time
within the last five days.

Checkpoint (CKPT) is an oracle background process; it is a mandatory background process.

A checkpoint performs the following operations:

Every dirty block in the buffer cache is written to the datafiles.

That is, it synchronizes the data blocks in the buffer cache with the datafiles on the disk.

The latest SCN is written to the control file and the datafile headers.

Checkpoints lead to updating the datafile headers. If the oracle background process ckpt
is not available on our system (or) is not started, lgwr will perform the task.

From oracle 8.0 it is enabled by default; in earlier releases the parameter checkpoint_process
must be set to true.

When does the checkpoint occur?

Alter system checkpoint; {for the entire db}
Alter system switch logfile;
Alter tablespace <tn> offline; {for only this ts}
Shutdown immediate
1/3 of the log buffer is full.
By mentioning 2 parameters in the init file:
Log_checkpoint_timeout --- has expired
Log_checkpoint_interval --- has been reached
Begin backup
While redo log switches cause a checkpoint, checkpoints don't cause a log switch.

Size of the redo log

If the size of the redo log is small, the performance of checkpoints will not be optimal. This is
the case if the alert.log contains messages like:
Thread …. cannot allocate new log

Time and SCN of the last checkpoint

The date and time of the last checkpoint can be retrieved through checkpoint_time:
Sql> select checkpoint_time from v$datafile_header;
Difference between SCN and checkpoint:
The SCN is represented with scn_wrap and scn_base. Whenever scn_base reaches
4294967296 (2^32), scn_wrap goes up by one and scn_base is reset to 0. This way you
can have a maximum SCN of about 1.8e+19.
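The wrap/base rule above implies the overall SCN can be computed as scn_wrap * 2^32 + scn_base (my reading of the rule, not a formula stated in the notes). A quick shell check:

```shell
# overall SCN = scn_wrap * 2^32 + scn_base (assumed from the wrap rule above)
scn_wrap=1
scn_base=100
echo $(( scn_wrap * 4294967296 + scn_base ))    # prints 4294967396
```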


The checkpoint number is the SCN number at which all the dirty buffers were written to
disk. The checkpoint can be at object/tablespace/datafile/database level.
scn_wrap and scn_base are retrieved from the table smon_scn_time:

select scn_wrap, scn_base from smon_scn_time;

The checkpoint number is never updated for datafiles of read-only tablespaces.
We can also query v$transaction to arrive at the SCN for a transaction.
The control file records information about the checkpoint and archived log sequences along with
other information.

q) does oracle do crash recovery (or) transaction recovery after a shutdown abort if a
checkpoint was taken right before the instance crash?

Yes, oracle performs roll forward first if there are any changes beyond that
checkpoint, and then rolls back any uncommitted transactions.

SCN numbers are being reported at frequent intervals by smon in the smon_scn_time table.

q) when the highest SCN is exhausted, what happens — will oracle restart from the first?
If the SCN really reached its maximum allowed value (after exhausting all
wraps), the database would have to be opened in resetlogs mode and the SCN would start from the
beginning all over again.

Q) do all the redo entries have SCNs attached to them, (or) do only the commit entries have them?
All changes recorded in the redo (including commits and rollbacks) will have SCNs
associated with them.

Hot backup
Conditional execution at database level:
$ ps -ef | grep smon | grep venkat | grep -v grep
It will show whether the venkat database is up (or) down, excluding the grep process itself from the output.
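The `grep -v grep` trick can be sketched against a canned process listing (the listing below is invented for illustration, not real ps output):

```shell
# Two lines: a real smon process and the grep command itself (sample data)
ps_output='oracle  1234  1  0 10:00 ?     00:00:01 ora_smon_venkat
kittu   5678  1  0 10:01 pts/0 00:00:00 grep smon'

# Without `grep -v grep` the grep command would match itself;
# with it, only the real smon process is counted.
echo "$ps_output" | grep smon | grep -v grep | wc -l    # prints 1
```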

Hot backup script-2

 set the env
 check whether the db is up or down
 check who is executing the script
 check whether the db is in archivelog mode or not
 generate the backup syntax using dynamic sql
 start the backup process

1. put the ts in begin backup

2. copy the datafiles
3. put the ts in end backup
4. take a backup of the control files to the backup location


5. unset the duplex dest
6. evaluate the size of the backup (the source and target sizes must match)
7. send mail to the dba that the backup is completed

Hot backup through dynamic script:
# author  : kittu
# date    : 23-6-2008
# purpose : the script will evaluate the state of the db and perform a hot backup
#           to a local mount point

#set the environment
. $HOME/.bash_profile   (or) export and set the env variables

#who is executing the script
export usr=`/usr/bin/who am i | awk '{print $1}'`
if [[ ${usr} = "kittu" ]]; then
  echo "continue operation" >> /tmp/
else
  echo "exit from execution" >> /tmp/exit.lst
  exit
fi

#check database is up or down

export b=`ps -ef | grep smon | grep kittu | grep -v grep | wc -l`
if [[ ${b} -eq 1 ]]; then
  echo "db is up .... continuing to the next step" >> /tmp/success.lst
else
  echo "db is down ...... exiting" >> /tmp/fail.lst
  echo "db is down" | mail -s "db down"
fi

# check archivelog mode or not

sqlplus -s <<e > /tmp/log_mode.lst
sys as sysdba
set head off
select log_mode from v\$database;
e
export noarchmode=`grep -i noarchivelog /tmp/log_mode.lst | wc -l`
if [[ ${noarchmode} -eq 1 ]]; then
  echo "db is in noarchivelog mode - inform the dba to switch to archive mode"
else
  echo "continue operation"
fi

#making dynamic sqls:

sqlplus <<e
sys as sysdba
set pages 0
spool hot.sql
select 'alter tablespace '||tablespace_name||' begin backup;'
from dba_tablespaces where contents not in ('TEMPORARY')
union all
select 'alter tablespace '||tablespace_name||' end backup;'
from dba_tablespaces where contents not in ('TEMPORARY');
spool off
alter database backup controlfile to '/stage/hot/backup.ctl';
e

$ chmod 700 hot.sh
Dynamically passing the oracle sid:
#set the environment
export ORACLE_SID=${1}
export ORACLE_HOME=`grep -w $1 /etc/oratab | awk -F ":" '{print $2}'`

#who is exporting the script

previous script

#check db is up /down
previous script

#check archive or not

previous script.

make dynamic sqls (previous script), then:

sqlplus <<e
sys as sysdba
alter system switch logfile;
e

sqlplus <<e
sys as sysdba

alter database '…';
e

Sql> alter system set log_archive_duplex_dest='/stage/hot/';
$ chmod 700
$ hot.sh dbkittu  (oracle_sid)
Passing the sid as a parameter to the script.
awk: it will read the file and print it in a formatted way.
$ ls -l
-rw-r-w kittu dba 895 jun 18 13:50 ram
1      2     3   4   5   6  7     8

$ ls -l | awk '{print $8}'

It will print only the 8th column.

# how to print a specific column

-F "!"  (field separator)
* $0  it prints all columns
$ ls -l | awk '{print $1,$2}'  — it prints two columns
set -x :- debugging the command
 $ ls -l | awk '{print $1" = = = "$2" = = = "$5}'
-rw-r--r-- = = = 1 = = = 895

 $ ls -l | awk '{print $1,$2,$3}'

-rw-r--r-- 1 kittu

$ cat /etc/oratab | awk '{print $1}'

app:/oraAPP/satya/satyahome:N   (the whole line, since the default separator is whitespace)
$ cat /etc/oratab | awk -F ":" '{print $2}'
/oraAPP/satya/satyahome
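The field-splitting behaviour above can be sketched self-contained, using an invented oratab-style line rather than a real /etc/oratab:

```shell
# Sample oratab-style line (hypothetical; real entries have the form sid:home:Y|N)
line='app:/oraAPP/satya/satyahome:N'

# Default separator is whitespace, so the whole line is one field:
echo "$line" | awk '{print $1}'          # prints app:/oraAPP/satya/satyahome:N

# With -F ":" the line splits into sid, oracle home and the startup flag:
echo "$line" | awk -F ":" '{print $1}'   # prints app
echo "$line" | awk -F ":" '{print $2}'   # prints /oraAPP/satya/satyahome
echo "$line" | awk -F ":" '{print $3}'   # prints N
```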

Q) how can we find whether a session is idle (or) not?

User: scott/tiger
Find the sid of that session:
 select sid, username from v$session;   -- say sid 22 is scott
 select * from v$sess_io where sid = 22;
It displays the below columns:
physical_reads - reads from the db
block_changes - blocks changed
consistent_changes - consistent (read-consistency) changes

Q) find which session is running a long job?
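The notes leave this answer blank; a common approach (an assumption on my part, not from the notes) is the v$session_longops view:

```sql
-- Sessions with long-running operations still in progress, and their percentage done
select sid, serial#, opname, sofar, totalwork,
       round(sofar/totalwork*100, 2) pct_done
from v$session_longops
where totalwork > 0
  and sofar <> totalwork;
```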



Q) how to kill a session?

$ ps -ef | grep oracleapp | grep LOCAL=YES    -- note the spid, e.g. 21364
Sql> select s.sid, s.serial#, p.spid, s.username
     from v$session s, v$process p
     where s.paddr = p.addr and p.spid = 21364;
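Once the sid and serial# are known, the session can be killed; the values below are placeholders for whatever the query returned:

```sql
-- Substitute the sid and serial# returned by the previous query (22,1234 is illustrative)
alter system kill session '22,1234';
```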


Each database contains one (or) more rollback segments.

A rollback segment records the old values of the data that were changed by each
transaction. Rollback segments provide read consistency, roll back transactions and recover
the database.
 neither database users nor administrators can access or read rollback segments directly;
 only oracle can write to (or) read them.
 rollback entries change data blocks in the rollback segments, and oracle records all
changes to data blocks, including rollback entries, in the redo log. This information is very
important for active transactions (not yet committed or rolled back) at the time of a system
crash. If a system crash occurs, then oracle automatically restores the segment information,
including rollback entries for active transactions, as part of instance or media recovery;
when recovery is completed,
oracle performs the actual rollback of transactions that had been neither committed
(nor) rolled back at the time of the system crash.
 usually when we commit a transaction, oracle releases the rollback data but does not
immediately destroy it. The data will be lost only when the last extent of the rollback segment
is filled; at that time oracle continues writing rollback data by wrapping around to the first
extent in the segment.

 each rollback segment can handle only a fixed number of transactions from one instance.
Oracle creates an initial rollback segment called SYSTEM whenever a db is created; this
segment is in the SYSTEM tablespace and we can't drop it.
 place rollback segments in separate tablespaces.
 to create rollback segments, the user must have the create rollback segment privilege.

Creation of rollback segments:

*create a tablespace to hold the rollback segments:

Sql> create tablespace <tn>
Datafile '<path>'
Extent management dictionary;
*Create a rollback segment:
Sql> create rollback segment r1
Tablespace <tn>;
*shut down the database
*open the init file and comment the undo_management parameter

*start the database
*when we comment undo_management, the undo tablespace becomes offline.
*the rollback segments also become offline.
 this can be viewed from dba_rollback_segs:
Sql>select segment_name,status,tablespace_name from dba_rollback_segs;

Segment_name   Status    Tablespace_name

System         online    system
_SYSSMU1$      offline   undotbs
R1             offline   rbs
R2             offline   rbs

 In this situation, try to insert data into some table as a non-system user which is
assigned to some permanent tablespace.
Sql> conn kittu/kittu
Sql> insert into emp values(1);

ORA-01552: cannot use system rollback segment for non-system tablespace 'KITTU'

 To resolve this situation we have to bring the rollback segments online. This can be done
in 2 ways.
1) Manually:
Sql> alter rollback segment r1 online;
2) Mention the rollback_segments parameter in the init file and bounce the DB.

Now we can perform all transactions.

 Replicating this scenario:
Open 2 sessions:
Sql> conn kittu/kittu
Generate undo
Sql> conn sys as sysdba
Fire the below query to see how extents for rollback segments get allocated and
deallocated:

Sql> select a.usn,a.name,b.writes,b.xacts
from v$rollstat b, v$rollname a where a.usn=b.usn;

Usn rollback segment number

Writes no of bytes of entries written to rollback segments

Page 83 of 102

Xactsnumber of active transactions.

To get information about rollback segments:

Sql> select segment_name,owner,tablespace_name,status from dba_rollback_segs;

We can mention extent sizes for rollback segments also:

Sql> create rollback segment rbs tablespace rbts
Storage(initial 100k next 100k minextents 20 maxextents 100);

Altering extents:
Sql> alter rollback segment rbs storage (maxextents 120);

Shrinking (it means defragmentation):
Sql> alter rollback segment rbs shrink to 100k;

Bringing rollback segments offline:

Sql> alter rollback segment rbs offline;

Dropping rollback segments:

Sql> drop rollback segment rbs;

 We can also find rollback segment information from dba_segments:

Sql> select segment_name,tablespace_name,bytes,blocks,extents from
dba_segments where segment_type='ROLLBACK';

 What happens when we exit from a database user session?

Auto commit occurs,
i.e., all the uncommitted transactions are committed.


Every Oracle database must have a method to maintain information that is used to roll back,
or undo, changes to the database. Such information consists of records of the actions of
transactions, primarily before they are committed.

 Undo records are used to:

• roll back transactions when a ROLLBACK statement is issued

• recover the database.


Till 8i, the undo that was generated used to be handled by a rollback tablespace, which was
manually managed. In this case we have to first create a rollback tablespace, then create
rollback segments and assign them to the rollback tablespace.

 From Oracle 9i, the new concept of the undo tablespace was introduced, which helps as below:

• It is automatically managed.
• The undo segments are created by Oracle itself.
• The number of undo segments is decided by Oracle itself.
• The purpose of undo segments and rollback segments is the same, except for their
creation and maintenance.

It is not possible to use both methods in a single instance. However, we can migrate:
for example, create an undo tablespace in a database that is using rollback segments and
assign the undo tablespace to the DB, or create rollback segments in a database that is
using an undo tablespace (after commenting out the undo parameters).
However, in both cases we must shut down and restart the database in order to effect the
switch from one method to the other.

 Modes of undo space management:

If we use the rollback segments method of managing undo space, we are said to be
operating in manual undo management mode.

If we use the undo tablespace method, we are operating in automatic undo management mode.

We determine this mode at instance startup using the undo_management

parameter in the init file.

An undo tablespace must be available into which Oracle will store undo records. The default
undo tablespace is created at database creation, or an undo tablespace can be created later.
The parameter to be specified to assign an undo tablespace is undo_tablespace.

 When the instance starts up, Oracle automatically selects for use the first available undo
tablespace. If there is no undo tablespace available, the instance starts but uses the SYSTEM
rollback segment. This is not recommended, and an alert message is written to the alert file.

 undo_retention:
Retention is a period of time, specified in units of seconds. It can survive system crashes,
i.e., undo generated before an instance crash is retained until its retention time has
expired, even across restarting the machine.


When the instance is recovered, undo information is retained based on the current setting of
the undo_retention parameter.
The default is undo_retention=900.
We can change this value dynamically by using the below statement:
Sql> alter system set undo_retention=200;
It takes effect immediately.

 Oracle 10g guaranteed undo retention:

When we enable this option, the database never overwrites unexpired undo data, i.e., undo
data whose age is less than the undo retention period. This option is disabled by default.

 Create an undo tablespace:

Sql> create undo tablespace undotbs
Datafile '/oraAPP/undo.dbf' size 50m;

Create an undo tablespace with retention guarantee:

Sql> create undo tablespace undotbs datafile '/oraAPP/undotbs.dbf' size 50m
Retention guarantee;

 Altering a tablespace to retention guarantee or noguarantee:

Sql> alter tablespace undotbs retention guarantee/noguarantee;

 Changing the undo tablespace of the DB dynamically:

Sql> alter system set undo_tablespace='TS_NAME';

Dropping the undo tablespace:
Sql> drop tablespace undotbs;

Table to get information about undo data: v$undostat.

Difference between undo and roll back segments:

 Rollback segments are overwritten, i.e., when the last extent of the rollback segment gets
filled, it wraps back to the first extent of that segment and overwrites the data in those
extents.
We have to create rollback segments manually; this method was used up to 8i.
 Undo segments maintain the uncommitted data till the retention period is reached. Even
when all the extents are filled, it maintains the data till the retention period; at that
point it throws an error:
ORA-30036: unable to extend segment in undo tablespace


Oracle takes care of creating undo segments itself; undo tablespaces were introduced in
Oracle 9i.
Sql> select undotsn,undoblks from v$undostat;

Temporary tablespaces
Sql> select file_name,tablespace_name,bytes,status from dba_temp_files;

To know which temporary tablespace is assigned to the database:

Sql> select property_name,property_value from database_properties;
Sql> select name from v$tempfile;

Database creation
We can create a DB without mentioning the below parameters in the init file:
db_cache_size, shared_pool_size, log_buffer and control_files.
The defaults for the above parameters are, for example:
db_cache_size = 48m
Control files location: $ORACLE_HOME/dbs/
Total SGA size = 112m.

How can we trace a session (user)?

We want to get information about what the user is doing. For this we have to follow the
below steps.
Open 2 sessions:
1) as sysdba
2) scott/tiger

1) We have to get the sid, serial# for that session:

Sql> select sid,serial#,username from v$session where username is not null;

Sid   Serial#   Username
---   -------   --------
27    1632      SCOTT

2) Now execute the below package to enable tracing for that session:

Sql> exec dbms_system.set_sql_trace_in_session(27,1632,true);

3) Perform some activities in that session.

4) Now find the server process id for that session using the below query:
Sql> select p.spid from v$session s, v$process p where s.paddr=p.addr and s.sid=27;
With this spid, a trace file is generated for this session in udump.
5) Go to the udump location. Now convert this trace file to a user-understandable format
and also eliminate sys-related data:

[kittu@linux1 ~]$ tkprof ram_ora_3683.trc ram_ora_3683.txt sys=no

Open this file and view the activities being done in that session.

How can we disable tracing on a session?

Sql> exec dbms_system.set_sql_trace_in_session(sid,serial#,false);

How to kill a session

Identify the sid, serial# of that session from v$session.

Sql> select sid,serial#,username from v$session;

27   1749   RAMA

Now find the server process id for this session:

Sql> select p.spid from v$session s, v$process p where

s.paddr=p.addr and s.sid=27;
First kill this session using the below statement at the SQL level:

Sql> alter system kill session '27,1749';

Now kill this session at the O/S level. First find out the process for this session; we
already found the server process id for this session. With this id, kill the session:

[kittu@linux2 ~]$ ps -ef | grep oracle | grep ram

[kittu@linux2 ~]$ kill -9 9082
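The same ps | grep | awk pipeline can feed the PID straight into kill. A minimal runnable sketch, using a harmless background `sleep` in place of an Oracle server process (no database is assumed here):

```shell
# Start a dummy long-running process to stand in for the server process.
sleep 300 &
expected=$!

# Find its PID the same way as with ps -ef | grep; the [s] trick stops
# grep from matching its own command line.
pid=$(ps -ef | grep "[s]leep 300" | awk '{print $2}' | head -1)

# Kill it at the O/S level, exactly as done for the Oracle process above.
kill -9 "$pid"
echo "killed process $pid"
```
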


Q) How can we control the number of archiver processes?

This is possible by defining a parameter named log_archive_max_processes:

Sql> alter system set log_archive_max_processes=3;
Q) How can we perform manual archiving?
If your DB is in archivelog mode, but automatic archival is not enabled, then
we can manually archive the filled online redolog files:
Sql> alter system archive log all;

Sql> select log_mode from v$database;

To show all archived logs information:
Sql> select name,dest_id,thread#,sequence#,archived,completion_time from v$archived_log;

Name        Dest_id   Thread#   Sequence#   Archived   Completion_time
/stage/--   1         1         44          yes        sep-08

Sql> select dest_name,name_space,archiver,log_sequence
From v$archive_dest;

Dest_name          Name_space   Archiver   Log_sequence
log_archive_dest   SYSTEM       ARCh       0


Q) What is the role granted to users to allow SELECT privileges on all data dictionary views?
SELECT_CATALOG_ROLE
Q) What is the role granted to users to allow EXECUTE privileges for packages and
procedures in the data dictionary?
EXECUTE_CATALOG_ROLE
Q) Role to delete records from the system audit table (aud$)?
DELETE_CATALOG_ROLE
Q) Privilege to allow query access to any object in the SYS schema?

SELECT ANY DICTIONARY

Database authentication: when we define users such that the

database performs both identification and authentication
of users, it is said to be database authentication.

External authentication: when we define users such that

authentication is performed by the O/S or a network service,
it is called external authentication.

O/S-level authentication (to connect to the database):

 Create a user named exactly after the O/S account:

Sql> create user ops$ramesh identified externally;
Grant privileges to ops$ramesh.
 Make sure that the parameter value
os_authent_prefix=ops$ is set. It is the default value.
We can change this value in the init.ora.

 Connect to sqlplus as follows:

$ sqlplus /

Q) How can we view memory used for each user session?

Sql> select username, value || ' Bytes' "current uga memory"
from v$session sess, v$sesstat stat, v$statname name
where sess.sid=stat.sid and stat.statistic#=name.statistic#
and name.name='session uga memory';

Username   current uga memory
--------   ------------------
SYS        941824 Bytes
SYSTEM     226720 Bytes
AM         156256 Bytes

How can we see the current licensing limits?

SQL> select sessions_max s_max, sessions_warning s_warning,
sessions_current s_current, sessions_highwater s_high,
users_max from v$license;

S_max   S_warning   S_current   S_high   Users_max

0       0           3           3        0


During the NOMOUNT stage we can access the below views:

SQL> select * from v$sga;

Name               Value
----               -----
Fixed Size         1219352
Variable Size      184550632
Database Buffers   159383552
Redo Buffers       2973696


SQL> select instance_number,instance_name,host_name,version,
startup_time,status from v$instance;

I_N   I_Name   H_Name   Startup_time   Status
1     app      linux2   24-sep-08      STARTED

SQL> select archiver,logins,shutdown_pending,
database_status,blocked,active_state from v$instance;

Archiver   Logins    Shutdown_pending   Database_status   Blocked   Active_state

STOPPED    ALLOWED   NO                 ACTIVE            NO        NORMAL


Sql> select name from v$fixed_table where name like 'V$%';

It displays the dynamic performance tables.

All the v$ views are accessible in the mount stage.

From v$fixed_view_definition we can get the queries behind these views.


Sql> select view_definition from v$fixed_view_definition

where view_name='V$INSTANCE';

It displays the query used to get information about the instance.

AWK [Linux command]

This command is used to select and print a particular column of the output (or) of a file.
$ ps -ef | grep smon

gopal     8052   1   0 15:12 ?   00:00:00 ora_smon_mydb

applmgr  18947   1   0 16:08 ?   00:00:00 ora_smon_mydb

$ ps -ef | grep smon | awk '{print $2}'


By mentioning the -F flag we can specify the field delimiter:

$ awk -F ":" '{print $2}'
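A quick self-contained illustration of both forms (the sample input lines are made up):

```shell
# Default separator is whitespace: $2 is the second column
# (the PID column in ps -ef output).
echo "oracle 8052 1 0 15:12" | awk '{print $2}'        # prints 8052

# -F sets the field separator; here ':' splits an /etc/passwd-style line.
echo "root:x:0:0:root:/root:/bin/bash" | awk -F ":" '{print $1}'   # prints root
```
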
$ rm -rf *
It will remove all files and directories without asking.
$ rm -i *
It will ask for confirmation.
$ date
Fri Sep 26 16:20:01 IST 2008
$ date +%d
$ date +%m
$ date +%y
We can create a directory with today's date also:
$ mkdir `date +%d%m%y`

It creates a directory such as 260908.
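The backquotes run date first and hand its output to mkdir. A sketch, using a temporary scratch directory so nothing is left behind:

```shell
# Build a ddmmyy stamp, e.g. 260908 for 26-Sep-2008.
d=`date +%d%m%y`

# Create a date-stamped directory under a scratch location.
tmp=$(mktemp -d)
mkdir "$tmp/$d"
ls "$tmp"            # shows the six-digit directory name

rm -rf "$tmp"        # clean up
```
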


Q) What is the use of ignore=y in import?

While importing tables, imp assumes that the table does not exist. If the table
exists, it skips it with an error. To ignore this type of error we use ignore=y.

Q) What is the prerequisite to import a user?

The user must exist in the target database.

Q) What is the prerequisite to import a database?

The database must exist on the target.
 If we are using the same file system on source and target, follow the below steps:
• create an empty DB
• export the source DB and import it.

 If we are using different file systems:

• export the source DB
• create an empty DB
• create all the tablespaces in the target DB which exist in the source DB
• import on the target DB

Q) How can we migrate a table from one DB to another DB?

 Export the table by using the below command:
[app@linux6 ~]$ exp kittu/kittu file=a.dmp log=a.log tables=abc
 Copy the dump file from source to target.
 Import the table using imp; we have to make sure to which user we are importing:
[kittu@linux6 ~]$ imp app/app file=a.dmp log=a.log
tables=abc fromuser=kittu touser=app

Q) How can we migrate multiple tables?

$ exp kittu/kittu file=a.dmp log=a.log tables=a,b
Copy the dump file from source to target.
$ imp app/app file=a.dmp log=a.log tables=a,b fromuser=kittu touser=app

Q) How can we migrate multiple tables from different users (schemas)?

[app@linux8 ~]$ exp system/manager file=a.dmp tables=kittu.a,
kittu.b,potti.b
Copy the dump file from source to target.
[kittu@linux8 ~]$ imp system/manager file=a.dmp
fromuser=kittu touser=app
It will import only kittu's tables.
To import potti's tables:
[kittu@linux8 ~]$ imp system/manager file=a.dmp
fromuser=potti touser=app


Q) How can we import only the table structure without data?

 [app@linux8 ~]$ exp system/manager file=a.dmp

tables=kittu.a rows=n log=a.log
 [app@linux8 ~]$ imp system/manager file=a.dmp
fromuser=kittu touser=app

Q) How can we export data using a par file?

$ vi ram.par
file=a.dmp tables=abc
$ exp parfile=ram.par
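The same par file can also be written non-interactively with a here-document; the filenames below are the ones used above, and the exp call itself is shown commented out since it needs an Oracle client:

```shell
# Write the parameter file (one parameter per line also works).
cat > ram.par <<'EOF'
file=a.dmp
log=a.log
tables=abc
EOF

# exp kittu/kittu parfile=ram.par    # would run the export

cat ram.par      # verify the contents
rm -f ram.par
```
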

Q) How can we import a table to the target if the table already exists on the target?
[app@linux6 ~]$ exp system/manager file=a.dmp tables=kittu.a log=a.log
[kittu@linux6 ~]$ imp system/manager file=a.dmp fromuser=kittu
touser=app ignore=y

Q) How can we migrate a schema from one DB to another?

The user must exist on the target database.
 Export the schema's data by using the below syntax:
[app@linux6 ~]$ exp system/manager file=a.dmp log=a.log owner=kittu

 Copy the dump file from source to target and also create the user on the target.
Import the schema's data by using the below syntax:
[kittu@linux6 ~]$ imp system/manager file=a.dmp
fromuser=kittu touser=kittu

Q) How can we migrate a full database to another DB?

If we are using the same file system, we can migrate it easily. But if we are using
different file systems, we need to sync the tablespaces on both sides, i.e., we are required
to create on the target side the tablespaces that are there on the source side.

This must be done as the system or sys user.

1. [app@linux6 ~]$ exp file=a.dmp log=a.log full=y
2. Copy the dump file to the target.
3. Create the tablespaces on the target DB.
4. [app@linux6 ~]$ imp file=a.dmp log=a.log full=y ignore=y

Q) How can we export and import a large database whose size is 500GB?
This is possible by using the filesize and file options in export:
 [app@linux6 ~]$ exp system/manager filesize=100GB
file=a.dmp,b.dmp,c.dmp,d.dmp,e.dmp log=a.log full=y
 Copy the dump files to the target.
[app@linux6 ~]$ imp system/manager file=a.dmp,b.dmp,c.dmp,
d.dmp,e.dmp log=a.log full=y ignore=y


Q) From where can we find the tablespace block size?

Sql> select tablespace_name,block_size from dba_tablespaces;

Transportable tablespace:-

Usually if we are migrating a user whose data is, say, 1GB, it takes more time to
export and import. To reduce the time, export/import has an option transport_tablespace. By
using this option we export only
the metadata of the tablespace.


1. Make the tablespace read only:

sql> alter tablespace <ts_name> read only;
2. Export the tablespace metadata only. This process can be done as the 'SYSDBA'
user only.
[app@linux6 ~]$ exp file=a.dmp transport_tablespace=y
tablespaces=app log=a.log
3. Copy the dump file from source to target; also copy the datafiles belonging to that
tablespace from source to target.
4. If we are using the same block size on both sides there is no problem; otherwise we need
to mention the block-size-related parameter (e.g. db_2k_cache_size) in the init file of the
target DB and import the dump file:
[kittu@linux6 ~]$ imp file=a.dmp transport_tablespace=y
tablespaces=app datafiles='/oraAPP/kittu/kittudb/app.dbf'
ignore=y log=a.log
SQL> alter tablespace ts01 offline immediate;
Usually when we put a tablespace in offline mode, a checkpoint occurs for that tablespace.
But by using the above option, the checkpoint does not occur; the tablespace is taken
offline immediately (and will require recovery before it can come back online).

Q) Script for exporting tables and ftp'ing them to another server?

$ vi expscript
export ORACLE_SID=${1}
export ORACLE_HOME=`grep -w ${1} /etc/oratab | awk -F ":" '{print $2}'`
d=`date '+%d%m%y'`
exp system/manager file=s_db_${d}.dmp log=s_db_${d}.log tables=${2},${3}
ftp -n <<EOF
user ramu ramu
prompt off
mput s_db_${d}.dmp s_db_${d}.log
bye
EOF


We have 3 positional parameters in the above file, so we need to pass 3 arguments:
the first is the SID,
the second is table1,
the third is table2.
[app@linux6 ~]$ ./expscript app scott.emp ram.kk

In case of users: owner=${2},${3}

$ ./expscript app scott ram

In case of a full DB: full=y

$ ./expscript app

Q) How can we look at output page by page in sqlplus?

SQL> set pause on
SQL> set pagesize 10

Block size:
The block size for data blocks is set at the time of DB creation. We can also maintain a
database with datablocks having multiple block sizes.
If my DB is made with 8k blocks, the DB cache caches 8k blocks only. In order to use, say,
2k blocks, we need to add the below parameter in the init.ora file:
db_2k_cache_size=50m (2k blocks)
This statement allots 50m for 2k blocks in the db cache,
i.e., it adds additional space for 2k blocks in the db cache.
After adding the above parameter we can create a TS with a different block size as below:
Sql> create tablespace ts001 datafile '/oraAPP/app/appdata/ts001.dbf'
size 10m blocksize 2k;

The data in the db cache is flushed using the LRU (Least Recently Used) algorithm.
The advantage of having a bigger block size is that more data is retrieved per I/O.
The disadvantage of having a bigger block size is that more data is flushed into the db cache.
 What is the package which validates the username/password when we use export/import?

Q) How can we bounce the listener?

$ lsnrctl reload <name>
Q) How can we redo activities in the vi editor?
Ctrl+R (after an undo)
Q) How can we delete lines in a selected range?
:n,m d [here n and m are line numbers]
Q) What is recursive SQL?
The SQLs which work on the data dictionary are said to be recursive SQL.
Q) Public user?


It is not meant for local DBAs, only for distributed DBAs.

Sysoper is a public user.
Q) Row chaining?
A row spanned across multiple blocks is called row chaining. It causes
decreased I/O performance.
Q) Row migration?
A row completely migrating to another block, while its address is maintained in the initial
block. Both reduce I/O performance.
Q) How can you reload stylesheets?
Q) What do we have to do when import returns an error while importing statistics?
Use exclude=statistics in impdp.

Q) How can we export only the structure of a table without data?

By using the option rows=n.

Q) What is the file which is used to read the parameter values required for the instance
(the pfile)?
The init file.

Options used for import:-

All the options in exp are also there in imp.

Some of the other options are:

Show:- Just lists the file contents (N)

Ignore:- It ignores the create errors (N)

Fromuser:-
It indicates the list of owner usernames

Touser:-
It indicates the list of target usernames

Compile:- It compiles procedures, packages and functions (Y)

Datafiles:-
Datafiles to be transported into the database

Q) How can we increase the speed of exp/imp?

By increasing the buffer size.


Q) How can we export bigger databases?

Using the filesize and file options [e.g. 100 GB of data]:

$ exp system/manager filesize=50G file=a.dmp,b.dmp full=y
$ imp system/manager filesize=50G file=a.dmp,b.dmp full=y
The order of files in exp/imp must be the same.


Q) Why do we need to escape '$'?

In a shell script:

sqlplus <<EOF
sys as sysdba
select name from v\$database;
EOF

The shell assumes $database is an environment variable and tries to expand it. To execute
the statement at the sql prompt we have to put '\' before the '$'.
In a par file, there is no need of '\'.
'\' is required for a special character to retain its original value.
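The effect is easy to see with cat in place of sqlplus: inside an unquoted here-document the shell expands $database (usually to an empty string), so the backslash is needed to keep the literal text:

```shell
# Without the backslash, $database would be expanded by the shell
# before the text ever reaches the command reading the here-document.
cat <<EOF
select name from v\$database;
EOF
# prints: select name from v$database;
```
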

Q) What is the use of buffer?

It defines how many records may be placed into memory at a time.
Low buffer - slow
Large buffer - fast
Q) What is the use of the compress=y option?
It will merge all the individual extents into a single bigger extent.

Imagine a table contains 10 extents. Many of the blocks in those extents are not
completely filled. If we remove any rows from an extent, oracle cannot refill that
space with new data, so more space is wasted. Also, while retrieving data from 10
extents, it takes more time.


• Retrieval is somewhat slower.

• Unnecessary wastage of free space in DB blocks.
• By using compress=y [the default], we may overcome this situation.
• Having a bigger extent also has one disadvantage: it needs more contiguous free space.

Log:- Log file of screen output.

Full:- It will export the full database (N)

Rows:- Exports data rows (Y) along with the structure.

Owner:- It indicates the list of owner usernames.

Tables:- It indicates the list of table names.

Constraints:- Export constraints (Y)

Filesize:- It indicates the maximum size of each dump file.


Tablespaces:- List of tablespaces to export.

Transport_tablespace:- It indicates to export the tablespace metadata (N)

Parfile:- It indicates the parameter file name.

Statistics:- Analyze objects (ESTIMATE)

Object_consistent:- Transaction set to read only during object export (N)

Volsize:- Number of bytes to write to each tape volume.

Q) How can we know which options are there for exp/imp?

$ exp help=y

Options of export:-
Userid:- It indicates username/password
Buffer:- It indicates the size of the data buffer, i.e. how many rows can be held
at a time in the buffer
File:- It indicates the output file (expdat.dmp)
Compress:- Default value: Y
By using this option, all extents will be merged into a single bigger
extent while importing.
• Defragmentation occurs
• All extents will be compressed into one bigger extent
Grants:- It will export grants (Y)
Indexes:- Export indexes (Y)
Direct:- It is used for direct path export

We can use export and import in interactive and non-interactive mode also.

Interactive mode:-
$ exp

⇒ Now it will prompt us for username, password (of the schema we wish to back up), dump
file [default name=expdat.dmp], buffer size (4096 [default]) etc.
⇒ It backs up the structure, indexes and constraints of tables also.
⇒ It will export grants, table data and extents by default.

Non-interactive mode:-

We can pass parameters when we run exp/imp:

Syn: exp <userid/password> file=file.dmp log=file.log



Instead of passing parameters on the command line, you may use a parameter file where the
parameters are listed. Make all inputs in the file.

Naming convention:- <file_name>.par

Syntax:- parfile=<name>.par

Syntax for exp/imp:-

$ exp parfile=<name>.par

 If the data is exported on a system of a different version from the one into which it is
imported, imp must be the newer version. If something needs to be exported from 10g into 9i,
it must be exported with the 9i export utility.
 In order to use exp/imp, the catexp.sql script must have been run.
It is called by catalog.sql.
 The utilities used for export and import are exp and imp.

Exp: It will scan and read the information of objects from the database and copy it into a
dump file at the O/S level.
Imp: It will scan and read the information of the dump file and copy it into the database.
 By using export and import we can take a backup at the following levels:
• object level (table level)
• database level
• user level
• tablespace level


A backup which is taken when the database is up and running is said to be a logical backup.
Backing up one or more objects of the database is said to be a logical backup.
By using logical backup also, we can take a full backup of the database.

Methodologies of logical backup:-

• export/import
• data pump
Export and import are the utilities which allow writing data in an Oracle
binary format from the database into O/S files and reading it back from those O/S
files. They are used to perform the following tasks:
o Partial backups
o Restore tables
o Save space or reduce fragmentation in the DB
o Move data from one owner to another
o Transfer data from one database to another database

The files which have been created by the export utility can only be read by import.
It is a prerequisite that oraenv (or coraenv) was executed before you export or import data.



$ tkprof <tracefile> <outputfile> sys=no

 How can we make a temporary tablespace datafile offline?
In the mount stage, or in the open stage also, we can put a temp file offline:
Sql> alter database tempfile '-----------' offline;
But we cannot put the temporary tablespace itself offline
while it is the default temporary tablespace of the database.

 How can we find the database startup time?

Sql> select to_char(startup_time,'dd-mm-yyyy hh24:mi:ss') "db
startup time" from v$instance;

To find the logon time for a particular session:

Sql> select to_char(logon_time,'yyyy-dd-mm hh24:mi:ss') "logon
time" from v$session;
 What is the unix command which is used to debug a shell script?
$ sh -x <script>

To remove one-week-old files:

$ find . -name "*" -mtime +7 -exec rm -rf {} +
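A sketch of the -mtime test in action (the `touch -d` call used to back-date a file is GNU-specific, an assumption here):

```shell
tmp=$(mktemp -d)
touch "$tmp/fresh.log"                  # modified now
touch -d "10 days ago" "$tmp/old.log"   # modified 10 days ago (GNU touch)

# Remove only files older than 7 days.
find "$tmp" -name "*.log" -mtime +7 -exec rm -rf {} +

ls "$tmp"        # only fresh.log remains
rm -rf "$tmp"
```
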

How can we enable tracing for our own session?

We can enable tracing for our own session by setting the parameter:
Sql> alter session set sql_trace=true;

Now find the spid for the session:

Sql> select sid,serial#,username from v$session where username='RAMU';
33   106
Sql> select p.spid from v$session s, v$process p
where s.paddr=p.addr and s.sid=33;

Go to the udump location and convert the trace file from raw format to readable format by
using tkprof.
