
Scenarios

DBPUMP Error
============
When you see this error:
ORA-31633: unable to create master table (expdp)
Example
K:\Partion\expdmp>expdp USER_PROD/USER_PROD@PROD_DB parfile=ExportParam.txt

Export: Release 10.2.0.3.0 - 64bit Production on Wednesday, 12 August, 2009 14:09:57
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
ORA-31626: job does not exist
ORA-31633: unable to create master table "SCHEMA_PROD.TABLE1"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-00955: name is already used by an existing object
Details
When performing an export using Data Pump, you receive the error above because a
previous expdp run did not complete (it was cancelled or stopped for some reason).
As a result, the new expdp job has the same name as the old expdp job, whose master
table still exists.
Solution
Verify that the table associated with the expdp job exists:
SQL> select table_name from dba_tables where table_name like '%TABLE1%';

TABLE_NAME
------------------------------
TABLE1
Confirm that the Data Pump job exists and that its state is NOT RUNNING:
SQL> SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
     FROM dba_datapump_jobs WHERE job_name NOT LIKE 'BIN$%' ORDER BY 1, 2;
Drop the orphaned master table:
SQL> drop table SCHEMA_PROD.TABLE1;

Table dropped.
Re-run the export.
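Alternatively, if you would rather not drop anything, supplying a different job
name to expdp avoids the name collision. A sketch reusing the example command (the
JOB_NAME value here is hypothetical; if the parfile already sets JOB_NAME, change
it there instead):
K:\Partion\expdmp>expdp USER_PROD/USER_PROD@PROD_DB parfile=ExportParam.txt JOB_NAME=EXP_PROD_RETRY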
DBPUMP - Check progress
=======================
Check progress of the export / import job by using this SQL:
select substr(sql_text, instr(sql_text, 'INTO "'), 150) table_name,
       rows_processed,
       round((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60, 1) minutes,
       trunc(rows_processed /
             ((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60)) rows_per_min
from sys.v_$sqlarea
where sql_text like 'INSERT %INTO "%'
  and command_type = 2
  and open_versions > 0;
Add Tablespace if not ASM
=========================
ALTER TABLESPACE BR2 ADD DATAFILE '/adiprod01/oradata/adiprod/br2_ts05.dbf'
  SIZE 8192M AUTOEXTEND ON NEXT 16M MAXSIZE 8192M;
DBPUMP - Kill a running job
============================
This is a two-step process. First get the job name and the owner name:
SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;
Now use this information to kill the job by invoking the Oracle-provided built-in
package DBMS_DATAPUMP:
SET serveroutput on
SET lines 100
DECLARE
h1 NUMBER;
BEGIN
-- Format: DBMS_DATAPUMP.ATTACH('[job_name]','[owner_name]');
h1 := DBMS_DATAPUMP.ATTACH('SYS_IMPORT_SCHEMA_01','SCHEMA_USER');
DBMS_DATAPUMP.STOP_JOB (h1,1,0);
END;
/
Move tables across TABLESPACES
==============================
If tablespace A is full, try moving some objects to tablespace B so you can reclaim
space and continue running your queries:
ALTER TABLE SPV1.BCKUP_76860_EDW_PRODUCT MOVE TABLESPACE BR2;
This statement moves the table SPV1.BCKUP_76860_EDW_PRODUCT to the BR2 tablespace.
However, any indexes need to be rebuilt manually, since they are not moved
automatically. So the move across tablespaces becomes a two-part job if indexes
also need moving:
ALTER INDEX index_name REBUILD TABLESPACE new_tablespace_name;
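Moving a table marks its indexes UNUSABLE, so it helps to list what needs
rebuilding first. A minimal sketch against the example table above:
SELECT owner, index_name, status
FROM dba_indexes
WHERE table_name = 'BCKUP_76860_EDW_PRODUCT'
AND status = 'UNUSABLE';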
Finally, be aware that permissions are required in order to move objects across
tablespaces. In the above example the SPV1 table is moved to BR2, so the SPV1
schema required the following grant:
ALTER USER SPV1 QUOTA UNLIMITED on BR2;
Once the alter succeeds, check the tablespace against the table. The data
dictionary view for this information at table level is DBA_SEGMENTS:
SELECT owner, segment_name, segment_type, tablespace_name,
       bytes/1024/1024/1024 GB,
       initial_extent, next_extent, extents, pct_increase
FROM DBA_SEGMENTS
WHERE --OWNER = 'SPV1' AND
      SEGMENT_TYPE = 'TABLE' AND
      SEGMENT_NAME LIKE '%BCKUP_76860%'
ORDER BY bytes/1024/1024/1024 DESC;
Note: bytes/1024/1024/1024 converts bytes -> KB -> MB -> GB.
Monitor TABLESPACES
===================
To check how much of a tablespace has been used, use the following query against
dba_data_files:
SELECT
a.tablespace_name,
a.file_name,
a.bytes allocated_bytes,
b.free_bytes,
a.autoextensible
FROM
dba_data_files a,
(SELECT file_id, SUM(bytes) free_bytes
FROM dba_free_space b GROUP BY file_id) b
WHERE
a.file_id=b.file_id
ORDER BY
a.tablespace_name;

To see just the sizes allocated to each data file, this simple query suffices:
select file_name,bytes,maxbytes,user_bytes
from dba_data_files
where tablespace_name ='SPV1';
Get LONG RUNNING Queries
========================
SELECT osuser,
       sl.sql_id,
       sl.sql_hash_value,
       opname,
       target,
       elapsed_seconds,
       time_remaining
FROM v$session_longops sl
INNER JOIN v$session s ON sl.sid = s.sid AND sl.serial# = s.serial#
WHERE time_remaining > 0;

Add TABLESPACE ASM
==================
If adding TEMP space:
ALTER TABLESPACE TEMP ADD TEMPFILE '+DG_SFDCPRODDATA1' SIZE 100M
  AUTOEXTEND ON NEXT 100M MAXSIZE 5G;
To view tablespaces assigned to groups:
select tablespace_name, group_name from DBA_TABLESPACE_GROUPS;
For each tablespace, check used / free / total / %free space -
select df.tablespace_name "Tablespace",
       tu.totalusedspace "Used MB",
       (df.totalspace - tu.totalusedspace) "Free MB",
       df.totalspace "Total MB",
       round(100 * ((df.totalspace - tu.totalusedspace) / df.totalspace)) "Pct. Free"
from (select tablespace_name, round(sum(bytes) / 1048576) totalspace
      from dba_data_files group by tablespace_name) df,
     (select round(sum(bytes) / (1024 * 1024)) totalusedspace, tablespace_name
      from dba_segments group by tablespace_name) tu
where df.tablespace_name = tu.tablespace_name;
For a given tablespace, check which objects are on it and the space they occupy:
select segment_name, sum(bytes/1024/1024)
from dba_segments
where tablespace_name = 'ARC'
group by segment_name
order by 2 desc;
Dev: Get ROWCOUNT from EXECUTE IMMEDIATE
========================================
To get the row count when using EXECUTE IMMEDIATE, SQL%ROWCOUNT has to be evaluated
in the same context as the dynamic statement. To achieve this:
execute immediate 'begin ' || DynSqlStmt || '; :x := sql%rowcount; end;' using OUT p_rowcnt;
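A minimal, self-contained sketch of the same idea (the table name and the dynamic
statement are hypothetical):
DECLARE
  DynSqlStmt VARCHAR2(200) := 'update emp_copy set sal = sal'; -- hypothetical DML
  p_rowcnt   NUMBER;
BEGIN
  -- Wrap the dynamic DML in an anonymous block so SQL%ROWCOUNT is read
  -- right after the statement runs, then bind the value out.
  EXECUTE IMMEDIATE 'begin ' || DynSqlStmt || '; :x := sql%rowcount; end;'
    USING OUT p_rowcnt;
  DBMS_OUTPUT.PUT_LINE('Rows affected: ' || p_rowcnt);
END;
/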
DBA: Alter NLS_PARAMS
=====================
NLS parameters for a session can be altered. The current details are exposed in the
NLS_SESSION_PARAMETERS view as Parameter / Value pairs:
Parameter       Value
---------       -----
NLS_LANGUAGE    ENGLISH
NLS_TERRITORY   UNITED KINGDOM
NLS_CURRENCY
To change a parameter at session level, use the DBMS_SESSION package. Note that
SET_NLS builds an ALTER SESSION statement, so a text value needs its own embedded
quotes:
DBMS_SESSION.SET_NLS('nls_timestamp_format', '''MM/DD/YYYY HH24:MI:SS''');
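The equivalent direct statement, for comparison:
ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'MM/DD/YYYY HH24:MI:SS';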
Error Table for INSERT INTO
===========================
Starting with 10g Release 2, Oracle provides DML error logging, so the insert is
not rolled back when one row fails.
The syntax for the error logging clause is the same for INSERT, UPDATE, MERGE and
DELETE statements:
LOG ERRORS [INTO [schema.]table] [('simple_expression')] [REJECT LIMIT integer|UNLIMITED]
Create the error logging table.
BEGIN
DBMS_ERRLOG.create_error_log (dml_table_name => 'dest');
END;
/
Example -
INSERT INTO dest
SELECT *
FROM source
LOG ERRORS INTO err$_dest ('INSERT') REJECT LIMIT UNLIMITED;
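After a run, the rejected rows and the reasons can be inspected in the log table;
the ORA_ERR_% columns are created automatically by DBMS_ERRLOG:
SELECT ora_err_number$, ora_err_mesg$, ora_err_tag$
FROM err$_dest;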
Alter Sequences
===============
You cannot alter MINVALUE if the new MINVALUE is greater than the sequence's
current value. The trick is then to do this (a sketch follows the list):
* alter NEXTVAL to be equal to the new MINVALUE - by playing with INCREMENT BY
* change MINVALUE
* reset INCREMENT BY (usually to 1)
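A minimal sketch under assumed values (sequence SEQ currently at 100, target
MINVALUE of 500; the name and numbers are hypothetical):
SELECT seq.NEXTVAL FROM dual;          -- say this returns 100
ALTER SEQUENCE seq INCREMENT BY 400;   -- 500 - 100
SELECT seq.NEXTVAL FROM dual;          -- now returns 500
ALTER SEQUENCE seq MINVALUE 500;       -- allowed, since the current value is 500
ALTER SEQUENCE seq INCREMENT BY 1;     -- reset the increment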
To query USER_VIEWS
===================
USER_VIEWS is not part of USER_SOURCE, and its TEXT column is a LONG, which LIKE
cannot search directly, so use the following query to search through the views:
select *
from (select rownum, view_name, dbms_metadata.get_ddl( 'VIEW', view_name ) ddl
from user_views)
where upper(ddl) like upper('%sourcecategory7%');

SQL Plus
========
For settings in SQL*Plus, use the SET command.
To view all settings --> SHOW ALL
To make settings permanent so they take effect every time, put the SET commands in
a login.sql in the current directory.
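A hypothetical login.sql with a few common settings:
SET LINESIZE 200
SET PAGESIZE 50
SET SERVEROUTPUT ON
SET TIMING ON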
Run HOST commands from SQL*Plus using !
Eg: ! mkdir <dir name>
    ! vi abc.txt

Optimizer Mode
==============
Possible values for optimizer_mode: choose / all_rows / first_rows / first_rows_n.
Historically the default value of optimizer_mode was CHOOSE, which basically means
ALL_ROWS (if statistics on the underlying tables exist) or RULE (if there are no
statistics on the underlying tables); from 10g the default is ALL_ROWS. So it is
very important to have statistics collected on your tables at regular intervals,
or else you are living in the Stone Age.
FIRST_ROWS and ALL_ROWS are both cost-based optimizer features.
FIRST_ROWS
In simple terms, it ensures the best response time for the first few (n) rows.
This mode is good for an interactive client-server environment where the server
serves the first few rows and, by the time the user scrolls down for more, fetches
the others. The user feels he has been served the data he requested, but in reality
the request is still pending and the query is still fetching data in the
background. A good example of this is TOAD.
Important facts about FIRST_ROWS
It gives preference to index scans over full scans (even when an index scan is not
a good choice).
It prefers nested loops over hash joins, because a nested loop returns rows as they
are selected (and compared), whereas a hash join first hashes one input into a hash
table, which takes time.
The cost of the query is not the only criterion for choosing the execution plan; it
chooses the plan that helps fetch the first rows fast.
It may be a good option in an OLTP environment where the user wants to see data as
early as possible.
Reference: http://oracle-online-help.blogspot.co.uk/2007/03/optimizermode-allrows-or-firstrows.html
ALL_ROWS
In simple terms, it means better throughput. While FIRST_ROWS may be good at
returning the first few rows quickly, ALL_ROWS ensures optimum resource consumption
and throughput for the query as a whole. In other words, ALL_ROWS is geared towards
getting to the last row fastest.
Important facts about ALL_ROWS
ALL_ROWS considers both index scans and full scans and uses them based on their
contribution to the overall query. If the selectivity of a column is low, the
optimizer may use an index to fetch the data (for example, where employee_code =
7712), but if the selectivity of the column is quite high ('where deptno = 10'),
the optimizer may consider doing a full table scan. With ALL_ROWS, the optimizer
has more freedom to do its job at its best.
It is good for OLAP systems, where work happens in batches/procedures (though some
reports may still use FIRST_ROWS, depending on the anxiety level of the report
reviewers).
It likes hash joins over nested loops for larger data sets.
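
For experimentation, either mode can be set at session level or per statement via a
hint; a quick illustration (the table name is hypothetical):
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;
SELECT /*+ ALL_ROWS */ * FROM employees;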
