Alejandro Vargas
Oracle Israel
This test was designed to check the benefit of data compression for storing read-only data.
The test was performed using Oracle RDBMS 10g, version 10.1.0.3.0, on Linux and Solaris.
It was intended to check how long it may take to transfer 6 TB of historic data to a 10g Linux-based database.
The test was implemented by taking a single partition from the main historic table as sample data and plugging its tablespace into a 10g database on UNIX (Solaris 5.8), used as a staging area for all the tests.
Each test moved 31 GB of data; eight migration scenarios were tested, plus a ninth comparison run:
STEP (minutes)  TEST 1  TEST 2  TEST 3  TEST 4  TEST 5  TEST 6  TEST 7  TEST 8  TEST 9
CONVERT TIME        54       0       0       0       0      54      54       0       0
EXPORT TIME          0      41       0       0       0       0       0       0       0
FTP TIME            86      57       0       0       0       0      86       0       0
MOVE TIME           17       0       0       0       0      51      18       0      40
IMPORT TIME          0      74      79     109     126       0       0     124       0
TOTAL TIME         157     172     120     109     126     105     158     124     n/a
COMPRESSION        73%     73%     71%     63%     74%     73%     73%     13%      0%
Based on the test results, the best estimates to move 6 TB of data are:
Simplest test: #5, insert as select from source = ~4.2 min/GB, 420 hours, 17.5 days.
Best test: #6, upgrade to 10g / convert / NFS mount & move = ~3.5 min/GB, 350 hours, ~14.6 days.
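The estimates above are simple per-GB scaling from the 31 GB sample; a quick sketch of the arithmetic (6 TB taken as 6000 GB, which matches the 420-hour figure):

```shell
# Scale each measured min/GB rate from the 31 GB sample up to 6000 GB.
for rate in 4.2 3.5; do
  awk -v r="$rate" 'BEGIN {
    m = r * 6000                                   # total minutes for 6 TB
    printf "%.1f min/GB -> %.0f hours (%.1f days)\n", r, m/60, m/60/24
  }'
done
```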
List of Tested Scenarios
Scenario #1: Alter Table Move Compress into ASM Based Tablespace.
Scenario #2: 10g Export Data Pump and Import Data Pump into ASM Based Tablespace.
Scenario #3: 10g Parallel export Data Pump and import from NFS mounted File System, into ASM Based Tablespace.
Scenario #4: Parallel Insert as select from Solaris into ASM Based Tablespace
Scenario #5: Sequential Insert as select from Solaris into ASM Based Tablespace
Scenario #6: Transportable Tablespace based on NFS mounted filesystem + Move table
Scenario #7: Transportable Tablespace + Move table into FS based tablespace (scenario 1 with FS)
Scenario #8: Import data pump using network link
Scenario #9: Comparison test, move table in 8i, compare time with move on 10g
Scenario #1. Alter Table Move Compress into ASM Based Tablespace.
Previous check
SEGMENT_TYPE    SEGMENT_NAME                       MB
--------------- ------------------------------  -----
TABLE           CMP_TAB_DET_ALL_P               29700
TABLE           TAB_DET_ALL_P0410_BC13            300
TABLE           CMP_CALL_PART_TAB_P               300
TABLE           CMP_CONT_FREE_UNIT_TAB_P          300
TABLE           CMP_INCOME_DETAIL_TAB_P          3300
INDEX           CMP_IND_TAB_DET_ALL              4800
INDEX           CMP_PK_INCOME_DETAIL_TAB_P        600
INDEX           CMP_PK_CONT_FREE_UNIT_TAB_P       300
INDEX           CMP_CALL_PART_TAB_P_I1            300
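The listing above can be reproduced with a dba_segments query of this shape (a sketch; the owner FYT_RENT and the tablespace name are the ones used elsewhere in this document):

```sql
select segment_type, segment_name, bytes/1024/1024 MB
  from dba_segments
 where owner = 'FYT_RENT'
   and tablespace_name = 'TBS_CDT_0410_BC13_BIG'
 order by segment_type, bytes desc;
```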
Execute Move Objects
Move Tables
Rebuild Indexes
Result Scenario 1
Commands used:
alter table <tname> move compress parallel tablespace <tbsp>;
alter index <iname> rebuild tablespace <tbsp> nologging parallel;
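Instantiated for the largest table and index in this test (a sketch; DATA is the ASM-based tablespace that appears in the datafile listing of scenario #2):

```sql
alter table CMP_TAB_DET_ALL_P move compress parallel tablespace DATA;
alter index CMP_IND_TAB_DET_ALL rebuild tablespace DATA nologging parallel;
```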
Results:
TABLES
** An empty table gets an allocation of one extent on the regular tablespace, which is 300 MB on the
plugged tablespace. On the ASM-based tablespace the stripe unit is 1 MB and the disk is
uniformly divided into 1 MB cells; the space allocated for the empty table there is only 0.188 MB!
INDEXES
INDEX NAME                   MB BEFORE  MB AFTER  %COMPRESSED  MOVE TIME
---------------------------  ---------  --------  -----------  ----------------------
CMP_IND_TAB_DET_ALL              4,800       802          83%  00:10:06.76 (parallel)
CMP_PK_INCOME_DETAIL_TAB_P         600        55          90%  00:01:51.89 (parallel)
CMP_CALL_PART_TAB_P_I1             300       304           0%  00:00:00.06 (parallel)
CMP_PK_CONT_FREE_UNIT_TAB_P        300       305           0%  00:00:09.01 (parallel)
CONSISTENCY CHECK:
COUNT(*)
----------
76144304
SQL> l
1 select bytes/1024/1024 MB from dba_segments
2 where segment_name='CMP_TAB_DET_ALL_P'
3* and segment_type='TABLE' and owner='FYT_RENT'
SQL> /
MB
----------
8003
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning and Data Mining options
COUNT(*)
----------
76144304
MB
----------
29700
Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production
COUNT(*)
----------
76144304
MB
----------
29700
Scenario #2. 10g Export Data Pump and Import Data Pump into ASM Based Tablespace.
Connect to 10g with sqlplus
SQL*Plus: Release 10.1.0.3.0 - Production on Tue Feb 8 15:48:31 2005
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
TABLESPACE_NAME
--------------------------------------------------------------------------------
SYSTEM
UNDOTBS1
SYSAUX
TEMP
DATA
TBS_CDT_0410_BC13_BIG
6 rows selected.
FILE_NAME
--------------------------------------------------------------------------------
+DATA/zylnx/datafile/system.264.1
+DATA/zylnx/datafile/undotbs1.265.1
+DATA/zylnx/datafile/sysaux.266.1
+DATA/zylnx/datafile/data.268.1
/srvtst2/od01/zylnx/detall0410_BC13_big_01.dbf
/srvtst2/od01/zylnx/detall0410_BC13_big_02.dbf
/srvtst2/od01/zylnx/detall0410_BC13_big_03.dbf
/srvtst2/od01/zylnx/detall0410_BC13_big_04.dbf
/srvtst2/od01/zylnx/detall0410_BC13_big_05.dbf
9 rows selected.
Tablespace altered.
Tablespace dropped.
FILE_NAME
--------------------------------------------------------------------------------
+DATA/zylnx/datafile/system.264.1
+DATA/zylnx/datafile/undotbs1.265.1
+DATA/zylnx/datafile/sysaux.266.1
+DATA/zylnx/datafile/data.268.1
TABLESPACE_NAME
--------------------------------------------------------------------------------
SYSTEM
UNDOTBS1
SYSAUX
TEMP
DATA
Create a directory on 10g on Solaris to hold the export file
Directory created.
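The directory creation itself was elided; it is presumably of this shape (a sketch; the path is the one where the dump file appears in the expdp output below):

```sql
create or replace directory call_det_exp as '/srvtst/dw/export';
grant read, write on directory call_det_exp to fyt_rent;
```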
Export main table using export data pump on 10g and Solaris
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning and Data Mining options
Starting "FYT_RENT"."SYS_EXPORT_TABLE_01": fyt_rent/******** DIRECTORY=call_det_exp
DUMPFILE=call_det_pump.dmp TABLES=CMP_TAB_DET_ALL_P LOGFILE=call_det_pump.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29.00 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "FYT_RENT"."CMP_TAB_DET_ALL_P" 24.98 GB 76144304 rows
Master table "FYT_RENT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for FYT_RENT.SYS_EXPORT_TABLE_01 is:
/srvtst/dw/export/call_det_pump.dmp
Job "FYT_RENT"."SYS_EXPORT_TABLE_01" successfully completed at 16:06
FTP dump file to Linux
ftp finished in 57 minutes to transfer 26 GB (26,824,982,528 bytes), roughly 7.8 MB per second.
Rename the old table and create a new empty compress enabled table to hold the new data
Table renamed.
Table created.
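The elided statements are presumably of this shape (a sketch; the _OLD suffix is a hypothetical name, and the where 1=2 trick for creating an empty copy is the same one used in scenario #4):

```sql
alter table CMP_TAB_DET_ALL_P rename to CMP_TAB_DET_ALL_P_OLD;

create table CMP_TAB_DET_ALL_P compress tablespace DATA
as select * from CMP_TAB_DET_ALL_P_OLD where 1=2;
```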
Import dump file using impdp into 10g on Linux
Directory created.
Import in parallel
Username: fyt_rent
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bit
Production
With the Partitioning, OLAP and Data Mining options
Job: SYS_IMPORT_FULL_01
Operation: IMPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: /srvtst2/od02/export/call_det_pump.dmp
Worker 1 Status:
State: EXECUTING
Worker 2 Status:
State: WORK WAITING
Worker 3 Status:
State: WORK WAITING
Worker 4 Status:
State: WORK WAITING
Master table "FYT_RENT"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Job: SYS_IMPORT_FULL_01
Operation: IMPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: /srvtst2/od02/export/call_det_pump.dmp
Worker 1 Status:
State: EXECUTING
Worker 2 Status:
State: WORK WAITING
Worker 3 Status:
State: WORK WAITING
Worker 4 Status:
State: WORK WAITING
Starting "FYT_RENT"."SYS_IMPORT_FULL_01": fyt_rent/******** DIRECTORY=IMP_DIR
dumpfile=call_det_pump.dmp logfile=call_det_pump.log parallel=4
table_exists_action=append content=data_only status=60
Job: SYS_IMPORT_FULL_01
Operation: IMPORT
Mode: FULL
State: COMPLETED
Bytes Processed: 26,824,827,344
Percent Done: 100
Current Parallelism: 4
Job Error Count: 0
Dump File: /srvtst2/od02/export/call_det_pump.dmp
Worker 1 Status:
State: WORK WAITING
Worker 2 Status:
State: WORK WAITING
Worker 3 Status:
State: WORK WAITING
Worker 4 Status:
State: WORK WAITING
Job "FYT_RENT"."SYS_IMPORT_FULL_01" successfully completed at 14:57
Scenario #3: 10g Parallel export Data Pump and import from NFS mounted File System, into ASM
Based Tablespace.
Timing
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit
Production
With the Partitioning and Data Mining options
Starting "FYT_RENT"."PARTST": USERID=fyt_rent/******** DIRECTORY=call_det_exp
DUMPFILE=expdat%U.dmp parallel=8 TABLES=CMP_TAB_DET_ALL_P LOGFILE=expdat%U.log
JOB_NAME=partst
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29.00 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "FYT_RENT"."CMP_TAB_DET_ALL_P" 24.98 GB 76144304 rows
Master table "FYT_RENT"."PARTST" successfully loaded/unloaded
******************************************************************************
Dump file set for FYT_RENT.PARTST is:
/srvtst/dw/export/expdat01.dmp
/srvtst/dw/export/expdat02.dmp
/srvtst/dw/export/expdat03.dmp
/srvtst/dw/export/expdat04.dmp
/srvtst/dw/export/expdat05.dmp
/srvtst/dw/export/expdat06.dmp
/srvtst/dw/export/expdat07.dmp
/srvtst/dw/export/expdat08.dmp
/srvtst/dw/export/expdat09.dmp
Job "FYT_RENT"."PARTST" successfully completed at 11:27
10g Import Data Pump from NFS, into ASM Based Tablespace.
Import start: 16:21
Import end:   17:40
Total time:   79 minutes
Import: Release 10.1.0.3.0 - 64bit Production on Thursday, 10 February, 2005 16:21
Scenario #4: Parallel Insert as select from Solaris into ASM Based Tablespace
SYSDATE
-------------------
13/02/2005 17:33:33
Elapsed: 00:00:00.00
SQL> create table CMP_TAB_DET_ALL_P compress tablespace data
2 as select * from CMP_TAB_DET_ALL_P@zycmp
3 where 1=2;
Table created.
Elapsed: 00:00:00.71
SQL> alter session enable parallel dml;
Session altered.
Elapsed: 00:00:00.00
SQL> ALTER TABLE CMP_TAB_DET_ALL_P PARALLEL (DEGREE 8);
Table altered.
Elapsed: 00:00:00.01
SQL> insert /*+ APPEND NOLOGGING */ into CMP_TAB_DET_ALL_P
  2  select /*+ PARALLEL(instab,4) */ * from CMP_TAB_DET_ALL_P@zycmp
  3  /
Elapsed: 01:49:09.57
Scenario #5: Sequential Insert as select from Solaris into ASM Based Tablespace
Table truncated.
Elapsed: 00:00:01.14
SQL> alter table CMP_TAB_DET_ALL_P parallel (degree 1);
Table altered.
Elapsed: 00:00:00.01
Elapsed: 02:06:05.21
SQL> select bytes/1024/1024 from dba_segments where
segment_name='CMP_TAB_DET_ALL_P';
BYTES/1024/1024
---------------
8000
Elapsed: 00:00:00.06
Scenario #6: Transportable Tablespace based on NFS mounted filesystem + Move table
Place the converted files on an NFS file system that is mounted on the Target Linux Server
Import metadata on Linux database pointing to the datafiles located on the NFS mounted File
System
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bit
Production
With the Partitioning, OLAP and Data Mining options
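The metadata import step itself is elided. For a classic transportable-tablespace export like the one shown later in this document, the plug-in is done with imp and a parameter file of roughly this shape (a sketch: /nfs_stage is a placeholder for the real NFS mount point, which is not given here; the datafile names are the ones used throughout this test):

```text
transport_tablespace=y
file=transp_zyhist.dmp
tts_owners=fyt_rent
datafiles=(/nfs_stage/detall0410_BC13_big_01.dbf,
           /nfs_stage/detall0410_BC13_big_02.dbf,
           /nfs_stage/detall0410_BC13_big_03.dbf,
           /nfs_stage/detall0410_BC13_big_04.dbf,
           /nfs_stage/detall0410_BC13_big_05.dbf)
```

Invoked as sysdba, e.g. imp \'sys as sysdba\' parfile=plug_tts.par.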
Tablespace created.
Elapsed: 00:09:15.69
Table altered.
Elapsed: 00:18:18.83
Scenario #8: Import data pump using network link
Both databases need to be at least 10.1.0.3; otherwise you will get ORA-39022.
In this example impdp was run from Linux with RDBMS 10.1.0.3 against a Solaris database running RDBMS 10.1.0.2:
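The elided invocation is of roughly this shape (a sketch; the database link name zycmp is the one used by the insert-as-select scenarios, while the directory and log file names are assumptions):

```shell
impdp fyt_rent NETWORK_LINK=zycmp TABLES=CMP_TAB_DET_ALL_P \
      REMAP_TABLESPACE=TBS_CDT_0410_BC13_BIG:DATA \
      TABLE_EXISTS_ACTION=append CONTENT=data_only \
      DIRECTORY=IMP_DIR LOGFILE=netimp.log
```

Against the 10.1.0.2 source it fails as shown below.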
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bit
Production
With the Partitioning, OLAP and Data Mining options
ORA-39006: internal error
ORA-39022: Database version 10.1.0.2.0 is not supported.
Scenario #9: Comparison test, move table in 8i, compare time with move on 10g
Result
Tablespace created.
Move table
Table altered.
Elapsed: 00:39:31.60
Compare times
TABLE_NAME
--------------------------------------------------------------------------------
CMP_TAB_DET_ALL_P
COUNT(*)
----------
327
BYTES
----------
9797894144
Table dropped.
Elapsed: 00:00:00.09
References:
Note:277650.1 How to Use Export and Import when Transferring Data Across Platforms
This means that the view includes only partitions older than 01/02/2004.
We can use newer partitions to transport them to a side database.
After checking the objects contained in the last tablespace I found that it contains
several related objects. Because a single partition is quite big (29 GB), I chose to
transport the whole tablespace, with all its objects, into a side tablespace that will
be used to perform all the tests.
Prepare a script to create the tables and indexes that sit on the tablespace to
transport
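The script itself is not shown. One common 8i-era way to obtain such DDL is a definitions-only export followed by imp with INDEXFILE, which writes the CREATE INDEX statements (and the table DDL, commented out) to a script instead of executing anything. A sketch, where the file names are assumptions and the remaining tables on the tablespace would be added to the TABLES list:

```shell
# definitions only, no rows
exp fyt_rent tables=CMP_TAB_DET_ALL_P rows=n file=ddl_only.dmp
# write the DDL to a script instead of running it
imp fyt_rent file=ddl_only.dmp indexfile=create_objects.sql full=y
```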
spool off;
/srvprd/zyhist/od36/oradata/detall0410_BC13_big_01.dbf
/srvprd/zyhist/od36/oradata/detall0410_BC13_big_02.dbf
/srvprd/zyhist/od36/oradata/detall0410_BC13_big_03.dbf
/srvprd/zyhist/od36/oradata/detall0410_BC13_big_04.dbf
/srvprd/zyhist/od36/oradata/detall0410_BC13_big_05.dbf
Copy datafiles and metadata dump to working server
Copy the datafiles and the metadata export over to srvtst, to build there an
8.1.7.4 32-bit database to plug the tablespace into.
#!/usr/bin/ksh
# mvdatafiles
v_dest=srvtst:/srvtst/dw/zycmp
rcp -p /srvprd/app01/oracle/scripts/av/compress_test/transp_zyhist.dmp $v_dest
rcp -p /srvprd/zyhist/od36/oradata/detall0410_BC13_big_01.dbf $v_dest
rcp -p /srvprd/zyhist/od36/oradata/detall0410_BC13_big_02.dbf $v_dest
rcp -p /srvprd/zyhist/od36/oradata/detall0410_BC13_big_03.dbf $v_dest
rcp -p /srvprd/zyhist/od36/oradata/detall0410_BC13_big_04.dbf $v_dest
rcp -p /srvprd/zyhist/od36/oradata/detall0410_BC13_big_05.dbf $v_dest
rcp -p /srvprd/app01/oracle/scripts/av/compress_test/ENDcp $v_dest
Create a new 8.1.7.4 database to plug the tablespace into, and start to work.
Check that the tablespace and its tables are accessible, and check that there are no
columns of type NCHAR in the database; if there are, you will need to convert them
to a supported Unicode character set before proceeding to upgrade.
Export Metadata
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit
Production
With the Partitioning and Data Mining options
Export done in IW8ISO8859P8 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TBS_CDT_0410_BC13_BIG ...
. exporting cluster definitions
. exporting table definitions
. . exporting table CMP_TAB_DET_ALL_P
. . exporting table TAB_DET_ALL_P0410_BC13
. . exporting table CMP_CALL_PART_TAB_P
. . exporting table CMP_CONT_FREE_UNIT_TAB_P
. . exporting table CMP_INCOME_DETAIL_TAB_P
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.
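The export above corresponds to a classic exp invocation of roughly this shape (a sketch; it must run as sysdba after making the tablespace read only, and the dump file name transp_zyhist.dmp is the one that appears in the copy script elsewhere in this document):

```shell
exp \'sys as sysdba\' transport_tablespace=y \
    tablespaces=TBS_CDT_0410_BC13_BIG file=transp_zyhist.dmp
```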
System altered.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Note: COMPATIBLE must be set to 10g in order to convert the tablespace datafiles to
Linux format.
Check that you have enough space to hold a full tablespace backup.
If you don't see the datafiles in RMAN when executing the REPORT SCHEMA command, make
the tablespace read write and then read only again, and repeat the REPORT SCHEMA
command; this will update the control file headers.
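The conversion itself (the 54-minute CONVERT step in the summary table) is done with RMAN; a sketch, where the destination format path is an assumption and the exact platform name must be taken from v$transportable_platform for your release:

```sql
convert tablespace TBS_CDT_0410_BC13_BIG
  to platform 'Linux x86 64-bit'
  format '/stage/%N_%f.dbf';
```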
The FTP of 40 GB took from 18:18 until 19:44, 86 minutes in total.
FILE# NAME
---------- ------------------------------------------------------------
7 /srvtst/dw/zycmp/detall0410_BC13_big_05.dbf
8 /srvtst/dw/zycmp/detall0410_BC13_big_04.dbf
9 /srvtst/dw/zycmp/detall0410_BC13_big_03.dbf
10 /srvtst/dw/zycmp/detall0410_BC13_big_02.dbf
11 /srvtst/dw/zycmp/detall0410_BC13_big_01.dbf
User created.
Grant succeeded.
If necessary, change the national character set to match the NCHAR character set of
the export. In this test the NCHAR character set on Solaris was AL16UTF16, while on
Linux it was UTF8:
SQL> alter database NATIONAL CHARACTER SET AL16UTF16;
Database altered.
Enable an 8 KB cache to plug the 8 KB block size tablespace into the 32 KB block
size database:
System altered.
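The elided command is presumably an ALTER SYSTEM on db_8k_cache_size (the size here is an arbitrary example):

```sql
alter system set db_8k_cache_size = 64m;
```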
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bit
Production
With the Partitioning, OLAP and Data Mining options
6 rows selected.
10g Install on Linux AMD64
• Silent install of all recommended patches for 9i-10g (get from Amnon Nissim)
• Review 10g prerequisites according to Note:296665.1: Pre-Install checks for 10g RDBMS on
Linux AMD64/EM64T
Reference Note
Linux srvtst2 2.4.21-15.ELsmp #1 SMP Thu Apr 22 00:09:01 EDT 2004 x86_64 x86_64
x86_64 GNU/Linux
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
The preparation of ASM disks consists of marking unformatted disks to make them
available to ASM.
Request unformatted disks from storage (not raw devices) and check their names using
fdisk -l as root; in this example disks /dev/sdc and /dev/sdd were added:
[root@srvtst2 root]# fdisk -l
Caution: disks need to be made available without any sort of logical definition,
i.e. if fdisk shows them as devices like in this output, oracleasm will still be
able to stamp them as ASM-available disks, but it will be unable to find them when
it needs to create a disk group:
df -ha
ls -l /dev/oracleasm/disks
cat /proc/filesystems
lsmod
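The stamping step itself (elided above) uses the oracleasm script installed by the ASMLib packages; a sketch, where the volume names VOL1/VOL2 and the use of single whole-disk partitions are assumptions:

```shell
# run as root, after the on-boot configuration shown above
/etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
/etc/init.d/oracleasm createdisk VOL2 /dev/sdd1
/etc/init.d/oracleasm listdisks    # lists the stamped disks
```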
10g Install
Execute a custom install; do not create a database.
The install of 10g on Linux is fast and straightforward.
10g Database Creation on Linux.
Create a custom database using dbca.
Choose ASM as the storage, using external redundancy.
Choose 32 KB for the block size.
Starting the ASM instance fails with ORA-29701: unable to connect to Cluster Manager.
This problem was seen after changing the UID and GID of the oracle user to match
those on Solaris.
The change was done after shutting down the database and the ASM instance, but I
forgot to also shut down the ASM manager.
The error was also seen after killing all the old cssd processes, before
restarting the ASM manager:
SQL> startup
ASM instance started
SQL> shutdown
ORA-15100: invalid or missing diskgroup name
impdp
DIRECTORY=DATA_DUMP_DIR
DUMPFILE=expdat%U.dmp
PARALLEL=8
REMAP_TABLESPACE=TBS_CDT_0410_BC13_BIG:DATA
TABLES=CMP_TAB_DET_ALL_P
TABLE_EXISTS_ACTION=append
CONTENT=data_only
LOGFILE=impdat.log
JOB_NAME=imptst
STATUS=60
From another window, invoke impdp and attach to the JOB_NAME you defined in the
impdp session, then execute the KILL_JOB command.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bit
Production
With the Partitioning, OLAP and Data Mining options
Job: IMPTST
Owner: FYT_RENT
Operation: IMPORT
Creator Privs: FALSE
GUID: EFC3E65A66702BE1E030050AD8B41192
Start Time: Thursday, 10 February, 2005 16:21
Mode: TABLE
Instance: zylnx
Max Parallelism: 8
EXPORT Job Parameters:
CLIENT_COMMAND USERID=fyt_rent/******** DIRECTORY=call_det_exp
DUMPFILE=expdat%U.dmp parallel=8 TABLES=CMP_TAB_DET_ALL_P LOGFILE=expdat%U.log
JOB_NAME=partst
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
LOG_FILE_DIRECTORY CALL_DET_EXP
LOG_FILE_NAME expdat%U.log
TABLE_CONSISTENCY 0
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND USERID=fyt_rent/******** DIRECTORY=DATA_DUMP_DIR
DUMPFILE=expdat%U.dmp parallel=8 REMAP_TABLESPACE=TBS_CDT_0410_BC13_BIG:DATA
TABLES=CMP_TAB_DET_ALL_P LOGFILE=impdat.log JOB_NAME=imptst STATUS=60
TABLE_EXISTS_ACTION=append CONTENT=data_only
DATA_ACCESS_METHOD AUTOMATIC
INCLUDE_METADATA 0
LOG_FILE_DIRECTORY DATA_DUMP_DIR
LOG_FILE_NAME impdat.log
SKIP_UNUSABLE_INDEXES 1
TABLE_EXISTS_ACTION APPEND
STREAMS_CONFIGURATION 1
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 8
Job Error Count: 0
Dump File: /srvtst/dw/export/expdat%u.dmp
Dump File: /srvtst/dw/export/expdat01.dmp
Dump File: /srvtst/dw/export/expdat02.dmp
Dump File: /srvtst/dw/export/expdat03.dmp
Dump File: /srvtst/dw/export/expdat04.dmp
Dump File: /srvtst/dw/export/expdat05.dmp
Dump File: /srvtst/dw/export/expdat06.dmp
Dump File: /srvtst/dw/export/expdat07.dmp
Dump File: /srvtst/dw/export/expdat08.dmp
Dump File: /srvtst/dw/export/expdat09.dmp
Worker 1 Status:
State: WORK WAITING
Worker 2 Status:
State: WORK WAITING
Worker 3 Status:
State: WORK WAITING
Worker 4 Status:
State: WORK WAITING
Worker 5 Status:
State: EXECUTING
Object Schema: FYT_RENT
Object Name: CMP_TAB_DET_ALL_P
Object Type: TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Completed Objects: 1
Completed Rows: 11,253,273
Completed Bytes: 26,823,686,304
Percent Done: 100
Worker 6 Status:
State: WORK WAITING
Worker 7 Status:
State: WORK WAITING
Worker 8 Status:
State: WORK WAITING
Import> KILL_JOB
Are you sure you want to kill job imptst? (yes or no) yes
Setting 10g Archive log mode with destination out of ASM
System altered.
System altered.
Database altered.
Database altered.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 464
Next log sequence to archive 466
Current log sequence 466
System altered.
System altered.
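The "System altered" / "Database altered" messages above correspond to a sequence of roughly this shape (a sketch; the recovery area path and size are assumptions, chosen because the listing shows the destination USE_DB_RECOVERY_FILE_DEST, i.e. a flash recovery area on a filesystem outside ASM):

```sql
alter system set db_recovery_file_dest_size = 20g;
alter system set db_recovery_file_dest = '/srvtst2/od02/flash';
-- restart the instance in mount mode, then:
alter database archivelog;
alter database open;
```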