
ORACLE TO SQL SERVER MIGRATION

Moving to cost effective platforms and reducing the total cost of ownership are the main business drivers for the enterprises around the world to move to Microsoft SQL Server.

This case study describes in detail the steps involved in migrating data from Oracle to Microsoft SQL Server and how the specific nuances of the migration were handled by BitWise.


EXECUTIVE SUMMARY


Our insurance customer was developing a Data Warehouse for their various lines of insurance products. The Data Warehouse was built on an Oracle database with Business Objects Data Integrator for ETL. The insurance customer expressed a need to move off the Oracle / Data Integrator platform to a more cost-effective solution. Since the insurance customer had an enterprise licensing agreement for Microsoft products and was already a user of SQL Server, it was proposed that the Data Warehouse be migrated from Oracle to SQL Server.

PROJECT SPECIFICATION
Our insurance customer decided to replace Oracle Database with Microsoft SQL Server for their MIR Data Warehouse. The project scope included migrating all database objects, ETL and data from the existing Oracle database to SQL Server. The following components were identified for migration to SQL Server:

1. DDL - All database objects and structures.
2. Stored Procedures and Functions - All PL/SQL stored procedures and functions to be migrated to SQL Server.
3. Triggers - SQL triggers in Oracle to be migrated to SQL Server.
4. Data - All data that currently resides in the Oracle database to be migrated to the SQL Server database.
5. ETL - Data Integrator ETL that currently works on Oracle to be made compatible with SQL Server or rewritten in SSIS.

TECHNICAL SPECIFICATION


Microsoft provides a basic migration assistant to convert logical objects from Oracle to SQL Server. The migration assistant is not able to convert complex logical objects. The following database objects can be transferred with the help of the SQL Server Migration Assistant for Oracle (SSMA) tool:

1. Views
2. Basic Functions
3. Basic Stored Procedures


After the migration has been done with the help of SSMA, manual changes are required in specific cases. For example, if a view uses a RANK function with an ORDER BY clause, the view may migrate through SSMA without errors yet return different results in Oracle and SQL Server. To get matching results, the view has to be altered manually.

The approach for any logical object conversion would be the following:

1. Use SSMA to do the basic layout conversion.
2. Test the logical object; use the SQL conversion guidelines and the functions and stored procedure guidelines to make any changes necessary.
3. Re-write the procedure logic if the procedure does not produce the expected result.

PHYSICAL OBJECTS MAPPING
The following sections describe how each physical object in the existing Oracle database is mapped to its SQL Server equivalent.

TABLESPACES
A tablespace in Oracle consists of a collection of datafiles, and each table is located in a tablespace. The SQL Server equivalent of a tablespace is the filegroup, so the migration of tablespaces to filegroups is straightforward: tables created in Oracle tablespaces are created in the corresponding filegroups during migration. Appendix A: Example #1. The functions of the TEMP tablespace are handled by tempdb in SQL Server, and the system tablespaces are replaced by the primary filegroup of the SQL Server database.
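As an illustration, a minimal sketch of how a tablespace's role might be reproduced as a filegroup; the database, filegroup, and file names here are assumptions rather than the project's actual objects:

-- Create a filegroup and add a data file to it
ALTER DATABASE MIR_DW ADD FILEGROUP DW_DATA_01;

ALTER DATABASE MIR_DW
ADD FILE
(
  NAME = 'DW_DATA_01_F1',
  FILENAME = 'D:\Data\DW_DATA_01_F1.ndf',
  SIZE = 10GB,
  FILEGROWTH = 1GB
) TO FILEGROUP DW_DATA_01;

-- Tables that lived in the Oracle tablespace are then created ON this filegroup
CREATE TABLE dbo.SAMPLE_TABLE
(
  SAMPLE_KEY NUMERIC(28,7) NOT NULL
) ON DW_DATA_01;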

SCHEMAS
Users and schemas in Oracle will be maintained as-is in SQL Server.

TABLES
Tables will be created in SQL Server in the respective filegroups during migration. The tablespace in which a table is created can be determined from the Oracle metadata or from the table scripts; when migrated to SQL Server, the table will be created in the corresponding filegroup. The collection of data files per filegroup and their physical location will be kept the same as in Oracle.


For example, if a tablespace named DW_DATA_01 has three data files located on the data drive, the same structure is repeated in the SQL Server environment.

From our experience, we have come to understand row-level compression in SQL Server well. In SQL Server 2005, tables have to be created with the vardecimal storage option enabled; this setting enables data compression. For example, for a fact table with 50 columns of data type numeric(28,7) and 250 million rows, without row-level compression (the only compression supported in SQL Server 2005) the table would take up roughly 250 GB; with the compression it takes roughly 100 GB.
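A minimal sketch of enabling the vardecimal storage format in SQL Server 2005 (SP2 or later); the database and table names are illustrative:

-- Enable vardecimal storage at the database level
EXEC sp_db_vardecimal_storage_format 'MIR_DW', 'ON';

-- Enable it for an individual fact table
EXEC sp_tableoption 'dbo.FACT_PREMIUM', 'vardecimal storage format', 1;

-- Estimate the space saving before committing to the change
EXEC sp_estimated_rowsize_reduction_for_vardecimal 'dbo.FACT_PREMIUM';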

SQL Server 2008 has the capability to further reduce the data size of such large tables. The compression capability is enhanced to the page level, which means that physical data pages are compressed; the same table would then take roughly 60 GB.

Defaults in Oracle will be converted to defaults in SQL Server, primary keys to primary keys, unique keys to unique keys, foreign keys to foreign keys, NOT NULL constraints to NOT NULL constraints, and check constraints to check constraints. Defaults, primary keys, unique keys, and foreign keys will preserve their names.

PARTITIONS
Partitioned tables in Oracle will be maintained as partitioned tables in SQL Server. SQL Server supports range and list partition types and allows partitioning only on a single column. Sub-partitions are not supported by SQL Server.

Oracle To SQL Server Partitions Mapping:

Oracle Partition Type    Microsoft SQL Server Partition Type
List                     List
Hash                     Range
Range                    Range
Composite                Not supported. As a workaround, the composite key is evaluated into a single key, which is then partitioned in SQL Server.
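For example, a range partition on a monthly key might be set up as follows; this is a simplified sketch and the boundary values, object names, and filegroup are assumptions:

CREATE PARTITION FUNCTION pf_acct_month (int)
AS RANGE RIGHT FOR VALUES (20080101, 20080201, 20080301);

CREATE PARTITION SCHEME ps_acct_month
AS PARTITION pf_acct_month ALL TO ([DW_DATA_01]);

CREATE TABLE dbo.FACT_SAMPLE
(
  ACCT_MONTH_KEY int NOT NULL,
  PREMIUM_AMT numeric(28,7) NULL
) ON ps_acct_month (ACCT_MONTH_KEY);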


INDEXES
SQL Server supports clustered and non-clustered indexes (unique or non-unique). All bitmap indexes will be created as non-clustered indexes on the respective filegroups. Some additional indexes will have to be created to tune performance. SQL Server can create indexes with an INCLUDE clause, which adds extra columns to the index to boost performance; these columns are not part of the index key, but they cover the column list of a SELECT query.
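A short sketch of a covering index with an INCLUDE clause; the key and included columns chosen here are hypothetical:

CREATE NONCLUSTERED INDEX ix_bi_dus071_claim
ON dbo.BI_DUS071 (CLAIM, INCDTE)
INCLUDE (ACCSTE, CLMOCC)
ON [DW_DATA_01];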

Oracle To SQL Server Index Type Mappings:

Oracle Index Type        Microsoft SQL Server Index Type
Unique Index             Unique Non-Clustered Index
Bitmap Index             Non-Clustered Index
Bitmap Join Index        Non-Clustered Index. This can also be simulated as a non-clustered index on an indexed view.
Index-Organized Table    Clustered Index. (Only one per table; by default this is the primary-key index.)
Function-Based Index     Not supported. The workaround is to add a computed column based on the function and create an index on it.

All the rest of the indexes will be converted to non-clustered indexes.

SQL Server provides options like FILLFACTOR and PAD_INDEX, which we have used effectively in our projects. An index's fill-factor specifies the percentage to which the index data pages on disk are filled when the index is first created. An index fill-factor of 100% causes each index data page to be completely filled. This is ideal from a disk capacity standpoint, as there is no wasted space with data pages that are not fully allocated. However, it is not ideal from a SQL Server performance perspective for data updates (inserts, updates and deletes). If one creates a clustered index with a fill-factor of 100%, every time a record is inserted, deleted or even modified, page splits can occur because there is likely no room on the existing index data page to write the change. Page splits increase IO and can dramatically degrade SQL Server performance.


Appendix A: Example #3

It is very easy to react by simply applying a very low fill-factor of say, 50%, to reduce page splits in a highly transactional system. The problem with this approach is that by doing this, one has in effect doubled the amount of data required to read and cache index information for a table. So in improving write performance, one has potentially degraded read performance. The trick is to find the right balance between read performance and write performance by optimizing the fill-factor settings for each index.

When troubleshooting database performance problems, even the most careful scrutiny of stored procedures, index placement and database blocking can be overshadowed by incorrect fill-factor settings. Paying attention to this one simple index configuration option can significantly increase database performance by dramatically reducing disk IO.

LOGICAL OBJECTS MAPPING


SEQUENCES
Sequences can be incorporated into the SQL Server tables themselves by creating an identity column or by using an INSTEAD OF INSERT trigger that generates the value.

Appendix A : Example #2

This would eventually change all the procedures/ETL processes accessing the table. Sequences can also be incorporated by using an INSTEAD OF INSERT trigger. The trigger would be a row-level trigger implemented with a cursor over the inserted table; on every iteration the value of the surrogate key is incremented and then passed on to be inserted into the table.

TRIGGERS
Oracle Triggers will be converted to SQL Server triggers based on the following rules:

! BEFORE triggers will be converted to INSTEAD OF triggers.
! AFTER triggers will be converted to AFTER triggers.
! Row-level triggers will be emulated using cursor processing.
! Multiple triggers defined on the same operation will be combined into one trigger.
Appendix A: Example #4


VIEWS
Queries in the views will be rewritten and the views would be created accordingly. The only exception is the materialized view in Oracle, which becomes an ordinary table in SQL Server.

PACKAGES
SQL Server does not support packages. Packages in Oracle will be converted to SQL Server Stored Procedures and functions which would follow the same naming convention.

FUNCTIONS
Functions in Oracle will be converted to functions in SQL Server. A SQL Server function does not allow INSERT, UPDATE or DELETE statements, so functions containing inserts, updates or deletes will be converted to stored procedures. A function containing an output parameter will also be replaced with a stored procedure.

Scalar Functions will be converted to Scalar Functions

Table Valued Functions will be converted to table valued functions.

STORED PROCEDURES
Some of the stored procedures are encrypted (wrapped); these procedures need to be decrypted, then created and encrypted in SQL Server. Some procedures use external libraries; these will be created as SQL Server assemblies or SQL Server extended stored procedures. All other procedures will be converted to SQL Server syntax and re-created.

Appendix A: Example #5

EXCEPTION MANAGEMENT


Exception Raising

The Oracle exception raising model comprises the following features:


! The SELECT INTO statement causes an exception if not exactly one row is returned.
! The RAISE statement can raise any exception, including system errors.
! User-defined exceptions can be named and raised by name.
! The RAISE_APPLICATION_ERROR procedure can generate exceptions with a custom number and message.
If the SELECT statement can return zero, one, or many rows, it makes sense to check the number of rows by using the @@ROWCOUNT function. Its value can be used to emulate any logic that was implemented in Oracle by using the TOO_MANY_ROWS or NO_DATA_FOUND exceptions. Normally, the SELECT INTO statement should return only one row, so in most cases one would not need to emulate this type of exception raising.

Appendix A: Example #6

Also, PL/SQL programs can sometimes use user-defined exceptions to provide business logic. These exceptions are declared in the PL/SQL block's declaration section. In Transact-SQL, one can replace that behavior by using flags or custom error numbers.

Appendix A: Example #7

If the user-defined exception is associated with some error number by using pragma EXCEPTION_INIT, one can handle the system error in the CATCH block as described later. To emulate the raise_application_error procedure and the system predefined exception, one can use the RAISERROR statement with a custom error number and message. Also, change the application logic in that case to support SQL Server 2005 error numbers.

Note that SQL Server 2005 treats exceptions with a severity of less than 11 as informational messages. To interrupt execution and pass control to a CATCH block, the exception severity must be at least 11. (In most cases one should use a severity level of 16.)

Exception Handling

Oracle provides the following exception-handling features:

! The EXCEPTION block
! The WHEN ... THEN block
! The SQLCODE and SQLERRM system functions
! Exception re-raising


Transact-SQL implements error handling with a TRY...CATCH construct. To provide exception handling, place the statements to be tried into a BEGIN TRY ... END TRY block, and place the exception handler itself into a BEGIN CATCH ... END CATCH block. TRY...CATCH blocks can also be nested.

To recognize the exception (the WHEN ... THEN functionality), one can use the following system functions:

! error_number
! error_line
! error_procedure
! error_severity
! error_state
! error_message


One can use the error_number and error_message functions instead of the SQLCODE and SQLERRM Oracle functions. Note that error messages and numbers are different in Oracle and SQL Server, so they should be translated during migration.

Appendix A: Example #8

Unfortunately, SQL Server 2005 does not support exception re-raising. If the exception is not handled, it can be passed to the calling block by using the RAISERROR statement with a custom error number and appropriate message.
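A hedged sketch of passing an unhandled error up to the caller; the procedure call inside the TRY block is only an illustration:

BEGIN TRY
  EXEC dbo.DUP_007 @p_acct_mth_key = 20080101;
END TRY
BEGIN CATCH
  DECLARE @msg nvarchar(2048);
  SET @msg = ERROR_MESSAGE();
  -- emulate re-raising by passing the original message text to the caller
  RAISERROR (@msg, 16, 1);
END CATCH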

IMPLICIT TRANSACTIONS
When a connection is operating in implicit transaction mode, the instance of the SQL Server Database Engine automatically starts a new transaction after the current transaction is committed or rolled back. One does nothing to delineate the start of a transaction; one only commits or rolls back each transaction. Implicit transaction mode generates a continuous chain of transactions. By enabling this connection setting, the connection behaves the same as an Oracle connection.

After implicit transaction mode has been set on for a connection, the instance of the Database Engine automatically starts a transaction when it first executes any of these statements:


ALTER TABLE, CREATE, DELETE, DROP, FETCH, GRANT, INSERT, OPEN, REVOKE, SELECT, TRUNCATE TABLE, UPDATE

The transaction remains in effect until one issues a COMMIT or ROLLBACK statement. After the first transaction is committed or rolled back, the instance of the Database Engine automatically starts a new transaction the next time any of these statements is executed by the connection. The instance keeps generating a chain of implicit transactions until implicit transaction mode is turned off.

Implicit transaction mode is set either using the Transact-SQL SET statement, or through database API functions and methods.
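A minimal sketch of the setting; the UPDATE statement is illustrative:

SET IMPLICIT_TRANSACTIONS ON;

-- the first DML statement silently starts a transaction, as in Oracle
UPDATE dbo.BI_DUS007 SET DW_EXCL_RSN = NULL WHERE DW_DUS007_KEY = 1;
COMMIT;  -- ends the implicit transaction

SET IMPLICIT_TRANSACTIONS OFF;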

SQL CONVERSION
! Outer joins of the (+) form on Oracle will be converted to ANSI-standard outer joins on SQL Server (a sketch appears at the end of this list).
! Hints on Oracle will be converted to hints on SQL Server. Currently supported hints include FIRST_ROWS, INDEX (tablename indexname), APPEND, MERGE_AJ, MERGE_SJ, and MERGE(tablename). Hints are used to boost performance; if a hint in Oracle does not have an equivalent, the query would be tuned by creating a specific index to boost performance. Appendix A: Example #9

! Order By Clause: By default, Oracle returns ORDER BY results with the NULLs at the bottom unless specified otherwise; SQL Server returns the ORDER BY results in exactly the opposite manner. All queries ordering by a column that contains NULLs will have the NULLs defaulted to a higher value so that the result matches Oracle (see the sketch at the end of this list).


! Numeric parameters with unspecified length and precision will be converted to numeric(38, 10).
! System functions will be converted either to Microsoft SQL Server system functions or to user-defined functions from the provided system function library. For example, the Oracle function GREATEST is not present in SQL Server; it will be rewritten as a SQL Server user-defined function (see the sketch at the end of this list).

! IF-ELSIF-ELSIF-ELSE-END IF statements will be converted to nested IF statements.
! LOOP statements (with EXIT or EXIT WHEN) will be converted to WHILE (1=1) statements with a BREAK statement. Appendix A: Example #10

! Numeric FOR loop (including optional REVERSE keyword) will be converted to WHILE statement.
Appendix A: Example #11

! Cursor conversion
Cursor attributes will be converted as follows:

cursor_name%NOTFOUND  ->  @@FETCH_STATUS = -1
cursor_name%FOUND     ->  @@FETCH_STATUS = 0
cursor_name%ISOPEN    ->  CURSOR_STATUS('local', 'cursor_name') = 1
cursor_name%ROWCOUNT  ->  @v_cursor_name_rowcount, declared and incremented after each fetch operation

Appendix A: Example #12

Cursors with parameters will be converted to multiple cursors. Appendix A: Example #13
A cursor FOR loop will be converted to a cursor with local variables.
CLOSE cursor_name will be converted to CLOSE cursor_name plus DEALLOCATE cursor_name. Appendix A: Example #14

! Variable declaration conversion:


Static variable declarations will be converted to variable declarations. Variable declarations including %TYPE have the column data type resolved at conversion time. Appendix A: Example #15


! Transaction management conversion:


COMMIT and ROLLBACK statements on Oracle will be converted to the corresponding COMMIT TRAN and ROLLBACK TRAN statements on SQL Server. Because in Oracle transactions are started automatically when a DML operation is performed, in SQL Server we will either allow implicit transactions by using the SET IMPLICIT_TRANSACTIONS ON statement or use explicit BEGIN TRAN and COMMIT TRAN. SAVEPOINT will be converted to SAVE TRANSACTION.
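To make a few of the conversions above concrete, the following is a hedged sketch illustrating the outer join, NULL ordering, and GREATEST rewrites; the tables, columns, and the fn_greatest name are assumptions, not the project's actual objects:

-- 1. Oracle (+) outer join rewritten as an ANSI outer join
--    Oracle:  select t.treaty, d.actdte from bi_dus007 t, bi_dus006 d
--             where t.treaty = d.treaty (+);
SELECT t.treaty, d.actdte
FROM dbo.BI_DUS007 AS t
LEFT OUTER JOIN dbo.BI_DUS006 AS d
  ON t.treaty = d.treaty;

-- 2. Forcing Oracle-style NULL ordering (NULLs last on an ascending sort)
SELECT claim, incdte
FROM dbo.BI_DUS071
ORDER BY CASE WHEN incdte IS NULL THEN 1 ELSE 0 END, incdte;
GO

-- 3. A user-defined replacement for the Oracle GREATEST function (two arguments)
CREATE FUNCTION dbo.fn_greatest (@a numeric(28,7), @b numeric(28,7))
RETURNS numeric(28,7)
AS
BEGIN
  -- Oracle GREATEST returns NULL when any argument is NULL
  IF @a IS NULL OR @b IS NULL RETURN NULL;
  RETURN CASE WHEN @a >= @b THEN @a ELSE @b END;
END;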

DATABASE TROUBLESHOOTING AND PERFORMANCE MONITORING UTILITIES


Following are the utilities that have helped us monitor SQL Server and troubleshoot performance in previous Oracle to SQL Server migration projects.

SP_WHO2

The sp_who2 procedure allows users to view current activity on the database. This command provides a view into several system tables (e.g., syslocks, sysprocesses, etc.). sp_who2 returns the following information:

! Spid - The system process ID.
! Status - The status of the process (e.g., RUNNABLE, SLEEPING).
! Loginame - Login name of the user.
! Hostname - Machine name of the user.
! Blk - If the process is being blocked, this value is the SPID of the blocking process.
! DBName - Name of the database the process is using.
! Cmd - The command currently being executed (e.g., SELECT, INSERT).
! CPUTime - Total CPU time the process has taken.
! DiskIO - Total amount of disk reads for the process.
! LastBatch - Last time a client called a procedure or executed a query.
! ProgramName - Application that initiated the connection (e.g., Visual Basic, MS SQL Query Analyzer).


SP_MONITOR

Displays statistics about SQL Server. sp_monitor returns:

Column name        Description
last_run           Time sp_monitor was last run
current_run        Time sp_monitor is being run
seconds            Number of elapsed seconds since sp_monitor was last run
cpu_busy           Number of seconds that the server computer's CPU has been doing SQL Server work
io_busy            Number of seconds that SQL Server has spent doing input and output operations
idle               Number of seconds that SQL Server has been idle
packets_received   Number of input packets read by SQL Server
packets_sent       Number of output packets written by SQL Server
packet_errors      Number of errors encountered by SQL Server while reading and writing packets
total_read         Number of reads by SQL Server
total_write        Number of writes by SQL Server
total_errors       Number of errors encountered by SQL Server while reading and writing
connections        Number of logins or attempted logins to SQL Server

XP_FIXEDDRIVES

Displays the amount of free space left on every drive of the server.

DBCC INPUTBUFFER (SPID)

Displays the statement being executed by the given SPID, where SPID is the SQL Server process ID.


SQL Server Profiler

SQL Server Profiler shows how SQL Server resolves queries internally. This allows administrators to see exactly what Transact-SQL statements or Multi-Dimensional Expressions are submitted to the server and how the server accesses the database or cube to return result sets.

Using SQL Server Profiler, one can do the following:

! Create a trace that is based on a reusable template
! Watch the trace results as the trace runs
! Store the trace results in a table
! Start, stop, pause, and modify the trace results as necessary
! Replay the trace results

We have used SQL Server Profiler to monitor only the events of interest. If traces are becoming too large, one can filter them based on the information one wants, so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process, and can cause the trace file or trace table to grow very large, especially when the monitoring process takes place over a long period of time.

SQL Server Database Engine Tuning Advisor

Database Engine Tuning Advisor (DTA) in Microsoft SQL Server 2005 is a powerful tool that can assist DBAs in selecting an appropriate physical design for a SQL Server installation.

DTA can be used to tune an individual SQL statement that is performing poorly, or to tune a large workload of queries and updates. DTA offers assistance both to novice users as well as to experienced DBAs. The simplest use of this tool requires the user to point DTA to one or more databases and to a workload of SQL queries and updates. DTA returns a recommendation, which is a list of suggested physical design changes (for example, create/drop index) for optimizing the performance of the given workload. For more advanced users, DTA exposes several customization options such as:

! Which physical design features to recommend (indexes only, indexes and indexed views, and so on).
! Which tables to tune (only selected tables are tuned).


! Bound on the total storage space that can be consumed by the database(s) inclusive of indexes and
indexed views.

! Partitioning options (no partitioning, aligned partitioning for manageability, partitioning purely for
performance).

! Control over existing physical design structures, such as to keep all existing structures or to keep all
existing clustered indexes.

! The ability to partially specify the physical design (for example, the DBA wants a particular clustered index on a table, but allows DTA to pick other indexes).

DTA is designed to keep the query optimizer "in the loop" when suggesting physical design changes. There are two important benefits of this: (1) if DTA recommends an index for a query, the index, if implemented, will very likely be used by the query optimizer to answer that query, and (2) the DTA recommendation is cost-based. In particular, the design goal is to find the physical design with the lowest optimizer-estimated cost for the given workload. Note that if the workload contains insert, update, or delete statements, DTA automatically takes into account the cost of updating the physical design structures.

DTA Usage Scenarios

! Troubleshooting the performance of a problem query
! Tuning a workload of queries and updates
! Performing an exploratory what-if analysis
! Tuning a production server
! Incorporating manageability requirements

! Managing storage space


Database Administration

The cost of database administration in SQL Server is lower than in Oracle. Disk storage, on the other hand, is on the higher side: because SQL Server does not support compression of backups and data, the disk space required is higher.

As in Oracle, SQL Server supports full, differential and transaction log backups.


The restoration of a database is straightforward.

SQL Server can shrink the log files and data files and return the unused space back to the OS.

Following was the backup strategy that we adopted for one of our clients:

1. Recovery model set to Full, which allows us to take transaction log backups if required.
2. Every Sunday, a full backup and a database shrink activity.
3. Every day, a differential backup. As the ETL that processes the data runs until 7:00 am daily, we take the differential backup after that.

The backups are cleaned up after a month's interval.
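A hedged sketch of these backups in Transact-SQL; the database name and backup paths are assumptions:

-- Sunday: full backup
BACKUP DATABASE MIR_DW
TO DISK = 'E:\Backup\MIR_DW_full.bak'
WITH INIT;

-- Weekdays, after the 7:00 am ETL window: differential backup
BACKUP DATABASE MIR_DW
TO DISK = 'E:\Backup\MIR_DW_diff.bak'
WITH DIFFERENTIAL, INIT;

-- Transaction log backup, possible because the recovery model is FULL
BACKUP LOG MIR_DW
TO DISK = 'E:\Backup\MIR_DW_log.trn';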

Data Types Mapping

The following table contains the default data type mapping.

Oracle Data Type                          Default SQL Server 2005 Data Type
bfile                                     varbinary(max)
binary_double                             float[53]
binary_float                              float[53]
blob                                      varbinary(max)
char                                      char
char varying[*..*]                        varchar[*]
char[*..*]                                char[*]
character                                 char
character varying[*..*]                   varchar[*]
character[*..*]                           char[*]
clob                                      varchar(max)
date                                      datetime
dec                                       dec[38][0]
dec[*..*]                                 dec[*][0]
dec[*..*][*..*]                           dec[*][*]
decimal                                   decimal[38][0]
decimal[*..*]                             decimal[*][0]
decimal[*..*][*..*]                       decimal[*][*]
double precision                          float[53]
float                                     float[53]
float[*..53]                              float[*]
float[54..*]                              float[53]
int                                       int
integer                                   int
long                                      varchar(max)
long raw                                  varbinary(max)
long raw[*..8000]                         varbinary[*]
long raw[8001..*]                         varbinary(max)
long varchar                              varchar(max)
long[*..8000]                             varchar[*]
long[8001..*]                             varchar(max)
national char                             nchar
national char varying[*..*]               nvarchar[*]
national char[*..*]                       nchar[*]
national character                        nchar
national character varying[*..*]          nvarchar[*]
national character[*..*]                  nchar[*]
nchar                                     nchar
nchar[*]                                  nchar[*]
nclob                                     nvarchar(max)
number                                    float[53]
number[*..*]                              numeric[*]
number[*..*][*..*]                        numeric[*][*]
numeric                                   numeric
numeric[*..*]                             numeric[*]
numeric[*..*][*..*]                       numeric[*][*]
nvarchar2[*..*]                           nvarchar[*]
raw[*..*]                                 varbinary[*]
real                                      float[53]
rowid                                     uniqueidentifier
smallint                                  smallint
timestamp                                 datetime
timestamp with local time zone            datetime
timestamp with local time zone[*..*]      datetime
timestamp with time zone                  datetime
timestamp with time zone[*..*]            datetime
timestamp[*..*]                           datetime
urowid                                    uniqueidentifier
urowid[*..*]                              uniqueidentifier
varchar[*..*]                             varchar[*]
varchar2[*..*]                            varchar[*]
xmltype                                   xml

Data Migration

After the physical schema has been created, the next step is to convert the data that currently resides in the Oracle database into the SQL Server schema. The following steps will be followed in this process:

1. Data load will be done using SSIS (the built-in ETL tool shipped with SQL Server 2005). This approach requires some amount of development effort but is the most reliable approach. The benefit of this approach is that it provides greater control over data migration. It provides a mechanism for logging errors and exception handling; these error logs will then be reviewed and necessary changes will be made to accommodate such data.


2. After the data is loaded successfully, we run validation checks comparing Oracle and SQL Server to confirm data integrity and conclude the data migration (a sketch of such a check follows this list).
3. Database maintenance tasks will run on SQL Server to rebuild indexes, update statistics and shrink the log file.
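A minimal sketch of the kind of validation check fired against both environments; the linked server name ORACLE_MIR is an assumption:

-- Row count on the SQL Server side
SELECT COUNT(*) AS sql_server_rows FROM dbo.BI_DUS007;

-- Row count on the Oracle side, fetched over a linked server for comparison
SELECT COUNT(*) AS oracle_rows
FROM OPENQUERY(ORACLE_MIR, 'SELECT dw_dus007_key FROM bi_dus007');

-- A simple aggregate comparison on a measure column
SELECT SUM(premp) AS sql_server_premp_total FROM dbo.BI_DUS007;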

Change Control

Multiple groups/users were working on the production instance of the data warehouse in Oracle, and these groups/users were deploying their changes to the same production database regularly. These changes would have caused redundant work in the migration.

To handle these changes effectively, we implemented a change control mechanism consisting of two control points, described below:

! The first control point would be to create snapshot of the current database, to have the freeze
version of the database to create the SQL Server Data warehouse onto it.

! The action items from the first control point to the second/final control point (at go live stage)
would be to have list of changes occurred in the production database till that date.

! The corresponding development of the SQL Server data warehouse and SQL Server Integration Services packages would sync the changes from the first control point to the second/final control point.

Note - Our insurance customer provided us scripts for the changes they made on production between the first control point and the next subsequent control point.

ETL Changes and Enhancement

The most challenging task in the migration activity was the ETL changes. Our insurance customer uses Data Integrator, an ETL tool by Business Objects, to load data from various sources into the Data Warehouse. Following were the changes that had to be done to all existing ETLs:

! Our insurance customer was calling some Oracle functions such as LPAD and date-add functions that were not directly present in SQL Server. These queries were rewritten to send the output in the padded fashion (see the sketch after these bullets). We tried to do the same through DI functions, but it did not generate pushdown SQL (SQL that is fired directly onto the database) and we ran into performance issues.


! Decimal and character conversion was done implicitly through DI with Oracle as the back-end, but this compatibility was not the same with SQL Server. Hence we had to write a DI function that does this conversion explicitly.

Going forward, our insurance customer asked us to develop ETL using the same data model for the new modules that we introduced. These ETLs were developed using SSIS and are giving accurate results and better throughput than DI. The development of these SSIS packages was done to fit in the new modules, hence these packages were called from DI.
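Before moving on to the deployment steps below, here is a hedged sketch of the LPAD and date-arithmetic rewrites mentioned in the first bullet above; the column names and padding width are hypothetical:

-- Oracle:      SELECT LPAD(claim, 10, '0') FROM bi_dus071;
-- SQL Server:  pad CLAIM to 10 characters with leading zeros
SELECT RIGHT(REPLICATE('0', 10) + CLAIM, 10) AS claim_padded
FROM dbo.BI_DUS071;

-- Oracle date arithmetic such as ADD_MONTHS(dt, 1) becomes DATEADD
SELECT DATEADD(month, 1, GETDATE()) AS next_month;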

1. Create credentials and proxy accounts in SQL Server to allow the SSIS packages to be deployed and run. This step has to be done if the package is not called by the Windows user.
2. The new package was deployed on the SQL Server.
3. A job was created to call this package.
4. This job was called from the DI workflow.
5. Once the job executed successfully, control was returned to DI and the next dataflow was called.
6. Data validation queries were fired to see whether the output in Oracle matched the output in SQL Server. Data validation queries were also fired to check the output of premium calculations against the source system.
7. It took some time for the SQL Server ETL environment to give correct results and optimized performance. After this point the Oracle environment was retired and the SQL Server environment was used.
8. Changes were made in the connection strings of the BO universe to point them to the SQL Server environment.


APPENDIX
EXAMPLE #1: TABLESPACE
Oracle Script :
CREATE TABLE BI_DUS071
(
  DW_DUS071_KEY NUMBER(28,7) NOT NULL ,
  CLAIM VARCHAR2(8 BYTE) NULL ,
  INCDTE NUMBER(8) NULL ,
  ACCSTE VARCHAR2(2 BYTE) NULL ,
  CLMOCC VARCHAR2(1 BYTE) NULL ,
  FIELDD VARCHAR2(5 BYTE) NULL ,
  TRTYCO VARCHAR2(2 BYTE) NULL ,
  . . .
)
TABLESPACE WRK_DATA
NOLOGGING
PCTFREE 2
PCTUSED 0
INITRANS 1
MAXTRANS 255
STORAGE
(
  INITIAL 81920
  MINEXTENTS 1
  MAXEXTENTS 2147483645
  BUFFER_POOL DEFAULT
);

SQL Server Script:


CREATE TABLE BI_DUS071
(
  DW_DUS071_KEY NUMERIC(28,7) NOT NULL ,
  CLAIM VARCHAR(8) NULL ,
  INCDTE NUMERIC(8) NULL ,


  ACCSTE VARCHAR(2) NULL ,
  CLMOCC VARCHAR(1) NULL ,
  FIELDD VARCHAR(5) NULL ,
  TRTYCO VARCHAR(2) NULL ,
  . . .
)
ON [WRK_DATA]

EXAMPLE #2: SEQUENCES


Oracle Script :
CREATE SEQUENCE DW_F_PREM_ID_SEQ INCREMENT BY 1 START WITH 100 NOCYCLE ORDER;

SQL Server Script: Identity Column in Table


CREATE TABLE [dbo].[FACT_PREMIUM]
(
  [DW_F_PREM_ID] [numeric](38, 0) IDENTITY(100,1) NOT NULL,
  . . .
)
ON [DW_DATA_01]

EXAMPLE #3: INDEXES - FILLFACTOR & PAD_INDEX


SQL Server Script:
CREATE CLUSTERED INDEX [ci_Product_ProductId]
ON [dbo].[Product] ([ProductId])
WITH (PAD_INDEX = ON, FILLFACTOR = 80)
ON [DW_DATA_01]


EXAMPLE #4: TRIGGERS

Oracle
create trigger tr_bi_rank_table
before insert on rank_table
for each row
begin
  select seq_rank_id.nextval
    into :new.r_id
    from sys.dual;
end;

Microsoft SQL Server


create trigger instead_of_insert_on_rank_table on rank_table
instead of insert
as
/* begin of trigger implementation */
set nocount on

/* column variables declaration */
declare @column_new_value____1 numeric,
        @column_new_value____2 varchar(50),
        @column_new_value____3 numeric,
        @column_new_value____4 char(1)

/* iterate for each row from inserted/deleted tables */
declare ForEachInsertedRowTriggerCursor cursor local forward_only read_only for
  select RANK, RANK_NAME, R_ID, R_SN from inserted

open ForEachInsertedRowTriggerCursor
fetch next from ForEachInsertedRowTriggerCursor
  into @column_new_value____1, @column_new_value____2, @column_new_value____3, @column_new_value____4
while @@fetch_status = 0
begin


  /* Oracle trigger tr_bi_rank_table implementation: begin */
  begin
    select @column_new_value____3 = max(R_ID) + 1 from rank_table
    select @column_new_value____3 = isnull(@column_new_value____3, 1)
  end
  /* Oracle trigger tr_bi_rank_table implementation: end */

  /* DML-operation emulation */
  insert into rank_table (RANK, RANK_NAME, R_ID, R_SN)
  values (@column_new_value____1, @column_new_value____2, @column_new_value____3, @column_new_value____4)

  fetch next from ForEachInsertedRowTriggerCursor
    into @column_new_value____1, @column_new_value____2, @column_new_value____3, @column_new_value____4
end
close ForEachInsertedRowTriggerCursor
deallocate ForEachInsertedRowTriggerCursor
/* end of trigger implementation */


EXAMPLE #5: STORED PROCEDURES


Oracle Script :
CREATE OR REPLACE PROCEDURE DUP_007 (p_acct_mth_key IN Number)
AS
BEGIN
  update bi_dus007
     set DW_EXCL_RSN = 'Dup Found in DUS006_S'
   where dw_dus007_key in
   (
     select dw_dus007_key
       from BI_DUS007 m
      where exists
      (
        select 1
          from BI_DUS006_s d, win_dwxp050 w
         where d.dw_dwxp050_key = w.dw_dwxp050_key
           and w.dw_src_op_fg = 'I'
           and d.dw_dwxp050_key > 1
           and m.actdte = d.actdte
           and decode(m.co, 17,'PIC1', 20,'URL1', 40,'MTH1', 16,'UIC1') = w.co
           and m.actunit = d.actunit
           and m.ursretr = d.ursretr
           and m.treaty = d.treaty
           and m.fielda = w.fielda
           and m.fieldb = w.fieldb
           and m.fieldc = w.fieldc
           and m.aslob = w.aslob
           and m.clmocc = w.clmocc
           and m.prmste = w.prmste
           -- and m.certif = d.certif
      )


        and m.dac = 'C'
        and m.actdte = p_acct_mth_key
        and premp = 0
   );

  update bi_dus007
     set DW_EXCL_RSN = 'Dup Found in DUS006'
   where dw_dus007_key in
   (

     select dw_dus007_key
       from BI_DUS007 m
      where exists
      (
        select 1
          from BI_DUS006 d
         where m.actdte = d.actdte
           and m.co = d.co
           and m.actunit = d.actunit
           and m.ursretr = d.ursretr
           and m.treaty = d.treaty
           and m.fielda = d.fielda
           and m.fieldb = d.fieldb
           and m.fieldc = d.fieldc
           and m.aslob = d.aslob
           and m.clmocc = d.clmocc
           and m.prmste = d.prmste
           -- and m.certif = d.certif
      )
        and m.dac = 'C'
        and m.actdte = p_acct_mth_key
        and premp = 0
   );
  commit;
END;


SQL Server Script :


CREATE PROCEDURE dbo.DUP_007
  @p_acct_mth_key numeric
AS
/*
* Generated by SQL Server Migration Assistant for Oracle.
* Contact ora2sql@microsoft.com or visit http://www.microsoft.com/sql/migration for more information.
*/
BEGIN
  BEGIN TRY
    BEGIN TRANSACTION

    UPDATE dbo.BI_DUS007
       SET DW_EXCL_RSN = 'Dup Found in DUS006_S'
     WHERE BI_DUS007.DW_DUS007_KEY IN
     (
       SELECT m.DW_DUS007_KEY
         FROM dbo.BI_DUS007 AS m
        WHERE EXISTS
        (
          SELECT 1 AS expr
            FROM dbo.BI_DUS006_S AS d, dbo.WIN_DWXP050 AS w
           WHERE d.DW_DWXP050_KEY = w.DW_DWXP050_KEY
             AND w.DW_SRC_OP_FG = 'I'
             AND d.DW_DWXP050_KEY > 1
             AND m.ACTDTE = d.ACTDTE
             AND CASE m.CO
                   WHEN 17 THEN 'PIC1'
                   WHEN 20 THEN 'URL1'
                   WHEN 40 THEN 'MTH1'
                   WHEN 16 THEN 'UIC1'


                 END = w.CO
             AND m.ACTUNIT = d.ACTUNIT
             AND m.URSRETR = d.URSRETR
             AND m.TREATY = d.TREATY
             AND m.FIELDA = w.FIELDA
             AND m.FIELDB = w.FIELDB
             AND m.FIELDC = w.FIELDC
             AND m.ASLOB = w.ASLOB
             AND m.CLMOCC = w.CLMOCC
             AND m.PRMSTE = w.PRMSTE
             /* AND m.CERTIF = d.CERTIF */
        )
          AND m.DAC = 'C'
          AND m.ACTDTE = @p_acct_mth_key
          AND m.PREMP = 0
     )

    UPDATE dbo.BI_DUS007
       SET DW_EXCL_RSN = 'Dup Found in DUS006'
     WHERE BI_DUS007.DW_DUS007_KEY IN
     (
       SELECT m.DW_DUS007_KEY
         FROM dbo.BI_DUS007 AS m
        WHERE EXISTS
        (
          SELECT 1 AS expr
            FROM dbo.BI_DUS006 AS d
           WHERE m.ACTDTE = d.ACTDTE
             AND m.CO = d.CO
             AND m.ACTUNIT = d.ACTUNIT


             AND m.URSRETR = d.URSRETR
             AND m.TREATY = d.TREATY
             AND m.FIELDA = d.FIELDA
             AND m.FIELDB = d.FIELDB
             AND m.FIELDC = d.FIELDC
             AND m.ASLOB = d.ASLOB
             AND m.CLMOCC = d.CLMOCC
             AND m.PRMSTE = d.PRMSTE
             /* AND m.CERTIF = d.CERTIF */
        )
          AND m.DAC = 'C'
          AND m.ACTDTE = @p_acct_mth_key
          AND m.PREMP = 0
     )

    COMMIT TRANSACTION
  END TRY
  BEGIN CATCH
    IF (XACT_STATE()) = -1
    BEGIN
      -- PRINT N'The transaction is in an uncommittable state. Rolling back transaction.'
      ROLLBACK TRANSACTION
    END
    IF (XACT_STATE()) = 1
    BEGIN
      -- PRINT N'The transaction is committable. Committing transaction.'
      IF ERROR_NUMBER() > 0
      BEGIN
        -- print 'Rollback ....'
        ROLLBACK TRANSACTION


      END
      ELSE
      BEGIN
        -- print 'Commit ....'
        COMMIT TRANSACTION
      END
    END
  END CATCH
END

EXAMPLE #6: EXCEPTION RAISING


Oracle Script :
BEGIN
  SELECT <expression> INTO <variable> FROM <table>;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    <Statements>
END

SQL Server Script :


SELECT <variable> = <expression> FROM <table>
IF @@ROWCOUNT = 0
BEGIN
  <Statements>
END

EXAMPLE #7: CUSTOM ERRORS


Oracle Script :
declare
  myexception exception;
BEGIN
  IF <condition> THEN


    RAISE myexception;
  END IF;
EXCEPTION
  WHEN myexception THEN
    <Statements>
END

SQL Server Script :


BEGIN TRY
  IF <condition>
    RAISERROR ('myexception', 16, 1)
END TRY
BEGIN CATCH
  IF ERROR_MESSAGE() = 'myexception'
  BEGIN
    <Statements>
  END
  ELSE
    <rest_of_handler code>
END CATCH

EXAMPLE #8: SQLCODE AND SQLERRM


Oracle Script :
BEGIN
  INSERT INTO <table> VALUES ...;
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    <Statements>
END


SQL Server Script :


BEGIN TRY
  INSERT INTO <table> VALUES ...
END TRY
BEGIN CATCH
  IF ERROR_NUMBER() = 2627
    <Statements>
END CATCH

EXAMPLE #9: QUERY HINTS


Oracle
select /*+ first_rows index (rank_table ix_rank_table_1) */ r_id from rank_table;

Microsoft SQL Server


select r_id from rank_table with (index (ix_rank_table_1)) option (fast 1)

EXAMPLE #10: LOOP STATEMENTS


Oracle
loop
  exit when rank > max_rank;
  do something;
  rank := rank + 1;
end loop;

Microsoft SQL Server


while (1 = 1)
begin
  if @rank > @max_rank break
  do something
  set @rank = @rank + 1
end


EXAMPLE #11: NUMERIC FOR LOOPS


Oracle
for rank in 1..max_rank loop
  do something;
end loop;

Microsoft SQL Server


declare @rank int
set @rank = 1
while (@rank <= @max_rank)
begin
  do something
  set @rank = @rank + 1
end

EXAMPLE #12: CURSOR


Oracle
Declare
  cursor rank_cur is select rank, rank_name from rank_table;
  rank_rec rank_cur%ROWTYPE;
begin
  open rank_cur;
  loop
    fetch rank_cur into rank_rec;
    exit when rank_cur%NOTFOUND;
    do something;
  end loop;
  close rank_cur;
end;

Microsoft SQL Server


Declare @v_rank_cur_rowcount int, @rank numeric, @rank_name varchar(50)
declare rank_cur cursor for select rank, rank_name from rank_table;
begin
  open rank_cur
  set @v_rank_cur_rowcount = 0
  while (1=1)
  begin
    fetch next from rank_cur into @rank, @rank_name
    set @v_rank_cur_rowcount = @v_rank_cur_rowcount + 1


    if (@@fetch_status = -1) break
    do something
  end
  close rank_cur
  deallocate rank_cur
end

EXAMPLE #13: CURSOR WITH PARAMETERS


Oracle
declare
  cursor rank_cur (id number, sn char(1)) is
    select rank, rank_name
      from rank_table
     where r_id = id
       and r_sn = sn;
begin
  open rank_cur (1, 'c');
  open rank_cur (2, 'd');
end;

Microsoft SQL Server


declare @id numeric, @sn char(1)
begin
  set @id = 1
  set @sn = 'c'

  declare rank_cur_1 cursor for
    select rank, rank_name
      from rank_table
     where r_id = @id and r_sn = @sn

  open rank_cur_1

  set @id = 2
  set @sn = 'd'

  declare rank_cur_2 cursor for
    select rank, rank_name
      from rank_table
     where r_id = @id and r_sn = @sn

  open rank_cur_2
end

EXAMPLE #14: CURSOR SYNTAX


Operation: Declaring a cursor
  Oracle:
    CURSOR cursor_name [(cursor_parameter(s))] IS select_statement;
  Microsoft SQL Server:
    DECLARE cursor_name CURSOR
    [LOCAL | GLOBAL] [FORWARD_ONLY | SCROLL] [STATIC | KEYSET | DYNAMIC | FAST_FORWARD]
    [READ_ONLY | SCROLL_LOCKS | OPTIMISTIC] [TYPE_WARNING]
    FOR select_statement
    [FOR UPDATE [OF column_name [,n]]]

Operation: Ref cursor type definition
  Oracle:
    TYPE type_name IS REF CURSOR [RETURN {{db_table_name | cursor_name | cursor_variable_name}%ROWTYPE | record_name%TYPE | record_type_name | ref_cursor_type_name}];
  Microsoft SQL Server:
    See below.

Operation: Opening a cursor
  Oracle:
    OPEN cursor_name [(cursor_parameter(s))];
  Microsoft SQL Server:
    OPEN cursor_name

Operation: Cursor attributes
  Oracle:
    {cursor_name | cursor_variable_name | :host_cursor_variable_name} % {FOUND | ISOPEN | NOTFOUND | ROWCOUNT}
  Microsoft SQL Server:
    See below.

Operation: SQL cursors
  Oracle:
    SQL % {FOUND | ISOPEN | NOTFOUND | ROWCOUNT | BULK_ROWCOUNT(index) | BULK_EXCEPTIONS(index).{ERROR_INDEX | ERROR_CODE}}
  Microsoft SQL Server:
    See below.

Operation: Fetching from cursor
  Oracle:
    FETCH cursor_name INTO variable(s)
  Microsoft SQL Server:
    FETCH [[NEXT | PRIOR | FIRST | LAST | ABSOLUTE {n | @nvar} | RELATIVE {n | @nvar}] FROM] cursor_name [INTO @variable(s)]

Operation: Update fetched row
  Oracle:
    UPDATE table_name SET statement(s) WHERE CURRENT OF cursor_name;
  Microsoft SQL Server:
    UPDATE table_name SET statement(s) WHERE CURRENT OF cursor_name

Operation: Delete fetched row
  Oracle:
    DELETE FROM table_name WHERE CURRENT OF cursor_name;
  Microsoft SQL Server:
    DELETE FROM table_name WHERE CURRENT OF cursor_name

Operation: Closing cursor
  Oracle:
    CLOSE cursor_name;
  Microsoft SQL Server:
    CLOSE cursor_name

Operation: Remove cursor data structures
  Oracle:
    N/A
  Microsoft SQL Server:
    DEALLOCATE cursor_name

Operation: OPEN FOR cursors
  Oracle:
    OPEN {cursor_variable_name | :host_cursor_variable_name} FOR dynamic_string [using_clause]
  Microsoft SQL Server:
    See below.

EXAMPLE #15: VARIABLE DECLARATION


If a variable is declared in the following way:
var1 table1.col1%TYPE;

and the col1 in table1 has varchar2(50) data type, then it will be converted to:
var1 varchar(50)

Variable declarations including %ROWTYPE on Oracle will be converted to a group of local variables on SQL Server.

RECORDs on Oracle will be converted to a group of local variables on SQL Server.

Oracle

create or replace procedure test_proc
(
  arg_rec1 table1%ROWTYPE;
  arg_rec2 table2%ROWTYPE;
)
as
  type rec is record
  (
    col1 int;
    col2 table1.c1%TYPE;
    col3 varchar2(32)
  );
  rec1 rec;
begin
  rec1 := NULL;
  rec1 := arg_rec1;
  rec1.col2 := arg_rec2.col1_table1;
end;

Microsoft SQL Server

create procedure test_proc
  @arg_rec1_col1_table1 numeric (38),
  @arg_rec1_col2_table1 numeric (38),
  @arg_rec1_col3_table1 varchar (32),
  @arg_rec2_col1_table1 numeric (38),
  @arg_rec2_col2_table1 numeric (38),
  @arg_rec2_col3_table1 varchar (32)
as
declare
  @rec1_col1 int,
  @rec1_col2 numeric,
  @rec1_col3 varchar (32)
begin
  set @rec1_col1 = null
  set @rec1_col2 = null
  set @rec1_col3 = null
  set @rec1_col1 = @arg_rec1_col1_table1
  set @rec1_col2 = @arg_rec1_col2_table1
  set @rec1_col3 = @arg_rec1_col3_table1
  set @rec1_col2 = @arg_rec2_col1_table1
end


ADDENDUM
INSURANCE DATA WAREHOUSE TERMINOLOGY
Following are the terms used to answer specific questions:

1. Schemas:
   a. STG: Staging schema. This is the schema that the source tables are loaded into.
   b. 3NF: The third normal form schema. This schema holds the surrogate key for a given source natural key.
   c. DW: This schema holds the dimension, fact and aggregate tables.
   d. MIS: This schema holds a few report-specific tables for top management.
2. RUNC Tables: These tables are an exact copy of the source tables on a given day. They are loaded with the entire source table data daily; it is not a change data capture process. They are present in the STG schema.
3. HISTORY Tables: These are the tables that perform change data capture; they mark the data from the RUNC tables for insertion or update. Inserts go in directly, but the update process needs to close the previous record and insert the new open record (type 2). They are present in the STG schema.
4. 3NF Tables: These tables contain the surrogate keys for an input natural key from the source. This schema is a third normal form schema. They are present in the 3NF schema.
5. SRC Tables: These are a type 2 variant of the 3NF tables. They contain the surrogate key from the 3NF tables along with the effective date. They are present in the STG schema.
6. Key File Tables: These tables contain the onset and offset records for the measures coming from the HISTORY tables. They also provide a base to load the fact tables. They are present in the STG schema.


FREQUENTLY ASKED QUESTIONS

Q1. What are the rough stats surrounding this one client's conversion?
A1. Our insurance customer's Oracle DB statistics:
The database comprised 4 schemas (STG, 3NF, DW, MIS), which in all contained:
1. Views: 85
2. Tables: 955
3. Triggers: 955
4. Stored Procedures + Functions: 95

Our insurance customer's data migration statistics: the total data to migrate resulted in 1.8 terabytes in SQL Server.

Our insurance customer's ETL statistics: the insurance customer had two streams of jobs to be executed, the daily and the monthly; the daily runs within a window of 5 hours and the monthly runs in 6 hours.

Daily jobs statistics:
Work Flows: 350
Data Flows: 765

Monthly jobs statistics:
Work Flows: 100
Data Flows: 170

The data flows contained code that was specific to Oracle and was converted to SQL Server specific syntax.

Q2. How long did it take?
A2. Our insurance customer's Oracle DB migration: 10 man-days for all objects conversion.

Our insurance customer's data migration:
Development: 10 man-days
Execution: 1 day


Our insurance customer's ETL conversion:
Code Changes: 10 man-days
Testing: 10 days
Data Validation: 10 days (with the help of automation using scripts)

Q3. How many databases and tables were converted?
A3. Our insurance customer's EDW was based on a single Oracle database having multiple schemas.

Schemas: 4
Tables: 955

Q4. What was the typical data volume that we migrated?
A4. The total SQL Server database size after data migration was 1.8 terabytes. Following is the break-up in percentage:

1. MIS schema: 5%
2. 3NF schema: 5%
3. STG schema: 50%
4. DW schema: 40%

The average fact table (in the DW schema) held 80 GB of data and 100 GB of indexes. The average Key File table (in the STG schema) held 60 GB of data and 25 GB of indexes.

Q5. How much PL/SQL was converted?
A5. PL/SQL conversion objects:

1. Views: 85
2. Stored Procedures and Functions: 95
3. ETL Data Flows: 935


Q6. How many objects were reconfigured to point to the new environment?
A6. The composition of these ETL jobs is:

Daily jobs stats:
Work Flows: 350
Data Flows: 765

Monthly jobs stats:
Work Flows: 100
Data Flows: 170

Q7. How much time did it take to test these?
A7. As we followed strict conversion guidelines and automated processes, the time taken to test and validate the data of the ETL was 10 working days.


CONTACT INFORMATION
BitWise Inc.
1515 Woodfield Rd. Suite 930
Schaumburg, IL 60173
Phone: 847-969-1544
Fax: 847-969-1500
Email: info@bitwiseusa.com

BitWise Australia Pty Ltd.
Level 39, 2 Park Street
Sydney, NSW 2000
Phone: 61 2 9004 7887
Fax: 1300 790 860
Email: info@bitwiseaustralia.com

BitWise Solutions Pvt. Ltd.
BitWise World, Off Intl Convention Centre
Senapati Bapat Road
Pune - 411016 - INDIA
Phone: 91 20 40102000
Fax: 91 20 40102010
Email: info@bitwiseindia.com

SUPPORTING PARTNERSHIPS


Copyright 2010 BitWise believes the information in this publication is accurate as of its publication date; such information is subject to change without notice. BitWise acknowledges the proprietary rights of the trademarks and product names of other companies mentioned in this document.

