Making the Move from Oracle Warehouse Builder to Oracle Data Integrator 12c

by Stewart Bryson

A detailed guide to a phased migration to Oracle Data Integrator 12c

January 2014

With its 12c release, Oracle Data Integrator (ODI) now appeals to both the database and tool-oriented crowds. Besides delivering an impressive new release with major advancements in data integration functionality, ODI 12c has also added capabilities for OWB customers looking to make the switch to ODI. First, we have a new feature called OWB Runtime Integration, which allows ODI agents to orchestrate and execute OWB processes. Additionally, we have a new OWB to ODI Migration Utility that allows us to convert a subset of our OWB development artifacts into comparable ODI artifacts.

In this article, we'll cover these two approaches to making the move from OWB to ODI 12c: the integration route and the migration route. I'll start by walking through a fairly standard ODI installation, one that includes the configuration of JEE agents, and finish with a description of a phased migration, which makes use of both the integration and migration capabilities in ODI 12c.

ODI Installation

Our environment is Oracle Linux 6.4, which we installed using the Red Hat kernel instead of the Unbreakable Linux Kernel. Because it also has the Oracle Database installed, we included several additional packages specific to that install, but those packages are not required for ODI. The first component needed for both installing and running ODI 12c is a Java Development Kit (JDK). ODI 12c supports only version 7 of the JDK, so I installed JDK 1.7.0_21 (available from http://java.oracle.com). Our Linux environment has several different Java installations, so we add the following to the .bash_profile to ensure we always use the Oracle JDK:
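The snippet itself did not survive in this copy of the article; a minimal sketch, assuming the JDK was unpacked to /usr/java/jdk1.7.0_21 (adjust the path to your actual install location):

```shell
# Append to ~/.bash_profile: pin the Oracle JDK ahead of any other Java installs.
# The JDK path is an assumption; match it to your environment.
export JAVA_HOME=/usr/java/jdk1.7.0_21
export PATH=$JAVA_HOME/bin:$PATH
```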
Step 1: Welcome
On the welcome screen, we see the message:
"If you plan to install the JEE Agent, then ensure that you have installed Oracle Fusion Middleware Infrastructure 12c."
Ignore this message; the Enterprise Installation of ODI will install WebLogic Server (WLS) and Fusion Middleware (FMW) infrastructure libraries
behind the scenes.
Making the Move from Oracle Warehouse Builder to Oracle Data Integra... https://www.oracle.com/technetwork/articles/datawarehouse/bryson-owb...
Patches
Along with the ODI 12c install, the Universal Installer also creates a zip archive called odi_1212_opatch.zip, which, when extracted, includes
three patches: 16926420, 17053768 and 17170540. We need to unzip this file, navigate into the directory for each of these patches, and apply the
patch. The application of patch 17170540 is demonstrated below:
==> cd 17170540/
==> $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 13.1.0.0.0
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/app/oracle/product/odi_1')
OPatch succeeded.
==>
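Since the same two steps repeat for each of the three patches, they can be scripted; a sketch, assuming all three patch directories were extracted into the current working directory:

```shell
# Apply each patch from odi_1212_opatch.zip in turn.
for p in 16926420 17053768 17170540; do
  (cd "$p" && "$ORACLE_HOME/OPatch/opatch" apply)
done
```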
Next, we launch the Repository Creation Utility (RCU) to create the ODI master and work repositories:

==> $ORACLE_HOME/oracle_common/bin/rcu
With the repositories in place, we launch the Configuration Wizard to create the WebLogic domain:

==> $ORACLE_HOME/oracle_common/common/bin/config.sh
Page 2: Templates
To make it easy to deploy standard functionality for different FMW products, the configuration utility contains templates for configuring and deploying
different applications in the domain. The first template we select is Oracle Enterprise Manager Plugin for ODI - 12.1.2.0 [em], which also selects the
following additional templates:
Notice that the Configuration Utility is aware of our JDK location based on the value of the JAVA_HOME environment variable. That is indeed the JDK
we want to use.
Page 9: Credentials
We use this screen to configure two entries in the FMW Credential Store. The first key is already partially populated: the SUPERVISOR key is looking
for our SUPERVISOR username and password. The second key we have to add by clicking the Add button, and then specifying the domain name,
along with the administration username and password for the domain, as illustrated in Figure 4.
Clicking through the rest of the informational screens should give us a completed domain configuration and an application deployed to the managed
server, which we can use for our JEE agent. To make things easier in the future, we add the DOMAIN_HOME environment variable to our
.bash_profile file, as demonstrated below:
export DOMAIN_HOME=$ORACLE_HOME/user_projects/domains/odi_domain
With the domain configured, we launch ODI Studio:

==> $ORACLE_HOME/odi/studio/odi.sh
Using the ODI Topology Navigator, under Physical Architecture, we right-click Agents and then select New Agent. We specify the following parameter
values as demonstrated in Figure 6.
Name: OracleDIAgent
Host: oracle.localdomain (IP address or hostname)
Port: 15101 (default Managed Server port)
As part of the domain configuration above, the Node Manager is likely already running. In case it isn't, and for future reference, we can start the Node
Manager using the following command:
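The command itself did not survive in this copy; for a standard FMW 12c domain layout, the script would be:

```shell
# Start the Node Manager in the background.
nohup $DOMAIN_HOME/bin/startNodeManager.sh > nm.out &
```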
Once we ensure that the Node Manager is running, we are then able to start the Administration Server, using the following command:
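Again, the command was lost from this copy; the standard domain script is:

```shell
# Start the Administration Server in the foreground (Ctrl-C stops it).
$DOMAIN_HOME/bin/startWebLogic.sh
```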
With the Administration Server in RUNNING mode, we are now able to start the Managed Server. This can be done from the command-line similar to
how we started the Node Manager and Administration Server, or using Fusion Middleware Control (FMC), using the URL pattern below:
http://administration_server_host:administration_server_port/em
http://oracle.localdomain:7001/em
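For reference, the command-line alternative mentioned above would be the standard managed-server script (assuming the server name ODI_server1 used throughout this article):

```shell
# Start the managed server, pointing it at the running Administration Server.
$DOMAIN_HOME/bin/startManagedWebLogic.sh ODI_server1 http://oracle.localdomain:7001
```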
For demonstration purposes, we use FMC. Under the WebLogic Domain tree, we choose odi_domain, followed by ODI_cluster1, and finally ODI_server1. On the resulting page, we can click the Start Up button:
Once we have the Managed Server running successfully, our JEE agent should be available to us. Back in the Topology Navigator in ODI Studio, we right-click the OracleDIAgent in the Physical Architecture pane and choose Test, which should give us feedback that the agent is available. The final task is to create an agent in the Logical Architecture pane that maps to our physical agent through a context; in our case, the Global Context.
Before we get into the technical details, let's have a look at our existing OWB project at a high level and see how we can incorporate it into ODI.
The separation of elements into separate process flows in OWB is a common design approach: it provides flexibility in orchestrating our overall batch
load, as well as allowing us to execute granular pieces of the complete batch load for unit testing purposes.
Runtime Topology
We now need to configure the integration with our existing OWB workspace. Like any other element in the ODI topology, we begin by adding a data
server in the Physical Architecture pane of the Topology Navigator. There is a new technology in ODI 12c called OWB Runtime Repository. We right-
click on that technology and choose New Data Server, which opens the Data Server Definition tab, as demonstrated in Figure 11. We provide a name
for the data server, and under Connection we provide the username and password of the OWB workspace owner. We name the data server ORCL
after the Oracle database instance where it resides (any unique name would suffice), and we provide the OWBREP workspace owner and the
associated password.
On the JDBC tab, we use the oracle.jdbc.OracleDriver driver and provide the connection details for the Oracle database holding the OWB workspace.
We click the Save button, and then right-click our new data server in the Physical Architecture pane and choose New Physical Schema, which opens the Definition tab. We choose the correct value for Workspace (Schema) from the dropdown list (for us, this is OWBREP.OWBREP) and then accept the remainder of the defaults, as demonstrated in Figure 12.
Once our data server and physical schema have been created, we need to create a logical schema and assign the physical schema to it via a context. In the Logical Architecture pane of the Topology Navigator, we again right-click the OWB Runtime Repository technology and choose New Logical Schema, which opens the Definition tab. We're only working with the Global Context, so we map the new logical schema (which for us is called OWBREP) to our previously created physical schema, completing the topology configuration.
Click the Command tab in the Package Step to see the ODI Tool command generated by the values we provided:
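The generated command invokes the new OdiStartOwbJob tool. The parameter names shown here are assumptions based on the values provided in the package step, so verify them against the ODI Tools Reference; a representative form:

OdiStartOwbJob "-WORKSPACE=OWBREP" "-OBJECT_NAME=MAIN_LOAD" "-OBJECT_TYPE=PROCESSFLOW"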
Once the package has been completed, it's easy to create an ODI Scenario and execute it using our JEE agent. We can see in Figure 14 that the
ODI and OWB auditing is completely integrated now, with ODI being aware of all the processes and child processes executed by the OWB Control
Center, as well as the execution results, including execution times and the number of records affected.
==> cd 17547241/
==> opatch apply
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.
OPatch succeeded.
==>
As our environment is Linux, patch 17547241 has installed a new command-line utility in the OWB ORACLE_HOME called migration.sh. The exact
location is $OWB_HOME/bin/unix, which for our environment is listed below:
/app/oracle/product/11.2.0/dbhome_1/owb/bin/unix
The following OWB content is not supported by the Migration Utility:

Dimensional modeling metadata, and any mappings that use that metadata, including dimensions and cubes
Process flows
Configurations
Data quality, data profiles and data auditors

Process flows, in particular, should be noted. If we choose to migrate an entire project to ODI, we will have only the mappings and the metadata associated with them when the migration is complete. So it's worth noting that we will need to use ODI packages, load plans or both to develop a new orchestration strategy post-migration. In many cases, this is no minor piece of work. We'll investigate some of the relevant strategies later in this article.
The Migration Utility supports three run modes:

FAST_CHECK: Performs a read-only check of the OWB repository and reports back the items that can and cannot be migrated.
DRY_RUN: Performs a migration to ODI using the ODI 12c SDK, but does not perform a commit at the end of the process.
RUN (Default): Executes the migration and commits migrated objects to the target ODI 12c repository.

The run mode is specified in the migration configuration file. The configuration file can be named anything and placed anywhere, since its location is specified when executing the utility. The installation of the Migration Utility patch also creates a sample configuration file called migration.config, located in the $OWB_HOME/bin/admin directory. Some of the specific driver properties that we have in our configuration file are listed below:
ODI_MASTER_USER=DEV_ODI_REPO
ODI_MASTER_URL=jdbc:oracle:thin:@oracle.localdomain:1521:orcl
ODI_MASTER_DRIVER=oracle.jdbc.OracleDriver
ODI_USERNAME=SUPERVISOR
ODI_WORK_REPOSITORY_NAME=WORKREP
OWB_WORKSPACE_OWNER=OWBREP
OWB_URL=oracle.localdomain:1521:orcl
OWB_WORKSPACE_NAME=OWBREP
MIGRATION_LOG_FILE=/app/oracle/product/11.2.0/dbhome_1/owb/bin/unix/migration.log
MIGRATION_MODE=RUN
MIGRATE_DEPENDENCIES=true
MIGRATION_OBJECTS=PROJECT.GCBC_SALES.MODULE.ETL;
Notice the last two migration properties: MIGRATE_DEPENDENCIES and MIGRATION_OBJECTS. We can use these two parameters to specify the content we want to migrate. MIGRATION_OBJECTS uses dot-notation to specify which OWB project we want to migrate and, more granularly, which individual object or bulk object we want to migrate. The supported parameters are described in the documentation: http://docs.oracle.com/middleware/1212/odi/ODIMG/migrating.htm#CHDIDBDH.

We're migrating the specific OWB module called ETL. In our workspace, the OWB mappings for the entire project exist in this single module, so all of them will be migrated. Additionally, our specification for MIGRATE_DEPENDENCIES instructs the utility to also migrate dependent objects, such as table metadata, sequence metadata, locations, and so on.
The Migration Utility also accepts a few command-line options (primarily for passwords, so they aren't explicitly written in the configuration file) along with the path to the configuration file. The command-line options, in order, are:

ODI master repository password: the password for the ODI_MASTER_USER specified in the configuration file
ODI user password: the password for the ODI_USERNAME specified in the configuration file
OWB workspace password: the password for the OWB_WORKSPACE_OWNER specified in the configuration file
Migration configuration file: the path to the configuration file created above

We execute the Migration Utility by entering the following at the command-line:
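The command line did not survive in this copy; based on the option order above, the invocation takes this general form (placeholder values shown):

```shell
cd $OWB_HOME/bin/unix
./migration.sh <master_repository_password> <supervisor_password> <workspace_password> \
    $OWB_HOME/bin/admin/migration.config
```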
The MIGRATION_LOG_FILE parameter specifies not only the location of the migration log file, but also the name and location of our migration report file, which, based on our configuration file, will be called migration.report. The top of the report shows us the aggregated results from the Migration Utility, with the specific objects migrated listed further down in the report (omitted here for brevity):
Statistics
------------
********************************************************************************
PROJECT: PUBLIC_PROJECT
********************************************************************************
PROJECT: GCBC_SALES
We have a relatively small project here, but all the mappings in the ETL module were migrated successfully. The workspace metadata for tables and
sequences was also migrated, becoming content in models in the ODI Designer.
ODI Topology
We can now see the results of the migration process by viewing the ODI Topology and seeing that our OWB locations and modules were migrated as
demonstrated in Figure 15. ODI data servers, physical schemas and logical schemas were created for all the modules represented in our OWB
mappings: GCBC_CRM, GCBC_EDW, GCBC_POS and GCBC_STAGING.
One point to notice: the Migration Utility creates separate ODI data servers for each OWB location, even though many of the OWB locations exist in the same physical Oracle database. Although this may seem troubling at first, the situation is easily corrected with the power of the ODI Topology. We can create new data servers with multiple physical schemas if desired, remapping those to logical schemas, which allows ODI to generate more efficient code by understanding that the schemas exist in the same database. Use cases like this one are the main reason the ODI Topology abstracts our physical and logical schemas, and also why we shouldn't be too concerned that the Migration Utility handles it in this way.
Leaving our physical architecture the way the Migration Utility created it, we still need to make a few small changes. The Migration Utility doesn't bring over the passwords from OWB locations, so we need to provide them in the Connection section of our data server configurations, as demonstrated in Figure 16. We also have to set the work schema for any of our physical schemas that are involved in mappings, as demonstrated in Figure 17.
This mapping pulls data from two tables in the GCBC_POS schema, which are joined together using a Joiner component. The resulting data then maps to another Joiner along with the dimension tables from the GCBC_EDW schema (the reporting target schema) to do the lookup for the surrogate keys before finally loading the data into the SALES_FACT table. All the dimension tables are joined to the source data set using a single Joiner called SURROGATE_PIPELINE. We have two separate Expression components in the mapping as well: one called simply EXPRESSION, the other called GET_DATE_KEY. We also have pre-mapping process and post-mapping process components, which are very common in OWB mappings. Without the functionality of the Knowledge Module (KM) architecture that we have in ODI, OWB developers often resigned themselves to adding custom, repeatable processes to their OWB mappings with these intra-mapping components. The migrated ODI mapping MAP_SALES_FACT is shown in Figure 19, which we will discuss in the next section.
Looking at the migrated mapping in Figure 19, notice that the single OWB Joiner called SURROGATE_PIPELINE, which joined all the dimension tables to the source data, was converted to four distinct Join components in ODI. The ODI Join component does support having more than two incoming connections, but the Migration Utility does not generate the designer metadata in that way. This shouldn't concern us much: the new component-style KMs in ODI 12c generate code comparable to OWB using the four distinct Join components, and that same code is nearly identical to what ODI generates when a single Join component is used.
Expressions are a new feature in ODI 12c, and they retain similar functionality to expressions in OWB. The strictly declarative design paradigm using
interfaces in ODI 11g had no place for components, much less an expression component. But ODI 12c mixes the declarative and flow-based designs.
Each attribute in the target component has a built-in location for configuring declarative transformation logic, but the use of expression components is
available to make transformation logic explicit and reusable, and also assists the Migration Utility in converting OWB mappings.
In ODI, KMs are pluggable templates that use the ODI Substitution API to control the generated code for both sources and targets. Component-style KMs are a new, complementary code-generation tool encapsulating modular, reusable pieces of logic specific to particular components. This allows the use of template-based KMs only where we need them, not for the entire mapping, as was the case in ODI 11g. To see or modify the assignment of the loading KM (LKM) for a specific target in a mapping, we switch over to the physical view and click the access point for that target. This is the "touch-point" for the arrow that extends from the source to the target. For MAP_SALES_FACT, our access point is the GET_EFFECTIVE_DATE expression, as demonstrated in Figure 21.
The Migration Utility chooses a single component-style LKM for all our converted mappings: LKM Oracle to Oracle Pull (DB Link). This new KM is a
sensible choice because OWB always uses a database link to pull data from a remote server. A slight modification we made to our migrated
mappings was to hard-code a database link into the SOURCE_ACCESS_DB_LINK option in the KM to eliminate the process of repeatedly dropping
and creating the database link with each run, also demonstrated in Figure 21.
If we aren't happy with the database link approach (though I'm not sure why we wouldn't be), we could use a different KM that resembles loading
techniques common in other ETL tools using separate connections to the source and target databases, combined with array-based processing.
Figure 22 shows the selection of LKM SQL to SQL (Built-In). Keep in mind that the database link will typically outperform array-based processing,
and is a major value-add when deploying ODI instead of other ETL tools.
At first sight, it may seem like our pre- and post-mapping process components were lost in the migration. They certainly aren't visible on the logical
mapping palette, and they aren't an option in the list of components shown in Figure 20. Further investigation shows that the logical mapping view is
the wrong place to look. Instead, we should again be looking at the physical mapping view and some of the options available in the new component-
style KM. To see the post-mapping process, we click our target table SALES_FACT, navigate to the Extract Options section in the Properties pane,
and have a look at the BEGIN_MAPPING_SQL option:
The original PL/SQL call from our OWB post-mapping process is listed below:

BEGIN
   "TRANS_ETL"."END_MAPPING"(REPLACE(GET_MODEL_NAME, '"', NULL));
END;
This PL/SQL procedure executed a few custom post-load options for the table, and it used the individual mapping name to pull options from a
configuration table to decide what processes to run. With the customization power of ODI, it's unlikely we would elect to code post-load options in this
way, but these sorts of changes aren't made overnight, so it's important that we are able to initially execute the PL/SQL procedure in a similar way.
The GET_MODEL_NAME constant (demonstrated above) in an OWB mapping always returns the mapping name wrapped in double-quotes, so we
could use that to eliminate hard-coded values. We need an equivalent in ODI, which we have using the Substitution API. The modified call is listed
below:
BEGIN
   TRANS_ETL.END_MAPPING('<%=odiRef.getPop("POP_NAME")%>');
END;
Finding the pre-mapping process was a little more difficult, and its placement seems arbitrary. Although we have several dimension tables joined in on the target side, the custom PL/SQL call was placed in the Extract Options for the CUSTOMER_DIM table, which we can only assume was chosen alphabetically. Regardless, a similar modification to the PL/SQL call gives us the desired result. With our small tweaks in place, the execution of the mapping from the ODI Operator is shown in Figure 24.
Migration Approaches
The Big-Bang Approach: One Fell Swoop
The Migration Utility is an impressive piece of functionality. Although we had to polish the mappings a bit post-migration, all the complex source-to-target logic survived intact, and when all is said and done, that's what we care about most. The Migration Utility gives us a tremendous jumpstart, but it isn't a complete migration solution. Honestly, it's daunting to consider how many lines of code would be needed to furnish a complete, start-to-finish migration utility.
When considering the list of OWB content that isn't supported by the Migration Utility, the most noteworthy exclusion is process flows. Even though
Oracle made the correct choice in focusing first on mappings, we are left with the requirement to re-orchestrate our load process or processes using
some combination of load plans and packages. Depending on the scope and complexity of the data integration processes currently in OWB, this
could run the gamut from uncomfortable to excruciating, especially when considering formal QA and regression testing processes driven by data
validation.
Considering what a complete migration from OWB would entail, below is a list of required steps and optional steps that would factor into such a migration:
Similarly, organizations that utilize a Business Intelligence Competency Center (BICC) recognize tremendous value from standardization and a decreased footprint. Introducing some kind of hybrid solution, where legacy portions of the solution still run on OWB while non-legacy portions are implemented with ODI, could reduce the overall value of the solution by introducing increased support and maintenance costs.
Maybe we should consider a third option: an approach that utilizes runtime integration and the Migration Utility, blending the long-term goal of
migrating to ODI 12c with the short-term focus on providing value to the business. We'll call this the "phased approach," and its mission statement is
simple:
"Any task undertaken to migrate content from OWB to ODI will add immediate value to our BI stakeholders."
Perhaps this mission statement seems paradoxical. Is it really possible to add value while migrating functionality from one platform to another? Obviously, when the OWB process we are migrating needs an enhancement, we see the immediate value-add. But can we extend the reach of the phased approach even further, finding opportunities for migration even when features aren't due for enhancements?
Let's investigate what the phased approach would look like. Here are the actions we take to deliver on this methodology:

1. We start by developing ODI scenarios to execute our OWB process flows inside of load plans in a reasonably granular fashion.

2. We put our OWB workspace in "maintenance mode," allowing development only when emergency "hot fixes" require it. All new BI roadmap development would occur using ODI Designer content, including mappings, procedures, packages, load plans and custom KM development. Any auto-generation tasks that would previously have been written using Tcl would instead use Groovy. At this point, we haven't deviated too far from the side-by-side approach.

3. Current OWB content slated for enhancement would be migrated to ODI using the Migration Utility, either one at a time or in batch, depending on how many OWB processes figure into our new enhancement. We would tune the migrated content for optimal execution (similar to the tweaks made above when working with the Migration Utility), followed by developing the enhancements requested by the stakeholders.

4. We make any additional enhancements to our process, including re-architecting portions of our OWB processes using ODI features, whenever we feel we can bring immediate value to the business.
Granular Execution of Process Flows
In Figure 13 we demonstrated an ODI package and scenario to execute MAIN_LOAD, which is our high-level, entry-point process flow used to initiate
a complete load.
Action #1 in our methodology prescribes building scenarios for our process flows, but building them as granularly as possible. For instance, we would
execute the LOAD_DIMS and LOAD_FACTS Process Flows from Figures 9 and 10 as different steps in the same load plan as opposed to executing
MAIN_LOAD as a single step, as demonstrated in Figure 25. The reason for this will hopefully be clear a bit later on.
Once this is complete, we need to orchestrate MAP_SURVEYS_FACT into our batch load process (Action #3), which we do by generating a scenario for the mapping and adding a new step to our load plan, as demonstrated in Figure 27. When generating scenarios, I've used the prefixes ODI_ and OWB_ to distinguish where the mappings exist.
But considering Action #4, we also take this opportunity to add value above and beyond what has been requested in the new feature request. We go ahead and generate packages and corresponding scenarios for all the OWB mappings in this subject area, and add them as individual steps in the load plan instead of having a single step execute an OWB process flow.
Why do we want to do this? Because ODI load plans provide an important feature that OWB process flows don't have: restartability. The "restart from
failed children" setting allows us to gracefully recover from failed executions. If one of the dimension loads fails, we would be able to correct the
issues with that single mapping and then restart the load plan knowing that any of the mappings that succeeded would not run again during the
restart. This adds immediate value to stakeholders by minimizing the time and effort associated with recovering from failed runs, and holds true to our
mission statement.
To migrate just a single mapping instead of the entire module, we narrow the MIGRATION_OBJECTS specification in our configuration file:

MIGRATION_OBJECTS=PROJECT.GCBC_SALES.MODULE.ETL.MAPPING.MAP_SALES_FACT;
We start with the migrated ODI mapping, demonstrated above in Figure 19. We encapsulate the new discount calculation logic in a reusable mapping, as demonstrated in Figure 28, so that any subsequent processes that need to pull this source data will be able to use the join logic and the corresponding discount calculations in a single, unified process.
Conclusion
On its own, ODI 12c provides a tremendous amount of new functionality, beginning with the new declarative flow-based design paradigm that will feel familiar to developers coming from other ETL tools, especially OWB. This paradigm provides a much easier transition for OWB developers than was perhaps possible with previous versions of ODI. Combined with Runtime Integration and the Migration Utility, OWB customers can now feel secure in their years of investment in OWB. There's never been a better time to migrate to ODI, and this article has outlined three different approaches organizations can take.
Using a geographic metaphor, if the Big Bang approach is represented as one coast and the side-by-side approach as the other coast, then the
phased approach is like the "flyover states" and will find an audience within diverse types of organizations. These approaches all have distinct value
and provide ROI to customers in different ways. Whether our OWB investment is strictly legacy in nature, or alive and flourishing, we can now move
forward to a future in ODI that reduces migration risk and adds value specific to an organization's needs.