
White Paper: Oracle Customer Data Hub Implementation Concepts and Strategies (Release HZ.N/Financials Family Pack G)


Table of Contents
This white paper contains the following sections:

1. Introduction
2. Customer Data Hub Overview
3. Customer Data Hub Installation
4. Instance Strategy
5. Source System Management (SSM)
6. Single Source of Truth (SST)
7. Transactions Viewer
8. Data Model Extensibility
9. Integration
10. Building the Customer Data Hub
11. Appendix A: Oracle Application Server 10g Integration
12. Appendix B: Customer Data Hub Business Flows
13. Appendix C: Interconnect Sample Code
14. Appendix D: Transactions Viewer Sample Code
15. Change Record

Introduction
Background
The Oracle Customer Data Hub (CDH) is a complete packaged solution that allows companies to create a single, enterprise-wide view of their customer base by consolidating and synchronizing customer data from heterogeneous systems into a central, operational data store. The CDH solution ties together enhanced versions of existing, proven products such as Oracle Customers Online (OCO), Oracle Customer Data Librarian (CDL), and the Oracle Trading Community Architecture (TCA) to create a robust, scalable, central repository of duplicate-free, enriched data across a heterogeneous landscape.

By consolidating all customer data into the Customer Data Hub, organizations can use out-of-the-box functionality to ensure that their customer records are standardized, cleansed, and enriched. Spoke applications are the beneficiaries of this data via near-real-time synchronization with the Hub. Similarly, updates made to customer information in any of the spoke applications are carried to the Hub and shared with other spoke applications based on defined business rules. Middleware facilitates this bi-directional synchronization.

Scope of this Document


The purpose of this document is to address the various considerations and decisions that come into play when implementing the Customer Data Hub, and to outline a concrete methodology for successfully implementing the CDH. Although every organization's requirements are unique, and various external factors may alter the specific scope and timeline, many factors remain consistent across all CDH implementations. This document addresses these elements in detail, and outlines the decisions and methodologies that should be considered and leveraged across all CDH implementations. In an effort to streamline the implementation process, this document has been arranged according to the natural flow of considerations and decisions that should be addressed when implementing the CDH:

1. Background and introduction to the Customer Data Hub
2. Install the baseline application and technology footprint
3. Determine the Customer Data Hub instance strategy to be implemented
4. Define and set up the Source Systems being integrated with the CDH
5. Define the Single Source of Truth strategy and establish source systems as SST enabled
6. Configure the Transactions Viewer in the CDH to query transactions from source systems
7. Implement your chosen middleware as the integration engine between the CDH and source systems
8. Build the Customer Data Hub and turn on synchronization via middleware
9. Transact with the Customer Data Hub in real-time according to business process flows

Customer Data Hub functionality is continually being enhanced. With each new release, a corresponding version of this document will be made available to reflect the latest functionality and best-practice implementation methodologies.

Document Structure
This document consists of three major components: functionality embedded in the Customer Data Hub products, strategic decisions that must be made that will affect the CDH implementation, and both functional and technical examples. Topics that cover the CDH embedded functionality and strategic decisions are provided in the body of this document, while business flow examples and sample code are available in the appendices.

Intended Audience

This document is intended for Oracle customers, Oracle Consultants, Oracle Sales Consultants, and Oracle Partner organizations responsible for evaluating and implementing the Customer Data Hub.

Reference Documents

- Oracle Trading Community Architecture Administration Guide
- Oracle Customer Data Librarian Implementation Guide
- Oracle Customers Online Implementation Guide
- Oracle Trading Community Architecture Reference Guide
- Oracle Trading Community Architecture Technical Implementation Guide
- Oracle 9i Heterogeneous Connectivity Administrator's Guide

Customer Data Hub Overview


Elements of the Customer Data Hub
Before undertaking the task of implementing the Customer Data Hub, it is important that the implementer, as well as the implementing organization, understand the components that, when aggregated, comprise the Customer Data Hub solution. The Customer Data Hub is a bundling of architecture and application elements that work together to provide a clean, reliable, central and shared repository of customer data across Oracle and non-Oracle systems. The following are brief descriptions of the architecture, application and integration elements that make up the Customer Data Hub. The setup steps necessary to get the Customer Data Hub up and running with these elements are discussed later in this document.

Architecture

Oracle's Trading Community Architecture (TCA) provides the basis for the Customer Data Hub. TCA is made up of two core elements: the database schema and the enabling infrastructure. The TCA database schema is the repository for all party-related data (a party may be an organization, a person, a buying consortium, a contact, a supplier, a partner, or any other person or organization with which you do business), including the profile characteristics of a party (who/what they are), their associations/relationships with other parties, where they are located, what the locations are used for, how to get in touch with them, how they are classified, and more. The TCA enabling infrastructure is an intermediary between the business applications that wish to utilize the data stored in TCA (whether they are Oracle E-Business Suite applications or other systems) and the schema itself. The enabling infrastructure is meant to ensure that data is entered into, or extracted from, the database tables in a uniform manner, ensuring consistent data quality. This infrastructure takes the form of:

- Public APIs, which ensure the integrity of the data entered into and retrieved from the database tables.
- Data Quality Management (DQM) tools, which use sophisticated match rules to detect and/or resolve potential duplicate data.
- Third Party Content Integration, which enriches or validates party data to ensure usefulness and correctness.

Applications

Oracle Customers Online, the application that provides the window into the Customer Data Hub's centrally stored data, provides the tools and functionality to:

- Set up and maintain Party Relationships, Party Classifications, Data Quality Management, Enrichment, Security, Source Systems, Adapters, Phone Numbers and Extensible Attributes.
- Create, view and update party information, including: party profile, addresses, contact points, classifications, relationships with other parties, accounts, transactions (i.e. leads, opportunities, credit items, etc.), notes, tasks, attachments, interactions and the party identifier(s) in external (source) systems.
- Upload flat data files for import into TCA.
- View reports regarding both the customers in the system (i.e. who they are, where they are, etc.) and the quality of the customer data (how complete it is, how enriched it is, etc.).

Oracle Customer Data Librarian, a data quality application, plays a central role in cleansing, enriching and maintaining party data. Data Librarian provides the tools needed to:

- Configure and deploy match rules that meet the specific needs of the implementing organization, to assist in searching for a record or in automated duplicate identification.
- Maintain mappings between a party record in TCA and the corresponding record(s) in one or more third party systems.
- Identify potential duplicate records, and take action to merge those records together with a mix of the best attribute values from the records to be merged.
- Load purchased Dun & Bradstreet data directly into the TCA schema, which may then be blended with user-entered party data to provide a highly accurate party profile.
- Assign party certification levels to provide end users with information regarding the perceived quality of the data included in a party record.
- Purge party records from the TCA database.
- Manage party active/inactive status.
- Manage the queue of merge requests from end users and other data librarians.
- Control party merge at the attribute, relationship and address levels.

Integration

The final element that brings the Customer Data Hub together is the integration layer that links the Customer Data Hub to the spoke systems. A spoke system is defined as a source of information that is providing or synchronizing information with the Customer Data Hub on a regular basis (it is important to note that a system that is a source for a one-time movement of data into the hub as part of a migration process is not considered a spoke system). For more information about the definition of a spoke system, and how spoke systems relate to pricing and product purchase decisions, see the corporate product pricing sheet. There are two high-level elements of the integration layer:

- Middleware (software that enables disparate systems to communicate with each other), which sits in between the Customer Data Hub and the external systems, acting as the traffic cop: receiving and routing messages about changes in party data to the correct system/location to keep information in heterogeneous systems synchronized.
- The Transactions Viewer, part of the Oracle Customers Online application, which allows a user to view transaction information (i.e. leads, opportunities, orders, service requests, etc.) for a specific party from external systems without leaving the Oracle Customers Online application, providing a 360-degree view of the party on one page.

How Data Gets Into the Customer Data Hub


Data may be loaded into the Customer Data Hub via three methods: bulk import, row-by-row import (using the Customer Interface), and public APIs. Bulk import allows for loading various types of party data into the TCA schema. The data to be loaded may be transferred to the pre-load interface tables using an ETL (Extract, Transform and Load) tool or SQL for large amounts of data, or via a CSV file for small amounts of data. The import procedures have been optimized for maximum performance with large amounts of data, and can take advantage of Data Quality Management (DQM) to find duplicates and of address validation utilities on import to improve data quality. The types of party data that may be loaded via bulk import are:

- Parties
- Addresses
- Contact Points
- Relationships
- Contacts
- Contact Roles
- Classifications
- Credit Ratings
- Financial Numbers
- Financial Reports

Additionally, Dun & Bradstreet (D&B) data may be loaded into the TCA schema in bulk when purchased from D&B, through an out-of-the-box adapter that allows for seamless integration between D&B data and the TCA schema (D&B data may also be purchased online through the Oracle Customers Online application).

The Customer Interface provides another method for loading party and account data into the TCA schema. The Customer Interface uses staging tables to load data row-by-row into the schema (using the public APIs), rather than in bulk. The Customer Interface is the only interface available for loading account data into TCA. Row-by-row processing may take considerably longer than bulk processing for large amounts of data; therefore it is recommended that bulk import be used if account data is not being loaded into the system.

The TCA public APIs represent the final method for getting data into the TCA schema. These APIs are already leveraged by every E-Business Suite application that requires the ability to view, create or update party-related data. The APIs may be leveraged in one of two ways. The first is a direct call from an existing E-Business Suite application or a custom application built on top of the TCA schema. The second is through Customer Data Hub real-time synchronization, where data is passed to the installed middleware, which raises an event; the appropriate TCA API is called, and the data from the message is validated and inserted (or updated) in the schema.
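To illustrate the direct API approach, the following is a minimal PL/SQL sketch of a call to the TCA public API that creates an organization party. The module name is hypothetical, and record fields and required attributes vary by patch level, so treat this as an outline and consult the Oracle Trading Community Architecture Technical Implementation Guide for the exact signatures at your release:

  DECLARE
    l_org_rec       hz_party_v2pub.organization_rec_type;
    l_return_status VARCHAR2(1);
    l_msg_count     NUMBER;
    l_msg_data      VARCHAR2(2000);
    l_party_id      NUMBER;
    l_party_number  VARCHAR2(2000);
    l_profile_id    NUMBER;
  BEGIN
    -- Minimum attributes for a new organization party.
    l_org_rec.organization_name := 'Business World Inc.';
    l_org_rec.created_by_module := 'XX_CDH_EXAMPLE';  -- hypothetical module name

    hz_party_v2pub.create_organization(
      p_init_msg_list    => fnd_api.g_true,
      p_organization_rec => l_org_rec,
      x_return_status    => l_return_status,
      x_msg_count        => l_msg_count,
      x_msg_data         => l_msg_data,
      x_party_id         => l_party_id,
      x_party_number     => l_party_number,
      x_profile_id       => l_profile_id);

    -- The API validates the data and returns the new party's Registry ID.
    IF l_return_status <> fnd_api.g_ret_sts_success THEN
      dbms_output.put_line('Create failed: ' || l_msg_data);
    END IF;
  END;
  /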

How Data Gets Out of the Customer Data Hub


Data may be extracted from the Customer Data Hub in two ways. The first is through the previously mentioned public APIs that are built on top of the TCA schema. The second is through a business event being raised. In the previous section, we discussed how TCA's public APIs are used to get data into the Customer Data Hub, covering both the scenario where data comes from an application built on top of the TCA schema and the scenario where it comes from an application that exists outside the E-Business Suite and needs to communicate via middleware. These public APIs also facilitate the extraction of data from the TCA schema. For E-Business Suite and custom applications built on top of the schema, these APIs are called by an application to query party data, and that data is returned directly to the calling application. In the case of an application residing outside of the E-Business Suite, the creation or update of data in the TCA schema raises an event (via Oracle's Business Event System, or BES) that sends a defined set of new or updated data to the middleware, where external systems that have subscribed to this event will receive the data, allowing the external system to keep party data synchronized with the Customer Data Hub.
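As a sketch of the outbound flow, a custom Business Event System subscription can be registered against a seeded TCA party event (for example, oracle.apps.ar.hz.Organization.update) with a PL/SQL rule function such as the one below. The event parameter name is an assumption to be verified against your release, and the function body is a placeholder for the hand-off to your middleware transport:

  CREATE OR REPLACE FUNCTION xx_cdh_org_sync_rf (
    p_subscription_guid IN            RAW,
    p_event             IN OUT NOCOPY wf_event_t
  ) RETURN VARCHAR2 IS
    l_party_id VARCHAR2(240);
  BEGIN
    -- Read the changed party's identifier from the event payload; a real
    -- subscription would enqueue the payload to the middleware transport
    -- (e.g. an AQ queue serviced by the integration server) at this point.
    l_party_id := p_event.getValueForParameter('PARTY_ID');  -- assumed parameter name
    RETURN 'SUCCESS';
  EXCEPTION
    WHEN OTHERS THEN
      wf_core.context('XX_CDH', 'XX_CDH_ORG_SYNC_RF', p_event.getEventName());
      wf_event.setErrorInfo(p_event, 'ERROR');
      RETURN 'ERROR';
  END xx_cdh_org_sync_rf;
  /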

Customer Data Hub Installation


The Customer Data Hub solution is a composite of core Trading Community Architecture (TCA) functionality as well as the Oracle Customers Online (OCO) and

Customer Data Librarian (CDL) applications. To gain the most benefit from the Customer Data Hub, it is recommended that implementing organizations install the 11i10 Financials Family Pack G as well as the latest TCA and Oracle Customers Online/Oracle Customer Data Librarian patch sets. These releases have been optimized for the Customer Data Hub. The following two sections outline the optimal and minimum baselines for implementing the Customer Data Hub. Note that as CDH functionality is continually enhanced in future releases, the recommended baseline will be modified to reflect the latest CDH patch set levels.

11i10 Financials Family Pack G Baseline

This document assumes the following baseline, and refers to the features available at this level:

- 11i10 Financials Family Pack G 11i.FIN_PF.G (3653484)
- HZ.N (3618299)
- IMC.M (4017594)
- Oracle 10g Database

Minimum Baseline
The following is the minimum baseline for a Customer Data Hub (for more information, see the appropriate Oracle Customer Data Hub Implementation Concepts and Strategies document version):

- 11i9 Financials Family Pack D 11i.FIN_PF.D (3016445)
- HZ.L (3036401) + 12.0.2 consolidated roll-up (3295400) + Bulk Import Consolidated Patch (3597225)
- IMC.K (3161885)
- Oracle 9i Database

Instance Strategy
Overview
There are three main instance strategy options that organizations will often consider when implementing the Oracle Customer Data Hub. Although the specific instance strategy options can be modified to suit specific business needs, the three general options are:

- E-Business Suite as Hub to 3rd Party Systems
- Customer Data Hub Integrated with E-Business Suite and 3rd Party Systems
- Customer Data Hub with no E-Business Suite

The Customer Data Hub can be implemented as part of any E-Business Suite implementation, or as its own implementation to support a heterogeneous systems landscape. Many CDH customers will already be running pieces of the E-Business Suite, and will want to leverage the Customer Data Hub functionality by integrating their footprint with disparate 3rd party systems for a single source of truth of customer information. As such, two options have been provided to depict how an implementing organization's existing E-Business Suite investment can be leveraged to gain true economies with the Customer Data Hub. In addition, customers may decide to implement the Customer Data Hub as a first step in their desired migration to the full E-Business Suite. Or, some customers will decide that they are unable to migrate to the full E-Business Suite, but want to use the CDH as a means to simply consolidate their customer information from their existing legacy systems. For these customers, a third instance strategy has been provided which outlines an environment where the customer is running no E-Business Suite applications at all.

Option 1: E-Business Suite as a Hub to 3rd Party Systems

With this option, customers can easily leverage their existing E-Business Suite footprint to include the functionality of a Customer Data Hub with integration to heterogeneous 3rd party systems. Given that E-Business Suite applications are built on the Oracle Trading Community Architecture, organizations running the E-Business Suite already store customer data in TCA. Therefore, in order to make the TCA customer data sync with the customer data residing in disparate systems, the implementing organization will need to leverage the Oracle AS 10g integration services (or other middleware), and the Source System Management capabilities within the TCA infrastructure. In addition, implementing organizations will use Customer Data Librarian to keep the customer data clean and duplicate-free.

Pros

- Existing customer data is migrated one time from 3rd party systems and is continually kept in sync.
- In line with the single-global-instance vision, thereby lowering costs and increasing benefits.
- Allows implementing organizations to sunset any legacy application at any time.
- All customer data resides in the operational, transactional E-Business Suite; therefore, customer information is actionable in real-time.

Cons

- Potentially longer initial implementation timeframe, given the greater complexity of maintaining the single source of customer truth within the operational data system.

Option 2: Customer Data Hub Integrated with E-Business Suite and 3rd Party Systems

With this option, customers are able to implement the Customer Data Hub more rapidly and begin realizing immediate benefits by integrating the Hub with existing E-Business Suite and 3rd party transactional systems. In this model, the Oracle Customer Data Hub is installed as the central repository of customer data, and is integrated with the existing E-Business Suite footprint as a source system, in the same fashion that 3rd party legacy systems are integrated. By mapping the E-Business Suite as a spoke system to the Customer Data Hub, implementing organizations are able to get the Hub up and running in parallel with their operational business systems, without spending extensive time on transactional business flow integration testing. In addition, this methodology allows implementing organizations to refine their single source of customer truth over time, and eventually cut over to the single instance model when appropriate. Just as with any other spoke system mapping, it is required that organizations use the Oracle AS 10g integration services (or other third party middleware), and the Source System Management capabilities within TCA, to keep customer information in sync across all systems, including the E-Business Suite. In addition, implementing organizations will use Customer Data Librarian to keep the customer data clean and duplicate-free.

Pros

- Shorter initial implementation timeframe, given the lower complexity of maintaining the CDH separate from operational systems.
- Minimal risk of adversely impacting customer data, which could impede business and transactional processes.
- Allows implementing organizations to migrate to the single instance model at their own pace, while still taking advantage of a single source of customer reality.
- Allows implementing organizations to patch the CDH with the latest functionality without risk of impacting the existing E-Business Suite footprint.

Cons

- An additional spoke system is created with this model, rather than a reduction of systems; therefore, additional integration development is required.
- Customer data is migrated to the Customer Data Hub initially, and a second effort is then required to sync the Customer Data Hub with the E-Business Suite when consolidation occurs in the future.

Option 3: Customer Data Hub with no E-Business Suite

Of course, customers who are not running any E-Business Suite applications at all can also implement the Customer Data Hub. In this case, the Hub is implemented as the central source of customer truth, and is integrated with all heterogeneous 3rd party systems. Just as with the previous examples, it is required that organizations use the Oracle AS 10g integration services (or other third party middleware), and the Source System Management capabilities within TCA, to keep customer information in sync across all systems. In addition, implementing organizations will use Customer Data Librarian to keep the customer data clean and duplicate-free. This methodology easily allows implementing organizations to choose to replace 3rd party transactional systems at any time with E-Business Suite applications that can sit natively on the Customer Data Hub, thereby continually increasing their investment in a single source of customer reality. In some cases, organizations may never wish to replace any of the transactional systems, and will continue to use them as they are.

Pros

- Allows implementing organizations to begin realizing the value of E-Business Suite functionality, even before they have the E-Business Suite up and running.
- Allows implementing organizations to plug in pieces of the E-Business Suite at their own pace, or keep legacy systems living on if migrating to the E-Business Suite is not an option.
- Provides an easy migration path to the single global instance vision.

Cons

- Continued costs (IT and personnel) associated with maintaining disparate systems, with different technologies and platforms.

Source System Management (SSM)


The Customer Data Hub was developed to consolidate an organization's party-related data from a heterogeneous application landscape into a single repository. Information is shared between external applications (known to the CDH as source, or spoke, systems) and the CDH, providing a consistent view of party-related data across an organization's many business applications. Source systems may take the form of packaged third-party or custom applications that leverage some level of customer/party functionality. Any number of source systems may become the spokes of a Customer Data Hub solution, meaning that they are providers and/or consumers of the hub's data.

Many Customer Data Hub implementation decisions are based on the types of source systems being integrated, as well as the entities within these systems that should be addressed. As such, a prerequisite to implementing the CDH is to identify which source systems will be integrated with the CDH, and for which customer data entities. Once these are identified, the Customer Data Hub provides a set of Source System Management tools to facilitate the mappings.

Source System Management (SSM) functionality enables implementing organizations to store the customer entity mappings between records in the Customer Data Hub and those in disparate source systems. SSM provides the ability to maintain mappings between the CDH and any external source system, including enterprise, legacy, and enrichment source systems, all of which integrate with the CDH. The source system (OS) as well as the record ID (OSR) of the entity in the source system is mapped to the Registry ID of the TCA record, such as the party or contact point.

The basic concepts of Source System Management are explained in the next few sections. For detailed information on Source System Management, please refer to the Oracle Trading Community Architecture Administration Guide (Part No. B10854-04), section 8 Source System Management.

SSM Setup Decisions


There are many entities within the Customer Data Hub that can be mapped to 3rd party source systems. The following is a complete list of the Customer Data Hub entities within SSM that can be mapped to source systems. Note that for the purposes of the Customer Data Hub, implementing organizations will largely focus on the Party layer entities. If the implementing organization is running E-Business Suite applications that touch the Account layer as well, then those entities will come into play.

Party Layer

HZ_PARTIES (Logical Entity = Parties): The HZ_PARTIES table stores basic information about parties that can be shared with any relationship that the party might establish with another party. Although a record in the HZ_PARTIES table represents a unique party, multiple parties can have the same name. The parties can be one of three types:

- Organization (e.g. Oracle Corporation)
- Person (e.g. Jane Doe)
- Relationship (e.g. Jane Doe at Oracle Corporation)

Party records can be created and updated using third party data sources such as Dun & Bradstreet's Global Data Products.

HZ_PARTY_SITES (Logical Entity = Addresses): The HZ_PARTY_SITES table links a party (see HZ_PARTIES) and a location (see HZ_LOCATIONS) and stores location-specific party information such as MAILSTOP and ADDRESSEE. One party can optionally have one or more party sites. One location can optionally be used by one or more parties. For example, 500 Oracle Parkway can be specified as a party site for Oracle Corporation. This party site can then be used for multiple customer accounts within the same party.

HZ_CONTACT_POINTS (Logical Entity = Contact Points): The HZ_CONTACT_POINTS table stores information about how to communicate with parties or party sites using electronic media or methods such as Electronic Data Interchange (EDI), e-mail, telephone, telex, and the Internet. For example, telephone-related data can include the type of telephone line, a touch-tone indicator, a country code, the area code, the telephone number, and an extension number to a specific handset.

HZ_LOCATIONS (Logical Entity = Addresses): The HZ_LOCATIONS table stores information about a delivery or postal address such as building number, street address, postal code, and directions to a location. This table provides physical location information about parties (organizations and people) and customer accounts. For example, you can store information such as Building 300, 500 Oracle Parkway, 94065, and "Take the Ralston Avenue exit from highway 101, go east on Twin Dolphins Drive, turn left on Oracle Parkway, watch for the Building 300 sign on your right."

HZ_ORG_CONTACTS (Logical Entity = Contacts): The HZ_ORG_CONTACTS table stores information about the position of the contact for a party or party site. The records in this table provide information about a contact position such as JOB_TITLE and RANK, as well as general contact information. This table is not used to store information about a specific person or organization, such as name and identification codes; that information is stored in the HZ_PARTIES table. For example, this table may include a record for the position of vice president of manufacturing that indicates that the contact is a senior executive, but it would not include the name of the person in that position.

HZ_ORG_CONTACT_ROLES (Logical Entity = Contacts): The HZ_ORG_CONTACT_ROLES table stores information about the role of the contact position that is specified in the HZ_ORG_CONTACTS table. Contacts may have multiple roles. For example, a vice president of manufacturing may have a custom-defined role as a member of a capital expenditures review board.

Account Layer

HZ_CUST_ACCOUNTS (Logical Entity = Accounts): The HZ_CUST_ACCOUNTS table stores information about financial relationships established with a party. Since a party can have multiple customer accounts, this table may contain several records for a single party. For example, an individual person may establish a personal account, a family account, and a professional account for a consulting practice. Note that the focus of this table is a business relationship and how transactions are conducted in the relationship.

HZ_CUST_ACCOUNT_ROLES (Logical Entity = Accounts): The HZ_CUST_ACCOUNT_ROLES table stores information about a role or function that a party performs in relation to a customer account. For example, Jane Doe might be a legal contact for Vision Corporation.

HZ_CUST_ACCT_SITES_ALL (Logical Entity = Accounts): The HZ_CUST_ACCT_SITES_ALL table stores information about customer account sites.

HZ_CUST_SITE_USES_ALL (Logical Entity = Accounts): The HZ_CUST_SITE_USES_ALL table stores information about the uses of customer account sites.

SSM Setup Details

Source System setups must be established for all external systems interacting with the Customer Data Hub, whether they are sources of purchased data (such as Dun & Bradstreet) or internal transaction systems. This entails creating the source system record in the SSM Administration console, and mapping the entities between a particular source system and the Customer Data Hub. All entities of a customer within the CDH that are mapped to external systems are tracked through Source System Management. Examples of such entities include parties, locations, contacts, accounts, etc. Source systems and their characteristics are established in the SSM Administration console as follows:

1. Enter Source System header information.

2. Determine whether multiple identifier records in the Source System will be allowed to point to a single party in TCA.

Source System Management allows any given Customer Data Hub record to be mapped to multiple records across source systems that represent the same person or organization. So, if Business World Inc. exists in the CDH as Registry ID 123, it can be mapped to SAP's version of Business World, with ID 456, and Siebel's version of Business World, with ID 789. In addition, if enabled, one CDH record can be mapped to multiple records within a particular source system. This is useful for cases where a customer record is identified as a duplicate in the Hub and is merged accordingly, but still points back to the source system that originally passed these duplicates to the Customer Data Hub and continues to operate with these duplicate records. The Source System mapping between the Customer Data Hub and external source systems allows implementing organizations to:

- Consolidate multiple customer databases, from various applications across different platforms, into the Customer Data Hub.
- Create, maintain, share, and leverage an operational, single view of customer information, or Customer Hub, across the enterprise.
- Continue to operate source systems as usual, sending updates to and receiving updates from the Customer Data Hub.

Circumstances may arise where a particular source system contains multiple rows for a given entity within the Customer Data Hub. For example, an SAP system may break addresses out into multiple rows, whereas the Customer Data Hub contains this information in one row. If this is the case, it is recommended that the source system provide a unique key for the entity in question (in this example, the address entity) upon integration. This unique key must be generated by the implementing organization as part of their implementation efforts. Additionally, 3rd party systems will often contain fields that do not directly map to fields in the Customer Data Hub. The Customer Data Hub provides descriptive flexfields that can be set up to accommodate data points from other systems that are not stored out-of-the-box in the CDH. These flexfields are available at many levels, including the Party, Location, Party Site, Contact, etc. Another option, available for the first time in this release, is Data Model Extensibility. Extensible attributes available through Data Model Extensibility allow the implementing organization to extend supported TCA entities without limit. See the section on Data Model Extensibility for more details.

Source System Management Assignments


Once source systems are established within the SSM Administration console, CDH Administrators can begin physically mapping the external customer records to records in the Customer Data Hub. Because it would be too tedious an exercise to manually map each and every record, the Customer Data Hub offers two import utilities to facilitate the mapping process. In addition, manual mapping capabilities exist within the Customer Data Librarian responsibility for those one-off cases where manual mapping is more efficient than import.

SSM via Bulk Import

As mentioned earlier in this document, a Bulk Import utility is provided as part of the TCA enabling infrastructure. This utility is meant to be used to import large amounts of data at one time, and therefore does not use row-by-row processing, leading to better performance and decreased load time. It is important to note that Bulk Import will only process a batch of records from one source system at a time; therefore the existence of multiple source systems will require data to be loaded in multiple batches. Upon Bulk Import, a unique ID must be provided for each record in the interface table. This unique ID is actually a combination of two fields (Original System and Original System Reference, or OS/OSR). If the source system inherently does not have a unique key for the records being passed to the Hub, the implementing organization must ensure one is created. The unique ID must be a combination of:

- The source system ID (termed the OS, or Original System), defined through Source System Management (SSM) administration, which identifies the source that the imported data comes from.
- The source record ID (termed the OSR, or Original System Reference), which identifies the record in the source system.

In the HZ_PARTIES, HZ_PARTY_SITES, HZ_CONTACT_POINTS, and HZ_ORG_CONTACTS tables, these unique IDs are treated as source IDs for Source System Management. In other tables that do not support SSM, the logical key (e.g. the combination of the parent OS/OSR) serves as the ID for the record. The OS and OSR serve to link:

- Details of each party together, as foreign key references among interface tables.
- The data in the source system, now in the interface tables, to the target TCA tables, for making updates to existing parties in the Registry from the same source data.

Implementing organizations should maintain all unique IDs locally in the respective source system, and use them for future updates to specific information in the Customer Data Hub.

Note: Bulk Import functionality is only for Party level information. Therefore, any information being imported into the Account level must be done so using the Oracle Receivables Customer Interface program or via Public APIs. If attempting to load accounts for parties loaded through Bulk Import, verify that OSRs are unique across all sources. See the Oracle Trading Community Architecture Administration Guide for more details.
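As an illustration, a row staged for Bulk Import might be inserted into the parties interface table as follows. This is a minimal sketch: the interface table and column names (HZ_IMP_PARTIES, PARTY_ORIG_SYSTEM, PARTY_ORIG_SYSTEM_REFERENCE) should be verified against the TCA Administration Guide for your patch level, the source system code and record key are assumed values, and the batch must first be defined through the import batch setup:

  INSERT INTO hz_imp_parties (
    batch_id,                     -- import batch defined via the import batch setup
    party_orig_system,            -- OS: source system code defined in SSM
    party_orig_system_reference,  -- OSR: unique record key in the source system
    party_type,
    organization_name
  ) VALUES (
    :batch_id,
    'SIEBEL',                     -- assumed SSM source system code
    'SBL-000456',                 -- assumed unique key from the spoke system
    'ORGANIZATION',
    'Business World Inc.'
  );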

SSM via PL/SQL APIs

Customer records can be imported into the Customer Data Hub via the public PL/SQL APIs. It is recommended that the APIs be used for customer import with small volumes of data. When importing records into the Customer Data Hub via the PL/SQL APIs, make sure to pass the OS/OSR to the TCA public APIs, as SSM requires it. As with the Bulk Import SSM process, the PL/SQL APIs require a unique source system ID and source record reference (OS and OSR); a sketch of creating an SSM mapping programmatically appears at the end of this section. Please reference the Oracle Trading Community Architecture Technical Implementation Guide (Part No. B13890-02) for additional details.

SSM via Manual Assignment

The Oracle Customers Online module within the Customer Data Hub exposes SSM functionality via the Source Systems tab. Although this functionality is primarily recommended for viewing purposes, Customer Data Librarian users are able to update and add source system mappings for one-off cases as necessary. However, because Bulk Import and the PL/SQL APIs provide the ability to map source systems upon import and integration, it should rarely be necessary to manage SSM manually.
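The following is a minimal sketch of programmatically creating an SSM mapping for an existing party, assuming the HZ_ORIG_SYSTEM_REF_PUB public API available at this release; the source system code, record key, and module name are illustrative assumptions:

  DECLARE
    l_osr_rec       hz_orig_system_ref_pub.orig_sys_reference_rec_type;
    l_return_status VARCHAR2(1);
    l_msg_count     NUMBER;
    l_msg_data      VARCHAR2(2000);
  BEGIN
    l_osr_rec.orig_system           := 'SIEBEL';      -- SSM source system code (OS)
    l_osr_rec.orig_system_reference := 'SBL-000789';  -- record ID in the spoke system (OSR)
    l_osr_rec.owner_table_name      := 'HZ_PARTIES';  -- TCA entity being mapped
    l_osr_rec.owner_table_id        := 123;           -- Registry ID of the Hub party
    l_osr_rec.created_by_module     := 'XX_CDH_SYNC'; -- hypothetical module name

    hz_orig_system_ref_pub.create_orig_system_reference(
      p_init_msg_list          => fnd_api.g_true,
      p_orig_sys_reference_rec => l_osr_rec,
      x_return_status          => l_return_status,
      x_msg_count              => l_msg_count,
      x_msg_data               => l_msg_data);
  END;
  /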

Single Source of Truth (SST)


Single Source of Truth (SST) allows implementing organizations to display a single, blended, best-of-breed customer record in the Customer Data Hub, generated from data coming from all integrated heterogeneous systems. When SST is set up, CDH Administrators have the ability to view Organization and Person profile information across their customer base, and to determine, at the attribute level, which data source's values should be displayed within the Hub. With Single Source of Truth, CDH Administrators can:

- Set up the order of display preference for each Organization and Person profile attribute within the Hub (based on rank or most recently updated).
- Generate, view, and store the Single Source of Truth (SST) record about a customer.
- Determine whether user-entered data can overwrite existing 3rd party data in the Hub.

Prior to setting up SST, end users have the ability to view only user-entered information in the CDH. Once SST is set up for a source system, and data is imported from that system, CDH Administrators can determine which source system's data values will comprise the Single Source of Truth for each given attribute of a customer. After the SST decisions are made, end users of the CDH will see a truly blended, best-of-breed view of each customer in the Customer Data Hub.

Customer Data Hub Administrators create a Single Source of Truth record for a party by selecting which system should be used to display the various attributes about a customer in the Customer Data Hub user interface (Oracle Customers Online). The various systems are ranked in order of preference. So, if system 1 is ranked highest for a particular attribute, that system will provide the SST attribute for all parties in the CDH, assuming the value has been entered in system 1. If system 1 does not have the particular value populated for a certain party record, the next highest ranked system (e.g. system 2) will provide the SST value, and so on. Each attribute in the SST record will have data from either a user-entered or third party data source, depending on the Third Party Data Integration and Source System Management setup, and the availability of data.

Example: A party in the Customer Data Hub has Business World as the user-entered organization name, Business World Inc. as the D&B-provided name, and Business World Ltd. as the value coming from the Siebel source system. With Single Source of Truth functionality, the Customer Data Hub administrator can rank each system for all organization and person profile values, to determine which system is the most accurate for each piece of information. In this case, let's say the Administrator selects D&B as the most accurate for Organization Name, Siebel second, and User Entered third. Upon generation of an SST record, users would see Business World in the user-entered record, Business World Inc. in the D&B record, Business World Ltd. in the Siebel record, and Business World Inc. in the SST record, which takes data from the highest ranked source, in this case D&B.

An alternative to the data source rank option for SST display is most recently updated date. This SST display method displays the attribute value of the most recently updated data source, and does not take ranking into consideration. For detailed information on Single Source of Truth, please refer to the Oracle Trading Community Architecture Administration Guide (Part No. B10854-04), section 6, Third Party Data Integration.

Setting up a new Source System in SST


Out of the box, the Customer Data Hub includes Single Source of Truth functionality for Dun & Bradstreet (D&B) integration. However, all other source systems can be established in SST as well, if applicable to an implementing customer. Release HZ.N/Financials Family Pack G introduces user interfaces that facilitate the creation and maintenance of SST rules.

Step 1: SST Setup

The first step in using SST functionality is to designate which Source Systems are enabled for SST use. This is done within the Create Source System screen (see SSM Setup Details) by checking the Enable Single Source of Truth (SST) checkbox.

Now that the new Source System has been added to the SST, the Customer Data Hub Administrator is ready to setup the SST functionality. The following steps are outlined in detail in the Oracle Trading Community Architecture Administration Guide (Part No. B10854-04):

- Select the SST data display method (data source rank or most recently updated) for Party Profile attributes.
- Define rules to allow or disallow users from overwriting third party data.
- Define create and update data security rules for Other Entities.
- Submit the Third Party Integration Update concurrent program to finish the setup process. The concurrent program updates existing data and dynamically generates new package bodies based on the setup.

Step 2: SST Enrichment UI Setup

The Customer Data Hub Administrator sets the proper rankings for all attributes, including Organization Name, Year Established, etc. For entities such as locations and contact points, the CDH Administrator selects the source (for example, SAP) whose data is visible in the Customer Data Hub.

Once these steps are complete, the Source System is SST enabled, and customer data in the Hub is displayed based on the rankings selected during the setup process.

Transactions Viewer
Oracle Customers Online serves as the viewer for customer information within the Oracle Customer Data Hub. The Transactions Viewer provides users with a true 360-degree view of all customer business transactions. The viewer helps CDH end users make better business decisions by providing real-time transactional information related to campaign-to-cash, problem-to-resolution, and invoice-to-cash business flows. Examples of transaction types included in these flows are: Marketing Campaigns, Marketing Events, Sales Leads, Sales Opportunities, Quotes, Orders, Service Requests, Installed Base, Credits, Debits, Delinquencies, and Broken Promises. The configurable Transactions Viewer is built on a metadata-driven model, which inherently queries E-Business Suite applications for real-time transactional data. However, because this model builds views of transaction data at runtime, and does not physically store transactional information, implementing organizations can easily extend the model to view data from non-E-Business Suite applications as well.

This section provides a detailed understanding of the flexible architecture driving the Transactions Viewer within the Oracle Customers Online (OCO) application, and outlines how implementing organizations can extend this view to display non-E-Business Suite transactions within the Customer Data Hub.

Overview of the Transactions Engine


At a high level, the Transactions Viewer engine in Oracle Customers Online reads the transactional metadata, processes it, and displays the results in the Transactions Viewer. When the Transactions Viewer is accessed by the end user, the engine reads the metadata queries from the supporting views and tables. The engine then creates dynamic view objects for each transaction type, using the queries and the accompanying column information that is provided. These view objects are used to retrieve the data for the transaction type and display it in the Transactions Viewer page. The process is repeated for all transaction types in the metadata model.

Transactions Metadata Model


The Transactions Viewer Metadata Model consists of several tables that store information for a customer's transaction types. At run-time, the Transactions Viewer retrieves and displays the transactions for each customer in the Customer Data Hub by running the queries stored in the metadata model. The following sections describe the Transactions Viewer Metadata Model. These views and tables are leveraged by the Transactions Viewer within the Customer Data Hub.

Much of the key information for supporting the Transactions Viewer resides in two particular views: the Transaction Query view and the Transaction Column view. The Transaction Query view (IMC_THREE_SIXTY_QUERY_VL) is supported by two tables: IMC_THREE_SIXTY_QUERY_B and IMC_THREE_SIXTY_QUERY_TL. The Transaction Column view (IMC_THREE_SIXTY_COLS_VL) is supported by two tables: IMC_THREE_SIXTY_COLS_B and IMC_THREE_SIXTY_COLS_TL.
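A quick way to become familiar with the model is to inspect the seeded transaction types directly, for example:

  -- List the seeded transaction types and whether each is displayed.
  SELECT query_id, transaction_name, query_type_flag, display_flag
    FROM imc_three_sixty_query_vl
   ORDER BY sequence_no;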

Transaction Query View (IMC_THREE_SIXTY_QUERY_VL)

The Transaction Query view is also known as the Header view because it stores metadata relevant to the header of the transaction type, such as the SQL query to be executed, the transaction type's name, header text, etc. Essentially, this view represents the queries used to retrieve data for the Transactions Viewer. Each transaction type has a representative record in this view.

query_id (NUMBER): Sequence-generated ID that uniquely identifies a record in the table. When inserting a transaction type's query into the view, the query_id value should not be provided (NULL), as it is produced by the system.

application_id (VARCHAR2): The unique identification number assigned to each E-Business Suite product. It should refer to the product that owns the query. No Transactions Viewer functionality depends on this value.

query_type_flag (VARCHAR2): Valid flag values are 'F', 'T', 'EXTF', and 'EXTT'. The value denotes whether the query is a filter query (F or EXTF) or a transaction query (T or EXTT), and whether the query receives parameters only from the page context (F or T) or binds parameters from separate cross-reference "external" queries (EXTF or EXTT).

filter_count (NUMBER): Number of filters for the transaction type. If 'n' columns in the displayed transaction type are to be used as filters, this value equals 'n'.

display_column_count (NUMBER): Number of columns to be displayed in the table for the transaction type.

product_query1 (VARCHAR2): Query for the transaction details. If the query is 2000 characters or less, only this column needs to be used.

product_query2 through product_query5 (VARCHAR2): Parts 2 through 5 of the query for the transaction details, to be used only if the previous product_query columns are insufficient in size. NULL if unused.

sequence_no (NUMBER): The sequence of the transaction type in the list view.

display_flag (VARCHAR2): Whether the transaction type should be displayed or turned off. Y/N value.

security_function (VARCHAR2): Form Function name used to implement security at the transaction level via the FND form function security scheme. If a transaction type is marked for display (display_flag = 'Y'), an FND Form Function must be created for the transaction type so that it can be displayed.

product_url (VARCHAR2): The drill-down URL for the transaction details. NULL if unused.

transaction_name (VARCHAR2): The name of the transaction type, displayed at the top of the transaction type's table region.

header_text (VARCHAR2): The header text (if any) to be displayed on top of the table region. NULL if unused.

creation_date, created_by, last_update_date, last_updated_by, last_update_login: Standard Who columns.

object_version_number (NUMBER): Version number of the record.

be_code (VARCHAR2): Logical business entity code. For the Customer Data Hub, the value is IMC_TXN_BE_PARTY. The column also defaults to 'IMC_TXN_BE_PARTY' if no value is given.

category_code (VARCHAR2): Category code within a business entity. No functionality is currently provided for the category code.

Transaction Query Parameters Table (IMC_THREE_SIXTY_PARAMS)

Additional metadata, beyond the queries stored in the Transaction Query view, is needed to retrieve the data for a transaction type: the bind parameters for the query, which are bound into the WHERE clause and used to identify the transactions to retrieve. The parameters stored in this table are each tied back to a query and represent values from the page context. For example, a customer's Party ID value is commonly available through the page context and may be bound as a variable to a transaction query to identify transactions pertaining to that customer. While this example uses a single value (Party ID) as the key to identify a customer, the Transactions Viewer infrastructure allows for multi-part keys through the use of multiple bind parameters.

query_id (NUMBER): Query identifier. Foreign key to the IMC_THREE_SIXTY_QUERY_VL view. Each record in the table must have a valid value for the query_id or the ssm_query_id.

ssm_query_id (NUMBER): External query identifier. Foreign key to the IMC_THREE_SIXTY_SSM_QUERY table. Each record in the table must have a valid value for the query_id or the ssm_query_id.

param_position (NUMBER): Order in which returned parameter values must be bound in the transaction query or the external query.

param_name (VARCHAR2): Parameter name in the page context. Example: in Oracle Customers Online, 'ImcPartyId' (the customer's party_id) is a page context parameter.

creation_date, created_by, last_update_date, last_updated_by, last_update_login: Standard Who columns.

application_id (NUMBER): The unique identification number assigned to each E-Business Suite product.

object_version_number (NUMBER): Version number of the record.

External Source System Query Table (IMC_THREE_SIXTY_SSM_QUERY)

In addition to bind parameters from the page context, the Transactions Viewer infrastructure also allows transaction queries to bind parameters from entirely separate SQL logic. In other words, a separate query can be seeded to return a value to bind to the transaction query. These separate queries are stored in the IMC_THREE_SIXTY_SSM_QUERY table. While these queries are referred to as "external source system queries," they can in fact be any SQL query returning a bind parameter value. For example, imagine two systems: the Customer Data Hub and a transactional system, System B. While both systems may have a record for a customer, their internal identifiers for the customer may differ. To retrieve and display the customer's transactions from System B in the Transactions Viewer, a transaction query must be seeded to retrieve the transaction data from System B. To identify the customer in that transaction query, the Transactions Viewer must first determine the customer's identifier in System B. That identifier can be determined through a separate query. Assume that customer John Smith has Party ID 100 in the Customer Data Hub. Additionally, in the Customer Data Hub's Source System Management tables, there is a mapping for John Smith between the Customer Data Hub and System B. The mapping indicates that John Smith's identifier in System B is 5439. A separate external query can be stored in this table to retrieve the value 5439 from the CDH SSM tables. That query, using Party ID 100 from the page context, retrieves System B's identifier (5439) for John Smith. Like the transaction queries, these external source system queries take their page context parameters from IMC_THREE_SIXTY_PARAMS.
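For instance, an external source system query for the example above could resolve the System B identifier from the CDH SSM mapping tables. This is a sketch assuming the HZ_ORIG_SYS_REFERENCES mapping table and a positional bind placeholder; verify both against your release and the Transactions Viewer sample code in Appendix D:

  -- Resolve the spoke system's identifier (e.g. 5439) for a Hub party.
  SELECT orig_system_reference
    FROM hz_orig_sys_references
   WHERE owner_table_name = 'HZ_PARTIES'
     AND owner_table_id   = :1           -- Party ID bound from the page context
     AND orig_system      = 'SYSTEM_B'   -- assumed SSM code for System B
     AND status           = 'A';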

ssm_query_id (NUMBER): Sequence-generated ID that uniquely identifies an external source system query. When inserting a query into this table, the ssm_query_id value should not be provided (NULL), as it is produced by the system.

ssm_query_string (VARCHAR2): Query string for the external source system query.

creation_date, created_by, last_update_date, last_updated_by, last_update_login: Standard Who columns.

application_id (NUMBER): The unique identification number assigned to each E-Business Suite product.

object_version_number (NUMBER): Version number of the record.

External Source System Query Mappings Table (IMC_THREE_SIXTY_SSM_QUERY_MAP)

While the external source system queries stored in IMC_THREE_SIXTY_SSM_QUERY return the bind parameter values for the transaction queries, it is the External Source System Query Mappings table that maps each returned value to the correct transaction query bind parameter. Holding these mappings in a separate table allows the infrastructure to re-use external source system queries for multiple transaction queries. Continuing our previous example, John Smith's identifier in System B is returned through the external source system query. The Transactions Viewer infrastructure then ties the System B identifier to the transaction query retrieving data from System B, using the mappings stored within IMC_THREE_SIXTY_SSM_QUERY_MAP. This transaction query is, once again, stored in IMC_THREE_SIXTY_QUERY_VL.

query_id (NUMBER): Query identifier. Foreign key to the IMC_THREE_SIXTY_QUERY_VL view.

ssm_query_id (NUMBER): External query identifier. Foreign key to the IMC_THREE_SIXTY_SSM_QUERY table.

query_param_source (VARCHAR2): Parameter value source: SQL or PAGECONTEXT.

query_param_position (NUMBER): Order in which returned parameter values must be bound in the main query.

creation_date, created_by, last_update_date, last_updated_by, last_update_login: Standard Who columns.

application_id (NUMBER): The unique identification number assigned to each E-Business Suite product.

object_version_number (NUMBER): Version number of the record.

Transaction Column View (IMC_THREE_SIXTY_COLS_VL)

The Transaction Column view is also known as the Details view because it stores metadata about the columns selected in the query, such as the column label, data type, length, etc. The view describes the layout in which the transactional data will be displayed in the Transactions Viewer.

column_id (NUMBER): Sequence-generated ID that uniquely identifies a column in a table region.

query_id (NUMBER): ID of the query. Foreign key to IMC_THREE_SIXTY_QUERY_VL; indicates the query to which this column belongs.

filter_query_id (NUMBER): Foreign key to the IMC_THREE_SIXTY_QUERY table for a query_id whose QUERY_TYPE is F (filter). If FILTER_FLAG = 'Y' and the filter is an LOV, this column refers to the query used to populate the list of values.

column_name (VARCHAR2): Name of the column in the query. The column name must be the string COLUMN concatenated with a number ranging from 1 to the maximum number of columns in the query, e.g. COLUMN1, COLUMN2, etc. This naming convention must be strictly followed when inserting values into this column.

column_length (NUMBER): Length of the column in the database.

display_flag (VARCHAR2): Specifies whether the column is to be displayed.

sort_flag (VARCHAR2): Specifies whether the column is to be used for sorting.

filter_flag (VARCHAR2): Specifies whether the column is to be used as a filter.

range_filter_flag (VARCHAR2): Specifies whether the column is a range filter or not.

hyperlink_flag (VARCHAR2): Specifies whether the column is a hyperlink or not.

column_data_type (VARCHAR2): Data type of the column.

security_function (VARCHAR2): Form Function name used to implement column-level security.

seq_no (NUMBER): The sequence of the column in the query.

column_label (VARCHAR2): Column header for the column in the table region.

creation_date, created_by, last_update_date, last_updated_by, last_update_login: Standard Who columns.

object_version_number (NUMBER): Version number of the record.

Filtering Transactions
It is important to note that the columns within this metadata model can be used as filters applied to any given transaction type. For instance, a user can filter transaction records based on creation date or status. Filter metadata is captured in the Transaction Column view. Furthermore, if a filter has a list of selectable attributes, then a query to retrieve the list of values (LOV) must be stored in the Transaction Query view. Three types of filters may be implemented within the Transactions Viewer, based on the COLUMN_DATA_TYPE column:

- List of Values (LOV): Used for columns where filter_query_id is not null and a query has been included in IMC_THREE_SIXTY_QUERY_VL to populate the dropdown list for the filter. The QUERY_TYPE_FLAG in the header view must be set to 'F' or 'EXTF' to designate the query as one intended for powering a filter criterion.
- From Date and To Date ranges: Used for columns of data type DATE, and for numbers, where the range flag is set to Y.
- Free Text item: Used for columns where filter_query_id is null and the data type is VARCHAR2. This enables filtering with free text.

Disabling Seeded Transaction Types


To disable a seeded transaction type, or any transaction type, from being shown in the Transactions Viewer, update the value for DISPLAY_FLAG in IMC_THREE_SIXTY_QUERY_VL to 'N'.
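For example, a minimal sketch of such an update follows; the transaction name used here is illustrative, and because IMC_THREE_SIXTY_QUERY_VL is a view, your environment may require the imc_three_sixty_query_pkg.update_row API described later in this section rather than a direct UPDATE:

-- Hide an illustrative transaction type from the Transactions Viewer
UPDATE imc_three_sixty_query_vl
   SET display_flag = 'N'
 WHERE transaction_name = 'Events';
COMMIT;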

Extending the Transactions Viewer Metadata Model


As mentioned, the Customer Data Hub Transactions Viewer inherently queries E-Business Suite tables to display the relevant transactions for any given customer record. However, because of the extensible metadata model and Source System Management functionality, implementing organizations can easily extend the queries to point to non-E-Business Suite applications, thereby achieving a 360-degree transactional view across all spoke systems integrating with the CDH. This section outlines the steps for extending the Transactions Metadata Model to accommodate non-E-Business Suite applications. Although the following has been tested and proven in a proof-of-concept fashion, it is recommended that systems integrators perform extensive testing to ensure its viability for specific customer requirements.

Example 1: Bind parameters from only the page context

This section details the first of two scenarios for the extension of the Transactions Viewer. In this case, the Transactions Viewer is to be extended with a transaction query that identifies the customer from only the page context. Consider a transaction query that receives its parameters directly from the page context. Imagine that we would like to retrieve information regarding all of the orders for a given customer from an order management system, using the Party ID and Party Name. In this scenario, entries are needed in three of the five views/tables: IMC_THREE_SIXTY_QUERY_VL, IMC_THREE_SIXTY_COLS_VL, and IMC_THREE_SIXTY_PARAMS.

An entry in IMC_THREE_SIXTY_QUERY_VL is created for the query. Assume that it has a query_id of 1. The query_type is listed as T for this particular scenario, to indicate a transaction type that receives its bind parameters from only the page context. Note that not all columns are represented in these examples:

IMC_THREE_SIXTY_QUERY_VL

query_id  product_query1                                                        query_type
1         SELECT o.orderNum column1, o.orderName column2, o.numItems column3    T
          FROM orders o WHERE o.custID=:1 AND o.custName=:2

For this query, the parameters are found from the page context, directly from the parameters table (IMC_THREE_SIXTY_PARAMS). Two entries exist in the table, representing the two parameters that are to be bound to the transaction query. Note that a parameter position of zero indicates that the parameter is the first to be bound to the transaction query, corresponding to o.custID:
IMC_THREE_SIXTY_PARAMS

query_id  ssm_query_id  param_position  param_source
1         -             0               ImcPartyId
1         -             1               ImcPartyName

Column formatting information is stored in the Transaction Column View. In our example, the three columns that we have retrieved from the transactional order system (Order Number, Order Name, and Number of Items) are represented by three entries in the IMC_THREE_SIXTY_COLS_VL view.
IMC_THREE_SIXTY_COLS_VL

query_id  column_name  seq_no  column_data_type  column_label
1         column1      0       NUMBER            Order Number
1         column2      1       VARCHAR2          Order Name
1         column3      2       NUMBER            Number of Items

In this scenario, the Transactions Viewer Engine will bind the page context values of Party ID and Party Name to the parameters of the main transaction query (custID and custName, respectively). The query will return the transactional data, which will be formatted and displayed in three columns: Order Number, Order Name, and Number of Items.

Example 2: Bind parameters from separate queries and the page context

In addition to variables from the page context, the Transactions Viewer infrastructure also provides the ability to bind parameters to a transaction query from a separate SQL statement. This section details the second of two scenarios for the extension of the Transactions Viewer. In this case, the Transactions Viewer is to be extended with a transaction query that identifies the customer with one or more parameters found using external source system queries. All five views and tables are used in this example.

Consider the same example as scenario 1, with the exception that the customer ID value for our order management system is not the same as the Party ID. Instead, we must access separate source system tables, namely the Customer Data Hub's Source System Management tables, to determine the customer ID value for the order management system. We determine that the Party Name, from the page context, can still be used as the customer name. Note that not all columns are represented in these examples. This type of query requires entries in all five views/tables, as an external source system query must be run. The Transaction Query view record is marked with type EXTT so that the infrastructure can recognize this fact. Any query with type EXTT may have bind parameters from either the page context or a separate SQL query. Assuming that the transaction query for this example is query_id = 2, the transaction query has a single entry in IMC_THREE_SIXTY_QUERY_VL.

Note: The entries in the tables and views from Example 1 remain in this example to show the differences.

A new entry is needed in the Transaction Query View to represent this query. Note that the only difference is the EXTT query type.
IMC_THREE_SIXTY_QUERY_VL

query_id  product_query1                                                        query_type
1         SELECT o.orderNum column1, o.orderName column2, o.numItems column3    T
          FROM orders o WHERE o.custID=:1 AND o.custName=:2
2         SELECT o.orderNum column1, o.orderName column2, o.numItems column3    EXTT
          FROM orders o WHERE o.custID=:1 AND o.custName=:2

A new entry is created in the IMC_THREE_SIXTY_SSM_QUERY table. It represents the query that will be run against the source system table to determine what the customer ID value is in the order management system. One bind parameter is needed for this query: the Party ID value, which translates to the owner_table_id in the hz_orig_sys_references table.
IMC_THREE_SIXTY_SSM_QUERY

ssm_query_id  ssm_query_string
500           SELECT s.orig_system_reference FROM hz_orig_sys_references s
              WHERE s.orig_system='orders' AND s.owner_table_name='HZ_PARTIES'
              AND s.owner_table_id=:1

Unlike queries of query_type T, which receive their bind parameters directly from IMC_THREE_SIXTY_PARAMS, the bind parameters for a query of type EXTT are found through IMC_THREE_SIXTY_SSM_QUERY_MAP. For instance, the two parameters from the transaction query are represented with two separate entries in IMC_THREE_SIXTY_SSM_QUERY_MAP, one pointing to the external source system query for customer ID and another to the page context for customer name:
IMC_THREE_SIXTY_SSM_QUERY_MAP

query_id  ssm_query_id  query_param_source  query_param_position
2         500           SQL                 0
2         -             PAGE                1

The above entries tell us that the first parameter of the transaction query (custID, query_param_position = 0) is found through the external source system query with ssm_query_id 500. The second parameter (custName, query_param_position = 1) is found through the page context. Like the main transaction query, this external source system query similarly has bind parameters that are retrieved from the IMC_THREE_SIXTY_PARAMS table. For this transaction type, there are two total entries in the IMC_THREE_SIXTY_PARAMS table:
IMC_THREE_SIXTY_PARAMS

query_id  ssm_query_id  param_position  param_source
1         -             0               ImcPartyId
1         -             1               ImcPartyName
2         500           0               ImcPartyId
-         -             1               ImcPartyName

Parameters from the page context that are to be bound to an EXTT query have a null value for the query_id. Parameters from the page context that are to be bound to a transaction query have a null value for the ssm_query_id. Note that for the SSM query's parameter, param_position represents the position of ImcPartyId in ssm_query 500. The values for the Transaction Column View remain the same between both scenarios, as the information returned is formatted identically:
IMC_THREE_SIXTY_COLS_VL

query_id  column_name  seq_no  column_data_type  column_label
1         column1      0       NUMBER            Order Number
1         column2      1       VARCHAR2          Order Name
1         column3      2       NUMBER            Number of Items
2         column1      0       NUMBER            Order Number
2         column2      1       VARCHAR2          Order Name
2         column3      2       NUMBER            Number of Items

In the above scenario, the Transactions Viewer Engine would first execute the SQL query for ssm_query_id 500. The value returned by the query would then be bound to the main transaction query, along with another page context parameter. The main transaction query would then be executed to retrieve the transactions and display them in the table region of the Transactions Viewer.

How to Populate the Transactions Metadata

When extending the Transactions Viewer, metadata must be inserted into the data model using a set of PL/SQL APIs, as described in the examples above. While those examples only detail the columns most important for a conceptual understanding, each column of metadata in each view or table should be seeded for the row that is inserted. For each view or table in the data model, a corresponding PL/SQL package allows for the insertion of metadata:

View or Table                    Corresponding PL/SQL Package
IMC_THREE_SIXTY_QUERY_VL         imc_three_sixty_query_pkg
IMC_THREE_SIXTY_COLS_VL          imc_three_sixty_cols_pkg
IMC_THREE_SIXTY_PARAMS           imc_three_sixty_params_pkg
IMC_THREE_SIXTY_SSM_QUERY        imc_three_sixty_ssm_query_pkg
IMC_THREE_SIXTY_SSM_QUERY_MAP    imc_three_sixty_query_map_pkg

Each PL/SQL package includes a procedure named insert_row that inserts a single row into the view or table. For example, to insert a new external source system query into IMC_THREE_SIXTY_SSM_QUERY, the procedure imc_three_sixty_ssm_query_pkg.insert_row is called. While the sample code below demonstrates how to insert metadata for IMC_THREE_SIXTY_QUERY_VL, the process is identical for the other views and tables. Calling the PL/SQL API procedures consists of a four-step process:
Step 1: Create an FND Form Function for the transaction type

Each transaction type must have a security function value so that the transaction type can be displayed in the Transactions Viewer. The security function is then tied to the transaction type through the IMC_THREE_SIXTY_QUERY_VL view. To create the FND function:

1. Log into Oracle Applications under "System Administrator"
2. Navigate to Application -> Function
3. Create a new function, keeping in mind a few points:
   o The Function Name will be passed into IMC_THREE_SIXTY_QUERY_VL as the "Security Function" value
   o Under Properties, the value for Type should be "Database Provider Portlet"
   o Provide the name of your transaction type under Web HTML -> HTML Call
4. Add the new function to the menu IMC_NG_360_SECURITY_FUNCTIONS, which houses all of the Transactions Viewer security functions
Step 2: Declare variables for the view or table columns

DECLARE
  x_query_id              NUMBER;
  x_application_id        NUMBER;
  x_query_type_flag       VARCHAR2(30);
  x_product_query1        VARCHAR2(2000);
  x_product_query2        VARCHAR2(2000);
  x_product_query3        VARCHAR2(2000);
  x_product_query4        VARCHAR2(2000);
  x_product_query5        VARCHAR2(2000);
  x_sequence_no           NUMBER;
  x_security_function     VARCHAR2(255);
  x_display_flag          VARCHAR2(1);
  x_filter_count          NUMBER;
  x_display_column_count  NUMBER;
  x_product_url           VARCHAR2(2000);
  x_be_code               VARCHAR2(30);
  x_category_code         VARCHAR2(30);
  x_transaction_name      VARCHAR2(255);
  x_header_text           VARCHAR2(2000);
  x_creation_date         DATE;
  x_created_by            NUMBER;
  x_last_update_date      DATE;
  x_last_updated_by       NUMBER;
  x_last_update_login     NUMBER;
  x_object_version_number NUMBER;

Step 3: Assign values to the variables

BEGIN
  x_query_id := NULL;
  x_application_id := 503;
  x_query_type_flag := 'T';
  x_product_query1 := 'SELECT e.event_offer_name column1, lk1.meaning column2, lk2.meaning column3, e.source_code column4, e.event_start_date column5, e.event_end_date column6, v.venue_name column7, lk3.meaning column8, lk4.meaning column9, lk5.meaning column10, '' '' column11 FROM apps.ams_event_offers_vl e, ams_act_lists a, ams_list_entries le, fnd_lookups lk1, ams_lookups lk2, ams_venues_vl v, fnd_lookups lk3, fnd_lookups lk4, fnd_lookups lk5 WHERE le.party_id = :1 AND a.list_act_type = ''TARGET'' AND a.list_used_by = e.event_object_type AND a.list_used_by_id = e.event_offer_id AND le.list_header_id = a.list_header_id AND le.list_entry_source_system_type = ''PERSON_LIST''';
  x_product_query2 := ' UNION ALL SELECT e.event_offer_name column1, lk1.meaning column2, lk2.meaning column3, e.source_code column4, e.event_start_date column5, e.event_end_date column6, v.venue_name column7, lk3.meaning column8, lk4.meaning column9, lk5.meaning column10, '' '' column11 FROM ams_event_offers_vl e, ams_event_registrations r, fnd_lookups lk1, ams_lookups lk2, ams_venues_vl v, fnd_lookups lk3, fnd_lookups lk4, fnd_lookups lk5 WHERE e.event_offer_id = r.event_offer_id AND v.venue_id(+) = e.event_venue_id AND lk1.lookup_type = ''YES_NO'' AND lk1.lookup_code = e.event_standalone_flag AND lk2.lookup_type(+) = ''AMS_EVENT_TYPE'' AND lk2.lookup_code(+) = e.event_type_code AND lk3.lookup_type = ''YES_NO'' AND lk4.lookup_type = ''YES_NO'' AND lk4.lookup_code = ''Y'' AND lk5.lookup_type = ''YES_NO'' AND lk5.lookup_code = r.attended_flag AND r.system_status_code <> ''CANCELLED'' AND r.attendant_contact_id = :1 AND r.attendant_contact_id = r.attendant_party_id';
  x_product_query3 := NULL;
  x_product_query4 := NULL;
  x_product_query5 := NULL;
  x_sequence_no := 2;
  x_security_function := 'IMC_NG_360_EVENTS';
  x_display_flag := 'Y';
  x_filter_count := 4;  -- tells the Transactions engine that 4 filter objects should be created
                        -- on the page; this value should equal the number of columns designated as filters
  x_display_column_count := 11;  -- equals the number of columns selected in the query
  x_product_url := NULL;
  x_be_code := 'IMC_TXN_BE_PARTY';
  x_category_code := NULL;
  x_transaction_name := 'Events';
  x_header_text := NULL;
  x_creation_date := SYSDATE;
  x_created_by := 1;
  x_last_update_date := SYSDATE;
  x_last_updated_by := 1;
  x_last_update_login := 1;
  x_object_version_number := NULL;

Step 4: Call the PL/SQL API procedure with the variables as parameters

  imc_three_sixty_query_pkg.insert_row(x_query_id, x_application_id,
    x_query_type_flag, x_product_query1, x_product_query2, x_product_query3,
    x_product_query4, x_product_query5, x_sequence_no, x_security_function,
    x_display_flag, x_filter_count, x_display_column_count, x_product_url,
    x_be_code, x_category_code, x_transaction_name, x_header_text,
    x_creation_date, x_created_by, x_last_update_date, x_last_updated_by,
    x_last_update_login, x_object_version_number);

For additional sample code, please refer to the attached appendix section Appendix D: Transactions Viewer Sample Code.

How to Update the Metadata

Metadata can be updated in a similar way using the update_row procedure in each PL/SQL package.

How to Add Filter Queries to a Transaction Type

Each column that is returned for a transaction type can be used as a filter for that type in the Transactions Viewer. Filter columns that require a dropdown list of values to choose from must have a filter query to provide those values. These filter queries are stored in the IMC_THREE_SIXTY_QUERY_VL view and adhere to the same guidelines for binding parameters as the transaction queries. For filter queries that receive their parameters directly from the page context, a query type of 'F' is used. For filter queries that receive one or more of their bind parameters from separate SQL queries, a query type of 'EXTF' is used. The identifiers for these queries (the query_ids) from IMC_THREE_SIXTY_QUERY_VL are referenced as the filter_query_id in the IMC_THREE_SIXTY_COLS_VL view for the particular filter column.
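As a minimal sketch, the following shows how a page-context filter query (type 'F') for the hypothetical orders example might be seeded. All values passed here are illustrative, and the query_id returned by the API would then be referenced as the filter_query_id of the filter column in IMC_THREE_SIXTY_COLS_VL:

DECLARE
  x_query_id NUMBER := NULL;  -- populated by the API
BEGIN
  imc_three_sixty_query_pkg.insert_row(
    x_query_id,                                        -- query_id
    503,                                               -- application_id (illustrative)
    'F',                                               -- query_type_flag: filter LOV query
    'SELECT DISTINCT o.status column1 FROM orders o',  -- product_query1 (hypothetical table)
    NULL, NULL, NULL, NULL,                            -- product_query2 through product_query5
    1,                                                 -- sequence_no
    NULL,                                              -- security_function
    'Y',                                               -- display_flag
    0,                                                 -- filter_count (assumed: a filter query has no filters of its own)
    1,                                                 -- display_column_count
    NULL,                                              -- product_url
    'IMC_TXN_BE_PARTY',                                -- be_code
    NULL,                                              -- category_code
    'Order Status LOV',                                -- transaction_name (illustrative)
    NULL,                                              -- header_text
    SYSDATE, 1, SYSDATE, 1, 1,                         -- standard Who values
    NULL);                                             -- object_version_number
END;
/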

Transactions Metadata Model Extension for Non-Oracle Databases


Customer information that spans non-Oracle databases can be viewed using Oracle Heterogeneous Services (OHS). The Oracle database server accepts SQL statements that query data stored in different databases. The Oracle database server, along with the Heterogeneous Services component, processes the SQL statements and passes the appropriate SQL directly to other Oracle databases, or through gateways to non-Oracle databases. The Oracle database server then combines the results from the different sources to return the final result set. This enables a query to be processed so that it spans local and remote Oracle databases as well as non-Oracle database systems.

Oracle Heterogeneous Connectivity Process Architecture

Heterogeneous Services is an integrated feature within the Oracle database server, and provides the generic technology for accessing non-Oracle systems from the Oracle database server. Heterogeneous Services enables users to use:

- Oracle SQL statements to transparently access data stored in non-Oracle systems as if the data resided within an Oracle database server.
- Oracle procedure calls to transparently access non-Oracle systems, services, or application programming interfaces (APIs) from your Oracle distributed environment.

The Heterogeneous Services component in the Oracle database server talks to a Heterogeneous Services agent process, which, in turn, talks to the non-Oracle system. A Heterogeneous Services agent is the process through which an Oracle server connects to a non-Oracle system. The agent process that accesses a non-Oracle system is called a gateway. Access to all gateways goes through the Heterogeneous Services component in the Oracle server, and all gateways contain the same agent-generic code. Each gateway has a different driver that maps the Heterogeneous Services application programming interface (API) to the client API of the non-Oracle system.

Generic Connectivity and Oracle Transparent Gateways provide applications direct access to data in non-Oracle databases, and the ability to access that data transparently from an Oracle environment. You can create synonyms for the objects in a non-Oracle database and refer to them without having to specify a physical location. This transparency eliminates the need for application developers to customize their applications to access data from different non-Oracle systems, thus decreasing development effort and increasing the mobility of the application. It also eliminates the need to upload and download large amounts of data to different locations, thus reducing data duplication and saving disk storage space.

Database Links to a Non-Oracle Database

Heterogeneous Services makes a non-Oracle database appear as a remote Oracle database server. To access or manipulate tables, or to execute procedures, in the non-Oracle system, administrators must create a database link that specifies the connect descriptor for the non-Oracle database. Use the following syntax to create a link to a non-Oracle system (substitute your own values for link_name, user, password, and non_oracle_system):
CREATE DATABASE LINK link_name CONNECT TO user IDENTIFIED BY password USING 'non_oracle_system';

If a non-Oracle system is referenced, then HS translates the SQL statement or PL/SQL remote procedure call into the appropriate statement at the non-Oracle system. You can access tables and procedures at the non-Oracle system by qualifying the tables and procedures with the database link. This operation is identical to accessing tables and procedures at a remote Oracle database server. Consider the following example that accesses a non-Oracle system through a database link:
SELECT * FROM EMP@link_name;

Heterogeneous Services translates the Oracle SQL statement into the SQL dialect of the target system and then executes the translated SQL statement at the non-Oracle system.
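For example, a synonym can hide the remote location entirely, so that a transaction query in the Transactions Viewer metadata references the object as if it were local (the link and table names below are illustrative, continuing the orders example from earlier in this section):

CREATE SYNONYM orders FOR orders@link_name;

-- Transaction queries seeded in the metadata can now reference the synonym directly:
SELECT o.orderNum, o.orderName FROM orders o WHERE o.custID = :1;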

Note:

While this information is provided for your benefit, it is important that you thoroughly test transaction queries through a separate database tool prior to seeding the metadata.

Please refer to the Oracle 9i Heterogeneous Connectivity Administrator's Guide for additional details.

Data Model Extensibility


The Trading Community Architecture (TCA) data model is comprised of entities and attributes that define a customer and that tend to be common among implementing organizations. Still, it is not at all uncommon for one organization to have vastly different business needs for customer information than another organization implementing or using the Customer Data Hub. From a product development perspective, it does not make sense for TCA to incorporate customer-specific attribute requests that are not re-usable by other customers into the data model; in order to facilitate the ability to track customer data points that are necessary for the deploying organization, flexfields and data model extensibility are available.

Flexfields have been available within virtually all TCA tables in the data model for several releases, and will remain. HZ.N/Financials Family Pack G delivers an alternative for implementing organizations to capture customer data points. Data model extensibility provides the deploying organization with the ability to create an unrestricted number of additional attributes, and to logically group attributes together for optimal functional coherence. Extensible attributes can currently be added to the HZ_ORGANIZATION_PROFILES, HZ_PERSON_PROFILES, HZ_LOCATIONS and HZ_PARTY_SITES tables in the TCA schema. The benefits of data model extensibility that make its use superior to that of flexfields include the ability to logically group attributes, and unlimited extension versus a pre-determined number of additional attributes.

It is important to note that the introduction of data model extensibility does not desupport flexfields within the TCA data model; additionally, a migration from flexfields to extensible attributes is not provided. It is TCA Product Development's recommendation that new implementations of the Customer Data Hub favor extensible attributes over flexfields, but it is up to the deploying organization to use extensible attributes, flexfields, or a combination of both to capture customer data points that are not supported within the data model.

Setting up Extensible Attributes


Extending the data model, while it sounds like a task fit only for a Database Administrator, is relatively simple and can usually be accomplished by a functional resource. Before beginning the process of setting up extensible attributes, the following questions should be answered:

- To which entities (organization, person, location and party site) within TCA do we need to add additional attributes?
- Within each entity, what additional data points do we need to capture?
- For each data point, do we allow only one value, or do we allow multiple?
- For each data point, what values are valid if made available through a choice list?
- How are the data points for each entity grouped together? Is it one large group of attributes for the entity, or are there two or more sub-groupings per entity?
- How do we want these groups of attributes assembled on the page when we view customer information through Oracle Customers Online?

The following sections give a brief overview of the extensible attributes setup. Please refer to the Oracle Trading Community Architecture Administration Guide (Part No. B10854-04) for complete setup details.

Step 1: Define Attribute Groups and Attributes

Extensible attributes are always grouped together. Whether there is only one extensible attribute or hundreds for a given entity, at least one Attribute Group must be created. Once the Attribute Group is established, attributes may be added to it. When an Attribute Group is later associated with a page, all of the attributes in that Attribute Group will appear. The following screen shots show the extensible attributes console where Attribute Groups and associated attributes are defined:

Step 2: Associate Attribute Groups with a Page

After setting up the Attribute Groups and their respective attributes, which adds these new attributes to the data model, the Attribute Groups can be associated with one or more pages. Adding an Attribute Group to a page makes it available for view and update within the Oracle Customers Online application. Pages are embedded in the UI for which they are set up (the Organization Profile, for example), and are accessed by making a selection from a dropdown within a screen in Oracle Customers Online. The following screen shot displays how a page with extensible Attribute Groups is created:

Integration
Oracle Integration provides a comprehensive, open, standards-based solution that simplifies your IT infrastructure and allows you to rapidly deploy your customer data management implementation. Oracle can do this better and more quickly, and save you money in the process, because it provides an integrated, single-vendor solution for managing the complexities of your infrastructure. That said, the Oracle Customer Data Hub solution is agnostic to your choice of middleware, which represents an integral component of the overall solution's architecture, essential for messaging, transportation and transformation of the underlying customer data structures as represented within spoke systems vis-à-vis the Oracle Trading Community Architecture that powers the Oracle Customer Data Hub. Oracle Application Server 10g Integration consists of the following solutions that deserve your consideration when implementing the Oracle Customer Data Hub.

Oracle Application Server 10g Integration InterConnect


Oracle Integration InterConnect provides a complete framework to integrate heterogeneous environments, including enterprise applications, legacy systems and databases. It provides a scalable and easy-to-use solution to quickly deploy application-to-application integration scenarios. It supports both point-to-point and hub-and-spoke architectures to address data synchronization requirements across disparate source systems.

This solution is suitable for enterprise application integration that primarily involves either data synchronization or data replication between peer systems with little or no need for process orchestration. The emphasis is on connectivity, transport, transformation and quality of service.

Oracle BPEL Process Manager


The Oracle BPEL Process Manager enables enterprises to design and deploy processes using the Business Process Execution Language (BPEL) standard. It is composed of an easy-to-use BPEL designer, a scalable BPEL server, a monitoring console, and a comprehensive JCA adapter framework that provides connectivity to external source systems. BPEL PM is a plug-and-play, standards-based infrastructure for integrating systems, services and people activities into easy-to-change process flows. It can be used to deliver both composite applications (web service orchestration, J2EE process flows) and data integration applications, including the Oracle Customer Data Hub. The BPEL Process Manager implements a service-oriented architecture (SOA) by loosely coupling web services to flexibly orchestrate business processes, build composite applications and, in the process, construct a high-quality customer master identity within the data hub for use throughout your IT landscape. This, coupled with a standards-based integration messaging infrastructure that includes connectivity, routing and transport, transformation and guaranteed delivery, makes for a compelling choice for customer data integration purposes.

Building the Customer Data Hub


The initial build of the Customer Data Hub can be performed in a variety of ways depending on many variables, including which CDH features will be leveraged, the quality of data in source systems, the implementation timeline, and the number of source systems. Given these variables, CDH Product Management has outlined the various tools that are available for building the Customer Data Hub, as well as the following approach, which represents the most straightforward, consistent method of building the Customer Data Hub. Given that each implementing organization is unique, it is recommended that all companies implementing the Customer Data Hub review the methodology outlined below and modify it to suit their business needs.

Tools and Features for Building the Customer Data Hub


The Customer Data Hub application set includes a Bulk Import utility that should be leveraged for the initial and continual building of the CDH. In addition to the core Bulk Import functionality, there are many complementary features that can be leveraged throughout the CDH build process. This section outlines these features in detail, and provides the foundation for the recommended build methodology that follows.

Bulk Import Overview

The Customer Data Hub Bulk Import utility allows implementing organizations to load data from legacy or external systems in bulk into the Customer Data Hub. This tool greatly streamlines the process of bringing source system customer information into the CDH, and should be used for the initial import of customer records. It is important to note that the Bulk Import tool imports records from the TCA interface tables into the core TCA Registry. As such, it is the implementing organization's responsibility to extract customer data from external source systems and import this data into the TCA Interface tables. It is recommended that the implementing organization use an ETL tool (such as Oracle Warehouse Builder, OWB) to extract the data from source systems, transform the data as necessary, and load the data into the TCA Interface tables (for example, via SQL*Loader). The following steps outline the process for setting up Bulk Import:

- Set up the source system in Source System Management (SSM).
  o You must provide a unique ID for each record in the interface table. The unique ID is a combination of:
    - The source system code, defined through Source System Management (SSM) administration, which identifies the source that the imported data comes from.
    - The source ID, which identifies the record in the source system.
  o Perform this step for all source systems you plan to import customer data from.
- Set up optional Bulk Import functionality, including Bulk Import De-duplication, Automerge, Address Validation, and Data Sharing and Security (discussed in detail in subsequent sections).
- Ensure the following profile options are appropriately set:
  o HZ: Allow Import of Records with Disabled Lookups
  o HZ: Allow Updates of Address Records During Import
  o HZ: Character Value to Indicate NULL During Import
  o HZ: Date Value to Indicate NULL During Import
  o HZ: Error Limit for Import
  o HZ: Execute API Callouts
  o HZ: Number of Workers for Import
  o HZ: Numeric Value to Indicate NULL During Import
  o HZ: Use HR Security During Import
  o HZ: Validate Flexfields During Import
  o HZ: Work Unit Size for Import
- Extract data from Source Systems and load it into the TCA Interface tables (see the SQL*Loader sketch following this list).
  o See the Oracle Trading Community Architecture Reference Guide (Part No. B12311-03) for a comprehensive list of TCA Interface tables.
  o It is recommended that Oracle Warehouse Builder be used as the ETL tool of choice; however, any ETL tool should suffice.
- Use the CDH Bulk Import console to run the following concurrent programs (when applicable) to perform Bulk Import into the TCA Registry:
  o Import Batch to TCA Registry
  o Registry De-duplication

Additional details on implementing Bulk Import functionality can be found in the Oracle Trading Community Architecture Administration Guide (Part No. B10854-04) and the Oracle Trading Community Architecture Reference Guide (Part No. B12311-03).
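As a minimal sketch of the extract-and-load step, assuming organization records staged in a comma-delimited flat file, a SQL*Loader control file along the following lines could populate the HZ_IMP_PARTIES_INT interface table. Verify the table and column names against the TCA Reference Guide for your release; the file name, batch, and source system values here are illustrative:

LOAD DATA
INFILE 'mkt_customers.dat'
APPEND
INTO TABLE hz_imp_parties_int
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
( batch_id                     CONSTANT 1001,            -- import batch defined in the Bulk Import console
  party_orig_system            CONSTANT 'MKT',           -- source system code defined through SSM
  party_orig_system_reference,                           -- unique record ID in the source system
  party_type                   CONSTANT 'ORGANIZATION',
  organization_name
)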
Importing Party Records Using TCA Bulk Import and Associated Accounts Using Customer Interface (Account Import)

It was mentioned earlier in this document that accounts related to parties cannot be imported using TCA Bulk Import. If an implementing organization would like to import accounts, and wishes to take advantage of the Bulk Import feature to load the parties, it will need to embed additional logic within the unique identifier passed as the OSR value. Typical usage of the OS and OSR values in Bulk Import requires that each OS/OSR combination is unique across all source systems, but since the Customer Interface does not have the concept of an OS, implementing organizations will be required to ensure that the OSR itself is unique across all sources. The Bulk Import feature requires that both an OS and an OSR are passed for each party, so for organizations that will be loading accounts to be associated with imported parties, it is suggested that the organization generate the Party OSR value by concatenating the OS and OSR values together, or simply ensure that the party OSR values are unique across all systems. Again, because of the Bulk Import requirements, you will still need to pass an OS value on its own when the parties are loaded.

Example 1: Party A will be loaded by the organization from a marketing system into TCA using Bulk Import, with no intention to later load any accounts for that party using the Customer Interface. The OS for the marketing system from which the party record is being imported is MKT (OS = MKT), and the unique identifier for that party in the marketing system is 12345 (OSR = 12345). In such a case, the OS/OSR combination for the imported record would be MKT/12345.

Example 2: Party B will be loaded by the organization from a marketing system into TCA using Bulk Import, with the intention to later load accounts for that party using the Customer Interface. Again, the OS for the marketing system from which the party record is being imported is MKT (OS = MKT), and the unique identifier for that party in the marketing system is 98765. Since accounts will be imported for this party, the OSR that is passed into TCA via Bulk Import should be a concatenation of the OS and the unique identifier from the OS; therefore, the OS/OSR combination for the imported record would be MKT/MKT98765 (note that you may employ your own method to ensure that the OSR values will be unique for all parties, across all source systems, if desired). When loading the accounts for an existing party via the Customer Interface, pass the OSR as MKT98765 to associate the accounts to Party B.
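A minimal sketch of the Example 2 concatenation during the ETL step, assuming a hypothetical legacy staging table mkt_customers with a cust_id column:

SELECT 'MKT'            AS orig_system,           -- OS passed alongside each record
       'MKT' || cust_id AS orig_system_reference  -- OSR kept unique across all sources
  FROM mkt_customers;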
Bulk Import Options

The Bulk Import utility offers additional features that can be leveraged to enhance and streamline the customer data import process. The following features can be optionally applied upon Bulk Import.
Bulk Import Options: De-duplication

Bulk Import De-duplication is a CDH feature that provides implementing organizations with two options for finding and resolving duplicates upon import. Duplicate customer records can either be identified among the customer information in the interface tables (i.e. disjoint from information currently in the CDH), or by matching incoming customer data with records already existing in the Customer Data Hub. For both options, duplicate findings can be reviewed and altered via the CDH What-if Analysis utility prior to being loaded into the Customer Data Hub.

However, for the purposes of an Oracle CDH implementation, it is recommended that implementing organizations do not identify duplicate third-party records disjoint from CDH customer records prior to import. The reason is that if duplicate third-party records are found and resolved in the interface tables (without pointing to a TCA record), these records will not receive an OS/OSR mapping them back to the originating source system. Only records that are brought into TCA will have an OS/OSR assignment, thereby pointing the record back to the originating source system. If the implementing organization does not have the ability to resolve duplicates in their source system, it will be important for them to find the duplicates within TCA and resolve them there. Once resolved, because an OS/OSR is attached to the merging records, the source systems can be notified of the merge and action can be taken if applicable. As such, it is recommended that implementing organizations identify incoming duplicates against records already in the CDH (not in the interface tables), and import the legacy records accordingly. With this import, the implementing organization can use the What-if Analysis tool to determine whether a merge request should be created upon import, for resolution by a Data Librarian.

In order to leverage Bulk Import De-duplication, the implementing organization will specify a Match Rule (with a Bulk Identification Type) for use in finding like customer records upon import. This match rule is set up in the Customer Data Librarian Match Rule administration console, and is applied to the customer records residing in the TCA interface tables. Once the transformation functions are applied to the respective records in the interface tables, duplicate matching is performed and a match score is generated between the transformed customer records in the interface tables and the existing customer data residing in the TCA registry (staged schema). Customer matches that meet or exceed the match threshold are inserted into a System Duplicate Identification Batch that can be reviewed by a Data Librarian for duplicate resolution. Any records in the interface tables that do not reach a match threshold with a record in the TCA registry are inserted as new records in the Customer Data Hub.
Bulk Import Options: Automerge

Oftentimes exact matches will exist between records already residing in the Hub and records that are being imported from a source system. For these cases, implementing organizations can use the Automerge functionality upon Bulk Import. Automerge allows implementing organizations to automatically merge records that have the highest probability of being duplicates, with no manual intervention. Implementing organizations specify a granular Match Rule, with an Automerge threshold, and apply this Match Rule upon Bulk Import. For example, let's say the implementing organization creates a match rule whereby, if the SSN is an exact match upon import, they would like the records automatically merged. So, a record in the Hub has a Social Security Number of 123-45-6789, and an incoming record residing in the TCA interface tables has the same SSN. Because the Automerge threshold was set on an exact match of SSN, the Automerge feature ensures these records are automatically merged, thereby skipping the step of creating a System Duplicate Identification Batch for Data Librarian review.

Note: Automerge functionality should be used with caution given that there is no way to un-merge a record once it has been merged. Therefore, we recommend only applying Automerge logic to match rules looking for an exact match.

Bulk Import Options: Address Validation

If you choose to validate the addresses in the interface tables before importing them into the TCA Registry, the addresses are validated using address validation adapters. An adapter connects TCA to an external source of information, such as Trillium, which provides the validation service. Each address is validated through the default adapter set up for each country. For example, if the Vision adapter is the default for the United States, then all US addresses in the interface tables are validated against Vision's standard addresses. If an address from the interface tables differs from the validated address, it is updated with the validated address only if it is valid above the threshold defined for the adapter. Such an update is called an address correction. For example, if the adapter configuration has the threshold at the municipality level, an address in the interface tables is corrected if at least its city or town is valid.

Note: Aside from the validation threshold, address correction only occurs if the update does not violate other validations, such as tax validation rules.

Recommended CDH Build Approach

With a solid understanding of the various tools available to build the Customer Data Hub, an implementing organization is now ready to put these concepts into practice within their own implementation. Again, because so many business variables exist, TCA Development has outlined the following high-level methodology, which can be used as a guideline for CDH implementations. Of course, the exact steps and procedures will vary based on the implementing organization's business scenarios and requirements. No matter which CDH build methodology is utilized, two prerequisites to building the Hub apply:

1. Source System Management (SSM) must be set up for the spoke systems.
2. Bulk Import must be set up to import the customer records en masse into the CDH.

In addition, the following general recommendations apply to all CDH implementation scenarios:

- Implementing organizations should import, cleanse, and go live with one source system at a time on the Customer Data Hub.
- If the implementing organization would like to use certain values from their source systems as the Single Source of Truth value, the source system must be created as a content source in SST prior to import.
- Various optional setups can aid the import process, including Address Validation, Bulk Import De-duplication, and Automerge.
- Implementing organizations must use an ETL tool (e.g. Oracle Warehouse Builder) to extract data from source systems and insert it into the TCA Interface tables.

The following diagram represents a high-level process flow for setting up the first source system on the CDH, as well as subsequent source systems. Please note that Step 1 in this methodology assumes that the implementing organization does not have any customer records in TCA today (e.g. new CDH customer). If the customer is already running parts of the E-Business Suite, and therefore already has customers in TCA, they have essentially already done Step 1 of this process. If this is the case, the implementing organization can follow the high level methodology associated with Step 2.

Step 1: Initial Population of CDH

If the implementing organization does not have any records in TCA (e.g. they are not running any E-Business Suite applications), they should begin their CDH build with Step 1: Initial Population of CDH.

The first step of this process is to take a snapshot of the source system customer database so as to have a static baseline from which to import into the TCA Interface tables. Implementing organizations must then use an ETL tool such as Oracle Warehouse Builder to extract customer data from the legacy source system and import this data into the TCA Interface tables. Once source system customer data is in the Interface tables, implementing organizations should use the TCA Bulk Import utility to load the customer data into the CDH. This utility has been optimized to streamline and expedite the import process.

Because there will be a delta in customer information between the time that the snapshot is taken and the time when the CDH is ready to go live, the implementing organization must ensure all changes and updates made to customer data in the source system are identified, or stored in a temporary queue. This queue must be addressed after the initial import to ensure the Customer Data Hub is entirely in sync with the source system. As soon as all customer records are imported into the Customer Data Hub, the implementing organization is ready to turn on real-time integration via the chosen middleware, thereby enabling bi-directional updates between the Hub and the spoke system based on integration, mapping, and transformation logic. All bi-directional synchronization between the Customer Data Hub and source systems will be handled by the selected middleware.

Given that the import of external customer records likely brought a great many duplicates into the CDH, the implementing organization is now ready to take advantage of the Customer Data Hub's data quality utilities. Customer Data Librarian, with its embedded Data Quality Management (DQM) functionality, will be used to clean the customer records residing in the Hub. This will entail running a series of System Duplicate Identification Batches to find duplicate customer records in the system. Once duplicate records are found, the Data Librarian can merge these records, thereby creating a duplicate-free customer data repository. Details on implementing Customer Data Librarian and DQM can be found in the Oracle Customer Data Librarian Implementation Guide (Part No. B12313-03).

Now that the customer data inside the CDH has been cleaned and de-duped, the implementing organization is ready to enrich and enhance their data within the CDH with any third-party data vendor, including D&B, Trillium, etc. If the implementing organization chooses to enrich their information with D&B, they can use the out-of-the-box D&B integration to purchase enriched financial and marketing information on their customer base. In addition, if the implementing organization has chosen to purchase the Enterprise Data package, Relationship Hierarchies will be created for their customer base within the Customer Data Hub. For detailed information on setting up D&B integration and related functionality, please see the Oracle Customers Online Implementation Guide (Part No. A96193-06).

Finally, now that the implementing organization has a cleansed, enriched, and duplicate-free Customer Data Hub TCA repository, they are ready to generate a Single Source of Truth (SST) record for the customers in the Hub. Because the implementing organization has chosen to set up their source system for SST within the Customer Data Hub, they will have the option of selecting any attribute (supported by SST) that was provided via User Entered, D&B, or the Source System as the SST attribute for their Customer Data Hub.

Step 2: Subsequent Population of CDH

Prior to getting to Step 2: Subsequent Population of CDH, it is assumed that the implementing organization has already done one of two things:

1. They are already live on some E-Business Suite applications and have data populated in TCA.
2. They are not live on any E-Business Suite applications but have performed Step 1.

Once the implementing organization has customer data populated in the Customer Data Hub (either via E-Business Suite or Bulk Import), they are ready to bring a new source system live on the Customer Data Hub. When deciding how to integrate subsequent source systems, a similar process as in Step 1 applies; however, with subsequent systems, the implementing organization has additional options. Such options include Bulk Import De-duplication, Automerge upon Import, Address Validation upon Import, and when to generate the SST record.

It is recommended that if the implementing organization chooses to set up their source systems in SST, this functionality be enabled prior to running Bulk Import for subsequent source systems. If the organization is unsure as to the quality of the data being imported, and is therefore skeptical of making a given source system the SST value for any particular attribute, then it is recommended that they set the SST rules to display information from a different data source. This will ensure that the source system being imported will populate the SST tables, even if the values inserted into these tables are not displayed through the UI (due to SST rules). However, just by virtue of having SST set up and running prior to the new source system Bulk Import, the incoming values will be stored in the SST tables, thereby allowing the implementing organization to change the SST rules to show attributes from the source at any time in the future. This provides a scalable, flexible option for implementing organizations to leverage their existing data for operational efficiencies with the Customer Data Hub.
Bulk Import De-duplication

Bulk Import De-duplication functionality is a powerful, proactive tool that helps organizations find duplicate customer data on the way into the Customer Data Hub, prior to it actually hitting the operational data store. Once customer data has already populated the Customer Data Hub TCA Registry, it is recommended that Bulk Import De-duplication be used for all new source systems being brought live onto the CDH going forward. This will ensure that the data coming into the Customer Data Hub is as clean as possible, and will prevent Data Librarians from having to spend extended time finding and merging these duplicates after the records have been inserted into the Hub. In addition, running de-duplication prior to records being inserted into the Customer Data Hub will ensure that minimal unnecessary duplicates are pushed out to source systems without the true intention of the business.
Automerge upon Import

Automerge functionality is an extremely useful tool that can save the implementing organization a great deal of time and effort from a duplicate identification and resolution perspective. When setting up Match Rules for Automerge, it is important that they be created conservatively. In other words, the Match Rules used for Automerge should be tightly defined so as to minimize the risk of automatically merging records that are not truly duplicates. A tightly defined Match Rule is one where the primary Transformation being used is Exact Match, and the combination of attributes included in the Match Rule is distinctly unique (e.g. Social Security Number, Taxpayer ID, or a combination of exact Organization Name and exact Address). When the organization is confident in the Match Rules that they have enabled for Automerge, they can gain true efficiencies in their Customer Data Hub Bulk Import process.
Address Validation

The Customer Data Hub has out-of-the-box adapters to third-party data enrichment and validation vendors, including a Trillium adapter for real-time address validation. If the implementing organization has a relationship with a third-party address validation vendor, or maintains their own address validation logic, they can apply address validation and make adjustments upon Bulk Import. As with the other optional Bulk Import tools, validating addresses upon import into the Customer Data Hub provides a proactive means of ensuring all customer data entering the CDH is accurate.

Appendix A: Oracle Application Server 10g Integration


Communication between the Customer Data Hub and spoke systems can occur via Oracle Application Server 10g, or another middleware solution. For the purpose of providing a middleware integration example in this document, we will cover the bidirectional flow of information between all spoke systems and the Customer Data Hub via Oracle Application Server 10g InterConnect (OracleAS InterConnect) integration. Although OracleAS InterConnect is prominently mentioned for the remainder of this document, the general concepts discussed herein can be applied to any middleware solution being used by the implementing organization. For detailed information on Oracle Application Server Integration, please refer to the Oracle Application Server 10g InterConnect White Paper.

OracleAS InterConnect - Background


Oracle Application Server 10g InterConnect is a comprehensive application integration framework that enables seamless integration of enterprise software. InterConnect is built on top of the Oracle Application Server 10g platform and leverages its underlying services. This robust engine is designed to integrate heterogeneous systems, be they Oracle Applications, non-Oracle applications, or third-party message-oriented middleware. OracleAS InterConnect consists of the following three core components: the OracleAS InterConnect Hub, OracleAS InterConnect Adapters, and the OracleAS InterConnect Development Kit.

OracleAS InterConnect Hub

The OracleAS InterConnect Hub consists of a middle-tier repository server program that communicates with a hub database. The repository provides the following functionality:

- At design time, all integration logic defined in iStudio is stored in tables in the repository as metadata.
- At runtime, the repository provides access to this metadata for adapters to integrate applications.

The repository server is deployed as a stand-alone Java application running outside the database, and its schema contains a set of tables in the Hub database.

Note: The OracleAS InterConnect Hub is different from the Customer Data Hub. In the context of middleware solutions, the term Hub is used to describe the central repository of all integration rules and logic. For the purposes of the Oracle Customer Data Hub solution, CDH will be the true Hub being positioned to the market, while the InterConnect Hub will be a key component in storing the integration rules and logic for integrating the CDH with heterogeneous system landscapes.

OracleAS InterConnect Adapters

OracleAS adapters have two major tasks:

- Provide connectivity between an application and the OracleAS InterConnect Hub.
- Transform and route messages between the application and the OracleAS InterConnect Hub.

Adapters are deployed as stand-alone Java applications running outside the database. Adapters can be deployed in several configurations, including:

- Co-located with the OracleAS InterConnect Hub.
- Co-located with the application they are connecting to.
- On a separate machine altogether.

OracleAS InterConnect Development Kit

The OracleAS InterConnect Development Kit comprises the iStudio console. iStudio is a design-time integration specification tool catering to non-technical users. This tool helps implementing organizations specify the integration logic at a functional level, instead of at a technical coding level. iStudio exposes the integration methodology using simple wizards and reduces (or eliminates) the need for writing code to specify integration logic. This reduces the total time required to complete integration amongst heterogeneous systems. iStudio allows implementing organizations to:

- Define the applications that need to participate in the integration.
- Define data to be exchanged across applications.
- Systematically map data across applications.
- Optionally, capture any process flows required for integration through Oracle Workflow.
- Configure and deploy the integration.

For detailed information on OracleAS InterConnect, please refer to the Oracle Application Server InterConnect User Guide (Part No. B10404-01).

OracleAS InterConnect - Implementation


Implementation of OracleAS InterConnect with the Customer Data Hub is comprised of two phases. Each phase is supported by a variety of components and tools that are described in this section.

Phase I: Design Time

Phase I of the OracleAS InterConnect implementation is entitled Design Time. During Design Time, the implementing organization uses the iStudio interface to define the integration objects, the various applications that participate in the integration, and the specifications of the data exchanged between applications. All specifications are stored as metadata in the OracleAS InterConnect Repository. There are many components of the Design Time phase that are addressed with iStudio. For the purposes of this document, we will focus on a few key features required to provide an understanding of the Design Time process for integration with the Customer Data Hub. These concepts and pieces of functionality should be aligned with the implementing organization's unique business practices to optimally implement the Design Time phase of InterConnect integration.

Additional features and concepts can be found in the Oracle Application Server InterConnect User Guide Sections 2 Using iStudio and Section 3 Creating Applications, Views, and Business Objects.
Key Concepts and Features of iStudio

Applications
Each component integrated with OracleAS InterConnect is referred to as an application. When applications are defined, the implementing organization specifies which messages each system is interested in, what internal data type is being used, and how messages should be mapped to or from that internal type to the external world.

Common Views and Business Objects
OracleAS InterConnect follows a hub-and-spoke integration methodology. The common view is the hub view of the integration, where each spoke represents an application participating in the integration. The common view consists of the following elements:

- Business Objects - A collection of logically related integration points. For example, Create Customer, Update Customer, Delete Customer, and Get Customer Info are all integration points that logically belong under a Customer business object.
- Events - An integration point used to model the Publish/Subscribe paradigm. An event has associated data, which is the common view of all the data to be exchanged through that event.
- Procedures - An integration point used to model the Request/Reply paradigm. Like events, procedures have associated data that represent the common view of data exchanged through a procedure. Note: This is a modeling paradigm only; no actual procedures are called.
- Common Data Types - Used to define data for reuse; especially useful for defining complex hierarchical data.

Events
An event is an integration point used to model the Publish/Subscribe paradigm. An event has associated data that is the common view of all the data to be exchanged through any particular event. In other words, the data associated with an event in the common view must be a superset of the data of the participating applications. The publish/subscribe paradigm is used for asynchronous one-way communication: the sending application publishes the event, and the receiving application subscribes to the event.

Procedures
A procedure is an integration point used to model the Request/Reply paradigm. This is a modeling paradigm only, in that no actual procedures are called. The request/reply paradigm is used for two-way, context-sensitive communication. This communication can be either synchronous (the requesting application blocks until it receives a reply) or asynchronous (the requesting application gets the reply asynchronously and does not block-wait for the response after sending the request). An application can either invoke a procedure, to model sending a request and receiving a reply, or implement a procedure, to model receiving a request and sending a reply. Like events, a procedure has associated data; while an event is associated with only one data set, a procedure has two data sets, one for the request (IN) data and one for the reply (OUT) data.

Transformations and Mappings
Transformations are used to map application views of data to their corresponding common views, and vice versa. This functionality is used in the context of publishing/subscribing to an event or invoking/implementing a procedure. Twenty-seven built-in transformation routines are provided with OracleAS InterConnect and can be combined to build complex mappings. In addition, iStudio allows new transformation routines to be created by the implementing organization.

Content Based Routing
Content based routing allows implementing organizations to define rules to route messages based on message content. For example, a lead generation system can route leads to different sales force automation systems based on the location of the potential customer.

Cross Reference Tables
Keys for corresponding entities created in different applications can be correlated through cross-referencing in iStudio. For example, a customer created in an order entry system may have an OSR of OE_Cust_ID. The customer data is then routed to an order fulfillment system, which assigns an OSR of OF_Cust_ID. OE_Cust_ID and OF_Cust_ID can be cross-referenced in iStudio so that OracleAS InterConnect can correlate communication about the same logical entity in two different systems, without each system knowing the OS/OSR of the other system. Note that the Customer Data Hub also provides out of the box cross-reference functionality, known as Source System Management (SSM). SSM should be used for all integration scenarios; InterConnect Cross Referencing is only necessary to perform certain cross-application functions. Later in this document, we will outline the specific scenarios where iStudio Cross Referencing should be leveraged.

Phase II: Runtime

OracleAS InterConnect Runtime is an event-based distributed messaging system. An event is any action that initiates communication through messaging between two or more applications integrated through OracleAS InterConnect. Runtime enables inter-application communication through hub-and-spoke InterConnect integration. This methodology keeps the applications decoupled from each other by integrating them with a central hub rather than with each other directly. The applications sit at the spokes of this arrangement and are unaware of any other applications with which they are integrating. To each application, the target of a message is the integration hub, not another system. Since each application integrates with the hub, translation of data between the application and the hub, in either direction, is sufficient to integrate two or more applications.

There are four major components of Runtime, as well as many features that can be leveraged throughout the Customer Data Hub implementation with OracleAS InterConnect. Again, for the purposes of this document, we will focus on a few core features and components required to understand Runtime as it relates to integration with the Customer Data Hub. As with Design Time, the combination of many of these concepts and components should be aligned with the implementing organization's business processes to effectively implement the Runtime phase of InterConnect.

Additional features and concepts can be found in the Oracle Application Server InterConnect User Guide (Part No. B10404-01), Section 2, Using iStudio, and Section 8, Runtime System Concepts and Components.
Components of Runtime

Adapters
Prepackaged adapters help re-purpose applications at runtime to participate in integration without programming effort. Adapters are the runtime component of OracleAS InterConnect, and have the following responsibilities:

- Application Connectivity - Connect to applications to transfer data between the application and OracleAS InterConnect. The logical sub-component within an adapter that handles this responsibility is called a bridge. This is the protocol/application-specific piece of the adapter that knows how to communicate with the application. For example, the database adapter is capable of connecting to an Oracle database using JDBC and calling SQL APIs. This sub-component does not know which APIs to call, only how to call them.
- Transformations - Transform data from the application view to the common view, and vice versa, as dictated by the repository metadata. In general, adapters are responsible for carrying out all the runtime instructions captured through iStudio as metadata in the repository; transformations are an important subset of these instructions. The logical sub-component within an adapter that handles the runtime instructions is called an agent. This is the generic runtime engine in the adapter that is independent of the application to which the adapter connects. Agents focus on the integration scenario based on the integration metadata in the repository. There is no integration logic coded into the adapter itself; rather, all integration logic is stored in the repository, whose metadata drives this sub-component. In the database adapter example, this is the sub-component that knows which SQL APIs to call, but not how to call them. All adapters have the same agent code; it is the difference in the metadata that each adapter receives from the repository that controls and differentiates the behavior of each adapter.

Repository
The repository consists of two components:

- Repository Server - A Java application that runs outside of the database. It provides an API layer to create, modify, and delete metadata at design time using iStudio, and to answer metadata queries from the adapters at runtime. Both the adapters and iStudio act as clients that communicate with the repository server.
- Repository Database - The repository server stores metadata in database tables. The server communicates with the database using JDBC.

Advanced Queues
Advanced Queues provide the messaging backbone for OracleAS InterConnect in the hub. In addition to being the store-and-forward unit, they provide message retention, auditing, tracking, and guaranteed delivery of messages.

Oracle Workflow
Oracle Workflow facilitates integration at the business process level through its Business Event System. OracleAS InterConnect and Oracle Workflow are integrated to leverage this facility for business process collaborations across applications.

Key Features of Runtime

Messaging Paradigms
The following are the two major messaging paradigms that will be leveraged with OracleAS InterConnect:

- Publish/Subscribe - Messaging paradigm for asynchronous one-way communication.
- Request/Reply - Messaging paradigm for two-way, context-sensitive communication. Can be either asynchronous or synchronous.

These paradigms can be configured on a per integration point basis.

Message Delivery
The following are features of message delivery:

- Guaranteed Delivery - All messages are guaranteed to be delivered from the source application(s) to the destination application(s).
- Exactly Once Delivery - Messages are neither lost nor duplicated. The destination application(s) will receive each sent message exactly once.
- In Order Delivery - Messages are delivered in the exact order in which they were sent. This is applicable only when there is one instance of the adapter running per application serviced.

Message Retention
Messages remain in the runtime system until they are delivered. Advanced Queues in the hub provide the message retention. Messages are deleted once each application that is scheduled to receive a specific message has received that message. For auditing purposes, implementing organizations can configure the system to retain all messages, even after they have been delivered successfully to each application.

Routing Support
Routing is a function of the Advanced Queues in the hub. By default, oai_hub_queue is the only multi-consumer Advanced Queue configured to be the persistent store for all messages for all applications; it handles all standard as well as content-based routing needs. This queue is created automatically when implementing organizations install the repository in the hub. The only reason to change this configuration is if Advanced Queues become a performance bottleneck. For most scenarios this is unlikely, because most of the message processing is done in the adapters, not in the hub.

Event Based Routing
This is the default implicit routing used by OracleAS InterConnect. No explicit rules need to be specified. The built-in rules are as follows:

- For a particular event (publish/subscribe paradigm), all messages from all publishing applications are routed to all subscribing applications.
- For a particular procedure (request/reply paradigm), all messages from all invoking (requesting) applications are routed to one of the implementing (replying) applications.

Note: Request/Reply in OracleAS InterConnect allows only one implementing (replying) application to reply to the request. This is analogous to the following example: suppose John calls a 1-800 customer support center. At the center, multiple operators are standing by to take the call, but once John is connected to a representative, only that single operator will handle John's request. OracleAS InterConnect uses a similar approach: a request can be initiated by one system, but once a single replying system responds to the message, the paradigm is complete. The precise application that gets the request, out of the pool of all possible reply-ready applications, is determined at runtime, non-deterministically.

Content Based Routing
Content-based routing allows organizations to route messages to specific destination applications based on specific message content. For example, an electronic funds transaction settlement application is designed to transmit bank transactions with a specific bank code to identify the destination bank system. When the electronic funds transfer application publishes each message at runtime, the OracleAS InterConnect runtime component determines the bank code value based on objects stored in the repository, and routes the message to the appropriate recipient system.

Error Management
Several error management capabilities are available, including:

- Resubmission - Errored-out messages can be resubmitted into the integration environment for processing, after modification if required, using the runtime management console.
- Tracing - Organizations can modify the .ini files of adapters to turn up the tracing level to troubleshoot errors. Users can view the tracing logs by opening log files through the runtime management console.
- Tracking - Messages can be tracked by specifying tracking fields using iStudio. The runtime system checkpoints state at certain pre-defined points so that users can monitor where a message currently is in the integration environment. This tracking capability is available through the runtime management console.
InterConnect Design Time and Runtime - Putting It All Together

All systems talk to each other via a combination of the elements described above. To put these concepts into an example, suppose we have three systems to integrate: the Oracle Customer Data Hub, Siebel, and SAP. The following is a step-by-step walkthrough of the process required to integrate these systems. The implementing organization must provide the following input to get this integration scenario up and running:

1. Identify the systems that need to be integrated, and create corresponding applications in iStudio. In our case, we will create three applications: Oracle, Siebel, and SAP.
2. Identify the protocols that will be used to interface with these applications; those protocols determine which adapter should be used. In this case: Oracle - database adapter; Siebel - Siebel adapter; SAP - SAP adapter.
3. Identify the integration points that these systems will use to communicate. This includes: the direction of data flow; the messaging paradigm to be used (publish/subscribe vs. request/reply); the APIs and data structures for each endpoint; and the event capture (to grab data from an application to send out) and event resolution (to apply data received from an adapter to the application internals) mechanisms. For example, in the case of Oracle, organizations could use the Business Event System callouts available in the Customer Data Hub as an event capture mechanism; for event resolution, the PL/SQL APIs exposed by the CDH could be leveraged.
4. For each integration point (assuming publish/subscribe), create a common view event in iStudio and decide what the common view data structure looks like.
5. Once the common view event and its associated data have been defined, bring each application into the mix. Depending on the direction of the flow, each application will either publish or subscribe to various events. In both cases, the following steps are required for each application: define the application view, then map the application view to the common view (if publishing) or the common view to the application view (if subscribing).

Once all of this setup has taken place, we are ready to turn on OracleAS InterConnect in real time with the Customer Data Hub. The following is a visual depiction of how the Customer Data Hub integrates with heterogeneous systems via Oracle Application Server InterConnect.

CDH Implementation Guidelines using OracleAS InterConnect

The following section outlines the various considerations and implementation methodologies that should be addressed when implementing the CDH in conjunction with OracleAS InterConnect. The considerations noted herein are applied to real business flow examples in Appendix B, Customer Data Hub Business Flows. Again, as with all implementations, the actual application of these methodologies and concepts may vary based on organizational business requirements.

We will use the following example integration scenario to walk through the guidelines described in this section: the CDH is connected to five spoke systems (A through E) through OracleAS InterConnect. Each spoke system has its own silo of customer information, segmented by region: US, Canada, and Mexico. Spoke systems A, B, and C are dedicated to US customers, while D and E store Canadian and Mexican customer information, respectively.
Messaging Paradigm - Publish/Subscribe

Utilize the Publish/Subscribe messaging paradigm, rather than Request/Reply, to accommodate most business cases; the Pub/Sub paradigm has been optimized for CDH business flows.

Publish - To publish events from the CDH, the first step is to detect that a new entity has been created or that an existing entity has been updated in the CDH. The process of detecting and grabbing this new data is called event capture. For event capture in the CDH, use Oracle Workflow Business Event System (BES) callouts. Currently, implementing organizations need to write custom PL/SQL code to subscribe to the BES events and publish the corresponding InterConnect events; an example of this code is provided in the appendix, and a minimal sketch follows this section. Note that productizing such code to publish BES events to the middle tier is on the CDM development roadmap, but no ETA is available at this time. iStudio will generate PL/SQL code at design time that must be invoked by this custom PL/SQL code to send the data to the adapter.

Subscribe - After the CDH receives events that it has subscribed to, the data contained in these events must be applied to the CDH schema. This process of applying the data received from an InterConnect adapter to the application's internal schema is termed event resolution. For event resolution in the CDH, use the public PL/SQL APIs. Custom PL/SQL wrappers are required to invoke these APIs and perform any additional tasks. iStudio will generate this wrapper as PL/SQL stubs at design time; the wrapper is invoked by the adapter at runtime to transfer data to the CDH. The implementing organization must edit the wrapper to add custom code that invokes the CDH public APIs.
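The following is a minimal sketch of the event capture side: a Workflow Business Event System subscription rule function that hands CDH data to the InterConnect database adapter. The payload parameter name and the iStudio-generated procedure (cdh_oai_msgs.create_customer_oai) are hypothetical placeholders; the actual generated names depend on your iStudio definitions.

    create or replace function cdh_bes_to_oai (
      p_subscription_guid in            raw,
      p_event             in out nocopy wf_event_t
    ) return varchar2 is
      l_party_id number;
    begin
      -- Pull the Party ID out of the BES event payload.
      l_party_id := to_number(p_event.getvalueforparameter('PARTY_ID'));

      -- Call the iStudio-generated message procedure (name is a placeholder)
      -- to hand the data to the CDH database adapter for publication.
      cdh_oai_msgs.create_customer_oai(p_party_id => l_party_id);

      return 'SUCCESS';
    exception
      when others then
        -- Standard BES rule function error handling.
        wf_core.context('CDH_BES_TO_OAI', 'RULE', p_event.geteventname());
        wf_event.seterrorinfo(p_event, 'ERROR');
        return 'ERROR';
    end cdh_bes_to_oai;
    /

A function of this shape would be registered in BES as the rule function of a subscription to the relevant TCA event (for example, a party creation event); the exact event names to subscribe to depend on the release.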

Create Common View mirroring the TCA Application View

The Common View within OracleAS InterConnect should mirror the CDH TCA schema, because the Customer Data Hub, based on TCA, should represent a macro-view of all customer information across the enterprise.

The Common View must also include fields to capture the original system (OS) and original system reference (OSR). If the implementing organization is using Content Based Routing (see below), any additional attributes required for routing must be defined in the Common View for that particular event.

Content Based Routing

All event routing rules must be captured using OracleAS InterConnect's Content Based Routing feature. Note that when Content Based Routing is used, organizations must make sure that all routing cases are covered, given that all default event based routing rules are invalidated once Content Based Routing is set up. The suggested rules are as follows:

- All events published by a spoke system (Create or Update) must go directly to the CDH: if the sending application is not the CDH, send the message to the CDH and to no other application. Sending application information can be retrieved from the message header; an extra field in the message payload is not needed to capture this information.
- The Common View must also include fields to capture the original system and original system reference.

The following describes the Create and Update flows with regard to Content Based Routing.

Create Flow
For Create flows originating from the CDH, deliver events only to those spoke systems that are interested in that particular instance of the flow. For example, if a new US customer is created in the CDH, systems D and E should not get the message. The following are the suggested rules, based on the scenario outlined above:

- If the sending application is the CDH, and the original system is CDH, and the country is US, send the message to A, B, and C.
- If the sending application is the CDH, and the original system is CDH, and the country is Canada, send the message to D.
- If the sending application is the CDH, and the original system is CDH, and the country is Mexico, send the message to E.

All Create events published by the CDH in response to a previous Create event published by a spoke system (a flow originating in a spoke system) should not be delivered to the spoke system that initiated the flow. In our example, if spoke system A was the originating system where a new US customer was created, that information must be delivered only to systems B and C, because only A, B, and C store US customer information. For each spoke system being integrated with the CDH, create a rule that sends the message to all relevant spoke systems except the one listed in the original system field of the common view. In our example, this translates to the following rules:

- If the sending application is the CDH, and the original system is A, send the message to B and C.
- If the sending application is the CDH, and the original system is B, send the message to A and C.
- If the sending application is the CDH, and the original system is C, send the message to A and B.

Note that no rules are required for original systems D and E, because only the CDH is interested in those messages; they do not need to be sent back out.

Update Flow
For all Update events published by the CDH (whether originating in the CDH or in response to receiving an Update event from a spoke system), the system that generated the update must be identified, for use in mitigating the boomerang effect. The suggested routing rules are exactly the same as the Create flow rules described above.
Boomerang

Suppose a new US customer is created in the CDH and is sent to systems A, B, and C, and therefore A, B, and C create the new customer internally. However, because they detect that a new customer has been created internally, these three systems will in turn send this information back to the CDH; the spoke systems cannot determine whether the CDH is the original system of record. This is known as the boomerang effect, whereby messages sent out to spoke systems from the CDH come right back.

For Create flows, the boomerang should be utilized to populate the cross-reference tables. For Update flows, boomerang messages should be dropped by the CDH.

Cross Referencing

The goal for the implementing organization is to (a) use the CDH as the single source of truth for customer information and (b) keep the customer information in sync with the relevant spoke systems. For the latter, it is essential to keep a mapping between the respective OS/OSRs and the CDH Party IDs. This is called cross-referencing. The CDH business flows use the following methodology for cross-referencing:

- The master cross-reference table is maintained within the CDH for all flows.
- Implement CDH Source System Management (SSM).
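Within the CDH, an SSM mapping can be recorded programmatically. The following is a minimal sketch, assuming the TCA source system reference API (hz_orig_system_ref_pub) available in this release; the source system code, OSR value, Party ID, and module code shown are illustrative placeholders. A call of this shape is what the Map SSM steps in Appendix B assume.

    declare
      l_ref_rec       hz_orig_system_ref_pub.orig_sys_reference_rec_type;
      l_return_status varchar2(1);
      l_msg_count     number;
      l_msg_data      varchar2(2000);
    begin
      l_ref_rec.orig_system           := 'SIEBEL';     -- source system code from SSM setup
      l_ref_rec.orig_system_reference := 'SBL-10023';  -- record ID in the spoke system
      l_ref_rec.owner_table_name      := 'HZ_PARTIES';
      l_ref_rec.owner_table_id        := 12345;        -- CDH Party ID
      l_ref_rec.created_by_module     := 'CDH_SYNC';   -- hypothetical module code

      hz_orig_system_ref_pub.create_orig_system_reference(
        p_init_msg_list          => fnd_api.g_true,
        p_orig_sys_reference_rec => l_ref_rec,
        x_return_status          => l_return_status,
        x_msg_count              => l_msg_count,
        x_msg_data               => l_msg_data);

      if l_return_status <> fnd_api.g_ret_sts_success then
        raise_application_error(-20001, l_msg_data);
      end if;
    end;
    /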

The following describes the Create and Update flows with regard to cross-referencing.

Create Flow
A Create event coming from a spoke system to the CDH could either be a Create that was initiated in the spoke system, or a boomerang message for a previous Create in the CDH. The following methodology identifies boomerang messages and handles them appropriately. The CDH Party ID (populated or NULL) and the OS/OSR (always populated) are present in the message coming into the CDH, and are used by the CDH as follows:

- Detect boomerang:
  - If the Party ID is present in a message that originated from a spoke system, the CDH assumes that this is a boomerang message and discards it. This ends the potentially endless cycle of boomerang messages related to a single Create event.
  - If the Party ID is not present in the message, the CDH can assume that the Create originated in the spoke system.
- Populate SSM:
  - If it is a boomerang message, the CDH updates SSM with the OS/OSR for this spoke system, using the Party ID as the key into SSM.
  - If it is not a boomerang message (the Party ID field is NULL), a new entry is created in SSM after the new party is created in the CDH. This entry has both the Party ID (just created) and the OS/OSR of the specific spoke system (passed through the message).

One of the following three options must be implemented to ensure that the source system customer IDs that are correlated with the CDH Party IDs are passed back to the CDH. In other words, the application view of data that reaches the CDH as a result of subscribing to a Create event should have the above fields populated.

1. Specify a field in each spoke system to hold the associated Party ID from the CDH. This is the recommended approach, but it assumes the spoke systems are flexible enough to designate a particular field to store this information. In addition to storing this value when subscribing to a Create event, the spoke application must be able to send this field value back out in a boomerang situation.
2. If Option 1 cannot be accommodated, and the spoke system has a synchronous API that is utilized by the OracleAS InterConnect adapter to connect to it, use the InterConnect Cross Reference functionality in addition to SSM (refer to the InterConnect User Guide). This works only with synchronous inbound APIs to the spoke systems, because InterConnect Cross Reference functionality assumes that the OS/OSR can be retrieved by the same adapter thread that called the application API to create the entity. If that API is asynchronous, InterConnect Cross Referencing cannot be used.
3. If Option 1 cannot be accommodated, and the OracleAS InterConnect adapter can only use an asynchronous inbound interface to the spoke system, two custom transformations are needed (a sketch of the correlation table and lookup appears after this list). These transformations should provide the following functionality:
   - Transformation 1 - When a spoke system subscribes to a Create event, an iStudio user-specified set of fields is stored in a temporary table (this table can reside in the InterConnect Hub database). The set of fields should include the CDH Party ID, plus the minimal set of other fields that, in the absence of the OS/OSR (yet to be created in the spoke system), can uniquely correlate the CDH Party ID to the entity coming out asynchronously from the spoke system in a boomerang message. For example, this could be the First Name, Last Name, and Address of a customer.
   - Transformation 2 - When the spoke system publishes a boomerang message, this transformation queries the table populated by Transformation 1, using the user-specified set of fields as a key, to retrieve the corresponding CDH Party ID. Note: If the message is not a boomerang, no corresponding CDH Party ID will be found, and this transformation should populate the field with NULL.
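The following is a minimal sketch of the Option 3 mechanism; all object names are hypothetical, and the correlation table is assumed to live in the InterConnect hub database. Transformation 1 inserts a row when the Create event is sent to the spoke system; Transformation 2 uses the lookup below when the boomerang comes back.

    -- Correlation table populated by Transformation 1 (names are hypothetical).
    create table cdh_pending_creates (
      cdh_party_id  number        not null,
      first_name    varchar2(150),
      last_name     varchar2(150),
      address1      varchar2(240),
      sent_date     date          default sysdate
    );

    -- Lookup used by Transformation 2: returns the CDH Party ID for a
    -- boomerang message, or NULL when the message is not a boomerang.
    create or replace function lookup_pending_party (
      p_first_name in varchar2,
      p_last_name  in varchar2,
      p_address1   in varchar2
    ) return number is
      l_party_id number;
    begin
      select max(cdh_party_id)   -- MAX() yields NULL when no row matches
        into l_party_id
        from cdh_pending_creates
       where first_name = p_first_name
         and last_name  = p_last_name
         and address1   = p_address1;
      return l_party_id;
    end lookup_pending_party;
    /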

Update Flow
Custom transformations are leveraged to detect boomerangs in Update flows. When an update occurs in the CDH, the Party ID is passed to the spoke systems with the Update event; when a spoke system applies the update and re-publishes it, the custom transformations described below recognize the message as a boomerang and blank out its OS/OSR fields. The custom code in the CDH then discards any inbound Update message whose OS/OSR fields are NULL; all other messages are treated as updates initiated by the spoke system.

Two custom transformations are needed, which create and delete entries in a custom table with the following content:

- OS/OSR
- CDH Party ID

The transformations are as follows (a sketch of this table and the detection logic follows this section):

- Transform 1 - Invoked when spoke systems subscribe to Update events (CV to spoke system AV). Its purpose is to create a row in the custom table recording that an Update message for the particular OS/OSR has been sent to that particular spoke system.
- Transform 2 - Invoked when a spoke system publishes Update events (spoke system AV to CV). Its purpose is to query the custom table for the particular OS/OSR for that spoke system. If no row is found (not a boomerang message), do nothing. If a row is found (boomerang message), delete the corresponding row created by Transform 1 and replace the OS/OSR in the CV with NULL. The custom code discards the boomerang message if those fields are NULL.

For Update flows, the InterConnect cross-referencing feature is not used. The custom code populates the message with all relevant OS/OSRs, and a custom transformation is required to extract the particular spoke system's OS/OSR from the CV. This transformation is invoked when spoke systems subscribe to Update events (CV to spoke system AV).
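A minimal sketch of the custom table and the Transform 2 logic follows; all names are hypothetical. Transform 1 inserts a row when the CDH sends an Update out; Transform 2 deletes the row and blanks the OS/OSR when the boomerang returns.

    create table cdh_update_outbox (
      orig_system      varchar2(30)  not null,  -- OS of the spoke system
      orig_system_ref  varchar2(240) not null,  -- OSR in that spoke system
      cdh_party_id     number        not null,
      sent_date        date          default sysdate,
      constraint cdh_update_outbox_pk
        primary key (orig_system, orig_system_ref)
    );

    -- Transform 2 logic (spoke system AV to CV).
    create or replace procedure detect_update_boomerang (
      p_os  in out varchar2,
      p_osr in out varchar2
    ) is
    begin
      delete from cdh_update_outbox
       where orig_system     = p_os
         and orig_system_ref = p_osr;
      if sql%rowcount > 0 then
        -- A row was recorded by Transform 1, so this Update is a boomerang:
        -- blank the OS/OSR so the CDH wrapper discards the message.
        p_os  := null;
        p_osr := null;
      end if;
    end detect_update_boomerang;
    /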
Data Granularity

Spoke systems may or may not pass granular-level objects to the Customer Data Hub. If the objects being passed are not granular (e.g. address, contact), but rather are at the macro-customer level, the implementing organization will need to write a custom PL/SQL wrapper to import the objects into the CDH at the correct level of granularity, as sketched below.
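The following is a minimal sketch of such a wrapper, decomposing a macro-level customer message into granular TCA entities via the TCA v2 public APIs (HZ_PARTY_V2PUB, HZ_LOCATION_V2PUB, HZ_PARTY_SITE_V2PUB). The procedure name, the simplified parameter list, and the module code are illustrative placeholders, and error handling is reduced to the bare minimum.

    create or replace procedure import_customer_with_address (
      p_org_name in  varchar2,
      p_country  in  varchar2,
      p_address1 in  varchar2,
      p_city     in  varchar2,
      p_postal   in  varchar2,
      x_party_id out number
    ) is
      l_org_rec       hz_party_v2pub.organization_rec_type;
      l_loc_rec       hz_location_v2pub.location_rec_type;
      l_site_rec      hz_party_site_v2pub.party_site_rec_type;
      l_location_id   number;
      l_party_site_id number;
      l_site_number   varchar2(30);
      l_party_number  varchar2(30);
      l_profile_id    number;
      l_status        varchar2(1);
      l_msg_count     number;
      l_msg_data      varchar2(2000);
    begin
      -- 1. Create the party at the organization level.
      l_org_rec.organization_name := p_org_name;
      l_org_rec.created_by_module := 'CDH_SYNC';
      hz_party_v2pub.create_organization(
        p_init_msg_list    => fnd_api.g_true,
        p_organization_rec => l_org_rec,
        x_return_status    => l_status,
        x_msg_count        => l_msg_count,
        x_msg_data         => l_msg_data,
        x_party_id         => x_party_id,
        x_party_number     => l_party_number,
        x_profile_id       => l_profile_id);

      -- 2. Create the address as a granular location entity.
      l_loc_rec.country           := p_country;
      l_loc_rec.address1          := p_address1;
      l_loc_rec.city              := p_city;
      l_loc_rec.postal_code       := p_postal;
      l_loc_rec.created_by_module := 'CDH_SYNC';
      hz_location_v2pub.create_location(
        p_init_msg_list => fnd_api.g_true,
        p_location_rec  => l_loc_rec,
        x_location_id   => l_location_id,
        x_return_status => l_status,
        x_msg_count     => l_msg_count,
        x_msg_data      => l_msg_data);

      -- 3. Tie the location to the party as a party site.
      l_site_rec.party_id                 := x_party_id;
      l_site_rec.location_id              := l_location_id;
      l_site_rec.identifying_address_flag := 'Y';
      l_site_rec.created_by_module        := 'CDH_SYNC';
      hz_party_site_v2pub.create_party_site(
        p_init_msg_list     => fnd_api.g_true,
        p_party_site_rec    => l_site_rec,
        x_party_site_id     => l_party_site_id,
        x_party_site_number => l_site_number,
        x_return_status     => l_status,
        x_msg_count         => l_msg_count,
        x_msg_data          => l_msg_data);
    end import_customer_with_address;
    /

In practice, each call's x_return_status should be checked, and the FND message stack drained, before proceeding to the next API call.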

Appendix B: Customer Data Hub Business Flows


In this section, we apply the CDH axioms and methodologies outlined throughout this guide to specific business scenarios. The scenarios outlined herein are a representative list of the possible business scenarios that must be addressed when implementing the Customer Data Hub. As with all systems implementations, the exact determination of how CDH features will be implemented depends on specific business requirements. However, the following examples should provide good guidance as to the various alternatives and implementation decisions that should be addressed when implementing the CDH. Note that, for the purposes of the examples described in this appendix, Oracle Application Server 10g InterConnect acts as the middleware solution. If using another middleware solution, the specific setup steps will differ, but most of the underlying concepts remain relevant.

Party Created in Source System

Step 1: Enter required customer information in Source System 1.

Step 2: Create Customer event raised in Source System 1.
Implementation considerations: Events will be configured for all actions that are to be tracked via CDH integration. These events will either be out of the box or built custom, and must be configured during implementation. Published events must pass the unique ID for the source system as well as the source system record ID (i.e. the OS/OSR).

Step 3: OracleAS InterConnect processing. Customer information is passed to the adapter servicing the spoke system. This adapter gets the data in the spoke system's Application View (AV) format, then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated, the resulting recipient list is stored in the message header, and the message is forwarded to the AQ in the InterConnect Hub.
Implementation considerations: Publish/Subscribe messaging paradigm to be used. TCA should be created as the Common View. The Common View should contain the OS/OSR.

Step 4: OracleAS InterConnect passes customer information to subscribing systems. The InterConnect hub AQ, based on the recipient list in the message header, calls the relevant destination application adapters. These adapters get the message in the CV format.
Implementation considerations: Content Based Routing is used to determine subscribing systems.

Step 5: Customer Data Hub subscribes to the Create Customer event. Based on transformations created in iStudio, the adapter servicing the CDH transforms the CV message received from the InterConnect hub to the AV format for the CDH (TCA view). It is then delivered to the CDH by invoking PL/SQL APIs (see step 6). The OS and OSR are attached to the new record in the CDH during the create flow.
Implementation considerations: Based on Content Based Routing, only the CDH should be privy to the customer creation in Source System 1; all new information must be passed through the CDH first.

Step 6: New customer record created in the CDH. Note: the Party ID was null in the CV, so Create Customer occurs.
Implementation considerations: A custom PL/SQL wrapper is required to call the Create Customer and Map SSM APIs (a sketch of this wrapper's decision logic follows this flow). Note that the wrapper skeleton will be created by iStudio at design time; the implementer must populate this wrapper with code to do the above.

Step 7: Create Customer BES Event raised in the CDH. This event is raised as a result of the customer created in Step 6.

Step 8: Customer Data Hub publishes the Create Customer event to OracleAS InterConnect.
Implementation considerations: The BES event needs to be published to InterConnect via custom PL/SQL (include the OS/OSR to prevent routing to the spoke system that created the customer in the first place). Note that iStudio will generate PL/SQL code that must be called to deliver the message to the CDH database adapter to be sent to the InterConnect Hub. Publish/Subscribe messaging paradigm to be used.

Step 9: OracleAS InterConnect processing. Customer information is passed to the adapter servicing the CDH. This adapter gets the data in the CDH Application View (AV) format, then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated, the resulting recipient list is stored in the message header, and the message is forwarded to the AQ in the InterConnect Hub.
Implementation considerations: Content Based Routing logic is configured so that if a particular source system's OS/OSR is already populated in the CV, InterConnect will not deliver the message to that particular source system. This ensures that only the source systems that did not generate this record receive the new customer information. Note: the OS/OSR has been included in the CV.

Step 10: OracleAS InterConnect passes customer information to subscribing systems. The hub AQ, based on the recipient list in the message header, wakes up the relevant destination application adapters. These adapters get the message in the CV format.
Implementation considerations: Content Based Routing is used to determine subscribing systems. Let us assume that the only subscribing system is Source System 2 (this methodology can be extended to multiple spoke systems).

Step 11: Source System 2 subscribes to the Create Customer event. Based on transformations created in iStudio, the adapter servicing Source System 2 transforms the CV message received from the InterConnect hub to the AV format for that system. It is then delivered to Source System 2.

Step 12: New customer created in Source System 2.

Step 13: Steps 2-5 above are repeated. The only difference is that this time, the event is published by Source System 2 rather than Source System 1.
Implementation considerations: See steps 2-5 above.

Step 14: The CDH receives the message and:
- Detects the boomerang situation, because the Party ID in the CDH AV will be populated; this is not a new customer, so none will be created in the CDH.
- Updates SSM (using the Party ID as the key) with the OS/OSR for the customer created in Source System 2.
Implementation considerations: Boomerang detection and updating SSM must be done in the custom PL/SQL wrapper created in Step 6. One of the transformations needs to make sure that the TCA Party ID is already populated, so that the CDH knows this is not a new customer but rather a duplicate message; this can be done in one of the three ways described in the Cross Referencing section. Since a boomerang message was detected, no new flow will be initiated by the CDH.
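As referenced in steps 6 and 14, the custom wrapper invoked by the CDH adapter must both create new parties and handle boomerang messages. The following is a minimal sketch of that decision logic; create_party and map_ssm are hypothetical helpers that would wrap the TCA public APIs shown earlier (HZ_PARTY_V2PUB and the SSM mapping call), and the payload is reduced to a single field for brevity.

    create or replace procedure resolve_create_customer (
      p_cv_party_id in number,    -- Party ID from the Common View (NULL if the
                                  -- flow originated in the spoke system)
      p_os          in varchar2,  -- original system code
      p_osr         in varchar2,  -- original system reference
      p_org_name    in varchar2   -- simplified payload: organization name only
    ) is
      l_party_id number;
    begin
      if p_cv_party_id is not null then
        -- Boomerang: the party already exists in the CDH. Do not create a
        -- duplicate; just record the spoke system's key against it in SSM.
        map_ssm(p_party_id => p_cv_party_id, p_os => p_os, p_osr => p_osr);
      else
        -- Create initiated in the spoke system: create the party in the CDH,
        -- then map the new Party ID to the spoke system's OS/OSR in SSM.
        create_party(p_org_name => p_org_name, x_party_id => l_party_id);
        map_ssm(p_party_id => l_party_id, p_os => p_os, p_osr => p_osr);
      end if;
    end resolve_create_customer;
    /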

Party Created in Customer Data Hub

Step 1: New customer record created in the CDH.

Step 2: Create Customer BES Event raised in the CDH.

Step 3: Customer Data Hub publishes the Create Customer event to OracleAS InterConnect.
Implementation considerations: The BES event needs to be published to InterConnect via custom PL/SQL. Include the OS/OSR of CDH/Party ID, so that it can be used to detect boomerang messages coming back from the spoke systems. Note that iStudio will generate PL/SQL code that must be called to deliver the message to the CDH database adapter to be sent to the InterConnect Hub. Publish/Subscribe messaging paradigm to be used.

Step 4: OracleAS InterConnect processing. Customer information is passed to the adapter servicing the CDH. This adapter gets the data in the CDH Application View (AV) format, then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated, the resulting recipient list is stored in the message header, and the message is forwarded to the AQ in the InterConnect Hub.
Implementation considerations: Content Based Routing logic is configured so that if a particular source system's OS/OSR is already populated in the CV, InterConnect will not deliver the message to that particular source system. This ensures that only the spoke systems that did not generate this record receive the new customer information. Note: the OS/OSR has been included in the CV.

Step 5: OracleAS InterConnect passes customer information to subscribing systems. The hub AQ, based on the recipient list in the message header, wakes up the relevant destination application adapters. These adapters get the message in the CV format.
Implementation considerations: Content Based Routing is used to determine subscribing systems. Let us assume that the only subscribing system is Source System 1 (this methodology can be extended to multiple spoke systems).

Step 6: Source System 1 subscribes to the Create Customer event. Based on transformations created in iStudio, the adapter servicing Source System 1 transforms the CV message received from the InterConnect hub to the AV format for that system. It is then delivered to Source System 1.

Step 7: New customer created in Source System 1.

Step 8: Create Customer event raised in Source System 1.
Implementation considerations: Events will be configured for all actions that are to be tracked via CDH integration. These events will either be out of the box or built custom, and must be configured during implementation. Published events must pass the unique ID for the source system as well as the source system record ID.

Step 9: OracleAS InterConnect processing. Customer information is passed to the adapter servicing the spoke system, mapped from the spoke system's AV to the CV as defined in iStudio, routed, and forwarded to the AQ in the InterConnect Hub. OracleAS InterConnect then passes the customer information to the subscribing systems.
Implementation considerations: Publish/Subscribe messaging paradigm to be used. TCA should be created as the Common View. The Common View should contain the OS/OSR. Content Based Routing is used to determine subscribing systems.

Step 10: Customer Data Hub subscribes to the Create Customer event. Based on transformations created in iStudio, the adapter servicing the CDH transforms the CV message received from the InterConnect hub to the AV format for the CDH (TCA view). It is then delivered to the CDH by invoking PL/SQL APIs. The OS and OSR are attached to the record in the CDH during the create flow.
Implementation considerations: Based on Content Based Routing, only the CDH should be privy to the customer creation in Source System 1; all new information must be passed through the CDH first.

Step 11: The CDH receives the message and:
- Detects the boomerang situation, because the Party ID in the CDH AV will be populated; this is not a new customer, so none will be created in the CDH.
- Updates SSM (using the Party ID as the key) with the OS/OSR for the customer created in Source System 1.
Implementation considerations: Boomerang detection and updating SSM must be done in a custom PL/SQL wrapper. Note that the wrapper skeleton will be created by iStudio at design time; the implementer must populate this wrapper with code to do the above. One of the transformations needs to make sure that the TCA Party ID is already populated, so that the CDH knows this is not a new customer but rather a duplicate message; this can be done in one of the three ways described in the Cross Referencing section. Since a boomerang message was detected, no new flow will be initiated by the CDH.

Child Entity Created in Source System

Step 1: Enter customer child entity information (e.g. an address) in Source System 1.

Step 2: Create Address event raised in Source System 1.
Implementation considerations: Events will be configured for all actions that are to be tracked via CDH integration. These events will either be out of the box or built custom, and must be configured during implementation. Published events must pass the unique ID for the source system and record (i.e. the OS/OSR).

Step 3: OracleAS InterConnect processing. Customer address information is passed to the adapter servicing the spoke system. This adapter gets the data in the spoke system's Application View (AV) format, then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated, the resulting recipient list is stored in the message header, and the message is forwarded to the AQ in the InterConnect Hub.
Implementation considerations: Publish/Subscribe messaging paradigm to be used. TCA should be created as the Common View. The Common View should contain the OS/OSR.

Step 4: OracleAS InterConnect passes the address information to subscribing systems. The InterConnect hub AQ, based on the recipient list in the message header, wakes up the relevant destination application adapters. These adapters get the message in the CV format.
Implementation considerations: Content Based Routing is used to determine subscribing systems.

Step 5: The CDH subscribes to the Create Address event. Based on transformations created in iStudio, the adapter servicing the CDH transforms the CV message received from the InterConnect hub to the AV format for the CDH (TCA view). It is then delivered to the CDH by invoking PL/SQL APIs (see step 6). The OS and OSR are attached to the new record in the CDH during the create flow.
Implementation considerations: Based on Content Based Routing, only the CDH should be privy to the address creation in Source System 1; all new information must be passed through the CDH first.

Step 6: New address record created in the CDH. Note: the Party ID was null in the CV, so Create Address occurs.
Implementation considerations: A custom PL/SQL wrapper is required to call the Create Address and Map SSM APIs. Note that the wrapper skeleton will be created by iStudio at design time; the implementing organization must code this wrapper to execute accordingly.

Step 7: Create Address BES Event raised in the CDH. This event is raised as a result of the address created in Step 6.

Step 8: Customer Data Hub publishes the Create Address event to OracleAS InterConnect.
Implementation considerations: The BES event needs to be published to InterConnect via custom PL/SQL (include the OS/OSR to prevent routing to the spoke system that created the address in the first place). Note that iStudio will generate PL/SQL code that must be called to deliver the message to the CDH database adapter to be sent to the InterConnect Hub. Publish/Subscribe messaging paradigm to be used.

Step 9: OracleAS InterConnect processing. Customer address information is passed to the adapter servicing the CDH, mapped from the CDH Application View (AV) to the Common View (CV) as defined in iStudio, routed, and forwarded to the AQ in the InterConnect Hub.
Implementation considerations: Content Based Routing logic is configured so that if a particular source system's OS/OSR is already populated in the CV, InterConnect will not deliver the message to that particular source system. This ensures that only the source systems that did not generate this record receive the new information. Note: the OS/OSR has been included in the CV.

Step 10: OracleAS InterConnect passes the address information to subscribing systems. The hub AQ, based on the recipient list in the message header, calls the relevant destination application adapters. These adapters get the message in the CV format.
Implementation considerations: Content Based Routing is used to determine subscribing systems. Let us assume that the only subscribing system is Source System 2 (this methodology can be extended to multiple spoke systems).

Step 11: Source System 2 subscribes to the Create Address event. Based on transformations created in iStudio, the adapter servicing Source System 2 transforms the CV message received from the InterConnect hub to the AV format for that system. It is then delivered to Source System 2.

Step 12: New address created in Source System 2.

Step 13: Steps 2-5 above are repeated. The only difference is that this time, the event is published by Source System 2 rather than Source System 1.
Implementation considerations: See steps 2-5 above.

Step 14: The CDH receives the message and:
- Detects the boomerang situation, because the Party ID in the CDH AV will be populated; this is not a new child entity, so nothing new is created in the CDH.
- Updates SSM (using the Party ID as the key) with the OS/OSR for the address created in Source System 2.
Implementation considerations: Boomerang detection and updating SSM must be done in the custom PL/SQL wrapper created in Step 6. One of the transformations needs to make sure that the TCA Party ID is already populated, so that the CDH knows this is not a new record but rather a duplicate message; this can be done in one of the three ways described in the Cross Referencing section. Since a boomerang message was detected, no new flow is initiated by the CDH.

Child Entity Created in Customer Data Hub

Step 1: New customer child entity (e.g. an address) created in the CDH.

Step 2: Create Address BES Event raised in the CDH.

Step 3: Customer Data Hub publishes the Create Address event to OracleAS InterConnect.
Implementation considerations: The BES event needs to be published to InterConnect via custom PL/SQL. Include the OS/OSR of CDH/Party ID, so that it can be used to detect boomerang messages coming back from the spoke systems. Note that iStudio will generate PL/SQL code that must be called to deliver the message to the CDH database adapter to be sent to the InterConnect Hub. Publish/Subscribe messaging paradigm to be used.

Step 4: OracleAS InterConnect processing. Customer address information is passed to the adapter servicing the CDH, mapped from the CDH Application View (AV) to the Common View (CV) as defined in iStudio, routed, and forwarded to the AQ in the InterConnect Hub.
Implementation considerations: Content Based Routing logic is configured so that if a particular source system's OS/OSR is already populated in the CV, InterConnect will not deliver the message to that particular source system. This ensures that only the spoke systems that did not generate this record receive the new information. Note: the OS/OSR has been included in the CV.

Step 5: OracleAS InterConnect passes the address information to subscribing systems. The hub AQ, based on the recipient list in the message header, wakes up the relevant destination application adapters. These adapters get the message in the CV format.
Implementation considerations: Content Based Routing is used to determine subscribing systems. Let us assume that the only subscribing system is Source System 1 (this methodology can be extended to multiple spoke systems).

Step 6: Source System 1 subscribes to the Create Address event. Based on transformations created in iStudio, the adapter servicing Source System 1 transforms the CV message received from the InterConnect hub to the AV format for that system. It is then delivered to Source System 1.

Step 7: New address created in Source System 1.

Step 8: Create Address event raised in Source System 1.
Implementation considerations: Events will be configured for all actions that are to be tracked via CDH integration. These events will either be out of the box or built custom, and must be configured during implementation. Published events must pass the unique ID for the source system as well as the source system record ID.

Step 9: OracleAS InterConnect processing. The address information is passed to the adapter servicing the spoke system, mapped from the spoke system's AV to the CV as defined in iStudio, routed, and forwarded to the AQ in the InterConnect Hub. OracleAS InterConnect then passes the information to the subscribing systems.
Implementation considerations: Publish/Subscribe messaging paradigm to be used. TCA should be created as the Common View. The Common View should contain the OS/OSR. Content Based Routing is used to determine subscribing systems.

Step 10: Customer Data Hub subscribes to the Create Address event. Based on transformations created in iStudio, the adapter servicing the CDH transforms the CV message received from the InterConnect hub to the AV format for the CDH (TCA view). It is then delivered to the CDH by invoking PL/SQL APIs. The OS and OSR are attached to the record in the CDH during the create flow.
Implementation considerations: Based on Content Based Routing, only the CDH should be privy to the address creation in Source System 1; all new information must be passed through the CDH first.

Step 11: The CDH receives the message and:
- Detects the boomerang situation, because the Party ID in the CDH AV will be populated; this is not a new child entity, so no creation occurs in the CDH.
- Updates SSM (using the Party ID as the key) with the OS/OSR for the address created in Source System 1.
Implementation considerations: Boomerang detection and updating SSM must be done in a custom PL/SQL wrapper. Note that the wrapper skeleton will be created by iStudio at design time; the implementer must populate this wrapper with code to execute accordingly. One of the transformations needs to make sure that the TCA Party ID is already populated, so that the CDH knows this is not a new record but rather a duplicate message; this can be done in one of the three ways described in the Cross Referencing section. Since a boomerang message was detected, no new flow will be initiated by the CDH.

Party Updated in Source System

Step # Process Details 1

Implementation Considerations

Update customer information in source system. Update Customer event raised.

Events will be configured for all action that are to be tracked via CDH integration. These events will either be out of the box or built custom, and must be configured during implementation. Published events must pass the unique ID for the source system as well as the source system record ID. Publish/Subscribe messaging paradigm to be used. TCA should be created as the Common View. Common View should contain OS/OSR .

OracleAS InterConnect processing: Customer information is passed to the adapter servicing the spoke system. This adapter gets the data in the spoke systems

Step # Process Details Application View (AV) format. It then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated and the resulting recipient list is stored in the message header. The message is then forwarded to the AQ in InterConnect Hub. 4

Implementation Considerations

Step 4
Process Details: OracleAS InterConnect passes customer information to subscribing systems. The InterConnect hub AQ, based on the recipient list in the message header, calls the relevant destination application adapters. These adapters get the message in the CV format.
Implementation Considerations: Content Based Routing is used to determine subscribing systems.

Step 5
Process Details: The Customer Data Hub subscribes to the Update Customer event. Based on transformations created in iStudio, the adapter servicing the CDH transforms the CV message received from the InterConnect hub to the AV format for the CDH (TCA view). It is then delivered to the CDH by invoking PL/SQL APIs (see step 6).
Implementation Considerations: Based on Content Based Routing, only the CDH should be privy to the customer update in Source System 1. *All new information must be passed through the CDH first.

Step 6
Process Details: The customer record is updated in the CDH. Note: OS/OSR was not NULL in the TCA AV, so the Update Customer occurs.
Implementation Considerations: A custom PL/SQL wrapper is required to call the Update Customer APIs. Note that the wrapper skeleton will be created by iStudio at design time; the implementer must populate this wrapper with code to do the above.

Step 7
Process Details: Update Customer BES Event raised in the CDH. This event is raised as a result of the customer update in Step 6 above.

Step 8
Process Details: The Customer Data Hub publishes the Update Customer event to OracleAS InterConnect.
Implementation Considerations: The BES Event needs to be published to InterConnect via custom PL/SQL. The OS in the TCA AV should contain the name of the source system that initiated the update; this value is stored in the CDH in Step 5 above, to prevent routing back to the source system that updated the customer in the first place. Note that iStudio will generate PL/SQL code that must be called to deliver the message to the CDH database adapter to be sent to the InterConnect Hub. Publish/Subscribe messaging paradigm to be used. Content Based Routing logic is configured so that if a particular spoke system's OS is already populated in the CV, InterConnect will not deliver the message to that particular source system. This ensures that only the source systems that did not generate this record receive the new customer information.

Step 9
Process Details: OracleAS InterConnect processing: Customer information is passed to the adapter servicing the CDH. This adapter gets the data in the CDH Application View (AV) format. It then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated and the resulting recipient list is stored in the message header. The message is then forwarded to the AQ in the InterConnect Hub.

Step 10
Process Details: OracleAS InterConnect passes customer information to subscribing systems. The hub AQ, based on the recipient list in the message header, wakes up the relevant destination application adapters. These adapters get the message in the CV format. Source System 2 subscribes to the Update Customer event. Based on transformations created in iStudio, the adapter servicing Source System 2 transforms the CV message received from the InterConnect hub to the AV format for that system. It is then delivered to Source System 2.
Implementation Considerations: Content Based Routing is used to determine subscribing systems. Let us assume that the only subscribing system is Source System 2 (this methodology can be extended to multiple spoke systems). Custom transforms are invoked so that the boomerang can be detected later by the CDH.

Step 11
Process Details: Customer updated in Source System 2.

Step 12
Process Details: Steps 2-5 above are repeated. The only difference is that this time, the event is published by Source System 2 rather than Source System 1.
Implementation Considerations: See steps 2-5 above.

Step 13
Process Details: The CDH receives the message and detects the boomerang situation because OS/OSR in the CDH AV will be NULL; this is not a new update.
Implementation Considerations: Boomerang detection must be done in the custom PL/SQL wrapper created in Step 6 above. A custom transform is needed to help detect the boomerang by populating the OS/OSR in the CDH AV with NULL. Since a boomerang message was detected, no new flow will be initiated by the CDH. The flow ends.
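To make the wrapper and boomerang logic concrete, the fragment below is a minimal sketch of Steps 6 and 13, assuming the iStudio-generated wrapper exposes the inbound OS/OSR values alongside a mapped TCA person record. The procedure and parameter names are illustrative, not actual generated names.

-- Hypothetical wrapper body (names are illustrative, not iStudio-generated ones).
-- The custom transform sets OS/OSR to NULL when the CDH itself originated the
-- update, so a NULL OS/OSR here means "boomerang": consume the message silently.
PROCEDURE apply_update_customer (
  p_orig_system     IN VARCHAR2,                       -- OS from the TCA AV
  p_orig_system_ref IN VARCHAR2,                       -- OSR from the TCA AV
  p_person_rec      IN HZ_PARTY_V2PUB.PERSON_REC_TYPE  -- mapped inbound payload
) IS
  l_ovn           NUMBER;
  l_profile_id    NUMBER;
  l_return_status VARCHAR2(1);
  l_msg_count     NUMBER;
  l_msg_data      VARCHAR2(2000);
BEGIN
  IF p_orig_system IS NULL AND p_orig_system_ref IS NULL THEN
    RETURN;  -- boomerang detected: end the flow, no update, no new event
  END IF;
  -- Not a boomerang: apply the update through the public TCA API.
  SELECT object_version_number
    INTO l_ovn
    FROM hz_parties
   WHERE party_id = p_person_rec.party_rec.party_id;
  HZ_PARTY_V2PUB.update_person (
    p_person_rec                  => p_person_rec,
    p_party_object_version_number => l_ovn,
    x_profile_id                  => l_profile_id,
    x_return_status               => l_return_status,
    x_msg_count                   => l_msg_count,
    x_msg_data                    => l_msg_data );
  IF l_return_status <> FND_API.g_ret_sts_success THEN
    RAISE_APPLICATION_ERROR(-20001, l_msg_data);
  END IF;
END apply_update_customer;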

Party Updated in Customer Data Hub

Step 1
Process Details: Customer record updated in the CDH.

Step 2
Process Details: Update Customer BES Event raised in the CDH.

Step 3
Process Details: The Customer Data Hub publishes the Update Customer event to OracleAS InterConnect.
Implementation Considerations: The BES Event needs to be published to InterConnect via custom PL/SQL. The OS in the TCA AV should contain the name of the system that initiated the update; in this case the value is CDH. Note that iStudio will generate PL/SQL code that must be called to deliver the message to the CDH database adapter to be sent to the InterConnect Hub. Publish/Subscribe messaging paradigm to be used. Content Based Routing logic is configured so that if a particular system's OS is already populated in the CV, InterConnect will not deliver the message to that particular system. This ensures that only the source systems that did not generate this record receive the new customer information. In this case, the OS is CDH.

Step 4
Process Details: OracleAS InterConnect processing: Customer information is passed to the adapter servicing the CDH. This adapter gets the data in the CDH Application View (AV) format. It then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated and the resulting recipient list is stored in the message header. The message is then forwarded to the AQ in the InterConnect Hub.

Step 5
Process Details: OracleAS InterConnect passes customer information to subscribing systems. The hub AQ, based on the recipient list in the message header, wakes up the relevant destination application adapters. These adapters get the message in the CV format. Source System 1 and Source System 2 subscribe to the Update Customer event. Based on transformations created in iStudio, the adapter servicing each source system transforms the CV message received from the InterConnect hub to the AV format for that system. It is then delivered to Source System 1 and Source System 2.
Implementation Considerations: Content Based Routing is used to determine subscribing systems; here both Source System 1 and Source System 2 subscribe (this methodology can be extended to any number of source systems). Custom transforms are invoked so that the boomerang can be detected later by the CDH.

Step 6
Process Details: Customer updated in Source System 1 and Source System 2.

Step 7
Process Details: Update Customer event raised in Source System 1 and Source System 2.
Implementation Considerations: Events will be configured for all actions that are to be tracked via CDH integration. These events will either be out of the box or built custom, and must be configured during implementation. Published events must pass the unique ID for the source system as well as the source system record ID.

Step 8
Process Details: OracleAS InterConnect processing: Customer information is passed to the adapter servicing the spoke system. This adapter gets the data in the spoke system's Application View (AV) format. It then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated and the resulting recipient list is stored in the message header. The message is then forwarded to the AQ in the InterConnect Hub.
Implementation Considerations: Publish/Subscribe messaging paradigm to be used. TCA should be created as the Common View. The Common View should contain OS/OSR. Content Based Routing is used to determine subscribing systems.

Step 9
Process Details: OracleAS InterConnect passes customer information to subscribing systems. The InterConnect hub AQ, based on the recipient list in the message header, calls the relevant destination application adapters. These adapters get the message in the CV format. The Customer Data Hub subscribes to the Update Customer event. Based on transformations created in iStudio, the adapter servicing the CDH transforms the CV message received from the InterConnect hub to the AV format for the CDH (TCA view). It is then delivered to the CDH by invoking PL/SQL APIs.
Implementation Considerations: Based on Content Based Routing, only the CDH should be privy to the customer updates made in the spoke systems. *All new information must be passed through the CDH first.

Step 10
Process Details: The CDH receives the message and detects the boomerang situation because OS/OSR in the CDH AV will be NULL; this is not a new update.
Implementation Considerations: Boomerang detection must be done in the custom PL/SQL wrapper. Note that the wrapper skeleton will be created by iStudio at design time; the implementer must populate this wrapper with code to detect the boomerang. A custom transform is needed to help detect the boomerang by populating the OS/OSR in the CDH AV with NULL. Since a boomerang message was detected, no new flow will be initiated by the CDH.
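The routing behavior described above is configured declaratively on Common View attributes in iStudio, not written as code. Purely for illustration, the predicate that the Content Based Routing rule must implement can be sketched in PL/SQL as follows (hypothetical names; iStudio does not accept PL/SQL for routing rules):

-- Conceptual sketch only: route the message to a candidate system unless that
-- system is the one already recorded as the originating system (OS) in the CV.
FUNCTION should_route_to (
  p_cv_orig_system IN VARCHAR2,    -- OS attribute carried in the Common View
  p_target_system  IN VARCHAR2 )   -- candidate subscribing system
  RETURN BOOLEAN IS
BEGIN
  RETURN p_cv_orig_system IS NULL
      OR p_cv_orig_system <> p_target_system;
END should_route_to;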

Child Entity Updated in Source System

Step 1
Process Details: Update customer child entity (e.g. Address) information in the source system.

Step 2
Process Details: Update Address event raised.
Implementation Considerations: Events will be configured for all actions that are to be tracked via CDH integration. These events will either be out of the box or built custom, and must be configured during implementation. Published events must pass the unique ID for the source system as well as the source system record ID.

Step 3
Process Details: OracleAS InterConnect processing: Address information is passed to the adapter servicing the spoke system. This adapter gets the data in the spoke system's Application View (AV) format. It then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated and the resulting recipient list is stored in the message header. The message is then forwarded to the AQ in the InterConnect Hub.
Implementation Considerations: Publish/Subscribe messaging paradigm to be used. TCA should be created as the Common View. The Common View should contain OS/OSR.

Step 4
Process Details: OracleAS InterConnect passes customer information to subscribing systems. The InterConnect hub AQ, based on the recipient list in the message header, wakes up the relevant destination application adapters. These adapters get the message in the CV format.
Implementation Considerations: Content Based Routing is used to determine subscribing systems.

Step 5
Process Details: The Customer Data Hub subscribes to the Update Address event. Based on transformations created in iStudio, the adapter servicing the CDH transforms the CV message received from the InterConnect hub to the AV format for the CDH (TCA view). It is then delivered to the CDH by invoking PL/SQL APIs (see step 6).
Implementation Considerations: Based on Content Based Routing, only the CDH should be privy to the customer update in Source System 1. *All new information must be passed through the CDH first.

Step 6
Process Details: The customer address record is updated in the CDH. Note: OS/OSR was not NULL in the TCA AV, so the update occurs.
Implementation Considerations: A custom PL/SQL wrapper is required to call the Update Address APIs. Note that the wrapper skeleton will be created by iStudio at design time; the implementer must populate this wrapper with code to do the above.

Step 7
Process Details: Update Address BES Event raised in the CDH. This event is raised as a result of the address update in Step 6 above.

Step 8
Process Details: The Customer Data Hub publishes the Update Address event to OracleAS InterConnect.
Implementation Considerations: The BES Event needs to be published to InterConnect via custom PL/SQL. The OS in the TCA AV should contain the name of the source system that initiated the update; this value is stored in the CDH in Step 5 above, to prevent routing back to the source system that updated the customer address in the first place. Note that iStudio will generate PL/SQL code that must be called to deliver the message to the CDH database adapter to be sent to the InterConnect Hub. Publish/Subscribe messaging paradigm to be used. Content Based Routing logic is configured so that if a particular source system's OS is already populated in the CV, InterConnect will not deliver the message to that particular source system. This ensures that only the source systems that did not generate this record receive the new customer information.

Step 9
Process Details: OracleAS InterConnect processing: Customer address information is passed to the adapter servicing the CDH. This adapter gets the data in the CDH Application View (AV) format. It then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated and the resulting recipient list is stored in the message header. The message is then forwarded to the AQ in the InterConnect Hub.

Step 10
Process Details: OracleAS InterConnect passes customer address information to subscribing systems. The hub AQ, based on the recipient list in the message header, wakes up the relevant destination application adapters. These adapters get the message in the CV format. Source System 2 subscribes to the Update Address event. Based on transformations created in iStudio, the adapter servicing Source System 2 transforms the CV message received from the InterConnect hub to the AV format for that system. It is then delivered to Source System 2.
Implementation Considerations: Content Based Routing is used to determine subscribing systems. Let us assume that the only subscribing system is Source System 2 (this methodology can be extended to multiple spoke systems). Custom transforms are invoked so that the boomerang can be detected later by the CDH.

Step 11
Process Details: Address updated in Source System 2.

Step 12
Process Details: Steps 2-5 above are repeated. The only difference is that this time, the event is published by Source System 2 rather than Source System 1.
Implementation Considerations: See steps 2-5 above.

Step 13
Process Details: The CDH receives the message and detects the boomerang situation because OS/OSR in the CDH AV will be NULL; this is not a new update.
Implementation Considerations: Boomerang detection must be done in the custom PL/SQL wrapper created in Step 6 above. A custom transform is needed to help detect the boomerang by populating the OS/OSR in the CDH AV with NULL. Since a boomerang message was detected, no new flow will be initiated by the CDH. The flow will end.
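As with the party flow, once the boomerang check passes, the Update Address wrapper invokes the public TCA location APIs. A minimal sketch, assuming the inbound payload has already been mapped into a TCA location record and its location_id has been resolved via an OS/OSR lookup (the literal values below are placeholders):

-- Illustrative fragment of a custom Update Address wrapper (names and values are placeholders).
DECLARE
  l_location_rec  HZ_LOCATION_V2PUB.LOCATION_REC_TYPE;
  l_ovn           NUMBER;
  l_return_status VARCHAR2(1);
  l_msg_count     NUMBER;
  l_msg_data      VARCHAR2(2000);
BEGIN
  -- ... populate l_location_rec from the inbound AV message ...
  l_location_rec.location_id := 1001;         -- placeholder: resolved via OS/OSR lookup
  l_location_rec.city        := 'San Mateo';  -- placeholder: updated attribute
  -- The public API requires the current object version number for optimistic locking.
  SELECT object_version_number
    INTO l_ovn
    FROM hz_locations
   WHERE location_id = l_location_rec.location_id;
  HZ_LOCATION_V2PUB.update_location (
    p_location_rec          => l_location_rec,
    p_object_version_number => l_ovn,
    x_return_status         => l_return_status,
    x_msg_count             => l_msg_count,
    x_msg_data              => l_msg_data );
  IF l_return_status <> FND_API.g_ret_sts_success THEN
    RAISE_APPLICATION_ERROR(-20001, l_msg_data);
  END IF;
END;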

Child Entity Updated in Customer Data Hub

Step 1
Process Details: Customer child entity (e.g. Address) record updated in the CDH.

Step 2
Process Details: Update Address BES Event raised in the CDH.

Step 3
Process Details: The Customer Data Hub publishes the Update Address event to OracleAS InterConnect.
Implementation Considerations: The BES Event needs to be published to InterConnect via custom PL/SQL. The OS in the TCA AV should contain the name of the system that initiated the update; in this case the value is CDH. Note that iStudio will generate PL/SQL code that must be called to deliver the message to the CDH database adapter to be sent to the InterConnect Hub. Publish/Subscribe messaging paradigm to be used. Content Based Routing logic is configured so that if a particular system's OS is already populated in the CV, InterConnect will not deliver the message to that particular system. This ensures that only the source systems that did not generate this record receive the new customer address information. In this case, the OS is CDH.

Step 4
Process Details: OracleAS InterConnect processing: Address information is passed to the adapter servicing the CDH. This adapter gets the data in the CDH Application View (AV) format. It then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated and the resulting recipient list is stored in the message header. The message is then forwarded to the AQ in the InterConnect Hub.

Step 5
Process Details: OracleAS InterConnect passes customer information to subscribing systems. The hub AQ, based on the recipient list in the message header, calls the relevant destination application adapters. These adapters get the message in the CV format. Source System 1 and Source System 2 subscribe to the Update Address event. Based on transformations created in iStudio, the adapter servicing each source system transforms the CV message received from the InterConnect hub to the AV format for that system. It is then delivered to Source System 1 and Source System 2.
Implementation Considerations: Content Based Routing is used to determine subscribing systems; here both Source System 1 and Source System 2 subscribe (this methodology can be extended to any number of source systems). Custom transforms are invoked so that the boomerang can be detected later by the CDH.

Step 6
Process Details: Customer updated in Source System 1 and Source System 2.

Step 7
Process Details: Update Address event raised in Source System 1 and Source System 2.
Implementation Considerations: Events will be configured for all actions that are to be tracked via CDH integration. These events will either be out of the box or built custom, and must be configured during implementation. Published events must pass the unique ID for the source system as well as the source system record ID.

Step 8
Process Details: OracleAS InterConnect processing: Address information is passed to the adapter servicing the spoke system. This adapter gets the data in the spoke system's Application View (AV) format. It then maps this AV to the Common View (CV) format as defined in iStudio. Any routing rules (Content Based or Event Based) are evaluated and the resulting recipient list is stored in the message header. The message is then forwarded to the AQ in the InterConnect Hub.
Implementation Considerations: Publish/Subscribe messaging paradigm to be used. TCA should be created as the Common View. The Common View should contain OS/OSR. Content Based Routing is used to determine subscribing systems.

Step 9
Process Details: OracleAS InterConnect passes customer information to subscribing systems. The InterConnect hub AQ, based on the recipient list in the message header, calls the relevant destination application adapters. These adapters get the message in the CV format. The Customer Data Hub subscribes to the Update Address event. Based on transformations created in iStudio, the adapter servicing the CDH transforms the CV message received from the InterConnect hub to the AV format for the CDH (TCA view). It is then delivered to the CDH by invoking PL/SQL APIs.
Implementation Considerations: Based on Content Based Routing, only the CDH should be privy to the customer updates made in the spoke systems. *All new information must be passed through the CDH first.

Step 10
Process Details: The CDH receives the message and detects the boomerang situation because OS/OSR in the CDH AV will be NULL; this is not a new update.
Implementation Considerations: Boomerang detection must be done in the custom PL/SQL wrapper. Note that the wrapper skeleton will be created by iStudio at design time; the implementer must populate this wrapper with code to detect the boomerang. A custom transform is needed to help detect the boomerang by populating the OS/OSR in the CDH AV with NULL. Since a boomerang message was detected, no new flow will be initiated by the CDH.

Appendix C: InterConnect Sample Code


PL/SQL to Publish BES Events to InterConnect
FUNCTION send_person (
  p_subscription_guid IN     RAW,
  p_event             IN OUT WF_EVENT_T
) RETURN VARCHAR2 IS
  -- local TCA record types
  l_person_rec HZ_PARTY_V2PUB.PERSON_REC_TYPE;
  ......
BEGIN
  -- get data from TCA tables:
  SELECT pp.ATTRIBUTE_CATEGORY, pp.ATTRIBUTE1, pp.ATTRIBUTE2, pp.ATTRIBUTE3, pp.ATTRIBUTE4,
         pp.ATTRIBUTE5, pp.ATTRIBUTE6, pp.ATTRIBUTE7, pp.ATTRIBUTE8, pp.ATTRIBUTE9,
         pp.ATTRIBUTE10, pp.ATTRIBUTE11, pp.ATTRIBUTE12, pp.ATTRIBUTE13, pp.ATTRIBUTE14,
         pp.ATTRIBUTE15, pp.ATTRIBUTE16, pp.ATTRIBUTE17, pp.ATTRIBUTE18, pp.ATTRIBUTE19,
         pp.ATTRIBUTE20, pp.INTERNAL_FLAG, pp.PERSON_PRE_NAME_ADJUNCT, pp.PERSON_FIRST_NAME,
         pp.PERSON_MIDDLE_NAME, pp.PERSON_LAST_NAME, pp.PERSON_NAME_SUFFIX, pp.PERSON_TITLE,
         pp.PERSON_ACADEMIC_TITLE, pp.PERSON_PREVIOUS_LAST_NAME, pp.PERSON_INITIALS,
         pp.KNOWN_AS, pp.PERSON_NAME_PHONETIC, pp.PERSON_FIRST_NAME_PHONETIC,
         pp.PERSON_LAST_NAME_PHONETIC, pp.TAX_REFERENCE, pp.JGZZ_FISCAL_CODE,
         pp.PERSON_IDEN_TYPE, pp.PERSON_IDENTIFIER, pp.DATE_OF_BIRTH, pp.PLACE_OF_BIRTH,
         pp.DATE_OF_DEATH, pp.GENDER, pp.DECLARED_ETHNICITY, pp.MARITAL_STATUS,
         pp.MARITAL_STATUS_EFFECTIVE_DATE, pp.PERSONAL_INCOME, pp.HEAD_OF_HOUSEHOLD_FLAG,
         pp.HOUSEHOLD_INCOME, pp.HOUSEHOLD_SIZE, pp.RENT_OWN_IND, pp.LAST_KNOWN_GPS,
         pp.CONTENT_SOURCE_TYPE, pp.KNOWN_AS2, pp.KNOWN_AS3, pp.KNOWN_AS4, pp.KNOWN_AS5,
         pp.MIDDLE_NAME_PHONETIC, pp.CREATED_BY_MODULE, pp.APPLICATION_ID,
         pp.ACTUAL_CONTENT_SOURCE
    INTO l_person_rec.ATTRIBUTE_CATEGORY, l_person_rec.ATTRIBUTE1, l_person_rec.ATTRIBUTE2,
         l_person_rec.ATTRIBUTE3, l_person_rec.ATTRIBUTE4, l_person_rec.ATTRIBUTE5,
         l_person_rec.ATTRIBUTE6, l_person_rec.ATTRIBUTE7, l_person_rec.ATTRIBUTE8,
         l_person_rec.ATTRIBUTE9, l_person_rec.ATTRIBUTE10, l_person_rec.ATTRIBUTE11,
         l_person_rec.ATTRIBUTE12, l_person_rec.ATTRIBUTE13, l_person_rec.ATTRIBUTE14,
         l_person_rec.ATTRIBUTE15, l_person_rec.ATTRIBUTE16, l_person_rec.ATTRIBUTE17,
         l_person_rec.ATTRIBUTE18, l_person_rec.ATTRIBUTE19, l_person_rec.ATTRIBUTE20,
         l_person_rec.INTERNAL_FLAG, l_person_rec.PERSON_PRE_NAME_ADJUNCT,
         l_person_rec.PERSON_FIRST_NAME, l_person_rec.PERSON_MIDDLE_NAME,
         l_person_rec.PERSON_LAST_NAME, l_person_rec.PERSON_NAME_SUFFIX,
         l_person_rec.PERSON_TITLE, l_person_rec.PERSON_ACADEMIC_TITLE,
         l_person_rec.PERSON_PREVIOUS_LAST_NAME, l_person_rec.PERSON_INITIALS,
         l_person_rec.KNOWN_AS, l_person_rec.PERSON_NAME_PHONETIC,
         l_person_rec.PERSON_FIRST_NAME_PHONETIC, l_person_rec.PERSON_LAST_NAME_PHONETIC,
         l_person_rec.TAX_REFERENCE, l_person_rec.JGZZ_FISCAL_CODE,
         l_person_rec.PERSON_IDEN_TYPE, l_person_rec.PERSON_IDENTIFIER,
         l_person_rec.DATE_OF_BIRTH, l_person_rec.PLACE_OF_BIRTH, l_person_rec.DATE_OF_DEATH,
         l_person_rec.GENDER, l_person_rec.DECLARED_ETHNICITY, l_person_rec.MARITAL_STATUS,
         l_person_rec.MARITAL_STATUS_EFFECTIVE_DATE, l_person_rec.PERSONAL_INCOME,
         l_person_rec.HEAD_OF_HOUSEHOLD_FLAG, l_person_rec.HOUSEHOLD_INCOME,
         l_person_rec.HOUSEHOLD_SIZE, l_person_rec.RENT_OWN_IND, l_person_rec.LAST_KNOWN_GPS,
         l_person_rec.CONTENT_SOURCE_TYPE, l_person_rec.KNOWN_AS2, l_person_rec.KNOWN_AS3,
         l_person_rec.KNOWN_AS4, l_person_rec.KNOWN_AS5, l_person_rec.MIDDLE_NAME_PHONETIC,
         l_person_rec.CREATED_BY_MODULE, l_person_rec.APPLICATION_ID,
         l_person_rec.ACTUAL_CONTENT_SOURCE
    FROM HZ_PERSON_PROFILES pp,
         HZ_PARTIES p
   WHERE pp.party_id = p.party_id
     AND p.party_id  = p_party_id;

  SELECT PARTY_ID, PARTY_NUMBER, VALIDATED_FLAG, ATTRIBUTE_CATEGORY, ATTRIBUTE1, ATTRIBUTE2,
         ATTRIBUTE3, ATTRIBUTE4, ATTRIBUTE5, ATTRIBUTE6, ATTRIBUTE7, ATTRIBUTE8, ATTRIBUTE9,
         ATTRIBUTE10, ATTRIBUTE11, ATTRIBUTE12, ATTRIBUTE13, ATTRIBUTE14, ATTRIBUTE15,
         ATTRIBUTE16, ATTRIBUTE17, ATTRIBUTE18, ATTRIBUTE19, ATTRIBUTE20, ATTRIBUTE21,
         ATTRIBUTE22, ATTRIBUTE23, ATTRIBUTE24, ORIG_SYSTEM_REFERENCE, CATEGORY_CODE,
         SALUTATION
    INTO l_person_rec.party_rec.PARTY_ID, l_person_rec.party_rec.PARTY_NUMBER,
         l_person_rec.party_rec.VALIDATED_FLAG, l_person_rec.party_rec.ATTRIBUTE_CATEGORY,
         l_person_rec.party_rec.ATTRIBUTE1, l_person_rec.party_rec.ATTRIBUTE2,
         l_person_rec.party_rec.ATTRIBUTE3, l_person_rec.party_rec.ATTRIBUTE4,
         l_person_rec.party_rec.ATTRIBUTE5, l_person_rec.party_rec.ATTRIBUTE6,
         l_person_rec.party_rec.ATTRIBUTE7, l_person_rec.party_rec.ATTRIBUTE8,
         l_person_rec.party_rec.ATTRIBUTE9, l_person_rec.party_rec.ATTRIBUTE10,
         l_person_rec.party_rec.ATTRIBUTE11, l_person_rec.party_rec.ATTRIBUTE12,
         l_person_rec.party_rec.ATTRIBUTE13, l_person_rec.party_rec.ATTRIBUTE14,
         l_person_rec.party_rec.ATTRIBUTE15, l_person_rec.party_rec.ATTRIBUTE16,
         l_person_rec.party_rec.ATTRIBUTE17, l_person_rec.party_rec.ATTRIBUTE18,
         l_person_rec.party_rec.ATTRIBUTE19, l_person_rec.party_rec.ATTRIBUTE20,
         l_person_rec.party_rec.ATTRIBUTE21, l_person_rec.party_rec.ATTRIBUTE22,
         l_person_rec.party_rec.ATTRIBUTE23, l_person_rec.party_rec.ATTRIBUTE24,
         l_person_rec.party_rec.ORIG_SYSTEM_REFERENCE, l_person_rec.party_rec.CATEGORY_CODE,
         l_person_rec.party_rec.SALUTATION
    FROM HZ_PARTIES
   WHERE PARTY_ID = p_party_id;

  -- Construct InterConnect Message Object.
  XXAS_Person.crMsg_createPerson_OAI_V1 (
    messageObjectID   => l_moid,
    aoID              => l_aoid,
    CREATED_BY_MODULE => 'TCA' );

  l_dummy := XXAS_Person.cr_XXAS_PERSON_REC_TYPE_PERSON (
    l_person_rec.person_pre_name_adjunct, l_person_rec.person_first_name,
    l_person_rec.person_middle_name, l_person_rec.person_last_name,
    l_person_rec.person_name_suffix, l_person_rec.person_title,
    l_person_rec.person_academic_title, l_person_rec.person_previous_last_name,
    l_person_rec.person_initials, l_person_rec.known_as, l_person_rec.known_as2,
    l_person_rec.known_as3, l_person_rec.known_as4, l_person_rec.known_as5,
    l_person_rec.person_name_phonetic, l_person_rec.person_first_name_phonetic,
    l_person_rec.person_last_name_phonetic, l_person_rec.middle_name_phonetic,
    l_person_rec.tax_reference, l_person_rec.jgzz_fiscal_code, l_person_rec.person_iden_type,
    l_person_rec.person_identifier, l_person_rec.date_of_birth, l_person_rec.place_of_birth,
    l_person_rec.date_of_death, l_person_rec.gender, l_person_rec.declared_ethnicity,
    l_person_rec.marital_status, l_person_rec.marital_status_effective_date,
    l_person_rec.personal_income, l_person_rec.head_of_household_flag,
    l_person_rec.household_income, l_person_rec.household_size, l_person_rec.rent_own_ind,
    l_person_rec.last_known_gps, l_person_rec.content_source_type, l_person_rec.internal_flag,
    l_person_rec.attribute_category, l_person_rec.attribute1, l_person_rec.attribute2,
    l_person_rec.attribute3, l_person_rec.attribute4, l_person_rec.attribute5,
    l_person_rec.attribute6, l_person_rec.attribute7, l_person_rec.attribute8,
    l_person_rec.attribute9, l_person_rec.attribute10, l_person_rec.attribute11,
    l_person_rec.attribute12, l_person_rec.attribute13, l_person_rec.attribute14,
    l_person_rec.attribute15, l_person_rec.attribute16, l_person_rec.attribute17,
    l_person_rec.attribute18, l_person_rec.attribute19, l_person_rec.attribute20,
    l_person_rec.created_by_module, l_person_rec.application_id,
    l_person_rec.actual_content_source, l_moid, l_aoid );

  l_dummy := XXAS_Person.cr_XXAS_PARTY_REC_TYPE_PARTY_R (
    l_person_rec.party_rec.PARTY_ID, l_person_rec.party_rec.PARTY_NUMBER,
    l_person_rec.party_rec.VALIDATED_FLAG, l_person_rec.party_rec.ORIG_SYSTEM_REFERENCE,
    l_person_rec.party_rec.STATUS, l_person_rec.party_rec.CATEGORY_CODE,
    l_person_rec.party_rec.SALUTATION, l_person_rec.party_rec.ATTRIBUTE_CATEGORY,
    l_person_rec.party_rec.ATTRIBUTE1, l_person_rec.party_rec.ATTRIBUTE2,
    l_person_rec.party_rec.ATTRIBUTE3, l_person_rec.party_rec.ATTRIBUTE4,
    l_person_rec.party_rec.ATTRIBUTE5, l_person_rec.party_rec.ATTRIBUTE6,
    l_person_rec.party_rec.ATTRIBUTE7, l_person_rec.party_rec.ATTRIBUTE8,
    l_person_rec.party_rec.ATTRIBUTE9, l_person_rec.party_rec.ATTRIBUTE10,
    l_person_rec.party_rec.ATTRIBUTE11, l_person_rec.party_rec.ATTRIBUTE12,
    l_person_rec.party_rec.ATTRIBUTE13, l_person_rec.party_rec.ATTRIBUTE14,
    l_person_rec.party_rec.ATTRIBUTE15, l_person_rec.party_rec.ATTRIBUTE16,
    l_person_rec.party_rec.ATTRIBUTE17, l_person_rec.party_rec.ATTRIBUTE18,
    l_person_rec.party_rec.ATTRIBUTE19, l_person_rec.party_rec.ATTRIBUTE20,
    l_person_rec.party_rec.ATTRIBUTE21, l_person_rec.party_rec.ATTRIBUTE22,
    l_person_rec.party_rec.ATTRIBUTE23, l_person_rec.party_rec.ATTRIBUTE24,
    l_moid, l_dummy );

  -- ...... Construct other entities such as addresses, phones, etc.

  -- Send message
  XXAS_Person.pub_createPerson_OAI_V1 (
    messageObject => l_moid,
    srcAppName    => 'CDH' );
  COMMIT;
  RETURN 'SUCCESS';
END;
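A note on how send_person is wired in: its signature (p_subscription_guid IN RAW, p_event IN OUT WF_EVENT_T, returning VARCHAR2) is the standard Business Event System rule function signature, so the function is registered as the rule function of a subscription to the relevant TCA event. The elided declarations (l_moid, l_aoid, l_dummy, p_party_id) are local variables; the party identifier would typically be read from the event payload, for example via p_event.GetValueForParameter('PARTY_ID'). The parameter name is an assumption here; use whatever parameter the publishing event actually carries.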

PL/SQL on an E-Business Suite System to Subscribe to Events from InterConnect


PROCEDURE sub_CreatePerson ( Person IN PERSON_REC_TYPE ) IS
  /* Definition of object type PERSON_REC_TYPE:
     CREATE OR REPLACE TYPE PERSON_REC_TYPE IS OBJECT (
       PARTY_REC          PARTY_REC_TYPE,
       PERSON_PROFILE_REC PERSON_PROFILE_REC_TYPE,
       ADDRESS_REC        ADDRESS_REC_TYPE_Arr,
       CONTACT_POINT_REC  CONTACT_POINT_REC_TYPE_Arr,
       SOURCE_SYSTEM_REC  SOURCE_SYSTEM_REC_TYPE_Arr );
  */
  -- local TCA record types and API output variables
  l_party_rec         HZ_PARTY_V2PUB.PARTY_REC_TYPE;
  l_person_rec        HZ_PARTY_V2PUB.PERSON_REC_TYPE;
  l_location_rec      HZ_LOCATION_V2PUB.LOCATION_REC_TYPE;
  l_party_site_rec    HZ_PARTY_SITE_V2PUB.PARTY_SITE_REC_TYPE;
  l_contact_point_rec HZ_CONTACT_POINT_V2PUB.CONTACT_POINT_REC_TYPE;
  l_phone_rec         HZ_CONTACT_POINT_V2PUB.PHONE_REC_TYPE;
  l_email_rec         HZ_CONTACT_POINT_V2PUB.EMAIL_REC_TYPE;
  l_party_id          NUMBER;
  l_party_number      VARCHAR2(30);
  l_profile_id        NUMBER;
  l_location_id       NUMBER;
  l_party_site_id     NUMBER;
  l_party_site_number VARCHAR2(30);
  l_contact_point_id  NUMBER;
  l_return_status     VARCHAR2(1);
  l_msg_count         NUMBER;
  l_msg_data          VARCHAR2(2000);
BEGIN
  l_party_rec.validated_flag        := Person.PARTY_REC.VALIDATED_FLAG;
  l_party_rec.orig_system_reference := Person.SOURCE_SYSTEM_REC(1).ORIGINAL_SYSTEM_REFERENCE;
  l_party_rec.orig_system           := Person.SOURCE_SYSTEM_REC(1).ORIGINAL_SYSTEM;
  l_party_rec.status                := 'A';
  l_party_rec.category_code         := Person.PARTY_REC.CATEGORY_CODE;
  l_party_rec.salutation            := Person.PARTY_REC.SALUTATION;
  l_party_rec.attribute_category    := Person.PARTY_REC.ATTRIBUTE_CATEGORY;
  l_party_rec.attribute1  := Person.PARTY_REC.ATTRIBUTE1;   l_party_rec.attribute2  := Person.PARTY_REC.ATTRIBUTE2;
  l_party_rec.attribute3  := Person.PARTY_REC.ATTRIBUTE3;   l_party_rec.attribute4  := Person.PARTY_REC.ATTRIBUTE4;
  l_party_rec.attribute5  := Person.PARTY_REC.ATTRIBUTE5;   l_party_rec.attribute6  := Person.PARTY_REC.ATTRIBUTE6;
  l_party_rec.attribute7  := Person.PARTY_REC.ATTRIBUTE7;   l_party_rec.attribute8  := Person.PARTY_REC.ATTRIBUTE8;
  l_party_rec.attribute9  := Person.PARTY_REC.ATTRIBUTE9;   l_party_rec.attribute10 := Person.PARTY_REC.ATTRIBUTE10;
  l_party_rec.attribute11 := Person.PARTY_REC.ATTRIBUTE11;  l_party_rec.attribute12 := Person.PARTY_REC.ATTRIBUTE12;
  l_party_rec.attribute13 := Person.PARTY_REC.ATTRIBUTE13;  l_party_rec.attribute14 := Person.PARTY_REC.ATTRIBUTE14;
  l_party_rec.attribute15 := Person.PARTY_REC.ATTRIBUTE15;  l_party_rec.attribute16 := Person.PARTY_REC.ATTRIBUTE16;
  l_party_rec.attribute17 := Person.PARTY_REC.ATTRIBUTE17;  l_party_rec.attribute18 := Person.PARTY_REC.ATTRIBUTE18;
  l_party_rec.attribute19 := Person.PARTY_REC.ATTRIBUTE19;  l_party_rec.attribute20 := Person.PARTY_REC.ATTRIBUTE20;
  l_party_rec.attribute21 := Person.PARTY_REC.ATTRIBUTE21;  l_party_rec.attribute22 := Person.PARTY_REC.ATTRIBUTE22;
  l_party_rec.attribute23 := Person.PARTY_REC.ATTRIBUTE23;  l_party_rec.attribute24 := Person.PARTY_REC.ATTRIBUTE24;
  l_party_rec.created_by_module := 'TCA CDH';

  l_person_rec.person_pre_name_adjunct   := Person.PERSON_PROFILE_REC.PERSON_PRE_NAME_ADJUNCT;
  l_person_rec.person_first_name         := Person.PERSON_PROFILE_REC.PERSON_FIRST_NAME;
  l_person_rec.person_middle_name        := Person.PERSON_PROFILE_REC.PERSON_MIDDLE_NAME;
  l_person_rec.person_last_name          := Person.PERSON_PROFILE_REC.PERSON_LAST_NAME;
  l_person_rec.person_name_suffix        := Person.PERSON_PROFILE_REC.PERSON_NAME_SUFFIX;
  l_person_rec.person_title              := Person.PERSON_PROFILE_REC.PERSON_TITLE;
  l_person_rec.person_academic_title     := Person.PERSON_PROFILE_REC.PERSON_ACADEMIC_TITLE;
  l_person_rec.person_previous_last_name := Person.PERSON_PROFILE_REC.PERSON_PREVIOUS_LAST_NAME;
  l_person_rec.person_initials           := Person.PERSON_PROFILE_REC.PERSON_INITIALS;
  l_person_rec.known_as  := Person.PERSON_PROFILE_REC.KNOWN_AS;
  l_person_rec.known_as2 := Person.PERSON_PROFILE_REC.KNOWN_AS2;
  l_person_rec.known_as3 := Person.PERSON_PROFILE_REC.KNOWN_AS3;
  l_person_rec.known_as4 := Person.PERSON_PROFILE_REC.KNOWN_AS4;
  l_person_rec.known_as5 := Person.PERSON_PROFILE_REC.KNOWN_AS5;
  l_person_rec.person_name_phonetic       := Person.PERSON_PROFILE_REC.PERSON_NAME_PHONETIC;
  l_person_rec.person_first_name_phonetic := Person.PERSON_PROFILE_REC.PERSON_FIRST_NAME_PHONETIC;
  l_person_rec.person_last_name_phonetic  := Person.PERSON_PROFILE_REC.PERSON_LAST_NAME_PHONETIC;
  l_person_rec.middle_name_phonetic       := Person.PERSON_PROFILE_REC.MIDDLE_NAME_PHONETIC;
  l_person_rec.tax_reference      := Person.PERSON_PROFILE_REC.TAX_REFERENCE;
  l_person_rec.jgzz_fiscal_code   := Person.PERSON_PROFILE_REC.JGZZ_FISCAL_CODE;
  l_person_rec.person_iden_type   := Person.PERSON_PROFILE_REC.PERSON_IDEN_TYPE;
  l_person_rec.person_identifier  := Person.PERSON_PROFILE_REC.PERSON_IDENTIFIER;
  l_person_rec.date_of_birth      := Person.PERSON_PROFILE_REC.DATE_OF_BIRTH;
  l_person_rec.place_of_birth     := Person.PERSON_PROFILE_REC.PLACE_OF_BIRTH;
  l_person_rec.date_of_death      := Person.PERSON_PROFILE_REC.DATE_OF_DEATH;
  l_person_rec.deceased_flag      := Person.PERSON_PROFILE_REC.DECEASED_FLAG;
  l_person_rec.gender             := Person.PERSON_PROFILE_REC.GENDER;
  l_person_rec.declared_ethnicity := Person.PERSON_PROFILE_REC.DECLARED_ETHNICITY;
  l_person_rec.marital_status     := Person.PERSON_PROFILE_REC.MARITAL_STATUS;
  l_person_rec.marital_status_effective_date := Person.PERSON_PROFILE_REC.MARITAL_STATUS_EFFECTIVE_DATE;
  l_person_rec.personal_income        := Person.PERSON_PROFILE_REC.PERSONAL_INCOME;
  l_person_rec.head_of_household_flag := Person.PERSON_PROFILE_REC.HEAD_OF_HOUSEHOLD_FLAG;
  l_person_rec.household_income       := Person.PERSON_PROFILE_REC.HOUSEHOLD_INCOME;
  l_person_rec.household_size         := Person.PERSON_PROFILE_REC.HOUSEHOLD_SIZE;
  l_person_rec.rent_own_ind           := Person.PERSON_PROFILE_REC.RENT_OWN_IND;
  l_person_rec.last_known_gps         := Person.PERSON_PROFILE_REC.LAST_KNOWN_GPS;
  l_person_rec.internal_flag          := Person.PERSON_PROFILE_REC.INTERNAL_FLAG;
  l_person_rec.attribute_category     := Person.PERSON_PROFILE_REC.ATTRIBUTE_CATEGORY;
  l_person_rec.attribute1  := Person.PERSON_PROFILE_REC.ATTRIBUTE1;   l_person_rec.attribute2  := Person.PERSON_PROFILE_REC.ATTRIBUTE2;
  l_person_rec.attribute3  := Person.PERSON_PROFILE_REC.ATTRIBUTE3;   l_person_rec.attribute4  := Person.PERSON_PROFILE_REC.ATTRIBUTE4;
  l_person_rec.attribute5  := Person.PERSON_PROFILE_REC.ATTRIBUTE5;   l_person_rec.attribute6  := Person.PERSON_PROFILE_REC.ATTRIBUTE6;
  l_person_rec.attribute7  := Person.PERSON_PROFILE_REC.ATTRIBUTE7;   l_person_rec.attribute8  := Person.PERSON_PROFILE_REC.ATTRIBUTE8;
  l_person_rec.attribute9  := Person.PERSON_PROFILE_REC.ATTRIBUTE9;   l_person_rec.attribute10 := Person.PERSON_PROFILE_REC.ATTRIBUTE10;
  l_person_rec.attribute11 := Person.PERSON_PROFILE_REC.ATTRIBUTE11;  l_person_rec.attribute12 := Person.PERSON_PROFILE_REC.ATTRIBUTE12;
  l_person_rec.attribute13 := Person.PERSON_PROFILE_REC.ATTRIBUTE13;  l_person_rec.attribute14 := Person.PERSON_PROFILE_REC.ATTRIBUTE14;
  l_person_rec.attribute15 := Person.PERSON_PROFILE_REC.ATTRIBUTE15;  l_person_rec.attribute16 := Person.PERSON_PROFILE_REC.ATTRIBUTE16;
  l_person_rec.attribute17 := Person.PERSON_PROFILE_REC.ATTRIBUTE17;  l_person_rec.attribute18 := Person.PERSON_PROFILE_REC.ATTRIBUTE18;
  l_person_rec.attribute19 := Person.PERSON_PROFILE_REC.ATTRIBUTE19;  l_person_rec.attribute20 := Person.PERSON_PROFILE_REC.ATTRIBUTE20;
  l_person_rec.created_by_module := 'TCA CDH';
  l_person_rec.party_rec := l_party_rec;

  /* Create person */
  HZ_PARTY_V2PUB.create_person (
    p_person_rec    => l_person_rec,
    x_party_id      => l_party_id,
    x_party_number  => l_party_number,
    x_profile_id    => l_profile_id,
    x_return_status => l_return_status,
    x_msg_count     => l_msg_count,
    x_msg_data      => l_msg_data );
  COMMIT;
  IF l_return_status <> FND_API.g_ret_sts_success THEN
    RAISE_APPLICATION_ERROR(-20001, l_msg_data);
  END IF;

  IF Person.ADDRESS_REC IS NOT NULL THEN
    /* Create addresses (locations and party sites) for each record passed in */
    FOR i IN 1..Person.ADDRESS_REC.COUNT LOOP
      l_location_rec.orig_system_reference := Person.ADDRESS_REC(i).LOCATION_REC.ORIG_SYSTEM_REFERENCE;
      l_location_rec.orig_system   := Person.SOURCE_SYSTEM_REC(1).ORIGINAL_SYSTEM;
      l_location_rec.country       := Person.ADDRESS_REC(i).LOCATION_REC.COUNTRY;
      l_location_rec.address1      := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS1;
      l_location_rec.address2      := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS2;
      l_location_rec.address3      := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS3;
      l_location_rec.address4      := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS4;
      l_location_rec.city          := Person.ADDRESS_REC(i).LOCATION_REC.CITY;
      l_location_rec.postal_code   := Person.ADDRESS_REC(i).LOCATION_REC.POSTAL_CODE;
      l_location_rec.state         := Person.ADDRESS_REC(i).LOCATION_REC.STATE;
      l_location_rec.province      := Person.ADDRESS_REC(i).LOCATION_REC.PROVINCE;
      l_location_rec.county        := Person.ADDRESS_REC(i).LOCATION_REC.COUNTY;
      l_location_rec.address_key   := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS_KEY;
      l_location_rec.address_style := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS_STYLE;
      l_location_rec.validated_flag := Person.ADDRESS_REC(i).LOCATION_REC.VALIDATED_FLAG;
      l_location_rec.address_lines_phonetic := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS_LINES_PHONETIC;
      l_location_rec.po_box_number := Person.ADDRESS_REC(i).LOCATION_REC.PO_BOX_NUMBER;
      l_location_rec.house_number  := Person.ADDRESS_REC(i).LOCATION_REC.HOUSE_NUMBER;
      l_location_rec.street_suffix := Person.ADDRESS_REC(i).LOCATION_REC.STREET_SUFFIX;
      l_location_rec.street        := Person.ADDRESS_REC(i).LOCATION_REC.STREET;
      l_location_rec.street_number := Person.ADDRESS_REC(i).LOCATION_REC.STREET_NUMBER;
      l_location_rec.floor         := Person.ADDRESS_REC(i).LOCATION_REC.FLOOR;
      l_location_rec.suite         := Person.ADDRESS_REC(i).LOCATION_REC.SUITE;
      l_location_rec.postal_plus4_code := Person.ADDRESS_REC(i).LOCATION_REC.POSTAL_PLUS4_CODE;
      l_location_rec.position          := Person.ADDRESS_REC(i).LOCATION_REC.POSITION;
      l_location_rec.location_directions := Person.ADDRESS_REC(i).LOCATION_REC.LOCATION_DIRECTIONS;
      l_location_rec.address_effective_date  := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS_EFFECTIVE_DATE;
      l_location_rec.address_expiration_date := Person.ADDRESS_REC(i).LOCATION_REC.ADDRESS_EXPIRATION_DATE;
      l_location_rec.clli_code         := Person.ADDRESS_REC(i).LOCATION_REC.CLLI_CODE;
      l_location_rec.language          := Person.ADDRESS_REC(i).LOCATION_REC.LANGUAGE;
      l_location_rec.short_description := Person.ADDRESS_REC(i).LOCATION_REC.SHORT_DESCRIPTION;
      l_location_rec.description       := Person.ADDRESS_REC(i).LOCATION_REC.DESCRIPTION;
      l_location_rec.geometry_status_code := Person.ADDRESS_REC(i).LOCATION_REC.GEOMETRY_STATUS_CODE;
      l_location_rec.loc_hierarchy_id  := Person.ADDRESS_REC(i).LOCATION_REC.LOC_HIERARCHY_ID;
      l_location_rec.sales_tax_geocode := Person.ADDRESS_REC(i).LOCATION_REC.SALES_TAX_GEOCODE;
      l_location_rec.sales_tax_inside_city_limits := Person.ADDRESS_REC(i).LOCATION_REC.SALES_TAX_INSIDE_CITY_LIMITS;
      l_location_rec.fa_location_id     := Person.ADDRESS_REC(i).LOCATION_REC.FA_LOCATION_ID;
      l_location_rec.attribute_category := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE_CATEGORY;
      l_location_rec.attribute1  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE1;   l_location_rec.attribute2  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE2;
      l_location_rec.attribute3  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE3;   l_location_rec.attribute4  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE4;
      l_location_rec.attribute5  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE5;   l_location_rec.attribute6  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE6;
      l_location_rec.attribute7  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE7;   l_location_rec.attribute8  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE8;
      l_location_rec.attribute9  := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE9;   l_location_rec.attribute10 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE10;
      l_location_rec.attribute11 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE11;  l_location_rec.attribute12 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE12;
      l_location_rec.attribute13 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE13;  l_location_rec.attribute14 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE14;
      l_location_rec.attribute15 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE15;  l_location_rec.attribute16 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE16;
      l_location_rec.attribute17 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE17;  l_location_rec.attribute18 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE18;
      l_location_rec.attribute19 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE19;  l_location_rec.attribute20 := Person.ADDRESS_REC(i).LOCATION_REC.ATTRIBUTE20;
      l_location_rec.timezone_id := Person.ADDRESS_REC(i).LOCATION_REC.TIMEZONE_ID;
      l_location_rec.created_by_module := 'TCA CDH';
      l_location_rec.delivery_point_code := Person.ADDRESS_REC(i).LOCATION_REC.DELIVERY_POINT_CODE;

      HZ_LOCATION_V2PUB.create_location (
        p_location_rec  => l_location_rec,
        x_location_id   => l_location_id,
        x_return_status => l_return_status,
        x_msg_count     => l_msg_count,
        x_msg_data      => l_msg_data );
      COMMIT;
      IF l_return_status <> FND_API.g_ret_sts_success THEN
        RAISE_APPLICATION_ERROR(-20001, l_msg_data);
      END IF;

      l_party_site_rec.party_id    := l_party_id;
      l_party_site_rec.location_id := l_location_id;
      l_party_site_rec.party_site_number := Person.ADDRESS_REC(i).PARTY_SITE_REC.PARTY_SITE_NUMBER;
      l_party_site_rec.orig_system_reference := Person.ADDRESS_REC(i).PARTY_SITE_REC.ORIG_SYSTEM_REFERENCE;
      l_party_site_rec.mailstop := Person.ADDRESS_REC(i).PARTY_SITE_REC.MAILSTOP;
      l_party_site_rec.identifying_address_flag := Person.ADDRESS_REC(i).PARTY_SITE_REC.IDENTIFYING_ADDRESS_FLAG;
      l_party_site_rec.status := 'A';
      l_party_site_rec.party_site_name := Person.ADDRESS_REC(i).PARTY_SITE_REC.PARTY_SITE_NAME;
      l_party_site_rec.attribute_category := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE_CATEGORY;
      l_party_site_rec.attribute1  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE1;   l_party_site_rec.attribute2  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE2;
      l_party_site_rec.attribute3  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE3;   l_party_site_rec.attribute4  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE4;
      l_party_site_rec.attribute5  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE5;   l_party_site_rec.attribute6  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE6;
      l_party_site_rec.attribute7  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE7;   l_party_site_rec.attribute8  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE8;
      l_party_site_rec.attribute9  := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE9;   l_party_site_rec.attribute10 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE10;
      l_party_site_rec.attribute11 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE11;  l_party_site_rec.attribute12 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE12;
      l_party_site_rec.attribute13 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE13;  l_party_site_rec.attribute14 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE14;
      l_party_site_rec.attribute15 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE15;  l_party_site_rec.attribute16 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE16;
      l_party_site_rec.attribute17 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE17;  l_party_site_rec.attribute18 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE18;
      l_party_site_rec.attribute19 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE19;  l_party_site_rec.attribute20 := Person.ADDRESS_REC(i).PARTY_SITE_REC.ATTRIBUTE20;
      l_party_site_rec.language  := Person.ADDRESS_REC(i).PARTY_SITE_REC.LANGUAGE;
      l_party_site_rec.addressee := Person.ADDRESS_REC(i).PARTY_SITE_REC.ADDRESSEE;
      l_party_site_rec.created_by_module := 'TCA CDH';

      HZ_PARTY_SITE_V2PUB.create_party_site (
        p_party_site_rec    => l_party_site_rec,
        x_party_site_id     => l_party_site_id,
        x_party_site_number => l_party_site_number,
        x_return_status     => l_return_status,
        x_msg_count         => l_msg_count,
        x_msg_data          => l_msg_data );
      COMMIT;
      IF l_return_status <> FND_API.g_ret_sts_success THEN
        RAISE_APPLICATION_ERROR(-20001, l_msg_data);
      END IF;
    END LOOP;
  END IF; /* IF Person.ADDRESS_REC is not null */

  IF Person.CONTACT_POINT_REC IS NOT NULL THEN
    /* Create phone numbers and e-mail addresses */
    FOR i IN 1..Person.CONTACT_POINT_REC.COUNT LOOP
      l_contact_point_rec.status           := 'A';
      l_contact_point_rec.owner_table_name := 'HZ_PARTIES';
      l_contact_point_rec.owner_table_id   := l_party_id;
      l_contact_point_rec.primary_flag     := Person.CONTACT_POINT_REC(i).PRIMARY_FLAG;
      l_contact_point_rec.orig_system_reference := Person.CONTACT_POINT_REC(i).ORIG_SYSTEM_REFERENCE;
      l_contact_point_rec.attribute_category := Person.CONTACT_POINT_REC(i).ATTRIBUTE_CATEGORY;
      l_contact_point_rec.attribute1  := Person.CONTACT_POINT_REC(i).ATTRIBUTE1;   l_contact_point_rec.attribute2  := Person.CONTACT_POINT_REC(i).ATTRIBUTE2;
      l_contact_point_rec.attribute3  := Person.CONTACT_POINT_REC(i).ATTRIBUTE3;   l_contact_point_rec.attribute4  := Person.CONTACT_POINT_REC(i).ATTRIBUTE4;
      l_contact_point_rec.attribute5  := Person.CONTACT_POINT_REC(i).ATTRIBUTE5;   l_contact_point_rec.attribute6  := Person.CONTACT_POINT_REC(i).ATTRIBUTE6;
      l_contact_point_rec.attribute7  := Person.CONTACT_POINT_REC(i).ATTRIBUTE7;   l_contact_point_rec.attribute8  := Person.CONTACT_POINT_REC(i).ATTRIBUTE8;
      l_contact_point_rec.attribute9  := Person.CONTACT_POINT_REC(i).ATTRIBUTE9;   l_contact_point_rec.attribute10 := Person.CONTACT_POINT_REC(i).ATTRIBUTE10;
      l_contact_point_rec.attribute11 := Person.CONTACT_POINT_REC(i).ATTRIBUTE11;  l_contact_point_rec.attribute12 := Person.CONTACT_POINT_REC(i).ATTRIBUTE12;
      l_contact_point_rec.attribute13 := Person.CONTACT_POINT_REC(i).ATTRIBUTE13;  l_contact_point_rec.attribute14 := Person.CONTACT_POINT_REC(i).ATTRIBUTE14;
      l_contact_point_rec.attribute15 := Person.CONTACT_POINT_REC(i).ATTRIBUTE15;  l_contact_point_rec.attribute16 := Person.CONTACT_POINT_REC(i).ATTRIBUTE16;
      l_contact_point_rec.attribute17 := Person.CONTACT_POINT_REC(i).ATTRIBUTE17;  l_contact_point_rec.attribute18 := Person.CONTACT_POINT_REC(i).ATTRIBUTE18;
      l_contact_point_rec.attribute19 := Person.CONTACT_POINT_REC(i).ATTRIBUTE19;  l_contact_point_rec.attribute20 := Person.CONTACT_POINT_REC(i).ATTRIBUTE20;
      l_contact_point_rec.contact_point_purpose := Person.CONTACT_POINT_REC(i).CONTACT_POINT_PURPOSE;
      l_contact_point_rec.primary_by_purpose    := Person.CONTACT_POINT_REC(i).PRIMARY_BY_PURPOSE;
      l_contact_point_rec.created_by_module     := 'TCA CDH';

      IF Person.CONTACT_POINT_REC(i).CONTACT_POINT_TYPE = 'PHONE' THEN
        l_contact_point_rec.contact_point_type := 'PHONE';
        l_phone_rec.phone_calling_calendar := Person.CONTACT_POINT_REC(i).PHONE_CALLING_CALENDAR;
        l_phone_rec.last_contact_dt_time   := Person.CONTACT_POINT_REC(i).LAST_CONTACT_DT_TIME;
        l_phone_rec.timezone_id            := Person.CONTACT_POINT_REC(i).TIMEZONE_ID;
        l_phone_rec.phone_area_code        := Person.CONTACT_POINT_REC(i).PHONE_AREA_CODE;
        l_phone_rec.phone_country_code     := Person.CONTACT_POINT_REC(i).PHONE_COUNTRY_CODE;
        l_phone_rec.phone_number           := Person.CONTACT_POINT_REC(i).PHONE_NUMBER;
        l_phone_rec.phone_extension        := Person.CONTACT_POINT_REC(i).PHONE_EXTENSION;
        l_phone_rec.phone_line_type        := Person.CONTACT_POINT_REC(i).PHONE_LINE_TYPE;
        l_phone_rec.raw_phone_number       := Person.CONTACT_POINT_REC(i).RAW_PHONE_NUMBER;

        HZ_CONTACT_POINT_V2PUB.create_phone_contact_point (
          p_contact_point_rec => l_contact_point_rec,
          p_phone_rec         => l_phone_rec,
          x_contact_point_id  => l_contact_point_id,
          x_return_status     => l_return_status,
          x_msg_count         => l_msg_count,
          x_msg_data          => l_msg_data );
        COMMIT;
        IF l_return_status <> FND_API.g_ret_sts_success THEN
          RAISE_APPLICATION_ERROR(-20001, l_msg_data);
        END IF;

      ELSIF Person.CONTACT_POINT_REC(i).CONTACT_POINT_TYPE = 'EMAIL' THEN
        l_contact_point_rec.contact_point_type := 'EMAIL';
        l_email_rec.email_format  := Person.CONTACT_POINT_REC(i).EMAIL_FORMAT;
        l_email_rec.email_address := Person.CONTACT_POINT_REC(i).EMAIL_ADDRESS;

        HZ_CONTACT_POINT_V2PUB.create_email_contact_point (
          p_contact_point_rec => l_contact_point_rec,
          p_email_rec         => l_email_rec,
          x_contact_point_id  => l_contact_point_id,
          x_return_status     => l_return_status,
          x_msg_count         => l_msg_count,
          x_msg_data          => l_msg_data );
        COMMIT;
        IF l_return_status <> FND_API.g_ret_sts_success THEN
          RAISE_APPLICATION_ERROR(-20001, l_msg_data);
        END IF;
      END IF;
    END LOOP;
  END IF; /* IF Person.CONTACT_POINT_REC is not null */
END sub_CreatePerson;
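On the subscribing side, sub_CreatePerson corresponds to the procedure skeleton that iStudio generates for the subscribed event (as noted in the business flows above): the database adapter dequeues the CV message, applies the CV-to-AV mapping, and invokes the procedure with the populated PERSON_REC_TYPE object. The HZ API calls shown in the body are the portion the implementer writes; the queue and invocation plumbing is generated by iStudio, so exact names will differ per project.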

Appendix D: Transactions Viewer Sample Code


The following sample code demonstrates how to seed metadata for the Transactions Viewer. This particular transaction query binds one or more parameters whose values are supplied by a separate Source System Management (SSM) query against an external source system.

Inserting into IMC_THREE_SIXTY_QUERY_VL


DECLARE
  x_query_id              NUMBER;
  x_application_id        NUMBER;
  x_query_type_flag       VARCHAR2(32767);
  x_product_query1        VARCHAR2(32767);
  x_product_query2        VARCHAR2(32767);
  x_product_query3        VARCHAR2(32767);
  x_product_query4        VARCHAR2(32767);
  x_product_query5        VARCHAR2(32767);
  x_sequence_no           NUMBER;
  x_security_function     VARCHAR2(32767);
  x_display_flag          VARCHAR2(32767);
  x_filter_count          NUMBER;
  x_display_column_count  NUMBER;
  x_product_url           VARCHAR2(32767);
  x_be_code               VARCHAR2(30);
  x_category_code         VARCHAR2(30);
  x_transaction_name      VARCHAR2(32767);
  x_header_text           VARCHAR2(32767);
  x_creation_date         DATE;
  x_created_by            NUMBER;
  x_last_update_date      DATE;
  x_last_updated_by       NUMBER;
  x_last_update_login     NUMBER;
  x_object_version_number NUMBER;
BEGIN
  x_query_id        := NULL;
  x_application_id  := 503;
  x_query_type_flag := 'EXTT';
  x_product_query1  := 'SELECT e.event_offer_name column1, e.event_end_date column2, v.venue_name column3 FROM ext_event_offers_vl@ext_sourc where event_id = :1';
  x_product_query2  := NULL;
  x_product_query3  := NULL;
  x_product_query4  := NULL;
  x_product_query5  := NULL;
  x_sequence_no     := 2;
  x_security_function := 'IMC_NG_360_EVENTS';
  x_display_flag    := 'Y';
  x_filter_count    := 4;   -- tells the Transactions engine that 4 filter objects should be created
                            -- on the page; must equal the number of columns designated as filters
  x_display_column_count := 11;  -- must equal the number of columns selected in the query
  x_product_url     := NULL;
  x_be_code         := 'IMC_TXN_BE_PARTY';
  x_category_code   := NULL;
  x_transaction_name := 'Events';
  x_header_text     := NULL;
  x_creation_date   := SYSDATE;
  x_created_by      := 1;
  x_last_update_date := SYSDATE;
  x_last_updated_by := 1;
  x_last_update_login := 1;
  x_object_version_number := NULL;
  -- Now call the PL/SQL API
  imc_three_sixty_query_pkg.insert_row(
    x_query_id, x_application_id, x_query_type_flag,
    x_product_query1, x_product_query2, x_product_query3, x_product_query4, x_product_query5,
    x_sequence_no, x_security_function, x_display_flag, x_filter_count, x_display_column_count,
    x_product_url, x_be_code, x_category_code, x_transaction_name, x_header_text,
    x_creation_date, x_created_by, x_last_update_date, x_last_updated_by,
    x_last_update_login, x_object_version_number);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SubStr('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM, 1, 255));
    RAISE;
END;

Inserting into IMC_THREE_SIXTY_COLS_VL


*Note: if the column is used as a filter, then the appropriate filter attributes should be populated.

DECLARE
  x_column_id             NUMBER;
  x_query_id              NUMBER;
  x_filter_query_id       NUMBER;
  x_column_name           VARCHAR2(32767);
  x_column_data_type      VARCHAR2(32767);
  x_column_length         NUMBER;
  x_filter_flag           VARCHAR2(32767);
  x_range_filter_flag     VARCHAR2(32767);
  x_hyperlink_flag        VARCHAR2(32767);
  x_display_flag          VARCHAR2(32767);
  x_sort_flag             VARCHAR2(32767);
  x_security_function     VARCHAR2(32767);
  x_seq_no                NUMBER;
  x_column_label          VARCHAR2(32767);
  x_creation_date         DATE;
  x_created_by            NUMBER;
  x_last_update_date      DATE;
  x_last_updated_by       NUMBER;
  x_last_update_login     NUMBER;
  x_object_version_number NUMBER;
BEGIN
  x_column_id       := NULL;
  x_query_id        := 2;
  x_filter_query_id := 10;  -- id of the query in the header table used to get the LOV for this
                            -- filter; if a query id is present, the filter is a drop-down/LOV
                            -- filter, and FILTER_FLAG must then be set to 'Y'
  x_column_name      := 'COLUMN1';
  x_column_data_type := 'VARCHAR2';
  x_column_length    := 240;
  x_filter_flag      := 'Y';  -- indicates that the column is also used as a filter
  x_range_filter_flag := 'N';
  x_hyperlink_flag   := 'N';
  x_display_flag     := 'Y';  -- indicates that the column should be displayed in the transaction
  x_sort_flag        := 'Y';
  x_security_function := 'IMC_NG_360_EVENTS';
  x_seq_no           := 1;
  x_column_label     := 'Name';
  x_creation_date    := SYSDATE;
  x_created_by       := 1;
  x_last_update_date := SYSDATE;
  x_last_updated_by  := 1;
  x_last_update_login := 1;
  x_object_version_number := NULL;
  -- Now call the stored program
  imc_three_sixty_cols_pkg.insert_row(
    x_column_id, x_query_id, x_filter_query_id, x_column_name, x_column_data_type,
    x_column_length, x_filter_flag, x_range_filter_flag, x_hyperlink_flag, x_display_flag,
    x_sort_flag, x_security_function, x_seq_no, x_column_label, x_creation_date,
    x_created_by, x_last_update_date, x_last_updated_by, x_last_update_login,
    x_object_version_number);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SubStr('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM, 1, 255));
    RAISE;
END;

Inserting into IMC_THREE_SIXTY_SSM_QUERY


DECLARE
  x_ssm_query_id          NUMBER;
  x_application_id        NUMBER;
  x_ssm_query_string      VARCHAR2(2000);
  x_creation_date         DATE;
  x_created_by            NUMBER;
  x_last_update_date      DATE;
  x_last_updated_by       NUMBER;
  x_last_update_login     NUMBER;
  x_object_version_number NUMBER;
BEGIN
  x_ssm_query_id     := NULL;
  x_application_id   := 518;
  x_ssm_query_string := 'select owner_table_id from hz_orig_sys_references where orig_system_reference = ''1000'' and orig_system = ''EXT'' and owner_table_name = :1';
  x_creation_date    := SYSDATE;
  x_created_by       := 1;
  x_last_update_date := SYSDATE;
  x_last_updated_by  := 1;
  x_last_update_login := 1;
  x_object_version_number := NULL;
  -- Now call the PL/SQL API
  imc_three_sixty_ssm_query_pkg.insert_row(
    x_ssm_query_id, x_application_id, x_ssm_query_string,
    x_creation_date, x_created_by, x_last_update_date, x_last_updated_by,
    x_last_update_login, x_object_version_number);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SubStr('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM, 1, 255));
    RAISE;
END;

Inserting into IMC_THREE_SIXTY_SSM_QUERY_MAP


DECLARE
  x_query_id              NUMBER;
  x_application_id        NUMBER;
  x_ssm_query_id          NUMBER;
  x_param_source          VARCHAR2(30);
  x_param_position        NUMBER;
  x_creation_date         DATE;
  x_created_by            NUMBER;
  x_last_update_date      DATE;
  x_last_updated_by       NUMBER;
  x_last_update_login     NUMBER;
  x_object_version_number NUMBER;
BEGIN
  x_query_id       := 2;    -- assume the query_id of the row inserted into IMC_THREE_SIXTY_QUERY_VL is 2
  x_application_id := 518;
  x_ssm_query_id   := 500;  -- assume the ssm_query_id of the row inserted into IMC_THREE_SIXTY_SSM_QUERY is 500
  x_param_source   := 'SQL';
  x_param_position := 0;
  x_creation_date  := SYSDATE;
  x_created_by     := 1;
  x_last_update_date := SYSDATE;
  x_last_updated_by := 1;
  x_last_update_login := 1;
  x_object_version_number := NULL;
  -- Now call the PL/SQL API
  imc_three_sixty_query_map_pkg.insert_row(
    x_query_id, x_application_id, x_ssm_query_id, x_param_source, x_param_position,
    x_creation_date, x_created_by, x_last_update_date, x_last_updated_by,
    x_last_update_login, x_object_version_number);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SubStr('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM, 1, 255));
    RAISE;
END;

Inserting into IMC_THREE_SIXTY_PARAMS


DECLARE
  x_query_id              NUMBER;
  x_application_id        NUMBER;
  x_ssm_query_id          NUMBER;
  x_param_position        NUMBER;
  x_param_name            VARCHAR2(60);
  x_creation_date         DATE;
  x_created_by            NUMBER;
  x_last_update_date      DATE;
  x_last_updated_by       NUMBER;
  x_last_update_login     NUMBER;
  x_object_version_number NUMBER;
BEGIN
  x_query_id       := NULL;
  x_application_id := 518;
  x_ssm_query_id   := 500;
  x_param_position := 0;
  x_param_name     := 'EventTableName';
  x_creation_date  := SYSDATE;
  x_created_by     := 1;
  x_last_update_date := SYSDATE;
  x_last_updated_by := 1;
  x_last_update_login := 1;
  x_object_version_number := NULL;
  -- Now call the stored program
  imc_three_sixty_params_pkg.insert_row(
    x_query_id, x_application_id, x_ssm_query_id, x_param_position, x_param_name,
    x_creation_date, x_created_by, x_last_update_date, x_last_updated_by,
    x_last_update_login, x_object_version_number);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SubStr('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM, 1, 255));
    RAISE;
END;
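Reading the four inserts together: the transaction query (assumed query_id 2) exposes a :1 bind; the IMC_THREE_SIXTY_SSM_QUERY_MAP row links that query to SSM query 500 at parameter position 0 with a 'SQL' parameter source, which appears to mean the bind value is produced by running the SSM query; SSM query 500 returns owner_table_id from HZ_ORIG_SYS_REFERENCES and itself takes one bind (:1, the owner table name), which the IMC_THREE_SIXTY_PARAMS row supplies as the EventTableName parameter at position 0. In other words, at runtime the Transactions Viewer evaluates the SSM query first and feeds its output into the external transaction query.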

Change Record
Date: 11-May-2005. Description of Change: Created document.

Oracle Corporation
Author and Date: Customer Data Management Product Management (May 2005)

Copyright Information: Copyright 2004, 2005, Oracle. All rights reserved.

Disclaimer: This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Trademark Information: Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
