
What To Do When Your Production Database Is Filling Up With Order Entry Data

William M. Wilcox
Rosemount Inc.

Abstract


The audience will be introduced to the technical considerations for integrating an Order Entry archive process with Oracle's standard purge functionality. This discussion focuses on three areas: a brief overview of the Oracle standard purge, archive design issues, and technical considerations for archive implementation.

Why was the volume of order entry data so high? Due to Rosemount's product configuration, each sales order line could generate up to 50 option lines, which are also stored in the SO_LINES table with PARENT_LINE_ID pointing to the original order line. Rosemount also generates many quote orders in addition to actual customer orders, and due to business and government requirements, Rosemount could not simply purge data after a set time frame. System response times were an equally important issue. User areas were reporting that the response times of the Oracle system were insufficient to keep up with data entry demands, especially for activities involving the Order Entry tables. While other tables hold more records than Order Entry, no other module came close to OE in on-line and reporting usage. Several attempts had already been made to improve response time, including: adding an applications server to offload activity, increasing the number of CPUs in the hardware, exhaustive tuning efforts, and creating a copy of the production database for ad hoc reporting. None of these approaches achieved the needed improvements. The rationale for data archiving as a way to address this performance issue was that every record removed means one less record in the indexes used for order activity, thus improving response times.

Introduction
Rosemount Inc. is a manufacturing company whose Measurement Division in Eden Prairie, Minnesota produces highly configured process instrumentation. The company had little experience with relational database and UNIX technology before implementing the suite of Oracle Applications. Rosemount installed Oracle Financials version 10.5 in December 1995 and Oracle Manufacturing and Order Entry version 10.5 in January 1997. Rosemount has bolted several application solutions onto the Oracle Applications, some through third-party vendors and some with the assistance of Oracle Services. These customizations added to the level of effort needed in the archiving process to ensure referential integrity, but they are considered outside the scope of this paper; any other company contemplating an archive process must keep custom database objects in mind during design. Two factors influenced Rosemount's decision to look at archiving: disk utilization growth projections and system response times. Storage rapidly became a concern when, after five months of production activity, we had several million sales order lines in our production database. Like many companies, we propagate the production database to other databases used for development, testing, reporting, and recovery, and the volume of data in a 50-60 gigabyte database would be too large to copy to those databases. Every row of data removed from production would therefore mean five rows removed in total, once the space had been reclaimed through a database reorganization.

Technical and User Archiving Requirements


While the following technical and user-defined archiving requirements are specific to Rosemount, they have been included because they impact the design of the archiving process and because they may be similar to requirements developers will find in other companies. Technically, the data had to be removed from the production database to a warehousing database at Rosemount, due to the storage and response time issues previously discussed. Only the order entry tables were being archived, which meant that users of the warehouse database would not have the Oracle Applications available for their use. The development staff was instructed to minimize the amount of customization to the Oracle Applications, since the support and upgrade capabilities of Oracle were important reasons for selection of the

applications in the first place. Any archiving process must handle quotes (which are never closed) as well as fully processed production orders; the standard purge process in Oracle has been constructed to handle only closed orders. Finally, maintaining database referential integrity was the most important technical criterion in Rosemount's requirements-gathering process.

The user community also had several requirements before granting their approval for data to be moved from the production database. First, the migrated data must remain available to users in the production database. Also, some types of orders are needed in production for a relatively long period of time; as a result, users needed the ability to have orders ignored by the archiving process, whether individually identified or based on order-type logic. Since quotes are often turned into actual orders, even well after the individual quote date, the user community needed the ability to perform order copies on previously archived quotes and orders. Finally, due to year-to-date reporting needs, data created in the current fiscal year must be maintained as a logical unit, whether the data resides in the production or the archival database.

The purge program deletes data from the order entry tables based on the records held in the SO_PURGE_ORDERS table. (See Figure 1)

ORACLE Standard Purge Flow


Figure 1: Oracle standard purge flow - Purge Selection followed by Purge Orders

Orders in the purge selection process may be selected based on any of the following criteria:

Order category
Order type
Order number (range)
Creation date (range)
Order date (range)
Customer
Number of days closed

However, not every order that meets the selection criteria will be purged. Each order is checked for several conditions before the SO_PURGE_ORDERS table is populated: the order must be closed, and there can be no open demand, open work orders, open invoices, or open returns for the order in question. Closed orders that pass all these checks are written to the SO_PURGE_ORDERS table. The user must then run the concurrent program for the Order Purge. The purge program is a package of stored database procedures and functions that processes a cursor to retrieve each HEADER_ID from the SO_PURGE_ORDERS table and then runs through the following OE tables to see what data is tied to the header being removed. All of the data is removed as a logical group to preserve referential integrity. The tables purged by the Oracle Order Entry purge process include (listed alphabetically):

SO_EXCEPTIONS
SO_FREIGHT_CHARGES
SO_HEADERS
SO_HOLD_RELEASES
SO_HOLD_SOURCES
SO_LINE_APPROVALS
SO_LINE_DETAILS
SO_LINE_SERVICE_DETAILS
SO_LINES
SO_NOTE_REFERENCES

Oracle's Order Entry Purge Process


AUTHOR'S NOTE: The archiving process is meant to be integrated with the Oracle purge process. Discussion of the Oracle purge is intended to raise awareness of what applications code is available from Oracle and to provide background for later sections of the paper. Purge information is based on documentation provided by Oracle during the design phase of Rosemount's process in early 1997. The author encourages interested parties to work directly with Oracle Support for up-to-date information on any purge-related questions, since the standard Oracle design and functionality may have changed since Rosemount's information was acquired.

Recognizing the need to address data in the Order Entry database, Oracle created patch #418675 to purge closed order entry data for users of versions 10.4.2, 10.5, and 10.6 of the Oracle Applications. This code was incorporated as standard functionality in version 10.7 of Oracle Order Entry. The Oracle purge process consists of two concurrent manager programs that execute stored database procedures. The first concurrent job is a purge selection step, which looks at purge criteria and populates the SO_PURGE_ORDERS table. The second process is the actual purge program, which deletes data from the order entry tables.

SO_ORDER_APPROVALS
SO_ORDER_CANCELLATIONS
SO_ORDER_HOLDS
SO_PICKING_BATCHES
SO_PICKING_CANCELLATIONS
SO_PICKING_HEADERS
SO_PICKING_LINE_DETAILS
SO_PICKING_LINES
SO_PICKING_RULES
SO_PRICE_ADJUSTMENTS
SO_SALES_CREDITS
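Conceptually, the selection step can be pictured with the following SQL sketch. This is not Oracle's actual code: the bind variables and the OPEN_FLAG tests shown below are illustrative assumptions standing in for the full set of demand, work order, invoice, and return checks.

```sql
-- Illustrative sketch only: Oracle's real selection procedure applies
-- these checks in PL/SQL.  The OPEN_FLAG columns and bind variables
-- are assumptions standing in for the full eligibility tests.
INSERT INTO so_purge_orders (header_id)
SELECT h.header_id
  FROM so_headers h
 WHERE h.creation_date BETWEEN :date_low AND :date_high  -- one selection criterion
   AND h.open_flag = 'N'                                 -- order must be closed
   AND NOT EXISTS (SELECT 1                              -- no open lines remain
                     FROM so_lines l
                    WHERE l.header_id = h.header_id
                      AND l.open_flag = 'Y');
```

Only headers that survive every such test reach SO_PURGE_ORDERS, which is what later makes the table a safe driver for both the archive and the purge.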

archiving in the production database is that any copy of production made also brings the archive data across to the new database: generating a test database propagates test copies of the archive tables, and a reporting database would have a reporting copy of the archive tables. This could be beneficial in certain environments. As a final general comment, Oracle's standard purge processing is set up as two concurrent manager programs with no logic to link the two processes. Archiving at Rosemount was developed as two concurrent manager sets, one for orders and one for quote orders. The use of sets improves the control of the purge process, since archiving introduces job dependencies that must be more rigorously monitored. The key point is that any error in the archive process will cause the concurrent program set to fail, which in turn stops the final purge step from processing.

Obviously, the key to archiving any of the order entry data involves processing the data in the SO_PURGE_ORDERS table before the data is removed in the Order Purge program. Now that the purge process has been reviewed, we can discuss how archiving can be incorporated into the standard Oracle process.

Sales Order Archiving Design


As displayed in Figure 2, the sales archiving design introduces two additional steps into the Oracle Purge process: moving data to the archiving database and comparing the data in the archive to the production database. Archiving speeds production transactions by removing records from the highly used order entry tables, thus shrinking the number of fetches made against indexes or during full table scans. In the archiving design, it does not matter whether the data is moved to a separate set of tables (with different names) in the production database or if a separate database is created to hold the data. The advantage of a separate database is in space savings. Moving the data to an archive database makes the production database smaller, and also every copy of the production data (reporting, system test, development, recovery, upgrade) will also be that much smaller.
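The compare step added by this design can be pictured as a per-table count check. The sketch below is a hedged illustration, not the actual Rosemount code: SO_LINES_ARCH is an assumed name for the archive-side copy of SO_LINES, and the error message text is illustrative.

```sql
-- Sketch of the compare step for one table: production and archive row
-- counts for the purged headers must match before the purge may run.
DECLARE
  prod_count NUMBER;
  arch_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO prod_count
    FROM so_lines p, so_purge_orders o
   WHERE p.header_id = o.header_id;

  SELECT COUNT(*) INTO arch_count
    FROM so_lines_arch a, so_purge_orders o
   WHERE a.header_id = o.header_id;

  IF prod_count != arch_count THEN
    -- failing here causes the concurrent set to error,
    -- which stops the final purge step from running
    RAISE_APPLICATION_ERROR(-20001, 'SO_LINES archive count mismatch');
  END IF;
END;
/
```

Failing loudly before the purge runs is what prevents data from being deleted from production without a verified copy in the archive.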

Archive Design Components - Step by Step


First of all, the author highly recommends that no changes be made to the Oracle standard purge database procedures. Our procedure at Rosemount was to disable the concurrent manager tasks so that the standard Oracle purge could not be accidentally run. Copies of the PL/SQL code were created as a starting point, owned by our custom OE database user, which I will refer to as CUSTOE. The purge selection criteria were modified slightly from the standard Oracle functionality in the select_purge_orders database procedure. We preferred to use a date_closed range as our primary selection criterion for orders, instead of the standard criteria; date closed was somewhat clearer to understand than the number-of-days-closed option. Also, a new procedure was created in the purge selection package to handle quote order types, since the Oracle process does not handle open orders. The quote database procedure used a date_created range for processing. The rest of the edit checks performed in the Oracle package were not modified, other than that the package and procedure names were changed and are owned by CUSTOE. The archive step consists of a database procedure that inserts records into the archive database based on the header ids in the SO_PURGE_ORDERS table. The twenty-one Order Entry tables that will later be removed in the purge are addressed in the archive insert process. Rosemount has some custom tables keyed from SO_LINES.LINE_ID, and these tables had to be moved as well. Each company looking at an archiving process

Order Entry Archive Flow


Figure 2: Order Entry archive flow - Modified Purge Selection, Move data to Archive database, Compare production vs. archive, Oracle Purge Orders

While Rosemount put the data into a separate database because of our requirements, a compelling case can be made for either method. The advantage of keeping

needs to keep its custom objects in mind during design. Here are some recommendations for the insert process. First, since columns could be added to the Order Entry tables in later software versions, I highly recommend that the individual column names be listed in the INSERT INTO ... SELECT statement. Second, columns defined as LONG datatypes need special processing, since they cannot simply be listed in the INSERT INTO ... SELECT statement. Third, the inserting is driven by a cursor against the SO_PURGE_ORDERS table; a commit is recommended after each header id in that table has been processed, to keep all records for that HEADER_ID together as a logical unit of work. Commits also help keep the rollback segments from filling up during the archiving process. The production/archive compare step does a record count for each Order Entry table based on the SO_PURGE_ORDERS table, to make sure every record that was in production is included in the archive. If the totals differ, the concurrent manager set processing is terminated. In the compare step, the SO_HOLD_SOURCES and SO_HOLD_RELEASES tables are not counted, since rows in these tables can be tied to more than one order (one order could still be in production while the other is moved to archive). In the preceding INSERT step, logic was added for these two tables to ignore duplicate-record errors, and these two tables are likewise ignored in the purge step. Finally, the purge process was altered at Rosemount, but only because we have custom tables that are referentially tied to the SO_LINES table.
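The insert recommendations above can be sketched as a PL/SQL loop. The _ARCH table names and the elided column lists are illustrative assumptions; in real code, every column should be named explicitly and LONG columns handled separately.

```sql
-- Hedged sketch of the archive insert: one commit per HEADER_ID keeps
-- each order together as a logical unit of work and keeps rollback
-- segments from filling.  Table names ending in _ARCH are assumed;
-- the column lists are abbreviated here but should be written out in full.
DECLARE
  CURSOR purge_cur IS
    SELECT header_id FROM so_purge_orders;
BEGIN
  FOR rec IN purge_cur LOOP
    INSERT INTO so_headers_arch (header_id, order_number /* , ... */)
      SELECT header_id, order_number /* , ... */
        FROM so_headers
       WHERE header_id = rec.header_id;

    INSERT INTO so_lines_arch (line_id, header_id /* , ... */)
      SELECT line_id, header_id /* , ... */
        FROM so_lines
       WHERE header_id = rec.header_id;

    -- ...repeat for the remaining archived tables, ignoring
    -- duplicate-record errors on SO_HOLD_SOURCES and SO_HOLD_RELEASES...

    COMMIT;  -- one logical unit of work per order
  END LOOP;
END;
/
```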

design is that creating a testing archive database, or moving the archived data from one database to another, simply requires moving the data and modifying the public synonyms; none of the other code discussed in this section needs to change at all. An example of a naming convention for the synonyms would be to reference so_lines@ARCHIVE as SO_LINES_CUSTARCH. Once the synonyms have been created, views are created for every table that will be archived/purged. An example of a naming convention for the views would be to label the view for sales order lines data CUSTOE.SO_LINES_CUSTALL. These views are owned by the CUSTOE schema and are created as a SELECT * from production UNION ALL SELECT * from the archive database public synonym. If desired, multi-org ORG_ID logic can be incorporated into these custom views, just as the APPS views are created. It was critical for our reporting that the data be available to users as a whole, instead of as two independent databases. Since we intended to use close date for our selection process, we had to handle situations where two orders were created on the same day but closed in widely different time periods, and many of our reports summarize or group information by factors other than date closed. Use of the synonyms and the views means that a user does not need to know where the data resides; from their perspective, all of the data is accessible from production. The use of synonyms gives the DBA staff the flexibility to move the location of the data without requiring users to change their forms or reporting. Basically, designing the archive process in this manner sets up three rules for data access. To see the production data, the developer or user accesses the tables (or APPS views in 10.7) as is currently done. To access archive data only, such as in order copy processing, the _CUSTARCH public synonyms are referenced.
Finally, to see data from both databases together, the views owned by the CUSTOE user are referenced. Creation of the views handles reporting needs, but additional programming was necessary to allow for inquiry and copy functions from within the Oracle Applications. Copies of the standard Oracle order copy and order inquiry forms (and stored procedures) were created. These forms are accessible under a new menu option, Navigation ==> Archive ==> Orders, for any user responsibilities that need access. Due to the

Using Archived Data from the Production Database


Now that a process has been designed to archive the Order Entry tables, the means of accessing those tables must be addressed. At Rosemount, the user requirements were that these orders must be available for inquiry, copy, and reporting needs. This section reviews how those needs were met for our organization. First, since there was no intent to duplicate Oracle Applications code for the archiving database instance, the data had to be accessible from the production applications. Public synonyms were created in production to reference the same table names within the archive database (@ARCHIVE). The benefit of this

number of forms that are accessed in order inquiry processing, the following forms must be copied and modified to create an archive version:

OEXOECOR - order copy
OEXOETOP - order inquiry

The following are all called from OEXOETOP:

OEXOEOIN - invoice inquiry
OEXOEOSI - view cycle status
OEXOEOII - view order and return status
OEXOESPB - view shipping status
OEXOERMA - order entry RMA form
OEXOEMOE - order entry form

Some people may ask why the design encourages the creation of a second form, instead of modifying the standard Oracle form to use the views that show the combined information. The key reasons are that modification of the standard forms would jeopardize our ability to receive Oracle support and would make installation of patches more complex. Also, a combined form would leave the organization in a high-risk position if problems with the archive database or with SQL*Net ever made the archive data inaccessible. In the current design, unavailability of archive data does not prevent the user from performing standard production activity.
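Putting the access rules of this section together, the synonym and UNION ALL view layering can be sketched as follows. The statements follow the naming conventions described in the text, but the SELECT * form is a simplification; a real implementation would list columns explicitly, for the same reasons given for the archive insert.

```sql
-- Sketch of the three-layer access scheme described in the text.
-- The synonym hides the @ARCHIVE database link; the CUSTOE view
-- presents production and archive rows as one logical table.
CREATE PUBLIC SYNONYM so_lines_custarch FOR so_lines@archive;

CREATE VIEW custoe.so_lines_custall AS
  SELECT * FROM so_lines             -- rule 1: production data
  UNION ALL
  SELECT * FROM so_lines_custarch;   -- rule 2: archive-only data
-- rule 3: reports needing both databases query so_lines_custall
```

Because every consumer goes through the synonym or the view, the DBA staff can relocate the archive data by redefining the synonym, without touching forms or reports.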

standard Oracle purge procedures are not yet installed in your organization, the SO_PURGE_ORDERS table and concurrent job creation scripts oecretab.sql and oeorcatv.sql must be run. Grants must be run against the production tables to ensure that the CUSTOE user can read and delete from them. Once this is completed, the public synonyms and views for archiving can be created. At some point, the concurrent manager programs and executables must be created for the following stored procedures:

Order archive selection (standard or custom)
Custom quote archive selection
Archive orders
Compare orders
Purge orders (standard or custom)

Additionally, the quote and order archiving report sets must be created. Make sure that all programs and sets are made incompatible with each other; making the sets alone incompatible is insufficient, since error correction may force the rerunning of steps outside the sets. Finally, the PL/SQL package of archiving procedures must be installed under the CUSTOE database owner, and grants must be performed against these procedures so that OE (in 10.5) or APPS (in 10.7) has access to the new code. The modified forms must also be registered to the Oracle Applications, and menu options must be added to make the forms accessible. Any user responsibility that needs access to the new screens must have its menu options modified to include the new archive menu option.
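The grants described above can be sketched as follows. The package name CUSTOE.ARCHIVE_ORDERS is an assumption used for illustration, and the table grants must be repeated for each of the twenty-one order entry tables.

```sql
-- CUSTOE must be able to read and delete production order entry data.
GRANT SELECT, DELETE ON oe.so_headers TO custoe;
GRANT SELECT, DELETE ON oe.so_lines   TO custoe;
-- ...repeat for the remaining order entry tables...

-- The applications schema (OE in 10.5, APPS in 10.7) must be able to
-- use the combined views and run the new archiving code.
GRANT SELECT  ON custoe.so_lines_custall TO apps;
GRANT EXECUTE ON custoe.archive_orders   TO apps;  -- assumed package name
```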

Implementation Steps
Now that the different design issues of Order Entry archiving have been discussed, here are the implementation steps that were followed to install archiving at Rosemount. The key is to make sure that all of the steps have been completed to create a fully functional archive process; the sequencing of several of the tasks is less critical. After this section, the lessons learned during this process will be reviewed. The first technical objects that must be created are the archive tables that will receive the production data. Also, if a table is used to hold the record counts from the compare step, that table must be created in the production instance. Make sure that critical indexes from the production environment, such as indexes based on HEADER_ID, are added to the new tables. Grants on these tables must also be created, to allow the later creation of the public synonyms and reporting views. Next, since the modified forms will be owned by the database schema CUSTOE, messages used in the order copy and inquiry forms must be added to the message files for CUSTOE in the application. Also, if the

Technical Lessons Learned


Like any implementation, there are some lessons we learned after implementing archiving that would have been more easily addressed before implementation. This section discusses several of these challenges. First, it is very important to have a cleanup/restart process for the archive report set. If the archive concurrent report set errors in the compare process, the data exists in both the production and archive locations, which will cause overstatement of data in reports. Rosemount created scripts to delete data from the archive tables whenever the set did not complete successfully. A cleanup process should be investigated for report set errors in the Archive, Compare, or Purge steps. Errors

in the selection step are less of an issue, since no data has been moved or deleted at this stage of processing. Second, make sure that business processes are fully understood. If your organization allows returns up to a year after shipment, there may be issues if you archive orders before the end of that year; the OEXOERMA form cannot process returns if the original order is no longer in the production database. Also, if archiving of quotes is desired, make sure that the life span of the various types of quotes is fully understood. Additionally, since Order Entry is a widely used module within Oracle, special pains should be taken to ensure that the user community understands the standard timetable for archiving orders and how to access archived orders. In any organization, some thought should be given to whether a reverse archive process is needed, primarily to limit the risk of archiving data that is later identified as necessary. This process would take selected orders out of the archive database and put them back in the production database. As a reminder, if such a process is created, make sure that any insert or update triggers on the Order Entry tables have been disabled, or they will fire when the data is reinserted into production. A great deal of testing is necessary to ensure that the performance of the views is acceptable to the organization. Some performance issues can be resolved by adding more of the production indexes to the archive tables, if explain plans show that full table scans are being performed by the Oracle optimizer. Additionally, we are still investigating performance issues related to queries that join local and remote tables in the same select statement. The failure condition is that the remote table access converts to a full table scan, even though the appropriate indexes exist. Hints have helped in certain queries, but the issue is not completely understood at this point.
Available documentation indicates that Oracle database version 7.3 should handle this situation, but that is not Rosemount's current experience. As of the writing of this paper, an open TAR is still outstanding with Oracle Support.
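The cleanup scripts mentioned earlier in this section can be sketched as follows. The _ARCH table names are assumptions, and the same delete must be repeated for every archived table before the set is rerun.

```sql
-- Sketch of a cleanup script run after a failed Compare step: remove
-- the half-copied orders from the archive so that reports against the
-- combined views do not double-count them.
DELETE FROM so_lines_arch
 WHERE header_id IN (SELECT header_id FROM so_purge_orders);

DELETE FROM so_headers_arch
 WHERE header_id IN (SELECT header_id FROM so_purge_orders);

-- ...repeat for the remaining archived tables...
COMMIT;
```

Driving the deletes from SO_PURGE_ORDERS works because that table still holds exactly the headers the failed run attempted to move.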

completed in order to migrate Order Entry archiving from an earlier version of the applications to 10.7. First, if archiving has been implemented using a separate archiving database (but keeping the standard Oracle table names), the table names in the archive database instance must be changed to match the 10.7 instance names; for example, SO_LINES must be changed to SO_LINES_ALL, similar to what occurs in the AUTOINSTALL conversion process. Second, an organization implementing a multi-org environment (as Rosemount is doing) needs to create scripts to populate the ORG_ID column with the value of the operating unit. This is done automatically for the production database when the ADADMIN utility is run to create the multi-org environment, but the Oracle process will not populate the custom archive tables. Third, in the custom archiving views, the table names in the UNION statements must be changed to match the new Oracle _ALL names present in the 10.7 database. Fourth, a review must be made of the standard Oracle tables to see whether columns have been added to the tables being archived. Any table alterations must be duplicated in the archive tables and must also be addressed in the INSERT scripts in the copy-to-archive step.
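The first two upgrade tasks can be sketched as follows, run against the archive instance. The statements are illustrative rather than the actual conversion scripts, and :operating_unit stands for whatever value your multi-org setup assigns.

```sql
-- Rename an archive table to match the 10.7 _ALL naming convention.
RENAME so_lines TO so_lines_all;

-- Populate ORG_ID in the archive, since ADADMIN converts only the
-- production tables and will not touch the custom archive tables.
UPDATE so_lines_all
   SET org_id = :operating_unit
 WHERE org_id IS NULL;
COMMIT;
```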

Final Comments
One question remains: was Order Entry archiving a success at Rosemount? While the final jury is still out, the answer is yes, but not an unqualified success. The growth rate of disk utilization has flattened out on our production systems. Additionally, performance in order entry on-line and reporting tasks has improved, though the amount of the improvement is difficult to quantify, since CPU and other hardware changes have also been made to our production environment, making an apples-to-apples comparison extremely difficult. Also, there is no immediate feedback on space savings, since these savings are not fully realized until a database reorganization has been completed. The inquiry and copy processing steps are working as designed, and the users have the required visibility into the archived data. However, due to the unresolved performance issues discussed in the technical lessons learned, reporting still provides opportunities for

Upgrade Issues
A key factor in any modification of the Oracle Applications is how product upgrades will affect the implemented changes. The author has had an opportunity to review Oracle version 10.7, and below are tasks that must be

improvement. If these issues remain unresolved, we may have better success changing our architecture from the separate-database approach to keeping the archive tables within the production database, which would eliminate the local-remote query problems we are having; if the performance issues are resolved at a later date, the data could again be moved to a separate database. Additionally, I was somewhat aggressive in moving data to the archive database. There have been instances where returns were accepted by the organization after the original order had already been archived, and this situation required special processing. There is no such thing as understanding the business processes too well. In many organizations, a great many reports will have to be changed to support the use of the custom archiving views. In the final analysis, each company must weigh the cost of code changes and increased code maintenance responsibilities against the storage and database performance issues they are experiencing.

About the Author


Bill Wilcox has worked for Rosemount Inc. for over two years. He is currently employed as a Lead Programmer/Analyst focusing primarily on Order Entry applications. Bill has four years of Oracle tools experience and eleven years of programming experience. He has worked with Oracle Applications version 10.5 for over a year and is currently involved with Rosemount's upgrade to version 10.7 of the Oracle Applications.
