1. DATASTAGE QUESTIONS
2. DATASTAGE FAQ from GEEK INTERVIEW QUESTIONS
3. DATASTAGE FAQ
4. TOP 10 FEATURES IN DATASTAGE HAWK
5. DATASTAGE NOTES
6. DATASTAGE TUTORIAL
7. LEARN FEATURES OF DATASTAGE
8. INFORMATICA vs DATASTAGE:
9. BEFORE YOU DESIGN YOUR APPLICATION
10. DATASTAGE 7.5x1 GUI FEATURES
11. DATASTAGE & DWH INTERVIEW QUESTIONS
12. DATASTAGE ROUTINES
13. SET_JOB_PARAMETERS_ROUTINE
________________________________________
DATASTAGE QUESTIONS
________________________________________
1. What is the flow of loading data into fact & dimensional tables?
A) Fact table - a table whose foreign keys correspond to the primary keys in the
dimension tables; it consists of fields with numeric values.
Dimension table - a table with a unique primary key.
Load - data should first be loaded into the dimension tables. Based on the primary
key values in the dimension tables, the data is then loaded into the fact table.
2. What is the default cache size? How do you change the cache size if needed?
A. The default cache size is 256 MB. We can increase it by going into DataStage
Administrator, selecting the Tunables tab, and specifying the cache size there.
Dynamic files do not perform as well as a well-designed static file, but they do
perform better than a badly designed one. When creating a dynamic file you can
specify the following parameters (although all of these have default values).
11. How to run a Shell Script within the scope of a Data stage job?
A) By using the "ExecSH" command in the Before/After job properties.
Ascential DataStage
Ascential DataStage EE (3)
Ascential DataStage EE MVS
Ascential DataStage TX
Ascential QualityStage
Ascential MetaStage
Ascential RTI (2)
Ascential ProfileStage
Ascential AuditStage
Ascential Commerce Manager
Industry Solutions
Connectivity
Files
RDBMS
Real-time
PACKs
EDI
Other
Server Components:
• DataStage Engine
• Metadata Repository
• Package Installer
Q 33 What is sequencer?
It sets the sequence of execution of server jobs.
Q 35 What is ODS?
Operational Data Store is a staging area where data can be rolled back.
Q 40 What is container?
A container is a group of stages and links. Containers enable you to simplify and
modularize your server job designs by replacing complex areas of the diagram with a
single container stage. You can also use shared containers as a way of
incorporating server job functionality into parallel jobs.
DataStage provides two types of container:
• Local containers. These are created within a job and are only accessible by
that job. A local container is edited in a tabbed page of the job's Diagram window.
• Shared containers. These are created separately and are stored in the
Repository in the same way that jobs are. There are two types of shared container:
server shared containers and parallel shared containers.
Q 41 What is a function? (Job Control - Examples of Transform Functions)
Functions take arguments and return a value.
• BASIC functions: A function performs mathematical or string manipulations on
the arguments supplied to it, and returns a value. Some functions have 0 arguments;
most have 1 or more. Arguments are always in parentheses, separated by commas, as
shown in this general syntax:
FunctionName (argument, argument)
• DataStage BASIC functions: These functions can be used in a job control
routine, which is defined as part of a job's properties and allows other jobs to be
run and controlled from the first job. Some of the functions can also be used for
getting status information on the current job; these are useful in active stage
expressions and before- and after-stage subroutines.
Q 42 What are Routines?
Routines are stored in the Routines branch of the DataStage Repository, where you
can create, view, or edit them. The following programming components are classified
as routines:
Transform functions, Before/After subroutines, Custom UniVerse functions, ActiveX
(OLE) functions, Web Service routines
________________________________________
Question: What is the flow of loading data into fact & dimensional tables?
Answer:
Fact table - a table whose foreign keys correspond to the primary keys in the
dimension tables; it consists of fields with numeric values.
Dimension table - a table with a unique primary key.
Load - data should first be loaded into the dimension tables. Based on the primary
key values in the dimension tables, the data is then loaded into the fact table.
Question: What is the default cache size? How do you change the cache size if
needed?
Answer:
The default cache size is 256 MB. We can increase it by going into DataStage
Administrator, selecting the Tunables tab, and specifying the cache size there.
Question: What are Static Hash files and Dynamic Hash files?
Answer:
As the names themselves suggest what they mean. In general we use Type-30 dynamic
hash files. The data file has a default size of 2 GB, and the overflow file is used
if the data exceeds the 2 GB size.
ODBC                                        PLUG-IN
Poor performance                            Good performance
Can be used for a variety of databases      Database-specific (only one database)
Can handle stored procedures                Cannot handle stored procedures
Question: How do you execute datastage job from command line prompt?
Answer:
Using "dsjob" command as follows.
dsjob -run -jobstatus projectname jobname
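The dsjob invocation above can be scripted. The following is a minimal Python sketch, not DataStage's own API: the project name "DWH", job name "LoadFacts", and the RUN_DATE parameter are made-up examples, and actually running the command requires a DataStage server installation with dsjob on the PATH.

```python
import subprocess

def build_dsjob_run(project, job, extra_args=None):
    """Build the argument list for running a DataStage job via dsjob.

    -run starts the job; -jobstatus waits for completion and returns
    the job's finishing status as the exit code.
    """
    args = ["dsjob", "-run", "-jobstatus"]
    if extra_args:
        args.extend(extra_args)       # e.g. ["-param", "FILE_NAME=in.txt"]
    args.extend([project, job])
    return args

def run_job(project, job):
    # Hypothetical invocation; only works where dsjob is installed.
    return subprocess.call(build_dsjob_run(project, job))

# Example: the command line for a hypothetical job "LoadFacts" in project "DWH"
cmd = build_dsjob_run("DWH", "LoadFacts", ["-param", "RUN_DATE=20240101"])
```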
Question: What are the command line functions that import and export the DS jobs?
Answer:
• dsimport.exe - imports the DataStage components.
• dsexport.exe - exports the DataStage components.
Question: How to run a Shell Script within the scope of a Data stage job?
Answer:
By using the "ExecSH" command in the Before/After job properties.
Question: What are OConv () and Iconv () functions and where are they used?
Answer:
IConv() - converts a string to an internal storage format.
OConv() - converts an expression to an output format.
Link Collector: collects the data coming from partitions, merges it into a single
data flow, and loads it into the target.
Question: Did you Parameterize the job or hard-coded the values in the jobs?
Answer:
Always parameterize the job. The values either come from Job Properties or from a
'Parameter Manager', a third-party tool. There is no way you should hard-code
parameters into your jobs. The most often parameterized variables in a job are: DB
DSN name, username, password, and the dates against which the data is to be looked
up.
Question: Have you ever been involved in updating DS versions (like DS 5.X)? If so,
tell us some of the steps you took in doing so.
Answer:
Yes.
The following are some of the steps:
Definitely take a backup of the whole project(s) by exporting the project as a
.dsx file.
Use the same parent folder for the new version so that your old jobs using
hard-coded file paths continue to work.
After installing the new version, import the old project(s); you will have to
compile them all again. You can use the 'Compile All' tool for this.
Make sure that all your DB DSNs are created with the same names as the old ones.
This step applies when moving DS from one machine to another.
If you are just upgrading your DB from Oracle 8i to Oracle 9i, there is a tool on
the DS CD that can do this for you.
Do not stop the 6.0 server before the upgrade; the version 7.0 install process
collects project information during the upgrade. There is NO rework (recompilation
of existing jobs/routines) needed after the upgrade.
Question: What are other Performance tunings you have done in your last project to
increase the performance of slowly running jobs?
Answer:
• Staged the data coming from ODBC/OCI/DB2UDB stages, or any database on the
server, using hashed/sequential files, for optimum performance and also for data
recovery in case the job aborts.
• Tuned the OCI stage's 'Array Size' and 'Rows per Transaction' numerical
values for faster inserts, updates, and selects.
• Tuned the 'Project Tunables' in Administrator for better performance.
• Used sorted data for the Aggregator.
• Sorted the data as much as possible in the DB and reduced the use of DS sorts,
for better job performance.
• Removed unused data from the source as early as possible in the job.
• Worked with the DB admin to create appropriate indexes on tables, for better
performance of DS queries.
• Converted some of the complex joins/business logic in DS to stored procedures
on the DB, for faster execution of the jobs.
• If an input file has an excessive number of rows and can be split up, then use
standard logic to run jobs in parallel.
• Before writing a routine or a transform, make sure the required functionality
is not already available in one of the standard routines supplied in the SDK or DS
utilities categories.
• Constraints are generally CPU-intensive and take a significant amount of time
to process. This may be the case if the constraint calls routines or external
macros, but if it is inline code then the overhead will be minimal.
• Try to have the constraints in the 'Selection' criteria of the jobs themselves.
This eliminates unnecessary records before joins are made.
• Tuning should occur on a job-by-job basis.
• Use the power of the DBMS.
• Try not to use a Sort stage when you can use an ORDER BY clause in the
database.
• Using a constraint to filter a record set is much slower than performing a
SELECT ... WHERE.
• Make every attempt to use the bulk loader for your particular database. Bulk
loaders are generally faster than using ODBC or OLE.
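The "sorted data for the Aggregator" tip works because a group is complete as soon as the key value changes, so nothing needs to be buffered. A small Python sketch of the same idea (the region/amount rows are invented sample data):

```python
from itertools import groupby
from operator import itemgetter

# Rows pre-sorted on the grouping key, as the Aggregator tip recommends.
rows = [("east", 10), ("east", 5), ("west", 7), ("west", 3)]

def aggregate_sorted(rows):
    """Sum amounts per key in a single streaming pass.

    Because the input is pre-sorted, each group is complete as soon as
    the key changes, so the whole input never has to be held in memory;
    this is the same reason the Aggregator stage runs faster on sorted data.
    """
    return [(key, sum(amount for _, amount in grp))
            for key, grp in groupby(rows, key=itemgetter(0))]
```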
Question: Tell me one situation from your last project where you faced a problem,
and how did you solve it?
Answer:
1. The jobs in which data is read directly from OCI stages were running extremely
slowly. I had to stage the data before sending it to the transformer to make the
jobs run faster.
2. The job aborted in the middle of loading some 500,000 rows. We had the option of
either cleaning/deleting the loaded data and then running the fixed job, or running
the job again from the row at which it had aborted. To make sure the load was
proper, we opted for the former.
Question: What are Routines and where/how are they written and have you written any
routines before?
Answer:
Routines are stored in the Routines branch of the DataStage Repository, where you
can create, view, or edit them.
The following are different types of Routines:
1. Transform Functions
2. Before-After Job subroutines
3. Job Control Routines
Question: What will you do in a situation where somebody wants to send you a file,
use that file as an input or reference, and then run the job?
Answer:
• Under Windows: Use 'WaitForFileActivity' under the Sequencers and then run the
job. You could schedule the sequencer around the time the file is expected to
arrive.
• Under UNIX: Poll for the file. Once the file has arrived, start the job or
sequencer depending on the file.
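The UNIX polling approach can be sketched as a small loop; this is a Python illustration with placeholder path and timing values, not a DataStage-supplied utility:

```python
import os
import time

def wait_for_file(path, timeout_s=3600, poll_s=30):
    """Poll for a trigger file, mimicking WaitForFileActivity on UNIX.

    Returns True once the file exists, or False if the timeout expires;
    the caller would then launch the job or sequencer (or give up).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_s)
    return False
```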
Question: What is the utility you use to schedule the jobs on a UNIX server other
than using Ascential Director?
Answer:
Use the crontab utility along with the dsexecute() function, with the proper
parameters passed.
Question: How would you call an external Java function that is not supported by
DataStage?
Answer:
Starting from DS 6.0 we have the ability to call external Java functions using a
Java package from Ascential. In this case we can even use the command line to
invoke the Java function, write the return values from the Java program (if any)
to a file, and use that file as a source in the DataStage job.
Question: How will you determine the sequence of jobs to load into data warehouse?
Answer:
First we execute the jobs that load the data into the dimension tables, then the
fact tables, then we load the aggregator tables (if any).
Question: The above might raise another question: why do we have to load the
dimension tables first, and then the fact tables?
Answer:
As we load the dimensional tables the keys (primary) are generated and these keys
(primary) are Foreign keys in Fact tables.
Question: Does the selection of 'Clear the table and Insert rows' in the ODBC stage
send a TRUNCATE statement to the DB, or does it do some kind of Delete logic?
Answer:
There is no TRUNCATE on ODBC stages. It is 'Clear table...' and that is a DELETE
FROM statement. On an OCI stage such as Oracle, you do have both Clear and
Truncate options. They are radically different in permissions (Truncate requires
you to have ALTER TABLE permission, whereas Delete doesn't).
Question: How do you rename all of the jobs to support your new File-naming
conventions?
Answer:
Create an Excel spreadsheet with the new and old names. Export the whole project as
a dsx. Write a Perl program that can do a simple rename of the strings by looking
up the Excel file. Then import the new dsx file, probably into a new project for
testing. Recompile all jobs. Be cautious: the job names will also have changed in
your job control jobs or Sequencer jobs, so you have to make the necessary changes
to those Sequencers.
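The rename step can equally be sketched in Python instead of Perl. The job names in RENAMES are hypothetical; a real run would read the mapping from the Excel sheet and operate on the full exported .dsx text:

```python
import re

# Hypothetical old-name -> new-name map, as read from the Excel sheet.
RENAMES = {"LoadCust": "stg_LoadCustomer", "LoadOrd": "stg_LoadOrders"}

def rename_jobs(dsx_text, renames):
    """Apply the rename map to an exported .dsx file's text.

    A plain string substitution, like the Perl one-liner the answer
    describes; whole-word matching avoids renaming partial matches.
    """
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, renames)) + r")\b")
    return pattern.sub(lambda m: renames[m.group(1)], dsx_text)
```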
Question: What are the main differences between Ascential DataStage and Informatica
PowerCenter?
Answer:
Chuck Kelley's Answer: You are right; they have pretty much similar functionality.
However, what are the requirements for your ETL tool? Do you have large sequential
files (1 million rows, for example) that need to be compared every day versus
yesterday? If so, then ask how each vendor would do that. Think about what process
they are going to do. Are they requiring you to load yesterday's file into a table
and do lookups? If so, RUN!! Are they doing a match/merge routine that knows how to
process this in sequential files? Then maybe they are the right one. It all depends
on what you need the ETL to do. If you are small enough in your data sets, then
either would probably be OK.
Les Barbusinski's Answer: Without getting into specifics, here are some differences
you may want to explore with each vendor:
• Does the tool use a relational or a proprietary database to store its metadata
and scripts? If proprietary, why?
• What add-ons are available for extracting data from industry-standard ERP,
Accounting, and CRM packages?
• Can the tool's metadata be integrated with third-party data modeling and/or
business intelligence tools? If so, how and with which ones?
• How well does each tool handle complex transformations, and how much external
scripting is required?
• What kinds of languages are supported for ETL script extensions?
Almost any ETL tool will look like any other on the surface. The trick is to find
out which one will work best in your environment. The best way I've found to make
this determination is to ascertain how successful each vendor's clients have been
using their product, especially clients who closely resemble your shop in terms of
size, industry, in-house skill sets, platforms, source systems, data volumes and
transformation complexity.
Ask both vendors for a list of their customers with characteristics similar to your
own that have used their ETL product for at least a year. Then interview each
client (preferably several people at each site) with an eye toward identifying
unexpected problems, benefits, or quirkiness with the tool that have been
encountered by that customer. Ultimately, ask each customer whether, if they had it
all to do over again, they'd choose the same tool and why. You might be surprised
at some of the answers.
Joyce Bischoff's Answer: You should do a careful research job when selecting
products. You should first document your requirements, identify all possible
products and evaluate each product against the detailed requirements. There are
numerous ETL products on the market and it seems that you are looking at only two
of them. If you are unfamiliar with the many products available, you may refer to
www.tdan.com, the Data Administration Newsletter, for product lists.
If you ask the vendors, they will certainly be able to tell you which of their
product's features are stronger than the other product's. Ask both vendors and
compare the answers, which may or may not be totally accurate. After you are very
familiar with the products, call their references and be sure to talk with
technical people who are actually using the product. You will not want the vendor
to have a representative present when you speak with someone at the reference site.
It is also not a good idea to depend upon a high-level manager at the reference
site for a reliable opinion of the product. Managers may paint a very rosy picture
of any selected product so that they do not look like they selected an inferior
product.
Question: In how many places can you call Routines?
Answer:
Routines can be called in four places:
1. Transform of a routine
a. Date transformation
b. Upstring transformation
2. Transform of the Before & After Subroutines
3. XML transformation
4. Web-based transformation
Question: What is a Batch program and how is it generated?
Answer: A batch program is generated at run time and maintained by DataStage
itself, but you can easily change it based on your requirements (Extraction,
Transformation, Loading). Batch programs are generated depending on the nature of
your job, whether it is a simple job or a sequencer job; you can see this program
under the job control option.
Question: Suppose 4 jobs are controlled by a sequencer (job 1, job 2, job 3,
job 4). If job 1 has 10,000 rows and after the run only 5,000 rows have been loaded
into the target table, the remainder are not loaded and the job aborts. How can you
sort out the problem?
Answer:
Suppose the job sequencer synchronizes or controls the 4 jobs but job 1 has a
problem. In this situation you should go to the Director and check what type of
problem is showing: a data type problem, a warning message, a job failure or a job
abort. A job failure means a data type problem or a missing column action. Go to
the Run window -> click -> Tracing -> Performance, or in your target table ->
General -> Action, where there are two options:
(i) On Fail -- Commit, Continue
(ii) On Skip -- Commit, Continue
First check how much data has already been loaded, then select the On Skip option
and Continue; for the remaining data that was not loaded, select On Fail, Continue.
Run the job again and you should definitely get a success message.
Question: What are Sequencers?
Answer: Sequencers are job control programs that execute other jobs with preset Job
parameters.
Question: How did you handle an 'Aborted' sequencer?
Answer: In almost all cases we have to manually delete the data it inserted from
the DB, fix the job, and then run the job again.
Question 34: What is the difference between the Filter stage and the Switch stage?
Ans: There are two main differences, and probably some minor ones as well. The two
main differences are as follows.
1) The Filter stage can send one input row to more than one output link. The
Switch stage cannot: the C switch construct has an implicit break in every case.
2) The Switch stage is limited to 128 output links; the Filter stage can have a
theoretically unlimited number of output links. (Note: this is not a challenge!)
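The behavioral difference in point 1 can be sketched with two small Python functions; the link names and predicates here are invented for illustration:

```python
def filter_stage(row, predicates):
    """Filter semantics: a row goes to EVERY output link whose
    where-clause it satisfies, so it can be copied to several links."""
    return [link for link, pred in predicates.items() if pred(row)]

def switch_stage(row, cases, selector):
    """Switch semantics: like a C switch with an implicit break,
    the row goes to at most ONE output link (None if no case matches)."""
    return cases.get(selector(row))

# A row with value 12 satisfies both conditions, so Filter copies it twice.
preds = {"big": lambda r: r > 10, "even": lambda r: r % 2 == 0}
```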
Question: How can I achieve constraint-based loading using DataStage 7.5? My target
tables have interdependencies, i.e. primary key/foreign key constraints. I want my
primary key tables to be loaded first and then my foreign key tables, and the
primary key tables should be committed before the foreign key tables are executed.
How can I go about it?
Ans: 1) Create a Job Sequencer to load your tables in sequential mode.
In the sequencer, call all the primary-key table load jobs first, followed by the
foreign-key table load jobs; trigger the foreign-key load jobs only when the
primary-key load jobs have run successfully (i.e. an OK trigger).
2) To improve the performance of the job, you can disable all the constraints on
the tables and load them. Once loading is done, check the integrity of the data;
raise whatever does not meet it as exceptional data and cleanse it.
This is only a suggestion; normally, when loading with constraints enabled,
performance goes down drastically.
3) If you use star schema modeling, when you create the physical DB from the model
you can delete all constraints, and referential integrity is maintained in the ETL
process by looking up all your dimension keys while loading the fact tables. Once
all dimension keys are assigned to a fact, the dimension and fact can be loaded
together. At the same time, RI is maintained at the ETL process level.
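Option 1's ordering rule (every primary-key load job runs before the foreign-key jobs that depend on it) can be sketched as a tiny topological sort; the job names in DEPENDS_ON are hypothetical:

```python
# Hypothetical dependency map: each FK (fact) load job lists the
# PK (dimension) load jobs that must finish OK first.
DEPENDS_ON = {
    "LoadFact_Sales": ["LoadDim_Customer", "LoadDim_Product"],
    "LoadDim_Customer": [],
    "LoadDim_Product": [],
}

def run_order(depends_on):
    """Return a job order in which every PK-table job precedes the
    FK-table jobs that reference it (a minimal topological sort)."""
    order, done = [], set()

    def visit(job):
        if job in done:
            return
        for dep in depends_on[job]:
            visit(dep)           # ensure prerequisites are placed first
        done.add(job)
        order.append(job)

    for job in depends_on:
        visit(job)
    return order
```

In a real sequencer this ordering is expressed with OK triggers rather than computed, but the constraint being enforced is the same.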
Question: How do you merge two files in DS?
Ans: Either use the Copy command as a before-job subroutine if the metadata of the
2 files is the same, or create a job to concatenate the 2 files into one if the
metadata is different.
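For the same-metadata case, the concatenation that the Copy/cat approach performs can be sketched in Python:

```python
def concat_files(paths, out_path):
    """Concatenate the input files into one output file, equivalent to
    a before-job 'cat file1 file2 > merged' when both files share the
    same metadata (column layout)."""
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as src:
                for line in src:
                    out.write(line)
```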
Question: How do you eliminate duplicate rows?
Ans: DataStage provides a Remove Duplicates stage in Enterprise Edition. Using
that stage we can eliminate duplicates based on a key column.
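A sketch of what key-based deduplication does when the first row per key is retained; the rows and key name here are invented sample data:

```python
def remove_duplicates(rows, key):
    """Keep the first row seen for each key value, dropping later rows
    with the same key, analogous to deduplicating on a key column."""
    seen, out = set(), []
    for row in rows:
        k = row[key]
        if k not in seen:
            seen.add(k)
            out.append(row)
    return out

rows = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
```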
Question: How do you pass filename as the parameter for a job?
Ans: During job development we can create a parameter 'FILE_NAME', and its value
can be passed while running the job.
Question: Is there a mechanism available to export/import individual DataStage ETL
jobs from the UNIX command line?
Ans: Try dscmdexport and dscmdimport. They won't handle the "individual job"
requirement; you can only export full projects from the command line.
You can find the export and import executables on the client machine, usually
someplace like: C:\Program Files\Ascential\DataStage.
A master record and an update record are merged only if both of them have the same
values for the merge key column(s) that we specify. Merge key columns are one or
more columns that exist in both the master and update records.
DATASTAGE FAQ
________________________________________
DataStage Manager is used to import & export projects and to view & edit the
contents of the repository.
DataStage Administrator is used for creating projects, deleting projects & setting
environment variables.
DataStage Director is used to run jobs, validate jobs, and schedule jobs.
Server components
DS Server: runs executable server jobs, under the control of the DS Director, that
extract, transform, and load data into a DWH.
DS Package Installer: a user interface used to install packaged DS jobs and
plug-ins.
Repository or project: a central store that contains all the information required
to build a DWH or data mart.
3. Some jobs should automatically delete their log details every month. What steps
do you have to take for that?
4. I want to run multiple jobs within a single job. How can you handle this?
VSS was designed by Microsoft, but its disadvantage is that only one user can
access it at a time; other users must wait until the first user completes the
operation.
With CVSS, many users can access it concurrently. Compared to VSS, the cost of
CVSS is high.
6. What is the difference between clear log file and clear status file?
Clear log - we can clear the log details by using the DS Director; a clear log
option is available under the Job menu. By using this option we can clear the log
details of a particular job.
Clear status file - lets the user remove the status of the records associated with
all stages of selected jobs (in DS Director).
7. I developed a job with 50 stages; at run time one stage is missing. How can you
identify which stage is missing?
By using the Usage Analysis tool, which is available in DS Manager, we can find
out which items are used in the job.
8. My job takes 30 minutes to run; I want to run it in less than 30 minutes. What
steps do we have to take?
By applying the performance tuning aspects available in DS, we can reduce the
time.
Also use Link Partitioner & Link Collector stages between passive stages.
The Pivot stage is used for transposition. Pivot is an active stage that maps sets
of columns in an input table to a single column in an output table.
10. If a job is locked by some user, how can you unlock that particular job in DS?
We can unlock the job by using the Clean Up Resources option, which is available
in DS Director. Otherwise we can find the PID (process id) and kill the process on
the UNIX server.
11. What is a container? How many types of containers are available? Is it
possible to use a container as a lookup?
A container is a group of stages and links. Containers enable you to simplify and
modularize your server job designs by replacing complex areas of the diagram with a
single container stage.
DataStage provides two types of container:
• Local containers. These are created within a job and are only accessible by that
job.
• Shared containers. These are created separately and are stored in the Repository
in the same way that jobs are. Shared containers can be used by any job in the
project.
To deconstruct a shared container, you first have to convert the shared container
to a local container, and then deconstruct the container.
13. I am getting an input value like X = Iconv("31 DEC 1967", "D"). What is the
value of X?
The X value is zero.
The Iconv function converts a string to an internal storage format. It takes
31 DEC 1967 as zero and counts days from that date (31-dec-1967).
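The internal format can be reproduced with ordinary date arithmetic; this sketch assumes only what the answer states, namely that day zero is 31 DEC 1967:

```python
from datetime import date

EPOCH = date(1967, 12, 31)   # the internal day zero stated above

def iconv_days(d):
    """Internal date value: days elapsed since 31 DEC 1967,
    which is why Iconv("31 DEC 1967", "D") yields 0."""
    return (d - EPOCH).days
```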
14. What are unit testing, integration testing and system testing?
Unit testing: for DS, a unit test will check for data type mismatches, the size of
the particular data type, and column mismatches.
Integration testing: according to dependency, all jobs are integrated into one
sequence; that is called a control sequence.
System testing: system testing is nothing but the performance tuning aspects in DS.
15. What are the command line functions that import and export the DS jobs?
16. How many hashing algorithms are available for static hash file and dynamic hash
file?
17. What happens when you have a job that links two passive stages together?
Obviously there is some process going on. Under the covers, DS inserts a cut-down
Transformer stage between the passive stages, which just passes data straight from
one stage to the other.
19. I have three jobs A, B and C, which are dependent on each other. I want to run
jobs A & C daily, and job B only on Sunday. How can you do it?
First schedule jobs A & C Monday to Saturday in one sequence.
Then take the three jobs, according to their dependencies, in one more sequence
and schedule that job only on Sunday.
________________________________________
DataStage Tips:
1. The Aggregator stage does not support more than one source; if you try to do
this you will get the error 'The destination stage cannot support any more stream
input links'.
2. You can give N number of input links to a Transformer stage, but you can give
only one sequential file stage as the primary link and any number of other links
as reference links. If you try to give a sequential file stage as a reference link
you will get the error 'The destination stage cannot support any more stream input
links', because a reference link represents a lookup table; a sequential file
cannot be used as a lookup table, but a hashed file can.
DATABASE Stages:
ODBC Stage:
You can use an ODBC stage to extract, write, or aggregate data. Each ODBC stage can
have any number of inputs or outputs. Input links specify the data you are writing.
Output links specify the data you are extracting and any aggregations required. You
can specify the data on an input link using an SQL statement constructed by
DataStage, a generated query, a stored procedure, or a user-defined SQL query.
• GetSQLInfo: used to get the quote character and schema delimiters of your data
source. Optionally specify the quote character used by the data source. By default,
this is set to " (double quotes). You can also click the Get SQLInfo button to
connect to the data source and retrieve the quote character it uses. An entry of
000 (three zeroes) specifies that no quote character should be used.
Optionally specify the schema delimiter used by the data source. By default this is
set to . (period), but you can specify a different schema delimiter, or multiple
schema delimiters. So, for example, if identifiers have the form
Node:Schema.Owner;TableName, you would enter :.; into this field. You can also
click the Get SQLInfo button to connect to the data source and retrieve the schema
delimiter it uses.
• NLS tab: You can define a character set map for an ODBC stage using the NLS tab
of the ODBC stage.
The ODBC stage can handle the following SQL Server data types:
• GUID
• Timestamp
• SmallDateTime
• Update action. Specifies how the data is written. Choose the option you want
from the drop-down list box:
• Clear the table, then insert rows. Deletes the contents of the table and adds
the new rows.
• Insert rows without clearing. Inserts the new rows in the table.
• Insert new or update existing rows. New rows are added or, if the insert
fails, the existing rows are updated.
• Replace existing rows completely. Deletes the existing rows, then adds the new
rows to the table.
• Update existing rows only. Updates the existing data rows. If a row with the
supplied key does not exist in the table then the table is not updated but a
warning is logged.
• Update existing or insert new rows. The existing data rows are updated or, if
this fails, new rows are added.
• Call stored procedure. Writes the data using a stored procedure. When you
select this option, the Procedure name field appears.
• User-defined SQL. Writes the data using a user-defined SQL statement. When you
select this option, the View SQL tab is replaced by the Enter SQL tab.
• Create table in target database. Select this check box if you want to
automatically create a table in the target database at run time. A table is created
based on the defined column set for this stage. If you select this option, an
additional tab, Edit DDL, appears. This shows the SQL CREATE statement to be used
for table generation.
• Transaction Handling. This page allows you to specify the transaction handling
features of the stage as it writes to the ODBC data source. You can choose whether
to use transaction grouping or not, specify an isolation level, the number of rows
written before each commit, and the number of rows written in each operation.
• Isolation levels: Read Uncommitted, Read Committed, Repeatable Read,
Serializable, Versioning, and Auto-Commit.
• Rows per transaction field. This is the number of rows written before the data
is committed to the data table. The default value is 0, that is, all the rows are
written before being committed to the data table.
• Parameter array size field. This is the number of rows written at a time. The
default is 1, that is, each row is written in a separate operation.
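The effect of the 'Rows per transaction' setting can be sketched as row batching, where each batch corresponds to one commit; the row values here are placeholders:

```python
def commit_batches(rows, rows_per_txn):
    """Group rows into commit units the way 'Rows per transaction'
    does: 0 means one commit covering every row, N means a commit
    after every N rows (the last batch may be smaller)."""
    if rows_per_txn == 0:
        return [rows] if rows else []
    return [rows[i:i + rows_per_txn]
            for i in range(0, len(rows), rows_per_txn)]
```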
PROCESSING Stages:
TRANSFORMER Stage:
Transformer stages do not extract data or write data to a target database. They are
used to handle extracted data, perform any conversions required, and pass data to
another Transformer stage or a stage that writes data to a target data table.
Transformer stages can have any number of inputs and outputs. The link from the
main data input source is designated the primary input link. There can only be one
primary input link, but there can be any number of reference inputs.
Input Links
The main data source is joined to the Transformer stage via the primary link, but
the stage can also have any number of reference input links.
A reference link represents a table lookup. These are used to provide information
that might affect the way the data is changed, but do not supply the actual data to
be changed.
Reference input columns can be designated as key fields. You can specify key
expressions that are used to evaluate the key fields. The most common use for the
key expression is to specify an equi-join, which is a link between a primary link
column and a reference link column. For example, if your primary input data
contains names and addresses, and a reference input contains names and phone
numbers, the reference link name column is marked as a key field and the key
expression refers to the primary link's name column. During processing, the name in
the primary input is looked up in the reference input. If the names match, the
reference data is consolidated with the primary data. If the names do not match,
i.e., there is no record in the reference input whose key matches the expression
given, all the columns specified for the reference input are set to the null value.
Where a reference link originates from a UniVerse or ODBC stage, you can look up
multiple rows from the reference table. The rows are specified by a foreign key, as
opposed to a primary key used for a single-row lookup.
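The single-row lookup behavior described above can be sketched in Python (an illustrative model, not DataStage BASIC; column and function names are invented for the example):

```python
def lookup_join(primary_rows, reference_rows, key):
    """Equi-join a primary stream against a reference table.
    On a key match, the reference columns are merged into the row;
    on no match, all reference columns are set to None (null),
    as the Transformer stage does for an unmatched key expression."""
    ref_index = {r[key]: r for r in reference_rows}
    ref_columns = {c for r in reference_rows for c in r} - {key}
    for row in primary_rows:
        match = ref_index.get(row[key])
        merged = dict(row)
        for col in ref_columns:
            merged[col] = match[col] if match else None
        yield merged
```

So a names-and-addresses primary joined to a names-and-phone-numbers reference picks up the phone number when the names match, and a null otherwise.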
Output Links
You can have any number of output links from your Transformer stage.
You may want to pass some data straight through the Transformer stage unaltered,
but it's likely that you'll want to transform data from some input columns before
outputting it from the Transformer stage.
The source of an output link column is defined in that column's Derivation cell
within the Transformer Editor. You can use the Expression Editor to enter
expressions or transforms in this cell. You can also simply drag an input column to
an output column's Derivation cell, to pass the data straight through the
Transformer stage.
In addition to specifying derivation details for individual output columns, you can
also specify constraints that operate on entire output links. A constraint is a
BASIC expression that specifies criteria that data must meet before it can be
passed to the output link. You can also specify a reject link, which is an output
link that carries all the data not output on other links, that is, columns that
have not met the criteria.
Constraint expressions on different links are independent. If you have more than
one output link, an input row may result in a data row being output from some,
none, or all of the output links.
For example, if you consider the data that comes from a paint shop, it could
include information about any number of different colors. If you want to separate
the colors into different files, you would set up different constraints. You could
output the information about green and blue paint on LinkA, red and yellow paint on
LinkB, and black paint on LinkC.
When an input row contains information about yellow paint, the LinkA constraint
expression evaluates to FALSE and the row is not output on LinkA. However, the
input data does satisfy the constraint criterion for LinkB and the rows are output
on LinkB.
If the input data contains information about white paint, this does not satisfy any
constraint and the data row is not output on Links A, B or C, but will be output on
the reject link. The reject link is used to route data to a table or file that is a
"catch-all" for rows that are not output on any other link. The table or file
containing these rejects is represented by another stage in the job design.
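The paint-shop routing above can be modeled in Python (illustrative only; in a real job the constraints are BASIC expressions defined on the output links):

```python
def route(rows, constraints):
    """Route each row to every output link whose constraint evaluates
    true. Constraints are independent, so a row may go to some, none,
    or all links; rows matching no constraint go to the reject link."""
    outputs = {link: [] for link in constraints}
    outputs["reject"] = []
    for row in rows:
        matched = False
        for link, predicate in constraints.items():
            if predicate(row):
                outputs[link].append(row)
                matched = True
        if not matched:
            outputs["reject"].append(row)
    return outputs
```

A white-paint row satisfies none of the LinkA/LinkB/LinkC predicates and lands on the reject output, exactly as described.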
The first link to a Transformer stage is always designated as the primary input
link. However, you can choose an alternative link to be the primary link if
necessary. To do this:
1. Select the current primary input link in the Diagram window.
2. Choose Convert to Reference from the Diagram window shortcut menu.
3. Select the reference link that you want to be the new primary input link.
4. Choose Convert to Stream from the Diagram window shortcut menu.
==
AGGREGATOR Stage:
Aggregator stages classify data rows from a single input link into groups and
compute totals or other aggregate functions for each group. The summed totals for
each group are output from the stage via an output link.
If you want to aggregate the input data in a number of different ways, you can have
several output links, each specifying a different set of properties to define how
the input data is grouped and summarized.
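Conceptually (a Python sketch, not the stage itself), grouping and summing per output link works like:

```python
from collections import defaultdict

def aggregate(rows, group_by, sum_column):
    """Classify rows into groups on group_by and total sum_column
    per group, as an Aggregator stage with a Sum derivation would."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[group_by]] += row[sum_column]
    return dict(totals)
```

Multiple output links simply correspond to running this with different grouping columns and aggregate functions over the same input.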
==
FOLDER Stage:
Folder stages are used to read or write data as files in a directory located on the
DataStage server.
The folder stages can read multiple files from a single directory and can deliver
the files to the job as rows on an output link. The folder stage can also write
rows of data as files to a directory. The rows arrive at the stage on an input
link.
Note: The behavior of the Folder stage when reading folders that contain other
folders is undefined.
In an NLS environment, the user running the job must have write permission on the
folder so that the NLS map information can be set up correctly.
The Columns tab defines the data arriving on the link to be written in files to the
directory. The first column on the Columns tab must be defined as a key, and gives
the name of the file. The remaining columns are written to the named file, each
column separated by a newline. Data to be written to a directory would normally be
delivered in a single column.
The Columns tab defines a maximum of two columns. The first column must be marked
as the Key and receives the file name. The second column, if present, receives the
contents of the file.
==
IPC Stage:
The output link connecting the IPC stage to the stage reading data can be opened as
soon as the input link connected to the stage writing data has been opened.
You can use Inter-process stages to join passive stages together. For example you
could use them to speed up data transfer between two data sources:
In this example the job will run as two processes, one handling the communication
from sequential file stage to IPC stage, and one handling communication from IPC
stage to ODBC stage. As soon as the Sequential File stage has opened its output
link, the IPC stage can start passing data to the ODBC stage. If the job is running
on a multi-processor system, the two processes can run simultaneously, so the
transfer will be much faster.
The Properties tab allows you to specify two properties for the IPC stage:
? Buffer Size. Defaults to 128 Kb. The IPC stage uses two blocks of memory; one
block can be written to while the other is read from. This property defines the
size of each block, so that by default 256 Kb is allocated in total.
? Timeout. Defaults to 10 seconds. This gives a time limit for how long the stage
will wait for a process to connect to it before timing out. This normally will not
need changing, but may be important where you are prototyping multi-processor jobs
on single processor platforms and there are likely to be delays.
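The handoff can be approximated in Python with a bounded queue between a writer thread and a reader (a conceptual model only; the real stage uses two fixed memory blocks rather than a row queue, and the names here are invented):

```python
import queue
import threading

def ipc_link(producer_rows, consume, buffer_rows=4, timeout=10):
    """Decouple a writing process from a reading process with a
    bounded buffer: the reader can start consuming as soon as the
    writer starts producing, rather than waiting for it to finish."""
    buf = queue.Queue(maxsize=buffer_rows)
    SENTINEL = object()

    def writer():
        for row in producer_rows:
            buf.put(row, timeout=timeout)   # blocks when the buffer is full
        buf.put(SENTINEL)                   # signal end of data

    t = threading.Thread(target=writer)
    t.start()
    results = []
    while True:
        row = buf.get(timeout=timeout)      # blocks when the buffer is empty
        if row is SENTINEL:
            break
        results.append(consume(row))
    t.join()
    return results
```

The `timeout` plays the same role as the stage's Timeout property: how long either side waits on the buffer before giving up.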
==
The Link Partitioner stage is an active stage which takes one input and allows you
to distribute partitioned rows to up to 64 output links. The stage expects the
output links to use the same meta data as the input link.
In order for this job to compile and run as intended on a multi-processor system
you must have inter-process buffering turned on, either at project level using the
DataStage Administrator, or at job level from the Job Properties dialog box.
The General tab on the Stage page contains optional fields that allow you to define
routines to use which are executed before or after the stage has processed the
data.
? Before-stage subroutine and Input Value. Contain the name (and value) of a
subroutine that is executed before the stage starts to process any data. For
example, you can specify a routine that prepares the data before processing starts.
? After-stage subroutine and Input Value. Contain the name (and value) of a
subroutine that is executed after the stage has processed the data. For example,
you can specify a routine that sends an electronic message when the stage has
finished.
Choose a routine from the drop-down list box. This list box contains all the
routines defined as a Before/After Subroutine under the Routines branch in the
Repository. Enter an appropriate value for the routine's input argument in the
Input Value field.
If you choose a routine that is defined in the Repository, but which was edited but
not compiled, a warning message reminds you to compile the routine when you close
the Link Partitioner Stage dialog box.
A return code of 0 from the routine indicates success, any other code indicates
failure and causes a fatal error when the job is run.
The Properties tab allows you to specify two properties for the Link Partitioner
stage:
? Partitioning Algorithm. Use this property to specify the method the stage uses
to partition data. Choose from:
? Round-Robin. This is the default method. Using the round-robin method the
stage will write each incoming row to one of its output links in turn.
? Random. Using this method the stage will use a random number generator to
distribute incoming rows evenly across all output links.
? Hash. Using this method the stage applies a hash function to one or more input
column values to determine which output link the row is passed to.
? Modulus. Using this method the stage applies a modulus function to an integer
input column value to determine which output link the row is passed to.
? Partitioning Key. This property is only significant where you have chosen a
partitioning algorithm of Hash or Modulus. For the Hash algorithm, specify one or
more column names separated by commas. These keys are concatenated and a hash
function applied to determine the destination output link. For the Modulus
algorithm, specify a single column name which identifies an integer numeric column.
The value of this column value determines the destination output link.
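The partitioning algorithms can be sketched in Python (Random is omitted so the sketch stays deterministic, and CRC32 stands in as an assumed hash function; the stage's actual hash is not specified here):

```python
import zlib

def partition(rows, n_links, algorithm, key=None):
    """Assign each row to one of n_links output links, modeling the
    Link Partitioner algorithms. For "hash", key is a list of column
    names whose values are concatenated and hashed; for "modulus",
    key names a single integer column."""
    links = [[] for _ in range(n_links)]
    for i, row in enumerate(rows):
        if algorithm == "round-robin":
            target = i % n_links                     # each link in turn
        elif algorithm == "hash":
            concatenated = "".join(str(row[k]) for k in key)
            target = zlib.crc32(concatenated.encode()) % n_links
        elif algorithm == "modulus":
            target = row[key] % n_links              # integer column value
        else:
            raise ValueError(algorithm)
        links[target].append(row)
    return links
```

Round-robin spreads rows evenly by arrival order, while hash and modulus guarantee that rows sharing the same key values always land on the same output link.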
The Link Partitioner stage can have one input link. This is where the data to be
partitioned arrives.
The Inputs page has two tabs: General and Columns.
? General. The General tab allows you to specify an optional description of the
stage.
? Columns. The Columns tab contains the column definitions for the data on the
input link. This is normally populated by the meta data of the stage connecting on
the input side. You can also Load a column definition from the Repository, or type
one in yourself (and Save it to the Repository if required). Note that the meta
data on the input link must be identical to the meta data on the output links.
The Link Partitioner stage can have up to 64 output links. Partitioned data flows
along these links. The Output Name drop-down list on the Outputs pages allows you
to select which of the 64 links you are looking at.
The Outputs page has two tabs: General and Columns.
? General. The General tab allows you to specify an optional description of the
stage.
? Columns. The Columns tab contains the column definitions for the data on the
input link. You can Load a column definition from the Repository, or type one in
yourself (and Save it to the Repository if required). Note that the meta data on
the output link must be identical to the meta data on the input link. So the meta
data is identical for all the output links.
==
The Link Collector stage is an active stage which takes up to 64 inputs and allows
you to collect data from these links and route it along a single output link. The
stage expects the output link to use the same meta data as the input links.
The Link Collector stage can be used in conjunction with a Link Partitioner stage
to enable you to take advantage of a multi-processor system and have data processed
in parallel. The Link Partitioner stage partitions data, it is processed in
parallel, then the Link Collector stage collects it together again before writing
it to a single target. To really understand the benefits you need to know a bit
about how DataStage jobs are run as processes, see ?DataStage Jobs and Processes?.
In order for this job to compile and run as intended on a multi-processor system
you must have inter-process buffering turned on, either at project level using the
DataStage Administrator, or at job level from the Job Properties dialog box.
The Properties tab allows you to specify two properties for the Link Collector
stage:
? Collection Algorithm. Use this property to specify the method the stage uses
to collect data. Choose from:
? Round-Robin. This is the default method. Using the round-robin method the
stage will read a row from each input link in turn.
? Sort/Merge. Using the sort/merge method the stage reads multiple sorted inputs
and writes one sorted output.
? Sort Key. This property is only significant where you have chosen a collecting
algorithm of Sort/Merge. It defines how each of the partitioned data sets is known
to be sorted and how the merged output will be sorted. The key has the following
format:
In an NLS environment, the collate convention of the locale may affect the sort
order. The default collate convention is set in the DataStage Administrator, but
can be set for individual jobs in the Job Properties dialog box.
For example:
FIRSTNAME d, SURNAME D
This specifies that rows are sorted by the FIRSTNAME column and then the SURNAME
column, both in descending order.
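Sort/Merge collection is equivalent to a k-way merge of inputs that are each already sorted on the key, e.g. in Python:

```python
import heapq

def collect_sort_merge(sorted_inputs, key):
    """Merge input links that are each already sorted on key into one
    sorted output, as the Link Collector's Sort/Merge algorithm does.
    heapq.merge streams the inputs without re-sorting them."""
    return list(heapq.merge(*sorted_inputs, key=key))
```

This only works because each input is pre-sorted, which is why the Sort Key property must describe how the partitioned data sets are known to be sorted.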
The Link Collector stage can have up to 64 input links. This is where the data to
be collected arrives. The Input Name drop-down list on the Inputs page allows you
to select which of the 64 links you are looking at.
________________________________________
DATASTAGE TUTORIAL
________________________________________
1. About DataStage
2. Client Components
3. DataStage Designer
4. DataStage Director
5. DataStage Manager
6. DataStage Administrator
7. DataStage Manager Roles
8. Server Components
9. DataStage Features
10. Types of Jobs
11. DataStage NLS
12. JOB
13. Aggregator
14. Hashed File
15. UniVerse
16. UniData
17. ODBC
18. Sequential File
19. Folder Stage
20. Transformer
21. Container
22. IPC Stage
23. Link Collector Stage
24. Link Partitioner Stage
25. Server Job Properties
26. Containers
27. Local containers
28. Shared containers
29. Job Sequences
________________________________________
About DataStage
DataStage is a tool set for designing, developing, and running applications that
populate one or more tables in a data warehouse or data mart. It consists of client
and server components.
Client Components
DataStage Designer.
A design interface used to create DataStage applications (known as jobs). Each job
specifies the data sources, the transforms required, and the destination of the
data. Jobs are compiled to create executables that are scheduled by the Director
and run by the Server.
DataStage Director.
A user interface used to validate, schedule, run, and monitor DataStage jobs.
DataStage Manager.
A user interface used to view and edit the contents of the Repository.
DataStage Administrator
A user interface used to configure DataStage.
DataStage Manager Roles
? Import table or stored procedure definitions
? Create table or stored procedure definitions, data elements, custom
transforms, server job routines, mainframe routines, machine profiles, and plug-ins
There are also more specialized tasks that can only be performed from the DataStage
Manager. These include:
? Perform usage analysis queries.
? Report on Repository contents.
? Import, export, and package DataStage jobs.
Server Components
There are three server components which are installed on a server:
? Repository. A central store that contains all the information required to
build a data mart or data warehouse.
? DataStage Server. Runs executable jobs that extract, transform, and load data
into a data warehouse.
? DataStage Package Installer. A user interface used to install packaged
DataStage jobs and plug-ins.
DataStage Features
Extracts data from any number or type of database.
Handles all the meta data definitions required to define your data warehouse.
Aggregates data. You can modify SQL SELECT statements used to extract data.
Transforms data. DataStage has a set of predefined transforms and functions you can
use to convert your data.
Loads the data warehouse
Types of jobs
There are three basic types of DataStage job:
? Server jobs. These are compiled and run on the DataStage server. A server job
will connect to databases on other machines as necessary, extract data, process it,
then write the data to the target data warehouse.
? Parallel jobs. These are available only if you have Enterprise Edition
installed. Parallel jobs are compiled and run on a DataStage UNIX server, and can
be run in parallel on SMP, MPP, and cluster systems.
? Mainframe jobs. These are available only if you have Enterprise MVS Edition
installed. A mainframe job is compiled and run on the mainframe. Data extracted by
such jobs is then loaded into the data warehouse.
There are two other entities that are similar to jobs in the way they appear in the
DataStage Designer, and are handled by it. These are:
Shared containers.
These are reusable job elements. They typically comprise a number of stages and
links. Copies of shared containers can be used in any number of server jobs and
edited as required.
Job Sequences.
A job sequence allows you to specify a sequence of DataStage jobs to be executed,
and actions to take depending on results.
DataStage NLS
? Process data in a wide range of languages
? Accept data in any character set into most DataStage fields
? Use local formats for dates, times, and money (Server Jobs)
? Sort data according to local rules
JOB
A job consists of stages linked together which describe the flow of data from a
data source to a final data warehouse.
Built-In Stages ? Server Jobs
Aggregator.
Aggregator stages are active stages that classify data rows from a single input
link into groups and compute totals or other aggregate functions for each group.
The summed totals for each group are output from the stage via an output link.
Hashed File.
Extracts data from or loads data into databases that contain hashed files. Also
acts as an intermediate stage for quick lookups.
Hashed File stages represent a hashed file, i.e., a file that uses a hashing
algorithm for distributing records in one or more groups on disk. You can use a
Hashed File stage to extract or write data, or to act as an intermediate file in a
job. The primary role of a Hashed File stage is as a reference table based on a
single key field.
Each Hashed File stage can have any number of inputs or outputs. Input links
specify the data you are writing. Output links specify the data you are extracting.
UniVerse.
? Extracts data from or loads data into UniVerse databases.
UniData.
? Extracts data from or loads data into UniData databases.
ODBC.
Extracts data from or loads data into databases that support the industry standard
Open Database Connectivity API. This stage is also used as an intermediate stage
for aggregating data.
ODBC stages are used to represent a database that supports the industry standard
Open Database Connectivity API. You can use an ODBC stage to extract, write, or
aggregate data.
Each ODBC stage can have any number of inputs or outputs. Input links specify the
data you are writing. Output links specify the data you are extracting and any
aggregations required.
Sequential File.
Extracts data from or loads data into "flat files" in the Windows NT file system.
Sequential File stages are used to extract data from, or write data to, a text file
in the server file system. The text file can be created or exist on any drive that
is either local or mapped to the server. Each Sequential File stage can have any
number of inputs or outputs.
Folder Stage.
Folder stages are used to read or write data as files in a directory located on the
DataStage server.
The folder stages can read multiple files from a single directory and can deliver
the files to the job as rows on an output link. By default, the file content is
delivered with newlines converted to char(254) field marks. The folder stage can
also write rows of data as files to a directory. The rows arrive at the stage on an
input link.
Transformer.
Receives incoming data, transforms it in a variety of ways, and outputs it to
another stage in the job.
Transformer stages do not extract data or write data to a target database. They are
used to handle extracted data, perform any conversions required, and pass data to
another Transformer stage or a stage that writes data to a target data table.
Transformer stages in server jobs can have any number of inputs and outputs. The
link from the main data input source is designated the primary input link. There
can only be one primary input link, but there can be any number of reference
inputs.
Container.
Represents a group of stages and links. The group is replaced by a single Container
stage in the Diagram window.
IPC Stage.
Provides a communication channel between DataStage processes running simultaneously
in the same job. It allows you to design jobs that run on SMP systems with great
performance benefits
An inter-process (IPC) stage is a passive stage which provides a communication
channel between DataStage processes running simultaneously in the same job. It
allows you to design jobs that run on SMP systems with great performance benefits.
To understand the benefits of using IPC stages, you need to know a bit about how
DataStage jobs actually run as processes, see Chapter 2 of the Server Job
Developer's Guide for information.
The output link connecting the IPC stage to the stage reading data can be opened as
soon as the input link connected to the stage writing data has been opened.
You can use Inter-process stages to join passive stages together. For example you
could use them to speed up data transfer between two data sources
Link Collector Stage.
Takes up to 64 inputs and allows you to collect data from these links and route it
along a single output link.
The Link Collector stage is an active stage which takes up to 64 inputs and allows
you to collect data from these links and route it along a single output link. The
stage expects the output link to use the same meta data as the input links
Link Partitioner Stage.
Takes one input and allows you to distribute partitioned rows to up to 64 output
links.
The Link Partitioner stage is an active stage which takes one input and allows you
to distribute partitioned rows to up to 64 output links. The stage expects the
output links to use the same meta data as the input link.
? Container Input and Container Output. Represent the interface that links a
container stage to the rest of the job design.
Server Job Properties
The Job Properties dialog box appears. The dialog box differs depending on whether
it is a server job, a mainframe job, or a job sequence.
A server job has up to six pages: General, Parameters, Job control, NLS,
Performance, and Dependencies. Note that the NLS page is not available if you open
the dialog box from the Manager, even if you have NLS installed.
Containers
A container is a group of stages and links. Containers enable you to simplify and
modularize your server job designs by replacing complex areas of the diagram with a
single container stage. You can also use shared containers as a way of
incorporating server job functionality into parallel jobs.
DataStage provides two types of container:
Local containers.
? These are created within a job and are only accessible by that job. A local
container is edited in a tabbed page of the job's Diagram window.
Shared containers.
? These are created separately and are stored in the Repository in the same way
that jobs are. There are two types of shared container:
Job Sequences
DataStage provides a graphical Job Sequencer which allows you to specify a sequence
of server or parallel jobs to run. The sequence can also contain control
information, for example, you can specify different courses of action to take
depending on whether a job in the sequence succeeds or fails. Once you have defined
a job sequence, it can be scheduled and run using the DataStage Director. It
appears in the DataStage Repository and in the DataStage Director client as a job.
________________________________________
LEARN FEATURES OF DATASTAGE
________________________________________
DATASTAGE:
DataStage has the following features to aid the design and processing required to
build a data warehouse:
Uses graphical design tools. With simple point-and-click techniques you can draw a
scheme to represent your processing requirements.
Extracts data from any number or type of database.
Handles all the metadata definitions required to define your data warehouse. You
can view and modify the table definitions at any point during the design of your
application.
Aggregates data. You can modify SQL SELECT statements used to extract data.
Transforms data. DataStage has a set of predefined transforms and functions you can
use to convert your data. You can easily extend the functionality by defining your
own transforms to use.
Loads the data warehouse.
COMPONENTS OF DATASTAGE:
DataStage consists of a number of client and server components. DataStage has four
client components
SERVER COMPONENTS:
There are three server components:
DATASTAGE PROJECTS:
You always enter DataStage through a DataStage project. When you start a DataStage
client you are prompted to attach to a project. Each project contains:
? DataStage jobs.
? Built-in components. These are predefined components used in a job.
? User-defined components. These are customized components created using the
DataStage Manager. Each user-defined component performs a specific task in a job.
DATASTAGE JOBS:
1. Server jobs. These are compiled and run on the DataStage server. A server job
will connect to databases on other machines as necessary, extract data, process it,
then write the data to the target data warehouse.
2. Parallel jobs. These are compiled and run on the DataStage server in a
similar way to server jobs, but support parallel processing on SMP, MPP, and
cluster systems.
3. Mainframe jobs. These are available only if you have Enterprise MVS Edition
installed. A mainframe job is compiled and run on the mainframe. Data extracted by
such jobs is then loaded into the data warehouse.
SPECIAL ENTITIES:
? Shared containers. These are reusable job elements. They typically comprise a
number of stages and links. Copies of shared containers can be used in any number
of server jobs or parallel jobs and edited as required.
? Job Sequences. A job sequence allows you to specify a sequence of DataStage
jobs to be executed, and actions to take depending on results.
TYPES OF STAGES:
? Built-in stages. Supplied with DataStage and used for extracting, aggregating,
transforming, or writing data. All types of job have these stages.
? Plug-in stages. Additional stages that can be installed in DataStage to
perform specialized tasks that the built-in stages do not support. Server jobs and
parallel jobs can make use of these.
? Job Sequence Stages. Special built-in stages which allow you to define
sequences of activities to run. Only Job Sequences have these.
DATASTAGE NLS:
DataStage has built-in National Language Support (NLS). With NLS installed,
DataStage can do the following:
? Process data in a wide range of languages
? Accept data in any character set into most DataStage fields
? Use local formats for dates, times, and money (server jobs)
? Sort data according to local rules
Before you create any DataStage jobs, you must set up your project by entering
information about your data. This includes the name and location of the tables or
files holding your data and a definition of the columns they contain. Information
is stored in table definitions in the Repository.
TO CONNECT TO A PROJECT:
1. Enter the name of your host in the Host system field. This is the name of the
system where the DataStage Server components are installed.
2. Enter your user name in the User name field. This is your user name on the
server system.
3. Enter your password in the Password field.
4. Choose the project to connect to from the Project drop-down list box.
5. Click OK. The DataStage Designer window appears with the New dialog box open,
ready for you to create a new job:
CREATING A JOB:
Jobs are created using the DataStage Designer. For this example, you need to create
a server job, so double-click the New Server Job icon.
For most data sources, the quickest and simplest way to specify a table definition
is to import it directly from your data source or data warehouse.
2. Choose the data source name from the DSN drop-down list box.
3. Click OK. The updated Import Metadata (ODBC Tables) dialog box displays all
the files for the chosen data source name:
4. Select project.EXAMPLE1 from the Tables list box, where project is the name
of your DataStage project.
5. Click OK. The column information from EXAMPLE1 is imported into DataStage.
6. A table definition is created and is stored under the Table Definitions >
ODBC > DSNNAME branch in the Repository. The updated DataStage Designer window
displays the new table definition entry in the Repository window.
DEVELOPING A JOB:
Jobs are designed and developed using the Designer. The job design is developed in
the Diagram window (the one with grid lines). Each data source, the data warehouse,
and each processing step is represented by a stage in the job design. The stages
are linked together to show the flow of data.
For Example we can develop a job with the following three stages:
Adding Stages:
Stages are added using the tool palette. This palette contains icons that represent
the components you can add to a job. The palette has different groups to organize
the tools available.
To add a stage:
1. Click the stage button on the tool palette that represents the stage type you
want to add.
2. Click in the Diagram window where you want the stage to be positioned. The stage
appears in the Diagram window as a square. You can also drag items from the palette
to the Diagram window.
We recommend that you position your stages as follows:
Data sources on the left
Data warehouse on the right
Transformer stage in the center
When you add stages, they are automatically assigned default names. These names are
based on the type of stage and the number of the item in the Diagram window. You
can use the default names in the example.
Once all the stages are in place, you can link them together to show the flow of
data.
Linking Stages
Links are always made in the direction the data will flow, that is, usually left to
right. When you add links, they are assigned default names. You can use the default
names in the example.
To add a link:
1. Right-click the first stage, hold the mouse button down and drag the link to the
transformer stage. Release the mouse button.
2. Right-click the Transformer stage and drag the link to the Sequential File
stage.
The following screen shows how the Diagram window looks when you have added the
stages and links:
Your job design currently displays the stages and the links between them. You must
edit each stage in the job to specify the data to use and what to do with it.
Stages are edited in the job design by double-clicking each stage in turn. Each
stage type has its own editor.
The data source (EXAMPLE1) is represented by a UniVerse stage. You must specify the
data you want to extract from this file by editing the stage.
Double-click the stage to edit it. The UniVerse Stage dialog box appears:
? Outputs. Contains information describing the data flowing from the stage. You
edit this page to describe the data you want to extract from the file. In this
example, the output from this stage goes to the Transformer stage. To edit the
Universe stage:
1. Check that you are displaying the General tab on the Stage page.
Choose localuv from the Data source name drop-down list. Localuv is where EXAMPLE1
is copied to during installation.
The remaining parameters on the General and Details tabs are used to enter logon
details and describe where to find the file. Because EXAMPLE1 is installed in
localuv, you do not have to complete these fields, which are disabled.
2. Click the Outputs tab. The Outputs page appears:
The Outputs page contains the name of the link the data flows along and the
following four tabs:
? General. Contains the name of the table to use and an optional description of the
link.
? Columns. Contains information about the columns in the table.
? Selection. Used to enter an optional SQL SELECT clause (an Advanced procedure).
? View SQL. Displays the SQL SELECT statement used to extract the data.
3. Choose dstage.EXAMPLE1 from the Available tables drop-down list.
4. Click Add to add dstage.EXAMPLE1 to the Table names field.
5. Click the Columns tab. The Columns tab appears at the front of the dialog box.
You must specify the columns contained in the file you want to use. Because the
column definitions are stored in a table definition in the Repository, you can load
them directly.
6. Click Load…. The Table Definitions window appears with the UniVerse > localuv
branch highlighted.
7. Select dstage.EXAMPLE1. The Select Columns dialog box appears, allowing you to
select which column definitions you want to load.
8. In this case you want to load all available column definitions, so just click
OK. The column definitions specified in the table definition are copied to the
stage. The Columns tab contains definitions for the four columns in EXAMPLE1:
9. You can use the Data Browser to view the actual data that is to be output from
the UniVerse stage. Click the View Data… button to open the Data Browser window.
The Transformer stage performs any data conversion required before the data is
output to another stage in the job design. In this example, the Transformer stage
is used to convert the data in the DATE column from a YYYY-MM-DD date in internal
date format to a string giving just the year and month (YYYY-MM).
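As a rough illustration (plain Python rather than a DataStage transform, and the function name is invented), the conversion this Transformer stage performs amounts to:

```python
from datetime import date

def month_tag(d: date) -> str:
    """Illustrative stand-in for the MONTH.TAG transform: produce a
    YYYY-MM string from a date value. The function name is hypothetical."""
    return f"{d.year:04d}-{d.month:02d}"

print(month_tag(date(1993, 2, 14)))  # -> 1993-02
```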
There are two links in the stage:
? The input from the data source (EXAMPLE1)
? The output to the Sequential File stage
To enable the use of one of the built-in DataStage transforms, you will assign data
elements to the DATE columns input and output from the Transformer stage. A
DataStage data element defines more precisely the kind of data that can appear in a
given column. In this example, you assign the Date data element to the input
column, to specify the date is input to the transform in internal format, and the
MONTH.TAG data element to the output column, to specify that the transform produces
a string of the format YYYY-MM. Double-click the Transformer stage to edit it. The
Transformer Editor appears:
1. Working in the upper-left pane of the Transformer Editor, select the input
columns that you want to derive output columns from. Click on the CODE, DATE, and
QTY columns while holding down the Ctrl key.
2. Click the left mouse button again and, keeping it held down, drag the selected
columns to the output link in the upper-right pane. Drop the columns over the
Column Name field by releasing the mouse button. The columns appear in the top pane
and the associated metadata appears in the lower-right pane:
3. In the Data element field for the DSLink3.DATE column, select Date from the
drop-down list.
4. In the SQL type field for the DSLink4 DATE column, select Char from the
drop-down list.
5. In the Length field for the DSLink4 DATE column, enter 7.
6. In the Data element field for the DSLink4 DATE column, select MONTH.TAG from the
drop-down list. Next you will specify the transform to apply to the input DATE
column to produce the output DATE column. You do this in the upper right pane of
the Transformer Editor.
7. Double-click the Derivation field for the DSLink4 DATE column. The Expression
Editor box appears. At the moment, the box contains the text DSLink3.DATE, which
indicates that the output is directly derived from the input DATE column. Select
the text DSLink3.DATE and delete it by pressing the Delete key.
8. Right-click in the Expression Editor box to open the Suggest Operand menu.
9. Select DS Transform…. A list of the built-in DataStage transforms appears.
10. Select the MONTH.TAG transform. It appears in the Expression Editor box with
the argument field [%Arg1%] highlighted.
11. Right-click to open the Suggest Operand menu again. This time, select Input
Column. A list of available input columns appears:
12. Select DSLink3.DATE. This then becomes the argument for the transform.
13. Click OK to save the changes and exit the Transformer Editor. Once more the
small icon appears on the output link from the transformer stage to indicate that
the link now has column definitions associated with it.
1. Click the Inputs tab. The Inputs page appears. This page contains:
? The name of the link. This is automatically set to the link name used in the job
design.
? General tab. Contains the pathname of the file, an optional description of the
link, and update action choices. You can use the default settings for this example,
but you may want to enter a file name (by default the file is named after the input
link).
? Format tab. Determines how the data is written to the file. In this example, the
data is written using the default settings, that is, as a comma-delimited file.
? Columns tab. Contains the column definitions for the data you want to write.
This tab contains the column definitions specified in the Transformer stage's
output link.
2. Enter the pathname of the text file you want to create in the File name field,
for example, seqfile.txt. By default the file is placed in the server project
directory (for example, c:\Ascential\DataStage\Projects\datastage) and is named
after the input link, but you can enter, or browse for, a different directory.
3. Click OK to close the Sequential File Stage dialog box.
4. Choose File > Save to save the job design.
The job design is now complete and ready to be compiled.
Compiling a Job
When you finish your design you must compile it to create an executable job. Jobs
are compiled using the Designer. To compile the job, do one of the following:
? Choose File > Compile.
? Click the Compile button on the toolbar.
The Compile Job window appears:
Running a Job
Executable jobs are scheduled by the DataStage Director and run by the DataStage
Server. You can start the Director from the Designer by choosing Tools > Run
Director.
When the Director is started, the DataStage Director window appears with the status
of all the jobs in your project:
Highlight your job in the Job name column. To run the job, choose Job > Run Now or
click the Run button on the toolbar. The Job Run Options dialog box appears and
allows you to specify any parameter values and to specify any job run limits. In
this case, just click Run. The status changes to Running. When the job is complete,
the status changes to Finished.
Choose File > Exit to close the DataStage Director window.
Developing a Job
The DataStage Designer is used to create and develop DataStage jobs. A DataStage
job populates one or more tables in the target database. There is no limit to the
number of jobs you can create in a DataStage project.
STAGES:
A job consists of stages linked together which describe the flow of data from a
data source to a data target (for example, a final data warehouse).
A stage usually has at least one data input and/or one data output. However, some
stages can accept more than one data input, and output to more than one stage. The
different types of job have different stage types. The stages that are available in
the DataStage Designer depend on the type of job that is currently open in the
Designer.
DataStage offers several built-in stage types for use in server jobs. These are
used to represent data sources, data targets, or conversion stages. These stages
are either passive or active stages. A passive stage handles access to databases
for the extraction or writing of data. Active stages model the flow of data and
provide mechanisms for combining data streams, aggregating data, and converting
data from one data type to another.
As well as using the built-in stage types, you can also use plug-in stages for
specific operations that the built-in stages do not support. The Palette organizes
stage types into different groups, according to function:
? Database
? File
? PlugIn
? Processing
? Real Time
Stages and links can be grouped in a shared container. Instances of the shared
container can then be reused in different server jobs. You can also define a local
container within a job; this groups stages and links into a single unit, but it
can only be used within the job in which it is defined. Each stage type has a set
of
predefined and editable properties. These properties are viewed or edited using
stage editors.
At this point in your job development you need to decide which stage types to use
in your job design. The following built-in stage types are available for server
jobs:
Mainframe Job Stages
DataStage offers several built-in stage types for use in mainframe jobs. These are
used to represent data sources, data targets, or conversion stages.
The Palette organizes stage types into different groups, according to function:
? Database
? File
? Processing
Each stage type has a set of predefined and editable properties. Some stages can be
used as data sources and some as data targets; some can be used as both. Processing
stages read data from a source, process it, and write it to a data target.
These properties are viewed or edited using stage editors. A stage editor exists
for each stage type. At this point in your job development you need to decide which
stage types to use in your job design.
SERVER JOBS:
When you design a job you see it in terms of stages and links. When it is compiled,
the DataStage engine sees it in terms of processes that are subsequently run on the
server. How does the DataStage engine define a process? It is here that the
distinction between active and passive stages becomes important. Active stages,
such as the Transformer and Aggregator, perform processing tasks, while passive
stages, such as the Sequential File stage and ODBC stage, read or write data
sources and provide services to the active stages. At its simplest, each active
stage becomes a process. But the situation becomes more complicated where you
connect active stages together and passive stages together.
Aggregator Stages
Aggregator stages classify data rows from a single input link into groups and
compute totals or other aggregate functions for each group. The summed totals for
each group are output from the stage via an output link.
Using an Aggregator Stage
If you want to aggregate the input data in a number of different ways, you can have
several output links, each specifying a different set of properties to define how
the input data is grouped and summarized.
When you edit an Aggregator stage, the Aggregator Stage dialog box appears:
Data to be aggregated is passed from a previous stage in the job design and into
the Aggregator stage via a single input link. The properties of this link and the
column definitions of the data are defined on the Inputs page in the Aggregator
Stage dialog box.
Note: The Aggregator stage does not preserve the order of input rows, even when the
incoming data is already sorted.
The Inputs page has the following field and two tabs:
? Input name. The name of the input link to the Aggregator stage.
? General. Displayed by default. Contains an optional description of the link.
? Columns. Contains a grid displaying the column definitions for the data being
written to the stage, and an optional sort order.
Column name: The name of the column.
Sort Order: Specifies the sort order. This field is blank by default, that is,
there is no sort order. Choose Ascending for ascending order, Descending for
descending order, or Ignore if you do not want the order to be checked.
Key: Indicates whether the column is part of the primary key.
SQL type: The SQL data type.
Length: The data precision. This is the length for CHAR data and the maximum length
for VARCHAR data.
Scale: The data scale factor.
Nullable: Specifies whether the column can contain null values.
Display: The maximum number of characters required to display the column data.
Data element: The type of data in the column.
Description: A text description of the column.
When you output data from an Aggregator stage, the properties of output links and
the column definitions of the data are defined on the Outputs page in the
Aggregator Stage dialog box.
The Outputs page has the following field and two tabs:
? Output name. The name of the output link. Choose the link to edit from the Output
name drop-down list box. This list box displays all the output links from the
stage.
? General. Displayed by default. Contains an optional description of the link.
? Columns. Contains a grid displaying the column definitions for the data being
output from the stage. The grid has the following columns:
Column name. The name of the column.
Group. Specifies whether to group by the data in the column.
Derivation. Contains an expression specifying how the data is aggregated. This is a
complex cell, requiring more than one piece of information. Double-clicking the
cell opens the Derivation dialog box.
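As a sketch of what an Aggregator stage does (plain Python, with invented column names): group input rows by the columns marked Group and apply an aggregate derivation, here a sum, to the rest.

```python
from collections import defaultdict

# Illustrative input rows; "product" plays the Group column,
# "qty" is aggregated with a Sum(qty)-style derivation.
rows = [
    {"product": "A", "qty": 10},
    {"product": "B", "qty": 5},
    {"product": "A", "qty": 7},
]

totals = defaultdict(int)
for row in rows:
    totals[row["product"]] += row["qty"]  # accumulate per group

print(dict(totals))  # {'A': 17, 'B': 5}
```

Note that, like the stage itself, nothing here preserves the order of the input rows: only the per-group totals survive to the output link.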
Transformer Stages
Transformer stages do not extract data or write data to a target database. They are
used to handle extracted data, perform any conversions required, and pass data to
another Transformer stage or a stage that writes data to a target data table.
Transformer stages can have any number of inputs and outputs. The link from the
main data input source is designated the primary input link. There can only be one
primary input link, but there can be any number of reference inputs.
When you edit a Transformer stage, the Transformer Editor appears. An example
Transformer stage is shown below. In this example, metadata has been defined for
the input and the output links.
Link Area
The top area displays links to and from the Transformer stage, showing their
columns and the relationships between them. The link area is where all column
definitions, key expressions, and stage variables are defined. The link area is
divided into two panes; you can drag the splitter bar between them to resize the
panes relative to one another. There is also a horizontal scroll bar, allowing you
to scroll the view left or right. The left pane shows input links, the right pane
shows output links. The input link shown at the top of the left pane is always the
primary link. Any subsequent links are reference links. For all types of link, key
fields are shown in bold. Reference link key fields that have no expression defined
are shown in red (or the color defined in Tools > Options), as are output columns
that have no derivation defined.
Within the Transformer Editor, a single link may be selected at any one time. When
selected, the link's title bar is highlighted, and arrowheads indicate any selected
columns.
Metadata Area
The bottom area shows the column metadata for input and output links. Again this
area is divided into two panes: the left showing input link metadata and the right
showing output link metadata. The metadata for each link is shown in a grid
contained within a tabbed page. Click the tab to bring the required link to the
front. That link is also selected in the link area.
If you select a link in the link area, its metadata tab is brought to the front
automatically. You can edit the grids to change the column metadata on any of the
links. You can also add and delete metadata.
Input Links
The main data source is joined to the Transformer stage via the primary link, but
the stage can also have any number of reference input links.
A reference link represents a table lookup. These are used to provide information
that might affect the way the data is changed, but do not supply the actual data to
be changed. Reference input columns can be designated as key fields. You can
specify key expressions that are used to evaluate the key fields. The most common
use for the key expression is to specify an equi-join, which is a link between a
primary link column and a reference link column. For example, if your primary input
data contains names and addresses, and a reference input contains names and phone
numbers, the reference link name column is marked as a key field and the key
expression refers to the primary link's name column. During processing, the name in
the primary input is looked up in the reference input. If the names match, the
reference data is consolidated with the primary data. If the names do not match,
i.e., there is no record in the reference input whose key matches the expression
given, all the columns specified for the reference input are set to the null value.
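The lookup behaviour described above can be sketched in Python (the names, columns, and data are made up for illustration):

```python
# Primary input: names and addresses. Reference input: names and phone
# numbers, keyed on the name column (the key field).
primary = [
    {"name": "Ann", "address": "12 High St"},
    {"name": "Bob", "address": "3 Low Rd"},
]
reference = {"Ann": {"phone": "555-0101"}}

for row in primary:
    match = reference.get(row["name"])  # evaluate the key expression
    if match:
        row.update(match)               # names match: consolidate the data
    else:
        row["phone"] = None             # no match: reference columns set to null

print(primary)
```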
Output Links
You can have any number of output links from your Transformer stage. You may want
to pass some data straight through the Transformer stage unaltered, but it's likely
that you'll want to transform data from some input columns before outputting it
from the Transformer stage. You can specify such an operation by entering a BASIC
expression or by selecting a transform to apply to the data. DataStage has many
built-in transforms, or you can define your own custom transforms that are stored
in the Repository and can be reused as required. The source of an output link
column is defined in that column's Derivation cell within the Transformer Editor.
You can use the Expression Editor to enter expressions or transforms in this cell.
You can also simply drag an input column to an output column's Derivation cell, to
pass the data straight through the Transformer stage. In addition to specifying
derivation details for individual output columns, you can also specify constraints
that operate on entire output links. A constraint is a BASIC expression that
specifies criteria that data must meet before it can be passed to the output link.
You can also specify a reject link, which is an output link that carries all the
data not output on other links, that is, columns that have not met the criteria.
Each output link is processed in turn. If the constraint expression evaluates to
TRUE for an input row, the data row is output on that link. Conversely, if a
constraint expression evaluates to FALSE for an input row, the data row is not
output on that link.
Constraint expressions on different links are independent. If you have more than
one output link, an input row may result in a data row being output from some,
none, or all of the output links. For example, if you consider the data that comes
from a paint shop, it could include information about any number of different
colors. If you want to separate the colors into different files, you would set up
different constraints. You could output the information about green and blue paint
on Link A, red and yellow paint on Link B, and black paint on Link C. When an input
row contains information about yellow paint, the Link A constraint expression
evaluates to FALSE and the row is not output on Link A. However, the input data
does satisfy the constraint criterion for Link B and the rows are output on Link B.
If the input data contains information about white paint, this does not satisfy any
constraint and the data row is not output on Links A, B or C, but will be output on
the reject link. The reject link is used to route data to a table or file that is a
"catch-all" for rows that are not output on any other link. The table or file
containing these rejects is represented by another stage in the job design.
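The paint-shop routing above can be sketched as follows (plain Python rather than BASIC constraint expressions; the link names and data are illustrative):

```python
# Each output link has its own independent constraint expression.
constraints = {
    "LinkA": lambda row: row["color"] in ("green", "blue"),
    "LinkB": lambda row: row["color"] in ("red", "yellow"),
    "LinkC": lambda row: row["color"] == "black",
}

outputs = {name: [] for name in constraints}
reject = []  # catch-all for rows that satisfy no constraint

for row in [{"color": "yellow"}, {"color": "white"}]:
    matched = False
    for name, constraint in constraints.items():
        if constraint(row):            # constraints are independent, so a
            outputs[name].append(row)  # row may be output on several links
            matched = True
    if not matched:
        reject.append(row)             # white paint ends up here

print(outputs, reject)
```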
Inter-Process Stages
In this example the job will run as two processes, one handling the communication
from sequential file stage to IPC stage, and one handling communication from IPC
stage to ODBC stage. As soon as the Sequential File stage has opened its output
link, the IPC stage can start passing data to the ODBC stage. If the job is running
on a multi-processor system, the two processes can run simultaneously so the
transfer will be much faster. You can also use the IPC stage to explicitly specify
that connected active stages should run as separate processes. This is advantageous
for performance on multi-processor systems. You can also specify this behavior
implicitly by turning inter process row buffering on, either for the whole project
via DataStage Administrator, or individually for a job in its Job Properties dialog
box.
When you edit an IPC stage, the InterProcess Stage dialog box appears.
This dialog box has three pages:
? Stage. The Stage page has two tabs, General and Properties. The General page
allows you to specify an optional description of the page. The Properties tab
allows you to specify stage properties.
? Inputs. The IPC stage can only have one input link. The Inputs page displays
information about that link.
? Outputs. The IPC stage can only have one output link. The Outputs page displays
information about that link.
The Properties tab allows you to specify two properties for the IPC stage:
? Buffer Size. Defaults to 128 KB. The IPC stage uses two blocks of memory; one
block can be written to while the other is read from. This property defines the
size of each block, so that by default 256 KB is allocated in total.
? Timeout. Defaults to 10 seconds. This gives a time limit for how long the stage
will wait for a process to connect to it before timing out. This normally will not
need changing, but it may be important when you are prototyping multi-processor
jobs on single-processor platforms and there are likely to be delays.
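The double-buffered handover can be modelled with a bounded queue between two concurrent workers; this is a generic producer/consumer sketch, not the actual IPC implementation, and the buffer size and timeout values merely stand in for the stage's Buffer Size and Timeout properties.

```python
import queue
import threading

# A bounded buffer lets the writing side and the reading side run
# concurrently instead of strictly one after the other.
buffer = queue.Queue(maxsize=2)  # two "blocks", as in the IPC stage
results = []

def writer():
    for row in range(5):
        buffer.put(row)   # blocks when both buffer slots are full
    buffer.put(None)      # end-of-data marker

def reader():
    while True:
        row = buffer.get(timeout=10)  # analogous to the Timeout property
        if row is None:
            break
        results.append(row)

t1, t2 = threading.Thread(target=writer), threading.Thread(target=reader)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 2, 3, 4]
```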
The IPC stage can have one input link. This is where the process that is writing
connects.
The Inputs page has two tabs: General and Columns.
? General. The General tab allows you to specify an optional description of the
stage.
? Columns. The Columns tab contains the column definitions for the data on the
input link. This is normally populated by the metadata of the stage connecting on
the input side. You can also Load a column definition from the Repository, or type
one in yourself (and Save it to the Repository if required). Note that the metadata
on the input link must be identical to the metadata on the output link.
The IPC stage can have one output link. This is where the process that is reading
connects.
The Outputs page has two tabs: General and Columns.
? General. The General tab allows you to specify an optional description of the
stage.
? Columns. The Columns tab contains the column definitions for the data on the
input link. This is normally populated by the metadata of the stage connecting on
the input side. You can also Load a column definition from the Repository, or type
one in yourself (and Save it to the Repository if required). Note that the metadata
on the output link must be identical to the metadata on the input link.
The Link Partitioner stage is an active stage which takes one input and allows you
to distribute partitioned rows to up to 64 output links. The stage expects the
output links to use the same metadata as the input link. Partitioning your data
enables you to take advantage of a multi-processor system and have the data
processed in parallel. It can be used in conjunction with the Link Collector stage
to partition data, process it in parallel, and then collect it together again
before writing it to a single target.
In order for this job to compile and run as intended on a multi-processor system
you must have inter-process buffering turned on, either at project level using the
DataStage Administrator, or at job level from the Job Properties dialog box.
The Properties tab allows you to specify two properties for the Link Partitioner
stage:
? Partitioning Algorithm. Use this property to specify the method the stage uses to
partition data. Choose from:
? Round-Robin. This is the default method. Using the round-robin method the stage
will write each incoming row to one of its output links in turn.
? Random. Using this method the stage will use a random number generator to
distribute incoming rows evenly across all output links.
? Hash. Using this method the stage applies a hash function to one or more input
column values to determine which output link the row is passed to.
? Modulus. Using this method the stage applies a modulus function to an integer
input column value to determine which output link the row is passed to.
? Partitioning Key. This property is only significant where you have chosen a
partitioning algorithm of Hash or Modulus. For the Hash algorithm, specify one or
more column names separated by commas. These keys are concatenated and a hash
function applied to determine the destination output link. For the Modulus
algorithm, specify a single column name which identifies an integer numeric column.
The value of this column determines the destination output link.
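The four algorithms can be sketched as follows (plain Python; CRC32 merely stands in for whatever hash function the stage actually uses, and the column names are invented):

```python
import zlib

N_LINKS = 4  # the stage supports up to 64 output links; 4 keeps this small

def round_robin(i, row):
    """Default method: write each incoming row to the next link in turn."""
    return i % N_LINKS

def hash_partition(row, keys):
    """Hash: concatenate the key columns and hash to pick a link."""
    concat = "|".join(str(row[k]) for k in keys)
    return zlib.crc32(concat.encode()) % N_LINKS

def modulus(row, key):
    """Modulus: integer column value mod the number of links."""
    return row[key] % N_LINKS

row = {"cust_id": 10, "region": "EU"}  # illustrative columns
print(round_robin(7, row), hash_partition(row, ["region"]), modulus(row, "cust_id"))
```

A Random algorithm would simply draw the link number from a random number generator; the key property of Hash and Modulus, by contrast, is that rows with the same key value always land on the same output link.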
The Link Collector stage is an active stage which takes up to 64 inputs and allows
you to collect data from these links and route it along a single output link. The
stage expects the output link to use the same metadata as the input links. The Link
Collector stage can be used in conjunction with a Link Partitioner stage to enable
you to take advantage of a multi-processor system and have data processed in
parallel. The Link Partitioner stage partitions the data, the data is processed in
parallel, and then the Link Collector stage collects it together again before
writing it to a single target.
The following diagram illustrates how the Link Collector stage can be used in a job
in this way.
In order for this job to compile and run as intended on a multi-processor system
you must have inter-process buffering turned on, either at project level using the
Data Stage Administrator, or at job level from the Job Properties dialog box.
The Properties tab allows you to specify two properties for the Link Collector
stage:
? Collection Algorithm. Use this property to specify the method the stage uses to
collect data. Choose from:
? Round-Robin. This is the default method. Using the round-robin method the stage
will read a row from each input link in turn.
? Sort/Merge. Using the sort/merge method the stage reads multiple sorted inputs
and writes one sorted output.
? Sort Key. This property is only significant where you have chosen a collection
algorithm of Sort/Merge. It defines how each of the partitioned data sets is known
to be sorted and how the merged output will be sorted. The key has the following
format: Column name [sort order] [, Column name [sort order]]...
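Both collection algorithms can be sketched in a few lines (plain Python, with illustrative data standing in for three sorted input links):

```python
import heapq

# Three already-sorted input links.
inputs = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]

# Round-robin: read one row from each input link in turn.
round_robin = [link[i] for i in range(3) for link in inputs]

# Sort/Merge: merge the sorted inputs into one sorted output,
# ordered as the Sort Key property specifies.
sort_merge = list(heapq.merge(*inputs))

print(round_robin)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(sort_merge)   # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With these particular inputs the two methods happen to produce the same sequence; in general, round-robin preserves no ordering, while Sort/Merge guarantees a sorted output provided each input is sorted on the Sort Key.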
________________________________________
INFORMATICA vs DATASTAGE:
________________________________________
Deployment Facility
- Ability to handle initial deployment, major releases, minor releases and patches
with equal ease: Informatica - Yes; DataStage - No.
Comment: My experience has been that INFA is definitely easier to implement
initially and upgrade. Ascential has done a good job in recent releases.
Transformations
- Number of available transformation functions: Informatica - 58; DataStage - 28.
Comment: DS has many more canned transformation functions than 28.
- Support for looping the source row (For/While loop): Informatica - supports
comparing the immediate previous record; DataStage - does not support.
- Slowly Changing Dimensions: Informatica - full history, recent values, current &
previous values; DataStage - supports only through custom scripts, and does not
have a wizard to do this.
Comment: DS has a component called ProfileStage that handles this type of
comparison. You'll want to use it judiciously in your production processing
because it does take extra resources, but I have found it to be very useful.
- Time Dimension generation: Informatica - does not support; DataStage - does not
support.
- Rejected records: Informatica - can be captured; DataStage - cannot be captured
in a separate file.
Comment: DS absolutely has the ability to capture rejected records in a separate
file. That's a pretty basic capability and I don't know of any ETL tool that
can't do it.
- Debugging facility: Informatica - not supported; DataStage - supports basic
debugging facilities for testing.
Metadata
- Ability to view & navigate metadata on the web: Informatica - does not support;
DataStage - job sessions can be monitored using Informatica Classes.
Comment: This is completely untrue. DS has a very strong metadata component
(MetaStage) that works not only with DS, but also has plug-ins to work with
modeling tools (like ERwin) and BI tools (like Cognos). This is one of their
strong suits (again, IMHO).
- Ability to customize views of metadata for different users (DBA vs. business
user): Informatica - supports; DataStage - not available.
Comment: Also not true - MetaStage allows publishing of metadata in HTML format
for different types of users. It is completely customizable.
- Metadata repository can be stored in an RDBMS: Informatica - Yes; DataStage -
No, but the proprietary metadata can be moved to an RDBMS using the DOC Tool.
________________________________________
1) System Requirement
1.1 Platform Support
1.1.1 Informatica: Win NT/ Unix
1.1.2 DataStage: Win NT/ Unix/More platforms.
2) Deployment facility
2.1. Ability to handle initial deployment, major releases, minor releases and
patches with equal ease
2.1.1.Informatica:. Yes
2.1.2.DataStage: No
My experience has been that INFA is definitely easier to implement initially and
upgrade. Ascential has done a good job in recent releases to improve, but IMHO
INFA still does this better.
3) Transformations
3.1. No of available transformation functions
3.1.1.Informatica:. 58
3.1.2.DataStage: 28
DS has many more canned transformation functions than 28. I'm not sure what leads
you to this number, but I'd recheck it if I were you.
3.2. Support for looping the source row (For While Loop)
3.2.1.Informatica:. Supports for comparing immediate previous record
3.2.2.DataStage: Does not support
5) Metadata
5.1. Ability to view & navigate metadata on the web
5.1.1..Informatica:. Does not support
5.1.2.DataStage: Job sessions can be monitored using Informatica
Classes
________________________________________
? Both DataStage and Informatica support XML. DataStage comes with XML input,
transformation and output stages.
? Both products have an unlimited number of transformation functions since you
can easily write your own using the command interface.
? Both products have options for integrating with ERP systems such as SAP,
PeopleSoft and Siebel, but these come at a significant extra cost. You may need to
evaluate these. SAP is a reseller of DataStage for SAP BW, and PeopleSoft bundles
DataStage in its EPM products.
? DataStage has some very good debugging facilities, including the ability to
step through a job link by link or row by row and watch data values as a job
executes. It also supports server-side tracing.
? DataStage 7.x releases have intelligent assistants (wizards) for creating the
template jobs for each type of slowly changing dimension table loads. The DataStage
Best Practices course also provides training in DW loading with SCD and surrogate
key techniques.
? Ascential and Informatica both have robust metadata management products.
Ascential MetaStage comes bundled free with DataStage Enterprise and manages
metadata via a hub and spoke architecture. It can import metadata from a wide range
of databases and modelling tools and has a high degree of interaction with
DataStage for operational metadata. Informatica SuperGlue was released last year
and is rated more highly by Gartner in the metadata field. It integrates closely
with PowerCenter products. They both support multiple views (business and
technical) of metadata plus the functions you would expect such as impact analysis,
semantics and data lineage.
? DataStage can send emails. The sequence job has an email stage that is easy to
configure. DataStage 7.5 also has new mobile device support, so you can administer
your DataStage jobs via a Palm Pilot. There are also 3rd-party web-based tools that
let you run and review jobs over a browser. I found it easy to send SMS admin
messages from a DataStage Unix server.
? DataStage has a command line interface. The dsjob command can be used by any
scheduling tool or from the command line to run jobs and check the results and logs
of jobs.
? Both products integrate well with Trillium for data quality; DataStage also
integrates with QualityStage for data quality. This is the preferred method of
address cleansing and fuzzy matching.
________________________________________
Milind - I've got to ask - where are you getting your information from??? I have
done ETL tool comparisons for several clients over the past 7 or so years. They are
both good tools with different strengths, so it really depends on what your
organization's needs and priorities are as to which one is "better". I have spent
much more time in the past couple of years on DS than INFA, so I don't feel I can
speak to the changes INFA has made lately, but I know you have incorrect info about
DS. I am currently working with a client on DS v7.1. I've made a few comments below
for the more glaring inaccuracies or topics where I have up-to-date experience. I
suggest you re-research and perhaps do a proof-of-concept with each vendor.
FYI - I don't know if you have looked at the Parallel Extender component of DS 7.x,
but it is a terrific capability if you have challenges with meeting availability
requirements. It is one of the most impressive changes Ascential has made lately
(IMHO).
________________________________________
Gartner has vendor reports on Ascential and Informatica. They also have a magic
quadrant that lists both DataStage and Informatica as the clear market leaders. I
don't think you can go wrong with either product; it comes down to whether you can
access experts in these products for your project and what options you have for
training. I think if you go into a major project with either product and you don't
have an expert on your team it can go badly wrong.
Currently, our data warehouse has only Type 1 Slowly Changing Dimensions (SCD).
That is to say, we overwrite the dimension record with every update. The problem
with that is when data changes, it changes for all history. While this is valid for
data entry corrections, it may not be valid for all data. An acceptable example
could be Customer Date of Birth: if the date of birth was changed, chances are the
reason was that the data was incorrect.
However, if the customer address were changed, this may and probably does mean the
customer moved. If we simply overwrite the address, then all sales for that
customer will belong to the new address. Suppose the customer moved from Florida to
Ohio. If we were trying to track sales patterns by region, all of the customer's
purchases that were made in Florida would now appear to have been made in Ohio.
In the example above, the DOB change doesn't affect any dimensional reporting
facts. However, the City, State change would have an effect. Now all sales for Bob
Smith would appear to come from Dayton, Ohio rather than from Tampa, Florida.
The solution we have chosen for solving this problem is to implement a Type 2
slowly changing dimension. A Type 2 SCD records a separate row each time a value is
changed in the dimension. In our case, we are declaring that we will only create a
new dimension record when certain columns are changed. In the example above, we
would not record a new record for the DOB change but we would for the address
change.
As you can see, there are two dimension records for Bob Smith now. They both have
the same CustKey value, but they have different ID values. All future fact table
rows will use the new ID to link to the Customer dimension. This is accomplished by
the use of the Current Flag. The ETL process looks only at the current flag when
recording new orders. However, in the case of an update to an order, the Effective
Date must be used to determine which customer record the update applies to.
The primary issue with Type 2 SCD is that the volume of data grows with every
tracked change, which can impact performance in a star schema. The principle behind
the star schema design is that while fact tables have few columns, they have many
rows, but they only have to perform single-level joins to resolve their dimensions.
The assumption is that the dimensions have lots of columns but relatively few rows.
This allows for very fast joining of data.
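The Type 2 mechanics described above (expire the current row, insert a new version, keep a current flag and effective date) can be sketched outside DataStage. This is an illustrative Python sketch, not the DataStage implementation; the column names and dates are made up for the example:

```python
from datetime import date

# Each dimension row carries a surrogate ID, the natural key, the tracked
# attributes, plus the Type 2 columns: effective date and current flag.
dim = [
    {"id": 1001, "cust_key": "BS001", "city": "Tampa", "state": "FL",
     "eff_date": date(2003, 1, 1), "current": True},
]
next_id = 1002

def apply_scd2(row):
    """Insert a new dimension version only when a tracked column changed."""
    global next_id
    current = next((r for r in dim
                    if r["cust_key"] == row["cust_key"] and r["current"]), None)
    if current and (current["city"], current["state"]) == (row["city"], row["state"]):
        return current["id"]          # no tracked change: reuse the current row
    if current:
        current["current"] = False    # expire the old version
    new_row = {"id": next_id, "cust_key": row["cust_key"],
               "city": row["city"], "state": row["state"],
               "eff_date": row["eff_date"], "current": True}
    dim.append(new_row)
    next_id += 1
    return new_row["id"]

# Bob Smith moves from Tampa, FL to Dayton, OH: a second version is created
# and all future fact rows pick up the new surrogate ID.
new_id = apply_scd2({"cust_key": "BS001", "city": "Dayton", "state": "OH",
                     "eff_date": date(2004, 6, 1)})
```

A DOB-style change would simply be left out of the tracked-column comparison, so it would overwrite in place rather than create a new version.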
Conforming Dimensions
For the purposes of this discussion conforming dimensions only need a brief
definition. Conforming dimensions are a feature of star schemas that allow facts to
share dimensional data. A conforming dimension occurs when two dimensions share the
same keys. Often they have different attributes. The goal is to ensure that any
fact table can link to the conforming dimension and consume its data so long as the
dimension is relevant.
Conforming Dimension
Customer Dimension
ID    CustKey  Name        DOB         City   State
1001  BS001    Bob Smith   6/8/1961    Tampa  FL
1002  LJ004    Lisa Jones  10/15/1954  Miami  FL
Billing Dimension
ID    Bill2Ky  Name        Account Type  Credit Limit  CustKey
1001  9211     Bob Smith   Credit        $10,000       BS001
1002  23421    Lisa Jones  Cash          $100          LJ004
In the example above, we could use the ID from the Customer dimension in a fact and
in the future a link to the Billing dimension could be established without having
to reload the data.
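The point of sharing keys can be sketched in a few lines. This is an illustrative Python sketch using the sample rows above (the dictionary layout is an assumption, not a DataStage structure): because the Billing dimension carries the same CustKey, a fact keyed on the Customer dimension can later resolve billing attributes without reloading.

```python
# Conformed dimensions share keys: the Billing dimension carries the same
# CustKey as the Customer dimension, so a fact keyed on the Customer ID
# can be linked to Billing attributes later without reloading the fact.
customer_dim = {1001: {"cust_key": "BS001", "name": "Bob Smith"},
                1002: {"cust_key": "LJ004", "name": "Lisa Jones"}}
billing_dim = [{"id": 1001, "cust_key": "BS001", "account_type": "Credit"},
               {"id": 1002, "cust_key": "LJ004", "account_type": "Cash"}]

fact = {"customer_id": 1001, "amount": 250.00}

# Resolve billing attributes through the shared key:
cust_key = customer_dim[fact["customer_id"]]["cust_key"]
billing = next(b for b in billing_dim if b["cust_key"] == cust_key)
```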
We are considering a slight modification to the standard Type 2 SCD. The idea is to
maintain two dimensions: one as a Type 1 and one as a Type 2. The problem with this
is that we lose the ability to use conforming dimensions.
As you can see, the current ID for Bob Smith in the Type 1 SCD is 1001, while it is
1003 in the Type 2 SCD. This is not conforming.
In the example above, the Type 1 and the Type 2 dimensions conform on the ID level.
If a fact needs the historical data it will consume both the ID and the SubKey.
________________________________________
Rejected rows are written to the bad file. The main reason for rejected rows is an
integrity constraint in the target table; for example, null values in NOT NULL
columns, nonunique values in UNIQUE columns, and so on. The bad file is in the same
format as the input data file.
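The reject mechanism above can be sketched generically. This is an illustrative Python sketch, not the loader's actual implementation; the column names are hypothetical. Rows violating a NOT NULL or UNIQUE constraint are diverted to a bad list in the same shape as the input:

```python
# Split input rows into loadable rows and a "bad file", rejecting rows that
# violate a NOT NULL constraint or a UNIQUE constraint on the id column.
rows = [{"id": 1, "name": "a"}, {"id": None, "name": "b"},
        {"id": 1, "name": "c"}, {"id": 2, "name": "d"}]

loaded, bad, seen = [], [], set()
for row in rows:
    if row["id"] is None or row["id"] in seen:   # NOT NULL / UNIQUE checks
        bad.append(row)                          # kept in the input format
    else:
        seen.add(row["id"])
        loaded.append(row)
```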
? String operators for:
? Concatenating strings with Cat or :
? Extracting sub strings with [ ]
"Hello. " : "My name is " : X : ". What's yours?"
... evaluates to (when X contains "Tarzan"):
"Hello. My name is Tarzan. What's yours?"
Field Function: Returns delimited substrings in a string
MyString = "London+0171+NW2+AZ"
SubString = Field(Mystring, "+", 2, 2)
* returns "0171+NW2"
A = "12345"
A[3] = 1212
The result is "121212" (A[3] denotes the last 3 characters, which the assignment replaces).
MyString = "1#2#3#4#5"
String = Fieldstore(MyString, "#", 2, 2, "A#B")
* above results in: "1#A#B#4#5"
Operator Relation Example
Eq or = Equality X = Y
Ne or # or >< or <> Inequality X # Y, X <> Y
Lt or < Less than X < Y
Gt or > Greater than X > Y
Le or <= or =< or #> Less than or equal to X <= Y
Ge or >= or => or #< Greater than or equal to X >= Y
You cannot use relational operators to test for a null value. Use the IsNull
function instead.
Tests if a variable contains a null value.
MyVar = @Null ;* sets variable to null value
If IsNull(MyVar * 10) Then
* Will be true since any arithmetic involving a null value
* results in a null value.
End
IF Operator:
Assigns a value that meets the specified conditions
Date( ) :
Returns a date in its internal system format.
This example shows how to turn the current date in internal form into a string
representing the next day:
Tomorrow = Oconv(Date() + 1, "D4/YMD") ;* "1997/5/24"
Ereplace Function:
Replaces one or more instances of a substring.
Syntax
Ereplace (string, substring, replacement [ ,number [ ,begin] ] )
MyString = "AABBCCBBDDBB"
NewString = Ereplace(MyString, "BB", "")
* The result is "AACCDD"
Fmt Function: Formats data for output.
X = FMT("1234567", "14R2")     X = "    1234567.00"
X = FMT("1234567", "14R2$,")   X = " $1,234,567.00"
X = FMT("12345", "14*R2$,")    X = "****$12,345.00"
X = FMT("1234567", "14L2")     X = "1234567.00    "
X = FMT("0012345", "14R") X = "0012345"
X = FMT("0012345", "14RZ") X = "12345"
X = FMT("00000", "14RZ") X = " "
X = FMT("12345", "14'0'R") X = "00000000012345"
X = FMT("ONE TWO THREE", "10T") X = "ONE TWO ":T:"THREE "
X = FMT("ONE TWO THREE", "10R") X = "ONE TWO TH":T:"REE "
X = FMT("AUSTRALIANS", "5T") X = "AUSTR":T:"ALIAN":T:"S "
X = FMT("89", "R#####") X = " 89"
X = FMT("6179328323", "L###-#######") X = "617-9328323"
X = FMT("123456789", "L#3-#3-#3") X = "123-456-789"
X = FMT("123456789", "R#5") X = "56789"
X = FMT("67890", "R#10") X = " 67890"
X = FMT("123456789", "L#5") X = "12345"
X = FMT("12345", "L#10") X = "12345 "
X = FMT("123456", "R##-##-##") X = "12-34-56"
X = FMT("555666898", "20*R2$,") X = "*****$555,666,898.00"
X = FMT("DAVID", "10.L") X = "DAVID....."
X = FMT("24500", "10R2$Z") X = " $24500.00"
X = FMT("0.12345678E1", "9*Q") X = "*1.2346E0"
X = FMT("233779", "R") X = "233779"
Date Conversions
The following examples show the effect of various D (Date) conversion codes.
Conversion Expression Internal Value
X = Iconv("31 DEC 1967", "D") X = 0
X = Iconv("27 MAY 97", "D2") X = 10740
X = Iconv("05/27/97", "D2/") X = 10740
X = Iconv("27/05/1997", "D/E") X = 10740
X = Iconv("1997 5 27", "D YMD") X = 10740
X = Iconv("27 MAY 97", "D DMY[,A3,2]") X = 10740
X = Iconv("5/27/97", "D/MDY[Z,Z,2]") X = 10740
X = Iconv("27 MAY 1997", "D DMY[,A,]") X = 10740
X = Iconv("97 05 27", "DYMD[2,2,2]") X = 10740
Date Conversions
The following examples show the effect of various D (Date) conversion codes.
Conversion Expression External Value
X = Oconv(0, "D") X = "31 DEC 1967"
X = Oconv(10740, "D2") X = "27 MAY 97"
X = Oconv(10740, "D2/") X = "05/27/97"
X = Oconv(10740, "D/E") X = "27/05/1997"
X = Oconv(10740, "D-YJ") X = "1997-147"
X = Oconv(10740, "D2*JY") X = "147*97"
X = Oconv(10740, "D YMD") X = "1997 5 27"
X = Oconv(10740, "D MY[A,2]") X = "MAY 97"
X = Oconv(10740, "D DMY[,A3,2]") X = "27 MAY 97"
X = Oconv(10740, "D/MDY[Z,Z,2]") X = "5/27/97"
X = Oconv(10740, "D DMY[,A,]") X = "27 MAY 1997"
X = Oconv(10740, "DYMD[2,2,2]") X = "97 05 27"
X = Oconv(10740, "DQ") X = "2"
X = Oconv(10740, "DMA") X = "MAY"
X = Oconv(10740, "DW") X = "2"
X = Oconv(10740, "DWA") X = "TUESDAY"
OpenSeq ".\ControlFiles\File1" To PathFvar Locked
   FilePresent = @True
End Then
   FilePresent = @True
End Else
   FilePresent = @False
End
Example
This example shows how a before/after routine must be declared as a subroutine at
DataStage release 2. The DataStage Manager will automatically ensure this when you
create a new before/after routine.
Subroutine MyRoutine(InputArg, ErrorCode)
* Users can enter any string value they like when using
* MyRoutine from within the Job Designer. It will appear
* in the variable named InputArg.
* The routine controls the progress of the job by setting
* the value of ErrorCode, which is an Output argument.
* Anything non-zero will stop the stage or job.
ErrorCode = 0 ;* default reply
* Do some processing...
...
Return
MyStr = Trim(" String with whitespace ")
* ...returns "String with whitespace"
MyStr = Trim("..Remove..redundant..dots....", ".")
* ...returns "Remove.redundant.dots"
MyStr = Trim("Remove..all..dots....", ".", "A")
* ...returns "Removealldots"
MyStr = Trim("Remove..trailing..dots....", ".", "T")
* ...returns "Remove..trailing..dots"
This list groups BASIC functionality under tasks to help you find the right
statement or function to use:
? Compiler Directives
? Declaration
? Job Control/Job Status
? Program Control
? Sequential Files Processing
? String Verification and Formatting
? Substring Extraction and Formatting
? Data Conversion
? Data Formatting
? Locales
Function MyTransform(Arg1)
Begin Case
Case Arg1 = 1
Reply = "A"
Case Arg1 = 2
Reply = "B"
Case Arg1 > 2 And Arg1 < 11
Reply = "C"
Case @True ;* all other values
Call DSTransformError("Bad arg":Arg1, "MyTransform")
Reply = ""
End Case
Return(Reply)
________________________________________
New and expanded functionality to aid DataStage users in job design and debugging.
A dialog that explains the special environment variable values and what they are
for is available by double-clicking in:
- the Job Properties dialog, Parameters tab, when editing the Default Value cell
for a job parameter defined as an environment variable.
- the Admin Client, Environment dialog, when editing a value cell.
Article-II:
? Transformer 'Cancel' operation:
If the Cancel button or <ESC> key is pressed from the main Transformer dialog and
changes have been made, a confirmation message box is displayed to check that the
user wants to quit without saving the changes. If no changes have been made, no
confirmation message is displayed.
? Multi-Client Manager:
The previously unsupported 'Client Switcher' tool has been enhanced and integrated
into the DataStage Client. This tool allows users to install and switch between
multiple different versions of the client. Switching between them also changes the
desktop shortcuts and the Start Menu group to point to another installed DataStage
client.
Enterprise Edition:
Message Handlers allow the user to customize the severity of individual messages
and can be applied at project or job level. Messages can be suppressed from the log
(Information and Warning messages only), promoted (from Information to Warning) or
demoted (from Warning to Information). A message handler management tool (available
from DS Manager and Director) provides options to edit, add or delete message
handlers. A new Director option allows message handling to be enabled/disabled for
the current job.
________________________________________
DATASTAGE & DWH INTERVIEW QUESTIONS
________________________________________
Basic DWH:
DataStage:
27. How do you import your source and targets? What are the types of sources and
targets?
28. What do Active Stages and Passive Stages mean in DataStage?
29. What is difference between Informatica and DataStage? Which do you think is
best?
30. What are the stages you used in your project?
31. Whom do you report?
32. What is Orchestrate? What is the difference between Orchestrate and DataStage?
33. What is Parallel Extender? Have you worked on it?
34. What do you mean by parallel processing?
35. What is difference between Merge Stage and Join Stage?
36. What is difference between Copy Stage and Transformer Stage?
37. What is difference between ODBC Stage and OCI Stage?
38. What is difference between Lookup Stage and Join Stage?
39. What is difference between Change Capture Stage and Difference Stage?
40. What is difference between Hashed file and Sequential File?
41. What are different Joins used in Join Stage?
42. How you decide when to go for join stage and lookup stage?
43. What is partition key? Which key is used in round robin partition?
44. How do you handle SCD in datastage?
45. What are Change Capture Stage and Change Apply Stages?
46. How many streams to the transformer you can give?
47. What is primary link and reference link?
48. What is a routine? What are before and after subroutines? Are these run
before/after a job or a stage?
49. Have you written any subroutines in your project?
50. What is a Config File? Does each job have its own config file, or is only one
needed?
51. What is Node?
52. What is IPC Stage? Why does it increase performance?
53. What is Sequential buffer?
54. What are Link Partitioner and Link Collector?
55. What performance tuning have you done in your project?
56. Have you done scheduling? How? Can you schedule a job on the last date of
every month? How?
57. What is job sequence? Have you run any jobs?
58. What is status view? Why do you clear it? If you clear the status view, what
is done internally?
59. What is a hashed file? What are the types of hashed file (static and dynamic)?
Which did you use? What is the default? What is the main advantage of a hashed
file? What is the difference between them?
60. What are containers? Give example from your project.
61. Have you done any hardware configuration while running parallel jobs?
62. What are operators in parallel jobs?
63. What are parameters and parameter file?
64. Can you use variables? In which stages?
65. How do you convert columns to rows and rows to columns in datastage? (Using
Pivot Stage).
66. What is Pivot Stage?
67. What is the execution flow of constraints, derivations and variables in the
transformer stage? What are these?
68. How do you eliminate duplicates in DataStage? Can you use a hash file for it?
69. If the 1st and 8th records are duplicates, which will be skipped? Can you
configure it?
70. How do you import and export datastage jobs? What is the file extension?
(See each component while importing and exporting).
71. How do you rate yourself in DataStage?
72. Explain DataStage Architecture?
73. What is repository? What are the repository items?
74. What is difference between routine and transform?
75. I have 10 tables with four key column values; in this situation a lookup is
necessary, but which type of lookup is used, ODBC or hashed file lookup?
Why?
76. When do you write routines?
77. In one project how many shared containers are created?
78. How do you protect your project?
79. What is the complex situation you faced in DataStage?
80. How will you move a hashed file from one location to another?
81. How will you create a static hashed file?
82. How many jobs have you done in your project? Explain one complex job.
________________________________________
1. All about company details, project details, and client details, sample data
of your source?
2. DataStage Architecture?
3. What are system variables? Which system variables did you use in your project?
4. What are the different DataStage functions used in your project?
5. Difference between star schema and snowflake schema?
6. What are conformed, degenerate and junk dimensions?
7. What are conformed facts?
8. Different type of facts and their examples?
9. What are approaches in developing data warehouse?
10. Different types of hashed files?
11. What are routines and transforms? How you used in your project?
12. Difference between Data Mart and Data Warehouse?
13. What is surrogate key? How do you generate it?
14. What are environment variables and global variables?
15. How do you improve the performance of the job?
16. What is SCD? How did you develop SCD Type 1 and SCD Type 2?
17. Why do you go for an Oracle sequence to generate the surrogate key rather than
DataStage routines?
18. How do you generate a surrogate key in DataStage?
19. What is job sequence?
20. What are plug-ins?
21. How much data do you get every day?
22. What is the biggest table and size in your schema or in your project?
23. What is the size of data warehouse (by loading data)?
24. How do you improve the performance of the hashed file?
25. What is IPC Stage?
26. What are the different types of stages and used in your project?
27. What are the operations you can do in IPC Stage and transformer stage?
28. What is merge stage? How do you merge two flat files?
29. I have two tables; one table contains 100 records and the other contains 1000
records. Which table is the master table? Why?
30. I have one job from one flat file, and I have to load data to a database.
There are 10 lakh records; after loading 9 lakh the job is aborted. How do you
load the remaining records?
31. Which data your project contains?
32. What is the source in your project?
________________________________________
Data stage:
1. What is ETL Architecture?
2. Explain your project Architecture?
3. How many Data marts and how many facts and dimensions Available in your
project?
4. What is the size of your Data mart?
5. How many types of loading techniques are available?
6. Before going to design jobs in DataStage, what are the preceding steps?
7. What is the Architecture of DataStage?
8. What is the main difference between the different client components in
DataStage?
9. What are the different stages you have worked on?
10. Can I call procedures in DataStage? If so, how do you call stored procedures
in DataStage?
11. What is the difference between a sequential file and a hash file? Can we use a
sequential file as a lookup? Can we put filter conditions on a sequential file?
12. What are the differences between DRS Stage and ODBC Stage? Which one is best
for performance?
13. What are the different performance tuning aspects in DataStage?
14. How do you remove duplicates in a flat file?
15. What is the difference between inter-process and in-process? Which one is
best?
16. What is CRC32? In which situations do you go for CRC32?
17. What is a Pivot Stage? Can you explain a scenario where it was used in your
project?
18. What are Row Splitter and Row Merger? Can I use them separately?
19. If one user locked a resource, how do you release the particular job?
20. What is version control in DataStage?
21. What is the difference between clearing the log file and clearing the status
file?
22. How do you schedule jobs without using DataStage?
23. What is the difference between a static hash file and a dynamic hash file?
24. How do you do error handling in DataStage?
25. What is the difference between an active stage and a passive stage? What are
the active and passive stages?
26. How do you set environment variables in DataStage?
27. What is a job control routine? How do you set job parameters in DataStage?
28. How do you release a job?
29. How do you do auto-purge in DataStage?
30. What is the difference between DataStage 7.1 and 7.5?
________________________________________
DWH FAQ:
Conformed dimension:
? A dimension table that connects to more than one fact table. We present the same
dimension table in both schemas and refer to it as a conformed dimension.
Conformed fact:
? When definitions of measurements (facts) are highly consistent, we call them
conformed facts.
Junk dimension:
? A convenient grouping of random flags and aggregates to get them out of a fact
table and into a useful dimensional framework.
Degenerate dimension:
? Usually occurs in line-item-oriented fact table designs. Degenerate dimensions
are normal, expected and useful.
? The degenerate dimension key should be the actual production order number and
should sit in the fact table without a join to anything.
Time dimension:
? It contains a number of useful attributes for describing calendars and
navigating.
? An explicit time dimension is required because SQL date semantics and functions
cannot generate several important features and attributes required for analytical
purposes.
? Attributes like weekdays, weekends, holidays and fiscal periods cannot be
generated by SQL statements.
Types of facts:
? Additive: facts that can be involved in calculations for deriving summarized
data across all dimensions.
? Semi-additive: facts that can be involved in calculations only at a particular
context of time.
? Non-additive: facts that cannot be involved in calculations at any point of
time.
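The distinction between additive and semi-additive facts can be shown with a small worked example. This is an illustrative Python sketch with made-up rows; sales_amount stands in for an additive fact and account_balance for a semi-additive one:

```python
# Hypothetical daily fact rows: "sales" is additive (summable over every
# dimension); "balance" is semi-additive (summable across stores for one
# day, but summing across days would double-count the balance).
facts = [{"day": 1, "store": "A", "sales": 100, "balance": 500},
         {"day": 2, "store": "A", "sales": 150, "balance": 520},
         {"day": 1, "store": "B", "sales": 200, "balance": 300},
         {"day": 2, "store": "B", "sales": 250, "balance": 310}]

# Additive: total sales over all days and stores is meaningful.
total_sales = sum(f["sales"] for f in facts)

# Semi-additive: balances are summed across stores for ONE day only.
day2_balance = sum(f["balance"] for f in facts if f["day"] == 2)
```

A non-additive fact, such as a ratio or a unit price, could not be summed in either direction.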
________________________________________
DATASTAGE ROUTINES
________________________________________
BL:
BOT v2.3.0 Returns BLANK if passed value is NOT NULL or BLANK, after trimming
spaces
DataIn = "":Trim(Arg1)
CheckFileRecords:
Function CheckFileRecords(Arg1,Arg2)
Loop
CloseSeq FileVar
Ans=vCountVal
Return (vCountVal)
CheckFileSizes:
DIR = "/interface/dashboard/dashbd_dev_dk_int/Source/"
FNAME = "GLEISND_OC_02_20040607_12455700.csv"
Ans = Output
CheckIdocsSent:
This routine will attempt to read the DataStage Director log for the job name
specified as an argument.
If the job has a fatal error with "No link file", the routine will copy the IDOC
link file(s) into the interface error folder.
In case the fatal error above is not found the routine aborts the job.
A simple log of which runs produce error link files is maintained in the module's
log directory.
If System(91) Then
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
Move = 'move '
End Else
OsType = 'UNIX'
OsDelim = '/'
NonOsDelim = '\'
Move = 'mv -f '
End
vErr = DSDetachJob(vJobHandle)
Call DSLogInfo("Job " : JobName : " Detached" , vRoutineName)
***** Make a log entry to keep track of how often the pack doesn't work ********
Repeat
End Else
Call DSLogInfo("Could not open file - " : vIdocLogFilePath , vRoutineName)
Call DSLogInfo("Creating new file - " : vIdocLogFilePath , vRoutineName)
CREATE vIdocLogFile ELSE Call DSLogFatal("Could not create file - " :
vIdocLogFilePath , vRoutineName)
WEOFSEQ vIdocLogFile
WRITESEQ Fmt("Module Run", "12' 'L") : Fmt("Status", "10' 'L") : " " : "Message" To
vIdocLogFile Else ABORT
Call DSLogInfo("Log file created : " : vIdocLogFilePath , vRoutineName)
GOTO FileCreated
End
**** Abort the delivery sequence and write error message to the log. ************
If Status = 'NOT SENT' Then
Call DSLogInfo("No Idocs were actually sent to SAP - Trying to clean up IDOC Link
Files: ", vRoutineName)
vIdocSrcLinkPath = Field(Interface_Root_Path_Parm, OsDelim, 1, 4) : OsDelim :
"dsproject" : OsDelim : Field(Interface_Root_Path_Parm, OsDelim, 4, 1)
vIdocTgtLinkPath = Interface_Root_Path_Parm: OsDelim : "error"
OsCmd = Move : " " : vIdocSrcLinkPath : OsDelim : JobName : ".*.lnk " :
vIdocTgtLinkPath : OsDelim
Call DSExecute(OsType, OsCmd, OsOutput, OsStatus)
If OsStatus <> 0 Then
Call DSLogWarn("Error when trying to move link file(s)", vRoutineName)
LogMessMoveFail = 'The move command (':OsCmd:') returned status
':OsStatus:':':@FM:OsOutput
Call DSLogWarn(LogMessMoveFail, vRoutineName)
Call DSLogFatal("Cleaning up of IDOC Link Files failed", vRoutineName)
End
Else
LogMessMoveOK = "Link files were moved to " : vIdocTgtLinkPath
Call DSLogInfo(LogMessMoveOK, vRoutineName)
LogMessRetry = "Job " : JobName : " is ready to be relaunched."
Call DSLogInfo(LogMessRetry, vRoutineName)
End
End Else
Call DSLogInfo("Delivery job log indicates run OK ", vRoutineName)
End
ClearMappingTable:
ComaDotRmv:
DataIn = "":(Arg1)
CopyFiles:
Function CopyFiles(SourceDir,SourceFileMask,TargetDir,TargetFileMask,Flags)
RoutineName = "CopyFiles"
If System(91) Then
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
Copy = 'copy '
Flag = Flags
End Else
OsType = 'UNIX'
OsDelim = '/'
NonOsDelim = '\'
Copy = 'cp -f '
End
SourceWorkFiles = Trims(Convert(',',@FM,SourceFileMask))
SourceFileList = Splice(Reuse(SourceDir),OsDelim,SourceWorkFiles)
TargetWorkFiles = Trims(Convert(',',@FM,TargetFileMask))
TargetFileList = Splice(Reuse(TargetDir),OsDelim,TargetWorkFiles)
Ans = OsStatus
CopyOfCompareRows:
Function CopyOfCompareRows(Column_Name,Column_Value)
vJobName=DSGetJobInfo(DSJ.ME, DSJ.JOBNAME)
vStageName=DSGetStageInfo(DSJ.ME, DSJ.ME, DSJ.STAGENAME)
vLastValue=LastValue
vNewValue=Column_Value
LastValue=vNewValue
CopyOfZSTPKeyLookup
Check if key passed exists in file passed
Arg1: Hash file to look in
Arg2: Key to look for
Arg3: Number of file to use "1" or "2"
* Routine to look to see if the key passed exists in the file passed
* If so, then the non-key field from the file is returned
* If not found, "***Not Found***" is returned
*
* The routine requires the UniVerse file named to have been created previously
*
Ans = 0
* Read the file to get the data for the key passed, if not found, return "***Not
Found***"
If Arg3 = "1"
Then
Read RetVal From SeqFile1, Arg2 Else RetVal = "***Not Found***"
End
Else
Read RetVal From SeqFile2, Arg2 Else RetVal = "***Not Found***"
End
Ans = RetVal
Create12CharTS:
Function Create12CharTS(JobName)
Ans=vDate
CreateEmptyFile:
Function CreateEmptyFile(Arg1,Arg2)
WeofSeq FileVar
CloseSeq FileVar
Ans="1"
Datetrans:
Function Datetrans(DateVal)
* Function ReverseDate(DateVal)
* Date may be in the form of DD.MM.YY i.e. 01.10.03
* convert to YYYYMMDD SAP format
DeleteFiles:
Function DeleteFiles(SourceDir,FileMask,Flags)
RoutineName = "DeleteFiles"
If SourceDir = '' Then SourceDir = '.'
If FileMask = '' And SourceDir = '' Then Return(0)
If System(91) Then
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
Delete = 'del '
End Else
OsType = 'UNIX'
OsDelim = '/'
NonOsDelim = '\'
Delete = 'rm ' : Flags : ' '
End
WorkFiles = Trims(Convert(',',@FM,FileMask))
FileList = Splice(Reuse(SourceDir),OsDelim,WorkFiles)
Ans = OsStatus
DisconnectNetworkDrive:
Function DisconnectNetworkDrive(Drive_Letter)
RoutineName = "DisconnectNetworkDrive"
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
OsCmd = 'net use ' : Drive_Letter : ": /delete"
Call DSExecute(OsType,OsCmd,OsOutput,OsStatus)
If OsStatus Then
Call DSLogWarn('The net use command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput, RoutineName)
End Else
Call DSLogInfo('Drive ' : Drive_Letter : ': disconnected', RoutineName)
End
Ans = OsStatus
DosCmd:
Function DosCmd(Cmd)
RoutineName = "DosCmd"
If System(91) Then
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
End Else
OsType = 'UNIX'
OsDelim = '/'
NonOsDelim = '\'
End
OsCmd = Cmd
DSMoveFiles:
Move files from one directory to another:
If System(91) Then
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
Move = 'move '
End Else
OsType = 'UNIX'
OsDelim = '/'
NonOsDelim = '\'
Move = 'mv -f '
End
WorkFiles = Trims(Convert(',',@FM,FileMask))
FileList = Splice(Reuse(SourceDir),OsDelim,WorkFiles)
Ans = OsStatus
Routine Name:ErrorMgmtDummy:
* FUNCTION Map(Value,FieldName,Format,Default,Msg,ErrorLogInd)
*
* Executes a lookup against a hashed file using a key
*
* Input Parameters : Arg1: Value = The Value to be Mapped or checked
* Arg2: FieldName = The Name of the field that is either the Target of the
Derivation or the sourceField that value is contained in
* Arg3: Format = The name of the Hash file containing the mapping data
* Arg4: Default = The Default value to return if value is not found
* Arg5: Msg = Any text you want stored against an error
* Arg6: SeverityInd = An Indicator to the servity Level
* Arg7: ErrorLogInd = An Indicator to indicate if errors should be logged
* Arg8: HashfileLocation = An Indicator to indicate of errors should be logged
(Note this is not yet implemented)
*
* Return Values: If the Value is not found, return value is: -1. or the Default
value if that is supplied
* If Format Table not found, return value is: -2
*
*
*
RoutineName = 'Map'
DEFFUN
LogToHashFile(ModRunNum,Ticket_Group,Ticket_Sequence,Set_Key,Table,FieldName,Key,Er
ror,Text,SeverityInd) Calling 'DSU.LogToHashFile'
If (Ans = "-1" or Ans = "-2" or UpCase(Ans)= "BLOCKED") and ErrorLogInd = "Y" Then
Ret_Code=LogToHashFile(Mod_Run_Num,Ticket_Group,Ticket_Sequence,Set_Key,Table,Field
Name,Chk_Value,Ans,Msg,SeverityInd)
End
RETURN(Ans)
FileExists:
OPENSEQ FileName TO aFile ON ERROR STOP "Cannot open file (":FileName:")" THEN
FileFound = @TRUE ;* file exists
CLOSESEQ aFile
END ELSE
FileFound = @FALSE ;* file not found
END
Ans = FileFound
FileSize:
Returns the size of a file
Function FileSize(FileName)
RoutineName = "FileSize"
FileSize = -99
OPENSEQ FileName TO aFile ON ERROR STOP "Cannot open file (":FileName:")" THEN
status FileInfo from aFile else stop
FileSize=Field(FileInfo,@FM,4)
* FileSize=FileInfo
CLOSESEQ aFile
END ELSE
FileSize = -999
END
Ans = FileSize
FindExtension:
Function FindExtension(Arg1)
File_Name=Arg1
Ans = File_Extension
FindFileSuffix:
Function FindFileSuffix(Arg1)
File_Name=Arg1
* Gets the timestamp. Doesn't handle the case where there are suffix types and
timestamp only contains 5 digits without "_" inbetween
If Index(File_Name, "_", 6) = 0 Then
MyLenRead=Index(File_Name, "_", 4) + 1
MyTimestamp = File_Name[MyLenRead,Len(File_Name)-1]
End Else
MyTimestamp = Field(File_Name,"_",5):"_":Field(File_Name,"_",6)
End
Ans = MySuffix
FindTimeStamp:
Function FindTimeStamp(Arg1)
File_Name=Arg1
* Gets the timestamp. Doesn't handle the case where there are suffix types and
timestamp only contains 5 digits without "_" inbetween
If Index(File_Name, "_", 6) = 0 Then
MyLenRead=Index(File_Name, "_", 4) + 1
Timestamp = File_Name[MyLenRead,Len(File_Name)-1]
End Else
Timestamp = Field(File_Name,"_",5):"_":Field(File_Name,"_",6)
End
Ans = Timestamp
formatCharge:
Function FormatCharge(Arg1)
vCharge=Trim(Arg1, "0", "L")
vCharge=vCharge/100
vCharge=FMT(vCharge,"R2")
Ans=vCharge
formatGCharge:
Ans=1
vLength=Len(Arg1)
vMinus=If Arg1[1,1]='-' Then 1 Else 0
If Arg1='0.00' Then
Ans=Arg1
End
Else
If vMinus=1 Then
vString=Arg1[2,vLength-1]
vString='-':Trim(vString, '0','L')
End
else
vString=Trim(Arg1, '0','L')
end
Ans=vString
End
FTPFile:
* FUNCTION FTPFile(Script_Path,File_Path,File_Name,IP_Address,
User_ID,Password,Target_Path)
*
*
RoutineName = 'FTPFile'
Call DSExecute("UNIX",OsCmd,OsOutput,OsStatus)
If OsStatus Then
Call DSLogInfo('The FTP command (':OsCmd:') returned status
':OsStatus:':':@FM:OsOutput,'DSMoveFiles')
End Else
Call DSLogInfo('Files FTPd...': '(':OsCmd:')','FTPFile')
End
Ans = OsStatus
RETURN(Ans)
FTPmget:
* FUNCTION FTPmget(Script_Path,Source_Path,File_Wild_Card,IP_Address,User_ID,Password,Target_Path)
*
*
RoutineName = 'FTPmget'
Call DSLogInfo('Ftp ':File_Wild_Card: ' From ' : IP_Address : ' ' :Source_Path : ' to ' :Target_Path,RoutineName)
Call DSExecute("UNIX",OsCmd,OsOutput,OsStatus)
If OsStatus Then
Call DSLogInfo('The FTP command (':OsCmd:') returned status ':OsStatus:':':@FM:OsOutput,RoutineName)
End Else
Call DSLogInfo('Files FTPd...': '(':OsCmd:')',RoutineName)
End
Ans = OsStatus
RETURN(Ans)
GBIConcatItem:
Concatenate All Input Arguments to Output using TAB character:
Routine="GBIConcatItem"
t = Char(009)
Ans = Pattern
GCMFConvert:
Receive GCMF string and change known strings to required values:
DataIn = "":Trim(Arg1)
GCMFFormating:
*
* FUNCTION GCMFFormating(Switch, All_Row)
*
* Replaces some special characters when creating the GCMF file
*
* Input Parameters : Arg1: Switch = Step to change.
* Arg2: All_Row = Row containing the GCMF Record.
*
DataIn=Trim(All_Row)
If Switch=1 Then
If IsNull(DataIn) or DataIn = "" Then
Ans = "$B$"
End Else
DataInFmt = Ereplace (DataIn ,"&", "&amp;")
DataInFmt = Ereplace (DataInFmt ,"'", "&apos;")
DataInFmt = Ereplace (DataInFmt ,'"', "&quot;")
Ans = DataInFmt
End
End Else
If Switch=2 Then
DataInFmt = Ereplace (DataIn ,">", "&gt;")
DataInFmt = Ereplace (DataInFmt ,"<", "&lt;")
Ans = DataInFmt
End Else
* Final Replace, After the Merge of all GCMF segments
DataInFmt = Ereplace (DataIn ,"|", "|")
Ans = DataInFmt
End
End
GeneralCounter:
COMMON /Counter/ OldParam, TotCount
NextId = Identifier
IF UNASSIGNED(OldParam) Then
OldParam = NextId
TotCount = 0
END
Ans = TotCount
GetNextCustomerNumber:
The routine argument is the name associated with the super group that the customer
is being created in.
The routine uses a file to store the next available number. It reads the number,
then increments and stores the value in common, writing the next value back to file
each time.
* Routine to generate the next customer number. The argument is a string used to
* identify the super group for the customer.
*
* The routine uses a UniVerse file to store the next number to use. This
* value is stored in a record named after the supplied argument. The
* routine reads the number, then increments and stores the value
* in common storage, writing the next value back to file each time.
*
If NOT(Initialized) Then
* Not initialised. Attempt to open the file.
Initialized = 1
Open "IOC01_SUPER_GRP_CTL_HF" TO SeqFile Else
Call DSLogFatal("Cannot open customer number allocation control file",RoutineName)
Ans = -1
End
End
* Read the named record from the file.
Readu NextVal From SeqFile, Arg1 Else
Call DSLogFatal("Cannot find super group in customer number allocation control file",RoutineName)
Ans = -1
End
Ans = NextVal
GetNextErrorTableID:
Sequence number generator in a concurrent environment.
The routine uses a file to store the next available number. It reads the number
from the file on each invocation; a lock on the file prevents concurrent access.
If NOT(Initialized) Then
* Not initialised. Attempt to open the file.
Initialized = 1
Open "ErrorTableSequences" TO SeqFile Else
* Open failed. Create the sequence file.
EXECUTE "CREATE.FILE ErrorTableSequences 2 1 1"
Open "ErrorTableSequences" TO SeqFile Else Ans = -1
End
End
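The read-increment-write-behind-a-lock scheme used by GetNextErrorTableID can be sketched in Python with an ordinary counter file and an exclusive lock. The file name and function name are illustrative only, not part of the original routine set:

```python
import fcntl
from pathlib import Path

def next_sequence(counter_file: str) -> int:
    """Read, increment, and write back a sequence number stored in a
    file, holding an exclusive lock so that concurrent callers
    serialize (the hashed-file-plus-lock idea of GetNextErrorTableID)."""
    path = Path(counter_file)
    if not path.exists():
        path.write_text("0")           # create the sequence file on first use
    with open(counter_file, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # block until we own the counter
        value = int(f.read().strip() or 0) + 1
        f.seek(0)
        f.truncate()
        f.write(str(value))
        fcntl.flock(f, fcntl.LOCK_UN)
    return value
```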
GetNextModSeqNo:
Gets the Next Mod Run Code from an Initialised Sequence.
This routine gets the next Mod Run Number in a sequence that was initialised.
The arguments are Mod_Code_Parm and Supplier_ID_Parm, which combined form the key
for this instance of a sequence.
GetParameterArray:
* GetParameterArray(Arg1)
* Description: Get parameters
* Written by:
* Notes:
* Bag of Tricks Version 2.3.0 Release Date 2001-10-01
* Arg1 = Path and Name of Parameter File
*
* Result = ( <1> = Parameter names, <2> = Parameter values)
* ------------------------------------------------------------
DEFFUN FileFound(A) Calling 'DSU.FileFound'
cBlank = ''
cName = 1
cValue = 2
vParamFile = Arg1
aParam = cBlank
vParamCnt = 0
vCurRoutineName = 'Routine: GetParameterArray'
vFailed = @FALSE
Done = @FALSE
End Else
Call DSLogWarn('Error from ':vParamFile:'; Status = ':STATUS(),vCurRoutineName)
vFailed = @TRUE
End
End Else
vFailed = @TRUE
End
Ans = ""
GoTo ExitLastDayMonth
End
InYear = Substrings(Arg1,1,4)
InMonth = Substrings(Arg1,5,2)
End Case
Ans=OutDt:"-":InMonth:"-":InYear
ExitLastDayMonth:
LogToErrorFile:
* FUNCTION LogToErrorFile(Table,Field_Name,Check_Value,Error_Number,Error_Text_1,Error_Text_2,Error_Text_3,Additional_Message)
*
*
* Writes error messages to a hash file
*
* Input Parameters : Arg1: Table = The name of the Control table being checked
* Arg2: Field_Name = The name of the Field that is in error
* Arg3: Check_Value = The value used to look up in the Hash file to try and get a lookup match
* Arg4: Error_Number = The error number returned
* Arg5: Error_Text_1 = First error message argument. Used to build the default error message
* Arg6: Error_Text_2 = Second error message argument. Used to build the default error message
* Arg7: Error_Text_3 = Third error message argument. Used to build the default error message
* Arg8: Additional_Message = Any text that could be stored against an error
*
RoutineName = "LogToErrorFile"
Ans = "ERROR"
If System(91) Then
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
Move = 'move '
End Else
OsType = 'UNIX'
OsDelim = '/'
NonOsDelim = '\'
Move = 'mv -f '
End
vMessage = "INLOG Error Log Data = " : ModRunID : "|" : TicketFileID : "|" : TicketSequence : "|" : TicketSetKey : "|" : Table : "|" : Field_Name : "|" : Check_Value : "|" : Error_Number : "|" : Additional_Message
*Call DSLogInfo(vMessage, RoutineName )
Ans = "ERROR"
Return(Ans)
LogToHashFile:
* FUNCTION LogToHashFile(ModRunNum,TGrp,TSeq,SetKey,Table,FieldNa,KeyValue,Error,Msg,SeverityInd)
*
*
* Writes error messages to a hash file
*
* Input Parameters : Arg1: ModRunNum = The unique number allocated to a run of a Module
* Arg2: Ticket_Group = The Ticket Group Number of the Current Row
* Arg3: Ticket_Sequence = The Ticket Sequence Number of the Current Row
* Arg4: Set_Key = A Key to identify a set of rows e.g. an Invoice Number to a set of invoice lines
* Arg5: Table = The name of the Control table being checked
* Arg6: FieldNa = The name of the Field that is in error
* Arg7: KeyValue = The value used to look up in the Hash file to try and get a lookup match
* Arg8: Error = The error number returned
* Arg9: Msg = Any text that could be stored against an error
* Arg10: SeverityInd = An Indicator to state the error severity level
RoutineName = "LogToHashFile"
TAns = 0
If System(91) Then
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
Move = 'move '
End Else
OsType = 'UNIX'
OsDelim = '/'
NonOsDelim = '\'
Move = 'mv -f '
End
*Message = "INLOG Error Log Data = " : ModRunNum : "|" : TGrp : "|" : TSeq : "|" : Set_Key : "|" : Table : "|" : FieldNa : "|" : KeyValue : "|" : Error : "|" : Msg
*Call DSLogInfo(Message, RoutineName )
Ans = TAns
RETURN(Ans)
Routine to check whether the passed field is populated and, if not, whether it is
mandatory. If the field contains "?", it is handled as if it were blank.
The routine uses a control table containing process name, field name, group name
and exclusion flag to control whether a field is mandatory.
The routine arguments are the field name, the field, the group key, whether this is
the first mandatory check for the record, and the process name when the first check
flag is "Y".
* Routine to check whether the passed field is filled, and if not, whether it is mandatory.
*
* The routine uses a UniVerse file "MANDATORY_FIELD_HF" which contains the mandatory field controls
*
* Arg1 Field name to be checked (literal)
* Arg2 Field value
* Arg3 Group name
* Arg4 1st call for record
* Arg5 The process name on the first call (this is saved in storage for subsequent calls)
*
If NOT(Initialized) Then
Initialized = 1
* Call DSLogInfo("Initialisation Started",RoutineName)
Open "MANDATORY_FIELD_HF" TO SeqFile Else
Call DSLogFatal("Cannot open Mandatory field control file",RoutineName)
Ans = -1
End
* Call DSLogInfo("Initialisation Complete",RoutineName)
End
If Arg4 = "Y" Then
Mandlist = ""
ProcessIn = "":Trim(Arg5)
If IsNull(ProcessIn) or ProcessIn = "" Then ProcessV = " "
Else ProcessV = ProcessIn
End
Map:
* FUNCTION Map(Value,FieldName,Format,Default,Msg,ErrorLogInd)
*
* Executes a lookup against a hashed file using a key
*
* Input Parameters : Arg1: Value = The Value to Be Mapped
* Arg2: FieldName = The Name of the field that is either the Target of the Derivation or the source Field that the value is contained in
* Arg3: Format = The name of the Hash file containing the mapping data
* Arg4: Default = The Default value to return if the value is not found
* Arg5: Msg = Any text you want stored against an error
* Arg6: SeverityInd = An Indicator of the severity Level
* Arg7: ErrorLogInd = An Indicator to indicate if errors should be logged
* Arg8: HashfileLocation = The location of the hash file (Note this is not yet implemented)
*
* Return Values: If the Value is not found, return value is -1, or the Default value if that is supplied
* If Format Table not found, return value is: -2
*
*
*
RoutineName = 'Map'
DEFFUN LogToHashFile(ModRunNum,Ticket_Group,Ticket_Sequence,Set_Key,Table,FieldName,Key,Error,Text,SeverityInd) Calling 'DSU.LogToHashFile'
*
If System(91) Then
OsType = 'NT'
OsDelim = '\'
NonOsDelim = '/'
Move = 'move '
End Else
OsType = 'UNIX'
OsDelim = '/'
NonOsDelim = '\'
Move = 'mv -f '
End
ColumnPosition = 0
PositionReturn = 0
Table = Format
End Else
Default_Ans = Chk_Value
End
Case @TRUE
If UpCase(Field(Default,"|",1)) <> "BL" Then Default_Ans = Default Else Default_Ans = -1
End Case
*Message = "OUTSIDE PASS Trans Default_Ans ==>" : Default_Ans : " Ans ==> " : Ans
*Call DSLogInfo(Message, RoutineName )
LogPass = "N"
If (Default = "PASS" and Default_Ans <> Ans) then LogPass = "Y"
If LogPass = "Y"
Then
*Message = "PASS Trans Default_Ans ==>" : Default_Ans : " Ans ==> " : Ans
*Call DSLogInfo(Message, RoutineName )
End
*Message = "Write to Log Ans==> " : Ans : " ErrorInd==> " : ErrorLogInd
Ret_Code=LogToHashFile(Mod_Run_Num,Ticket_Group,Ticket_Sequence,Set_Key,Table,FieldName,Chk_Value,Ans,Msg,SeverityInd)
End
RETURN(Ans)
ErrCode = DSDetachJob(hJob)
Routine="Pattern"
Var_Len = len(Value)
Pattern = Value
For i = 1 To Var_Len
If Num(Value[i,1]) Then
Pattern[i,1] = "n"
End Else
If Alpha(Value[i,1]) Then
Pattern[i,1] = "a"
End Else
Pattern[i,1] = Value[i,1]
End
End
Next i
Ans = Pattern
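The loop above maps each digit to "n" and each letter to "a", leaving everything else untouched. A Python sketch of the same derivation (the function name is illustrative):

```python
def derive_pattern(value: str) -> str:
    """Build a pattern string from a value: digits become 'n',
    letters become 'a', other characters pass through unchanged."""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append("n")
        elif ch.isalpha():
            out.append("a")
        else:
            out.append(ch)
    return "".join(out)
```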
Checks a passed field to see if it matches the pattern which is also passed:
The input field is checked to see if it conforms to the format passed as a second
parameter. The result of the routine is True if the pattern matches the required
format, and False if it does not.
Begin Case
End Case
PrepareJob:
RangeCheck:
* FUNCTION RangeCheck(Value,MinValue,MaxValue,FieldName,Msg,SeverityInd,ErrorLogInd)
*
* Checks that a value falls within a supplied range
*
* Input Parameters : Arg1: Value = The Value to be checked
* Arg2: MinValue = The Min Value allowed
* Arg3: MaxValue = The Max Value allowed
* Arg4: FieldName = The Name of the Source field being checked
* Arg5: Msg = Any text you want stored against an error
* Arg6: SeverityInd = An Indicator of the severity Level
* Arg7: ErrorLogInd = An Indicator to indicate if errors should be logged
*
* Return Values: If the Value is not within range, return value is -1, else the value supplied is returned
*
*
*
RoutineName = 'RangeChk'
DEFFUN LogToHashFile(ModRunNum,Ticket_Group,Ticket_Sequence,Set_Key,Table,FieldName,Key,Error,Text,SeverityInd) Calling 'DSU.LogToHashFile'
Ret_Code=LogToHashFile(Mod_Run_Num,Ticket_Group,Ticket_Sequence,Set_Key,Table,FieldName,Value,Ans,OutputMsg,SeverityInd)
End
RETURN(Ans)
ReadParameter:
*
* Function : ReadParameter - Read parameter value from configuration file
* Arg : ParameterName (default=JOB_PARAMETER)
* DefaultValue (default='')
* Config file (default=@PATH/config.ini)
* Return : Parameter value from config file
Function ReadParameter(ParameterName,DefaultValue,ConfigFile)
ParameterValue = DefaultValue
Loop
While ReadSeq Line From fCfg
If Trim(Field(Line,'=',1)) = ParameterName
Then
ParameterValue = Trim(Field(Line,'=',2))
Exit
End
Repeat
CloseSeq fCfg
Ans = ParameterValue
RETURN(Ans)
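ReadParameter's scan of a key=value configuration file, falling back to a default when the key is absent or the file is unreadable, can be sketched in Python (names are illustrative):

```python
def read_parameter(name: str, default: str, config_file: str) -> str:
    """Scan a key=value configuration file for `name`; return its
    trimmed value, or `default` when the key is not found."""
    try:
        with open(config_file) as cfg:
            for line in cfg:
                key, sep, value = line.partition("=")
                if sep and key.strip() == name:
                    return value.strip()
    except OSError:
        pass  # unreadable or missing file: fall back to the default
    return default
```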
ReturnNumber:
String=Arg1
Slen=Len(String)
Rnum=""
For Outer = 1 To Slen
Schar=String[Outer,1]
If Num(Schar) Then
Rnum=Rnum:Schar
End
Next Outer
Ans=Rnum
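ReturnNumber keeps only the digit characters of its argument, in order. A one-line Python sketch of the same idea (name illustrative):

```python
def return_number(text: str) -> str:
    """Return only the digit characters of the input, preserving order."""
    return "".join(ch for ch in text if ch.isdigit())
```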
ReturnNumbers:
length=0
length=LEN(Arg1);
length1=1;
Outer=length;
postNum=''
counter=1;
For Outer = length to 1 Step -1
Arg2=Arg1[Outer,1]
If NUM(Arg2) Then
length2=counter-1
if length2 = 0
then
length2=counter
postNum=RIGHT(Arg1,length2)
END
else
postNum=RIGHT(Arg1,counter)
END
END
counter=counter+1
Next Outer
Ans=postNum
ReverseDate:
Function ReverseDate(DateVal)
* Function ReverseDate(DateVal)
* Date may be in the form DDMMYYYY, e.g. 01102003, or DMMYYYY, e.g. 1102003
If Len(DateVal) = 7 then
NDateVal = "0" : DateVal
End Else
NDateVal = DateVal
End
Ans = NDateVal[5,4] : NDateVal[3,2] : NDateVal[1,2]
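ReverseDate rearranges DDMMYYYY into YYYYMMDD after zero-padding a 7-character input. A Python sketch of the same slicing (name illustrative):

```python
def reverse_date(date_val: str) -> str:
    """Convert DDMMYYYY (or DMMYYYY with a single-digit day)
    into YYYYMMDD, mirroring ReverseDate's substring shuffle."""
    if len(date_val) == 7:
        date_val = "0" + date_val   # pad the single-digit day
    return date_val[4:8] + date_val[2:4] + date_val[0:2]
```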
RunJob:
The routine runs a job. Job parameters may be supplied. The result is a dynamic
array containing the job status, and row count information for each link. The
routine UtilityGetRunJobInfo can be used to interpret this result.
As well as the job name and job parameters, the routine parameters allow the job
warning limit and row count limit to be set.
Status<1>=Jobname=FinishStatus
Status<2>=Jobname
Status<3>=JobStartTimeStamp
Status<4>=JobStopTimeStamp
Status<5>=LinkNames (value mark @VM delimited)
Status<6>=RowCount (value mark @VM delimited)
Function RunJob(Arg1,Arg2,Arg3,Arg4)
* Demonstrate how to run a job within the GUI development environment. Arguments may
* be passed in. The result is a dynamic array with the resulting status and run
* statistics (row counts for every link on every stage in the job)
*
JobHandle = ''
Info = ''
ParamCount = Dcount(Params,'|')
If RowLimit = '' Then RowLimit = 0
If WarnLimit = '' Then WarnLimit = 0
JobStartTime = DSRTimestamp()
JobHandle = DSAttachJob(RunJobName, DSJ.ERRFATAL)
* Prepare the job
ErrorCode = DSPrepareJob(JobHandle)
StageList = DSGetJobInfo(JobHandle,DSJ.STAGELIST)
Message = DSRMessage('DSTAGE_TRX_I_0017', 'List of Stages=%1', StageList )
Call DSLogInfo(Message, RoutineName)
Info<1> = RunJobName
Info<2> = JobStartTime ;* StartTime (Timestamp format)
Info<3> = JobEndTime ;* Now/End (Timestamp format)
LinkCount = Dcount(LinkNames,',')
For StageLink = 1 To LinkCount
* Get Rowcount For this linkname
RowCount = DSGetLinkInfo(JobHandle,Field(StageList,',',Stage),Field(LinkNames,',',StageLink),DSJ.LINKROWCOUNT)
Message = DSRMessage( 'DSTAGE_TRX_I_0019', 'RowCount for %1.%2=%3', Field(StageList,',',Stage):@FM:Field(LinkNames,',',StageLink):@FM:RowCount)
Call DSLogInfo(Message, RoutineName)
Info<4,-1> = Field(StageList,',',Stage):'.':Field(LinkNames,',',StageLink)
Info<5,-1> = RowCount
Next StageLink
Next Stage
Ans = RunJobName:'=':Status:@FM:Info
RunJobAndDetach:
The routine runs a job. Job parameters may be supplied. The job is detached from so
that others may be started immediately and the control job can finish.
As well as the job name and job parameters, the routine parameters allow the job
warning limit and row count limit to be set.
Function RunDetachJob(Arg1,Arg2,Arg3,Arg4)
* Run a job, and detach from it so that this job can end
*
JobHandle = ''
Info = ''
ParamCount = Dcount(Params,'|')
If RowLimit = '' Then RowLimit = 0
If WarnLimit = '' Then WarnLimit = 0
Ans = 0
RunShellCommandReturnStatus:
Function RunShellCommandReturnStatus(Command)
Call DSExecute('UNIX',Command,Ans,Ret)
Return(Ret)
SegKey:
Segment_Num: An Integer number representing the order number of the Segment in the IDoc
Segment_Parm: A Segment Parameter containing a string of Y's and N's, in order of
Segment_Num, denoting whether the segment should be written to in this Module
Key: The Value to Be Mapped
ErrorLogInd: An Indicator to indicate if errors should be logged (Note this is not yet implemented)
Function SegKey(Segment_Num,Segment_Parm,Key,ErrorLogInd)
* FUNCTION SegKey(Segment_Num,Segment_Parm,Key,ErrorLogInd)
*
* Builds the segment key from an ordered set of key fields
*
* Input Parameters : Arg1: Segment_Num
* Arg2: Segment_Parm
* Arg3: Key = An ordered pipe-separated set of Segment Primary Key Fields
* Arg4: ErrorLogInd = An Indicator to indicate if errors should be logged (Note this is not yet implemented)
*
* Return Values: If the Value is not found, return value is -1, or the Default value if that is supplied
* If Format Table not found, return value is: -2
*
*
*
RoutineName = 'SegKey'
BlankFields = ""
CRLF = Char(13) : Char(10)
Message = "IN Seg Key" : Segment_Num : "|" : Segment_Parm : "|" : Key : "|" : ErrorLogInd : "|"
* Call DSLogInfo(Message, RoutineName )
Write_Ind = Field(Segment_Parm,"|",Segment_Num)
NumKeys = Dcount(Key,"|")
Blank_Key_Cnt = 0
ReturnKey = ""
For i = 1 to NumKeys
Key_Part = Field(Key,"|",i)
if Key_Part = "" Then
Blank_Key_Cnt = Blank_Key_Cnt + 1
BlankFields<Blank_Key_Cnt> = i
end
Next i
Ans = "Invalid_Key"
End Else
Ans = ReturnKey
End
End
Else
Ans = "Invalid_Key"
End
SetDSParamsFromFile:
A before-job subroutine to set Job parameters from an external flat file.
Note: a lock is placed to stop the same job from running another instance of this
routine. The second instance will have to wait for the routine to finish before
being allowed to proceed. The lock is released however the routine terminates
(normally, by abort, etc.)
The Routine may be invoked via the normal Before Job Subroutine setting, or from
within the 'Job Properties->Job Control' window by entering "Call
DSU.SetParams('MyDir,MyFile',ErrorCode)"
The routine could be made to work off a hashed file, or environment variables, quite
easily.
It is not possible to create Job Parameters on the fly because they are referenced
within a Job via an EQUATE of the form
JobParam%%1 = STAGECOM.STATUS<7,1>
JobParam%%2 = STAGECOM.STATUS<7,2> etc
Subroutine SetDSParamsFromFile(InputArg,ErrorCode)
JobName = Field(STAGECOM.NAME,'.',1,2)
ParamList = STAGECOM.JOB.CONFIG<CONTAINER.PARAM.NAMES>
If ParamList = '' Then
Call DSLogWarn('Parameters may not be externally derived if the job has no parameters defined.',SetParams)
Return
End
ArgList = Trims(Convert(',',@FM,InputArg))
ParamDir = ArgList<1>
If ParamDir = '' Then
ParamDir = '.'
End
ParamFile = ArgList<2>
If ParamFile = '' Then
ParamFile = JobName
End
If System(91) Then
Delim = '\'
End Else
Delim = '/'
End
ParamPath = ParamDir:Delim:ParamFile
StatusFileName = FileInfo(DSRTCOM.RTSTATUS.FVAR,1)
Readvu LockItem From DSRTCOM.RTSTATUS.FVAR, JobName, 1 On Error
Call DSLogFatal('File read error for ':JobName:' on ':StatusFileName:'. Status = ':Status(),SetParams)
ErrorCode = 1
Return
End Else
Call DSLogFatal('Failed to read ':JobName:' record from ':StatusFileName,SetParams)
ErrorCode = 2
Return
End
StatusId = JobName:'.':STAGECOM.WAVE.NUM
Readv ParamValues From DSRTCOM.RTSTATUS.FVAR, StatusId, JOB.PARAM.VALUES On Error
Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
ErrorCode = 1
Call DSLogFatal('File read error for ':StatusId:' on ':StatusFileName:'. Status = ':Status(),SetParams)
Return
End Else
Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
ErrorCode = 2
Call DSLogFatal('Failed to read ':StatusId:' record from ':StatusFileName,SetParams)
Return
End
Loop
ReadSeq ParamData From ParamFileVar On Error
Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
ErrorCode = 4
Call DSLogFatal('File read error on ':ParamPath:'. Status = ':Status(),SetParams)
Return
End Else
Exit
End
Convert '=' To @FM In ParamData
ParamName = Trim(ParamData<1>)
Del ParamData<1>
ParamValue = Convert(@FM,'=',TrimB(ParamData))
Locate(ParamName,ParamList,1;ParamPos) Then
If Index(UpCase(ParamName),'PASSWORD',1) = 0
Then Call DSLogInfo('Parameter "':ParamName:'" set to "':ParamValue:'"',SetParams)
Else Call DSLogInfo('Parameter "':ParamName:'" set but not displayed on log',SetParams)
End
Else
Call DSLogWarn('Parameter ':ParamName:' does not exist in Job ':JobName,SetParams)
Continue
End
ParamValues<1,ParamPos> = ParamValue
Repeat
setParamsForFileSplit:
Using values from a control file this routine will run a job multiple times, loading
the specified number of rows for each job run.
Function SetParamsForFileSplit(ControlFilename,Jobname)
***********************************************************************
* Nick Bond.... *
* This routine retrieves values from a control file and passes them as parameters to *
* a job which is run once for each record in the control file. *
* *
***********************************************************************
$INCLUDE DSINCLUDE JOBCONTROL.H
******** Start loop which gets parameters from control file and runs job.
Loop
vNewFile = 'SingleInvoice':vRecord
Else
** If record is empty leave loop
GoTo Label1
End
Repeat
******** End of Loop
Label1:
Call DSLogInfo('All records have been processed', Routine)
SetUserStatus:
Function SetUserStatus(Arg1)
Call DSSetUserStatus(Arg1)
Ans=Arg1
SMARTNumberConversion:
Converts numbers in format 1234,567 to format 1234.57
Function SMARTNumberConversion(Arg1)
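A Python sketch of the comma-to-dot conversion and two-place rounding described above (the function name is illustrative; the original routine body is not shown in this listing):

```python
def smart_number_conversion(value: str) -> str:
    """Turn a comma-decimal number like '1234,567' into a
    dot-decimal value rounded to two places: '1234.57'."""
    return f"{float(value.replace(',', '.')):.2f}"
```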
TicketErrorCommon:
Required to use the "LogToErrorFile" Routine. This stores variables used by the
routine in shared memory:
* FUNCTION TicketErrorCommon(Mod_Run_ID,Ticket_File_ID,Ticket_Sequence,Ticket_Set_Key,Job_Stage_Name,Mod_Root_Path)
*
* Places the current Row Ticket in Common
*
* Input Parameters : Arg1: Mod_Run_ID = The unique number allocated to a run of a Module
* Arg2: Ticket_File_ID = The File ID assigned to the source of the Current Row
* Arg3: Ticket_Sequence = The Ticket Sequence Number of the Current Row
* Arg4: Ticket_Set_Key = Identifies a set of rows e.g. an Invoice number to a set of invoice lines
* Arg5: Job_Stage_Name = The Name of the Stage in the Job you want recorded in the error log
* Arg6: Mod_Root_Path = Root of the module - used for location of the error hash file
*
* Don't Return Ans but need to keep the compiler happy
Ans = ""
RoutineName = 'ErrorTicketCommon'
ModRunID = Mod_Run_ID
TicketFileID = Ticket_File_ID
TicketSequence = Ticket_Sequence
SetKey = Ticket_Set_Key
JobStageName = Job_Stage_Name
ModRootPath = Mod_Root_Path
RETURN(Ans)
TVARate:
Function TVARate(Mtt_Base,Mtt_TVA)
BaseFormated = "":(Mtt_Base)
TvaFormated = "":(Mtt_TVA)
TVATest:
Function TVATest(Mtt_TVA,Dlco)
Country = TRIM(Dlco):";"
TestCountry = Count("AT;BE;CY;CZ;DE;DK;EE;ES;FI;GB;GR;HU;IE;IT;LT;LU;LV;MT;NL;PL;PT;SE;SI;SK;", Country)
Begin Case
Case Mtt_TVA <> 0
Reply = "B3"
Case Mtt_TVA = 0 And Dlco = "FR" And TestCountry = 0
Reply = "A1"
Case Mtt_TVA = 0 And Dlco <> "FR" And TestCountry = 1
Reply = "E6"
Case Mtt_TVA = 0 And Dlco <> "FR" And TestCountry = 0
Reply = "E7"
Case @True
Reply = "Error"
End Case
Ans = Reply
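The TVATest case ladder can be sketched in Python. The country set is taken verbatim from the routine's semicolon list (which reflects the document's era); the function name is illustrative:

```python
def tva_code(mtt_tva: float, dlco: str) -> str:
    """Mirror TVATest: B3 when VAT is non-zero, otherwise A1/E6/E7
    depending on whether the country is in the listed set."""
    countries = {"AT", "BE", "CY", "CZ", "DE", "DK", "EE", "ES", "FI",
                 "GB", "GR", "HU", "IE", "IT", "LT", "LU", "LV", "MT",
                 "NL", "PL", "PT", "SE", "SI", "SK"}
    in_list = dlco.strip() in countries
    if mtt_tva != 0:
        return "B3"
    if dlco == "FR" and not in_list:
        return "A1"
    if dlco != "FR" and in_list:
        return "E6"
    if dlco != "FR" and not in_list:
        return "E7"
    return "Error"
```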
UnTarFile:
Function UnTarFile(Arg1)
DIR = "/interface/dashboard/dashbd_dev_dk_int/Source/"
FNAME = "GLEISND_OC_02_20040607_12455700.csv"
*--------------------------------
*---syntax= tar -xvvf myfile.tar
*---------------------------------
Ans = Output
UtilityMessageToControllerLog:
This routine takes a user-defined message and displays it in the job log of the
controlling sequence as an informational message.
The routine should be used sparingly in production jobs to avoid degrading
performance.
Function UtilityMessageToControllerLog(Arg1)
InputMsg = Arg1
If Isnull(InputMsg) Then
InputMsg = " "
End
Call DSLogToController(InputMsg)
Ans = 1
UTLPropagateParms:
Function UTLPropagateParms(Handle)
Ans = 0
ParentJobName = DSGetJobInfo(DSJ.ME,DSJ.JOBNAME)
ChildParams = Convert(',',@FM,DSGetJobInfo(Handle,DSJ.PARAMLIST))
ParamCount = Dcount(ChildParams,@FM)
If ParamCount Then
ParentParams = Convert(',',@FM,DSGetJobInfo(DSJ.ME,DSJ.PARAMLIST))
Loop
ThisParam = ChildParams<1>
Del ChildParams<1>
*** Find job parameter in parent job and set parameter in child job to value of parent.
Locate(ThisParam,ParentParams;ParamPos) Then
ThisValue = DSGetParamInfo(DSJ.ME,ThisParam,DSJ.PARAMVALUE)
ParamStatus = DSSetParam(Handle,ThisParam,ThisValue)
Call DSLogInfo ("Setting: ":ThisParam:" To: ":ThisValue, "UTLPropagateParms")
End
Else
*** If the parameter is not found in parent job:
*** - write a warning to log file.
*** - return code changed to 3.
Call DSLogWarn ("Parameter : ":ThisParam:" does not exist in ":ParentJobName, "UTLPropagateParms")
Ans = 3
End
While ChildParams # '' Do Repeat
End
Return(Ans)
UTLRunReceptionJob:
This routine allows generic starting of reception jobs without creating a specific
Reception Processing Sequence.
- Determines job to launch (sequence or elementary job)
- Attaches job
- Propagates parameters using routine UTLPropagateParms.
- Runs job and takes action upon result (any warning will lead to a return code NOT OK)
Function UTLRunReceptionJob(Country_Parm,Fileset_Name_Type_Parm,Module_Run_Parm,Abort_Msg_Param)
Ans = -3
***********************************************************************************
****
*** ################### ***
***********************************************************************************
****
*** Define job to launch - Sequence or Job (START) ***
*** ***
L$DefineSeq$START:
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2) started", "ReceptionJob":@FM:vRecJobNameBase))
** If Sequential Job exists - start Sequential Job.
vJobSuffix = "_Seq"
vRecJobName = vRecJobNameBase : vJobSuffix
GoTo L$AttachJob$START
L$DefineJob$START:
** If no Sequential Job - start Elementary Job
vJobSuffix = "_Job"
vRecJobName = vRecJobNameBase : vJobSuffix
GoTo L$AttachJob$START
L$ErrNoJob$START:
** If no job found - warn and end job
Msg = DSMakeMsg("No job found to attach" : vRecJobNameBase : "_Seq or _Job", "")
MsgId = "@ReceptionJob"
GoTo L$ERROR
L$AttachJob$START:
Call DSLogInfo(DSMakeMsg("Checking presence of " : vRecJobName : " for " : Module_Run_Parm, ""), "")
jbRecepJob = vRecJobName
hRecepJob = DSAttachJob(jbRecepJob, DSJ.ERRNONE)
If (Not(hRecepJob)) Then
AttachErrorMsg$ = DSGetLastErrorMsg()
If AttachErrorMsg$ = "(DSOpenJob) Cannot find job " : vRecJobName Then
If vJobSuffix = "_Seq" Then GoTo L$DefineJob$START
Else
GoTo L$ErrNoJob$START
End
End
Msg = DSMakeMsg("DSTAGE_JSG_M_0001\Error calling DSAttachJob(%1)<L>%2", jbRecepJob:@FM:AttachErrorMsg$)
MsgId = "@ReceptionJob"; GoTo L$ERROR
GoTo L$ERROR
End
If hRecepJob = 2 Then
GoTo L$RecepJobPrepare$START
End
*** ***
*** Define job to launch - Sequence or Job (END) ***
***********************************************************************************
****
*** ################### ***
***********************************************************************************
****
*** Setup , Run and Wait for Reception Job (START) ***
*** ***
L$RecepJobPrepare$START:
*** Activity "ReceptionJob": Setup, Run and Wait for job
hRecepJob = DSPrepareJob(hRecepJob)
If (Not(hRecepJob)) Then
Msg = DSMakeMsg("DSTAGE_JSG_M_0012\Error calling DSPrepareJob(%1)<L>%2", jbRecepJob:@FM:DSGetLastErrorMsg())
MsgId = "@ReceptionJob"; GoTo L$ERROR
End
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2) started", "ReceptionJob":@FM:vRecJobName))
GoTo L$PropagateParms$START
L$PropagateParms$START:
*** Activity "PropagateParms": Propagating parameters from parent job to child job
using separate routine.
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0058\%1 (ROUTINE %2) started", "PropagateParms":@FM:"DSU.UTLPropagateParms"))
RtnOk = DSCheckRoutine("DSU.UTLPropagateParms")
If (Not(RtnOk)) Then
Msg = DSMakeMsg("DSTAGE_JSG_M_0005\BASIC routine is not cataloged: %1", "DSU.UTLPropagateParms")
MsgId = "@PropagateParms"; GoTo L$ERROR
End
Call 'DSU.UTLPropagateParms'(rPropagateParms, hRecepJob)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0064\%1 finished, reply=%2", "PropagateParms":@FM:rPropagateParms))
IdAbortRact%%Result1%%1 = rPropagateParms
IdAbortRact%%Name%%3 = "DSU.UTLPropagateParms"
*** Checking result of routine. If <> 0 then abort processing.
If (rPropagateParms <> 0)
Then GoTo L$ABORT
GoTo L$RecepJobRun$START
L$RecepJobRun$START:
ErrCode = DSRunJob(hRecepJob, DSJ.RUNNORMAL)
If (ErrCode <> DSJE.NOERROR) Then
Msg = DSMakeMsg("DSTAGE_JSG_M_0003\Error calling DSRunJob(%1), code=%2[E]", jbRecepJob:@FM:ErrCode)
MsgId = "@ReceptionJob"; GoTo L$ERROR
End
ErrCode = DSWaitForJob(hRecepJob)
GoTo L$RecepJob$FINISHED
*** ***
*** Setup , Run and Wait for Reception Job (END) ***
***********************************************************************************
****
*** ################### ***
***********************************************************************************
****
*** Verification of result of Reception Job (START) ***
*** ***
L$RecepJob$FINISHED:
jobRecepJobStatus = DSGetJobInfo(hRecepJob, DSJ.JOBSTATUS)
jobRecepJobUserstatus = DSGetJobInfo(hRecepJob, DSJ.USERSTATUS)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0063\%1 finished, status=%2[E]", "ReceptionJob":@FM:jobRecepJobStatus))
IdRecepJob%%Result2%%5 = jobRecepJobUserstatus
IdRecepJob%%Result1%%6 = jobRecepJobStatus
IdRecepJob%%Name%%7 = vRecJobName
Dummy = DSDetachJob(hRecepJob)
bRecepJobelse = @True
If (jobRecepJobStatus = DSJS.RUNOK)
Then GoTo L$SeqSuccess$START; bRecepJobelse = @False
If bRecepJobelse Then GoTo L$SeqFail$START
*** ***
*** Verification of result of Reception Job (END) ***
***********************************************************************************
****
*** ################### ***
***********************************************************************************
****
*** Definition of actions to take on failure or success (START) ***
*** ***
L$SeqFail$START:
*** Sequencer "Fail": wait until inputs ready
Call DSLogInfo(DSMakeMsg("Routine SEQUENCER - Control End Sequence Reports a FAIL on Reception Job", ""), "@Fail")
GoTo L$ABORT
L$SeqSuccess$START:
*** Sequencer "Success": wait until inputs ready
Call DSLogInfo(DSMakeMsg("Routine SEQUENCER - Control End Sequence Reports a SUCCESS on Reception Job", ""), "@Success")
GoTo L$FINISH
*** ***
*** Definition of actions to take on failure or success (END) ***
***********************************************************************************
****
*** ################### ***
L$ERROR:
Call DSLogWarn(DSMakeMsg("DSTAGE_JSG_M_0009\Controller problem: %1", Msg), MsgId)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0052\Exception raised: %1", MsgId:", ":Msg))
bAbandoning = @True
GoTo L$FINISH
L$ABORT:
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0056\Sequence failed", ""))
Call DSLogInfo(summary$, "@UTLRunReceptionJob")
Call DSLogWarn("Unrecoverable errors in routine UTLRunReceptionJob, see entries above", "@UTLRunReceptionJob")
Ans = -3
GoTo L$EXIT
**************************************************
L$FINISH:
If bAbandoning Then GoTo L$ABORT
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0054\Sequence finished OK", ""))
Call DSLogInfo(summary$, "@UTLRunReceptionJob")
Ans = 0
ValidateField:
Checks the length and data type of a value. Also checks value is a valid date if
the type is Date. Any errors are logged to the Error Hash File
vData_Type = Downcase(Data_Type)
BEGIN CASE
******** Check the arguments
* Value being checked is null
CASE isNull(Field_Value)
Call DSTransformError("The value being checked is Null - Field_Name = " : Field_Name, vRoutineName)
* Argument for the data type is not valid
CASE vData_Type <> "char" AND vData_Type <> "alpha" AND vData_Type <> "numeric" AND vData_Type <> "date"
Call DSTransformError("The value " : Data_Type : " is not a valid data type for routine: ", vRoutineName)
* Length is not a number
CASE Not(Num(Length))
Call DSTransformError("The length supplied is not a number : Field Checked " : Field_Name, vRoutineName)
CASE vData_Type = "date" And (Date_Format = "" OR isNull(Date_Format))
Call DSTransformError("No date format supplied for date field : Field Checked " : Field_Name, vRoutineName)
END CASE
*********
End
Ans = Ans
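The argument checks in ValidateField can be mirrored outside DataStage BASIC. The following Python sketch reproduces the CASE logic above; the function name and the exact message strings are illustrative, not part of the original routine.

```python
def validate_field(field_value, data_type, length, date_format=""):
    """Mirror of ValidateField's argument checks; returns an error
    message string, or None when every check passes."""
    vtype = (data_type or "").lower()
    # Value being checked is null
    if field_value is None:
        return "The value being checked is Null"
    # Argument for the data type is not valid
    if vtype not in ("char", "alpha", "numeric", "date"):
        return "%s is not a valid data type" % data_type
    # Length is not a number
    try:
        float(length)
    except (TypeError, ValueError):
        return "The length supplied is not a number"
    # Date type requires a date format
    if vtype == "date" and not date_format:
        return "No date format supplied for a date field"
    return None
```

In the real routine each failed check calls DSTransformError, which writes to the job log; here a message is simply returned to the caller.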
VatCheckSG:
Function VatCheckSG(Arg1)
String = Arg1
Slen = Len(String)
CharCheck = 0
For Scheck = 1 To Slen
SChar = String[Scheck,1] ;* current character (the per-character VAT check is incomplete in this excerpt)
CharCheck = CharCheck + 1
Next Scheck
Ans = CharCheck
WriteParmFile:
Function WriteParmFile(Arg1, Arg2, Arg3, Arg4)
MyLine = ""
Loop
ReadSeq MyLine From FileVar Else Exit ;* exit at end-of-file
Repeat
WeofSeq FileVar ;* write an end-of-file mark at the current position
CloseSeq FileVar
Ans = MyLine ;* last line read (FileVar is assumed already opened via OpenSeq)
WriteSeg:
* FUNCTION WriteSeg(Segment_Num, Segment_Parm)
*
* Determines whether a segment should be written, by extracting the
* flag for that segment from a pipe-delimited parameter string
*
* Input Parameters : Arg1: Segment_Num
* Arg2: Segment_Parm
*
* Return Values: If the Segment should be written, return value is "Y"
* If not, return value is "N"
*
RoutineName = 'WriteSeg'
Write_Ind = Field(Segment_Parm,"|",Segment_Num)
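The DataStage BASIC expression Field(Segment_Parm, "|", Segment_Num) extracts the Nth pipe-delimited field (1-based) from the parameter string. A Python sketch of the same extraction (function name illustrative):

```python
def write_seg(segment_num, segment_parm):
    """Return the Y/N flag for a segment, mimicking DataStage BASIC's
    Field(Segment_Parm, "|", Segment_Num); fields are 1-based."""
    fields = segment_parm.split("|")
    if 1 <= segment_num <= len(fields):
        return fields[segment_num - 1]
    return ""  # Field() yields an empty string when the index is out of range
```

So with Segment_Parm = "Y|N|Y", segment 2 is skipped ("N") while segments 1 and 3 are written ("Y").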
________________________________________
SET_JOB_PARAMETERS_ROUTINE
________________________________________
Arguments: InputArg, ErrorCode
Routine name: SetDSParamsFromFile
$INCLUDE DSINCLUDE DSD_STAGE.H
$INCLUDE DSINCLUDE JOBCONTROL.H
$INCLUDE DSINCLUDE DSD.H
$INCLUDE DSINCLUDE DSD_RTSTATUS.H
SetParams = 'SetDSParamsFromFile' ;* routine name used in the log calls below
JobName = Field(STAGECOM.NAME,'.',1,2)
ParamList = STAGECOM.JOB.CONFIG<CONTAINER.PARAM.NAMES>
If ParamList = '' Then
Call DSLogWarn('Parameters may not be externally derived if the job has no
parameters defined.',SetParams)
Return
End
ArgList = Trims(Convert(',',@FM,InputArg))
ParamDir = ArgList<1>
If ParamDir = '' Then
ParamDir = '.'
End
ParamFile = ArgList<2>
If ParamFile = '' Then
ParamFile = JobName
End
If System(91) Then
Delim = '\'
End Else
Delim = '/'
End
ParamPath = ParamDir:Delim:ParamFile
OpenSeq ParamPath To ParamFileVar Else
Call DSLogWarn('Cannot open parameter file ':ParamPath,SetParams)
ErrorCode = 3
Return
End
StatusFileName = FileInfo(DSRTCOM.RTSTATUS.FVAR,1)
Readvu LockItem From DSRTCOM.RTSTATUS.FVAR, JobName, 1 On Error
Call DSLogFatal('File read error for ':JobName:' on ':StatusFileName:'. Status =
':Status(),SetParams)
ErrorCode = 1
Return
End Else
Call DSLogFatal('Failed to read ':JobName:' record from ':StatusFileName,SetParams)
ErrorCode = 2
Return
End
StatusId = JobName:'.':STAGECOM.WAVE.NUM
Readv ParamValues From DSRTCOM.RTSTATUS.FVAR, StatusId, JOB.PARAM.VALUES On Error
Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
ErrorCode = 1
Call DSLogFatal('File read error for ':StatusId:' on ':StatusFileName:'. Status =
':Status(),SetParams)
Return
End Else
Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
ErrorCode = 2
Call DSLogFatal('Failed to read ':StatusId:' record from
':StatusFileName,SetParams)
Return
End
Loop
ReadSeq ParamData From ParamFileVar On Error
Release DSRTCOM.RTSTATUS.FVAR, JobName On Error Null
ErrorCode = 4
Call DSLogFatal('File read error on ':ParamPath:'. Status = ':Status(),SetParams)
Return
End Else
Exit
End
Convert '=' To @FM In ParamData
ParamName = Trim(ParamData<1>)
Del ParamData<1>
ParamValue = Convert(@FM,'=',TrimB(ParamData))
Locate(ParamName,ParamList,1;ParamPos) Then
If Index(UpCase(ParamName),'PASSWORD',1) = 0
Then Call DSLogInfo('Parameter "':ParamName:'" set to "':ParamValue:'"',SetParams)
Else Call DSLogInfo('Parameter "':ParamName:'" set but not displayed on
log',SetParams)
End
Else
Call DSLogWarn('Parameter ':ParamName:' does not exist in Job ':JobName,SetParams)
Continue
End
ParamValues<1,ParamPos> = ParamValue
Repeat
Note: a lock is placed to stop the same job from running another instance of this
routine. The second instance must wait for the routine to finish before being
allowed to proceed. The lock is released however the routine terminates (normally
or on abort).
The parameter file should contain non-blank lines of the form
ParName = ParValue
White space is ignored.
The Routine may be invoked via the normal Before Job Subroutine setting, or from
within the 'Job Properties->Job Control' window by entering "Call
DSU.SetParams('MyDir,MyFile',ErrorCode)"
The routine could easily be made to work from a hashed file or from environment
variables instead.
It is not possible to create Job Parameters on-the-fly because they are referenced
within a Job via an EQUATE of the form
JobParam%%1 = STAGECOM.STATUS<7,1>
JobParam%%2 = STAGECOM.STATUS<7,2> etc
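The parameter-file handling described in the note — non-blank "ParName = ParValue" lines, white space ignored, value allowed to contain further "=" characters — can be sketched in Python (function name illustrative; the real routine also validates each name against the job's parameter list and masks values whose name contains PASSWORD when logging):

```python
def parse_param_file(lines):
    """Parse 'ParName = ParValue' lines the way SetDSParamsFromFile does:
    blank or '='-free lines are skipped, white space around the name and
    value is trimmed, and '=' characters inside the value are kept."""
    params = {}
    for line in lines:
        if not line.strip() or "=" not in line:
            continue
        name, _, value = line.partition("=")  # split at the FIRST '='
        params[name.strip()] = value.strip()
    return params
```

For example, a line "Path = /out/dir=a" yields the parameter Path with value "/out/dir=a".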
seq$V0S10$count = 0
seq$V0S43$count = 0
seq$V0S44$count = 0
handle$list = ""
id$list = ""
abort$list = ""
b$Abandoning = @False
b$AllStarted = @False
summary$restarting = @False
*** Sequence start point
summary$ = DSMakeMsg("DSTAGE_JSG_M_0048\Summary of sequence run", "")
If summary$restarting Then
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0049\Sequence
restarted after failure", ""))
End Else
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0051\Sequence
started", ""))
End
GoSub L$V0S2$START
b$AllStarted = @True
GoTo L$WAITFORJOB
**************************************************
L$V0S0$START:
*** Activity "FR_PARIS_End_to_End_Processing_SAct": Initialize job
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2)
started",
"FR_PARIS_End_to_End_Processing_SAct":@FM:"FR_PARIS_End_to_End_Processing_Seq"))
Call DSLogInfo(DSMakeMsg("SEQUENCE - START End_to_End_Processing_Seq", ""),
"@FR_PARIS_End_to_End_Processing_SAct")
jb$V0S0 = "FR_PARIS_End_to_End_Processing_Seq":'.':(Module_Run_Parm)
h$V0S0 = DSAttachJob(jb$V0S0, DSJ.ERRNONE)
If (Not(h$V0S0)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0001\Error calling DSAttachJob(%1)<L>%2",
jb$V0S0:@FM:DSGetLastErrorMsg())
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
h$V0S0 = DSPrepareJob(h$V0S0)
If (Not(h$V0S0)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0012\Error calling DSPrepareJob(%1)<L>%2",
jb$V0S0:@FM:DSGetLastErrorMsg())
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
L$V0S0$PREPARED:
p$V0S0$1 = (Project_Parm)
err$code = DSSetParam(h$V0S0, "Project_Parm", p$V0S0$1)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Project_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$2 = (Module_Parm)
err$code = DSSetParam(h$V0S0, "Module_Parm", p$V0S0$2)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Module_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$3 = (Run_Parm)
err$code = DSSetParam(h$V0S0, "Run_Parm", p$V0S0$3)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Run_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$4 = (Module_Run_Parm)
err$code = DSSetParam(h$V0S0, "Module_Run_Parm", p$V0S0$4)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Module_Run_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$5 = (Data_Object_Parm)
err$code = DSSetParam(h$V0S0, "Data_Object_Parm", p$V0S0$5)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Data_Object_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$6 = (Interface_Parm)
err$code = DSSetParam(h$V0S0, "Interface_Parm", p$V0S0$6)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Interface_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$7 = (Interface_Root_Path_Parm)
err$code = DSSetParam(h$V0S0, "Interface_Root_Path_Parm", p$V0S0$7)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Interface_Root_Path_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$8 = (Generic_Root_Path_Parm)
err$code = DSSetParam(h$V0S0, "Generic_Root_Path_Parm", p$V0S0$8)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Generic_Root_Path_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$9 = (Business_Process_Parm)
err$code = DSSetParam(h$V0S0, "Business_Process_Parm", p$V0S0$9)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Business_Process_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$10 = (Company_Parm)
err$code = DSSetParam(h$V0S0, "Company_Parm", p$V0S0$10)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Company_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$11 = (Country_Parm)
err$code = DSSetParam(h$V0S0, "Country_Parm", p$V0S0$11)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Country_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$12 = (SAP_Client_Parm)
err$code = DSSetParam(h$V0S0, "SAP_Client_Parm", p$V0S0$12)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"SAP_Client_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$13 = (Source_System_Parm)
err$code = DSSetParam(h$V0S0, "Source_System_Parm", p$V0S0$13)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Source_System_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$14 = (Fileset_Type_Name_Parm)
err$code = DSSetParam(h$V0S0, "Fileset_Name_Type_Parm", p$V0S0$14)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Fileset_Name_Type_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$15 = (Request_Type_Parm)
err$code = DSSetParam(h$V0S0, "Request_Type_Parm", p$V0S0$15)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Request_Type_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$16 = (Timestamp_Parm)
err$code = DSSetParam(h$V0S0, "Timestamp_Parm", p$V0S0$16)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Timestamp_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$17 = (Fileset_Parm)
err$code = DSSetParam(h$V0S0, "Fileset_Parm", p$V0S0$17)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Fileset_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$18 = (File_Parm)
err$code = DSSetParam(h$V0S0, "File_Parm", p$V0S0$18)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"File_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$19 = (File_Name_Parm)
err$code = DSSetParam(h$V0S0, "File_Name_Parm", p$V0S0$19)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"File_Name_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$20 = (ISDB_Database_Parm)
err$code = DSSetParam(h$V0S0, "ISDB_Database_Parm", p$V0S0$20)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_Database_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$21 = (ISDB_User_Parm)
err$code = DSSetParam(h$V0S0, "ISDB_User_Parm", p$V0S0$21)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_User_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$22 = (ISDB_Password_Parm)
err$code = DSSetParam(h$V0S0, "ISDB_Password_Parm", p$V0S0$22)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_Password_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$23 = (CSDB_Database_Parm)
err$code = DSSetParam(h$V0S0, "CSDB_Database_Parm", p$V0S0$23)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_Database_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$24 = (CSDB_User_Parm)
err$code = DSSetParam(h$V0S0, "CSDB_User_Parm", p$V0S0$24)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_User_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$25 = (CSDB_Password_Parm)
err$code = DSSetParam(h$V0S0, "CSDB_Password_Parm", p$V0S0$25)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_Password_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$26 = (Retest_Parm)
err$code = DSSetParam(h$V0S0, "Retest_Parm", p$V0S0$26)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Retest_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$27 = (Versions_Keep_Cnt_Parm)
err$code = DSSetParam(h$V0S0, "Versions_Keep_Cnt_Parm", p$V0S0$27)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Versions_Keep_Cnt_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$28 = (Parm_File_Comma_Parm)
err$code = DSSetParam(h$V0S0, "Parm_File_Comma_Parm", p$V0S0$28)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Parm_File_Comma_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$29 = (FTP_Server)
err$code = DSSetParam(h$V0S0, "FTP_Server", p$V0S0$29)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Server":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$30 = (FTP_Target_Path)
err$code = DSSetParam(h$V0S0, "FTP_Target_Path", p$V0S0$30)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Target_Path":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$31 = (FTP_Port)
err$code = DSSetParam(h$V0S0, "FTP_Port", p$V0S0$31)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Port":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$32 = (FTP_User)
err$code = DSSetParam(h$V0S0, "FTP_User", p$V0S0$32)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_User":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$33 = (FTP_Password)
err$code = DSSetParam(h$V0S0, "FTP_Password", p$V0S0$33)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Password":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$34 = (Abort_Msg_Parm)
err$code = DSSetParam(h$V0S0, "Abort_Msg_Parm", p$V0S0$34)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Abort_Msg_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$35 = (Load_ISDB_Rejects_Parm)
err$code = DSSetParam(h$V0S0, "Load_ISDB_Rejects_Parm", p$V0S0$35)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Load_ISDB_Rejects_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$36 = (Load_ISDB_Source_Parm)
err$code = DSSetParam(h$V0S0, "Load_ISDB_Source_Parm", p$V0S0$36)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Load_ISDB_Source_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$37 = (Source_Delimiter_Parm)
err$code = DSSetParam(h$V0S0, "Source_Delimiter_Parm", p$V0S0$37)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Source_Delimiter_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$38 = (KTGRM_Header_Default_Parm)
err$code = DSSetParam(h$V0S0, "KTGRM_Header_Default_Parm", p$V0S0$38)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KTGRM_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$39 = (VTWEG_Header_Default_Parm)
err$code = DSSetParam(h$V0S0, "VTWEG_Header_Default_Parm", p$V0S0$39)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VTWEG_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$40 = (SPART_Header_Default_Parm)
err$code = DSSetParam(h$V0S0, "SPART_Header_Default_Parm", p$V0S0$40)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"SPART_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$41 = (WAERS_Local_Default_Parm)
err$code = DSSetParam(h$V0S0, "WAERS_Local_Default_Parm", p$V0S0$41)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WAERS_Local_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$42 = (WAERS_Foreign_Default_Parm)
err$code = DSSetParam(h$V0S0, "WAERS_Foreign_Default_Parm", p$V0S0$42)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WAERS_Foreign_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$43 = (KURSK_Default_Parm)
err$code = DSSetParam(h$V0S0, "KURSK_Default_Parm", p$V0S0$43)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KURSK_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$44 = (AUART_Header_Default_Parm)
err$code = DSSetParam(h$V0S0, "AUART_Header_Default_Parm", p$V0S0$44)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"AUART_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$45 = (ZLSCH_Header_Default_Parm)
err$code = DSSetParam(h$V0S0, "ZLSCH_Header_Default_Parm", p$V0S0$45)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZLSCH_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$46 = (ZTERM_Header_Default_Parm)
err$code = DSSetParam(h$V0S0, "ZTERM_Header_Default_Parm", p$V0S0$46)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZTERM_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$47 = (WERKS_Header_Default_Parm)
err$code = DSSetParam(h$V0S0, "WERKS_Header_Default_Parm", p$V0S0$47)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WERKS_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$48 = (VKORG_Header_Default_Parm)
err$code = DSSetParam(h$V0S0, "VKORG_Header_Default_Parm", p$V0S0$48)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKORG_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$49 = (TAXM1_Line_Default_Parm)
err$code = DSSetParam(h$V0S0, "TAXM1_Line_Default_Parm", p$V0S0$49)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"TAXM1_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$50 = (VRKME_Line_Default_Parm)
err$code = DSSetParam(h$V0S0, "VRKME_Line_Default_Parm", p$V0S0$50)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VRKME_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$51 = (VKGRP_Line_Default_Parm)
err$code = DSSetParam(h$V0S0, "VKGRP_Line_Default_Parm", p$V0S0$51)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKGRP_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$52 = (VKBUR_Line_Default_Parm)
err$code = DSSetParam(h$V0S0, "VKBUR_Line_Default_Parm", p$V0S0$52)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKBUR_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$53 = (LGORT_Line_Default_Parm)
err$code = DSSetParam(h$V0S0, "LGORT_Line_Default_Parm", p$V0S0$53)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"LGORT_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$54 = (KOSTL_Line_Default_Parm)
err$code = DSSetParam(h$V0S0, "KOSTL_Line_Default_Parm", p$V0S0$54)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KOSTL_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$55 = (ZZTAXCD_Default_Parm)
err$code = DSSetParam(h$V0S0, "ZZTAXCD_Default_Parm", p$V0S0$55)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZZTAXCD_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$56 = (KBETR_Revenue_Default_Parm)
err$code = DSSetParam(h$V0S0, "KBETR_Revenue_Default_Parm", p$V0S0$56)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KBETR_Revenue_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$57 = (KSCHL_Revenue_Default_Parm)
err$code = DSSetParam(h$V0S0, "KSCHL_Revenue_Default_Parm", p$V0S0$57)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KSCHL_Revenue_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$58 = (KSCHL_Surcharge_Default_Parm)
err$code = DSSetParam(h$V0S0, "KSCHL_Surcharge_Default_Parm", p$V0S0$58)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KSCHL_Surcharge_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$59 = (Reception_Job_Name_Parm)
err$code = DSSetParam(h$V0S0, "Reception_Job_Name_Parm", p$V0S0$59)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Reception_Job_Name_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$60 = (Invocation_Parm)
err$code = DSSetParam(h$V0S0, "Invocation_Parm", p$V0S0$60)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Invocation_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$61 = (GUNZIP_Path_Parm)
err$code = DSSetParam(h$V0S0, "GUNZIP_Path_Parm", p$V0S0$61)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"GUNZIP_Path_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$62 = (Run_Error_Mgmt_Parm)
err$code = DSSetParam(h$V0S0, "Run_Error_Mgmt_Parm", p$V0S0$62)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Run_Error_Mgmt_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$63 = "Y"
err$code = DSSetParam(h$V0S0, "Run_Move_and_Tidy_Parm", p$V0S0$63)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Run_Move_and_Tidy_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$64 = (Update_Status_In_DB_Parm)
err$code = DSSetParam(h$V0S0, "Update_Status_In_DB_Parm", p$V0S0$64)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Update_Status_In_DB_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S0$65 = (Run_Reconciliation_Parm)
err$code = DSSetParam(h$V0S0, "Run_Reconciliation_Parm", p$V0S0$65)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Run_Reconciliation_Parm":@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
err$code = DSRunJob(h$V0S0, DSJ.RUNNORMAL)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0003\Error calling DSRunJob(%1), code=%2[E]",
jb$V0S0:@FM:err$code)
msg$id = "@FR_PARIS_End_to_End_Processing_SAct"; GoTo L$ERROR
End
handle$list<-1> = h$V0S0
id$list<-1> = "V0S0"
Return
**************************************************
L$V0S0$FINISHED:
job$V0S0$status = DSGetJobInfo(h$V0S0, DSJ.JOBSTATUS)
job$V0S0$userstatus = DSGetJobInfo(h$V0S0, DSJ.USERSTATUS)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0063\%1 finished,
status=%2[E]", "FR_PARIS_End_to_End_Processing_SAct":@FM:job$V0S0$status))
IdV0S0%%Result2%%1 = job$V0S0$userstatus
IdV0S0%%Result1%%2 = job$V0S0$status
IdV0S0%%Name%%3 = "FR_PARIS_End_to_End_Processing_Seq"
rpt$V0S0 = DSMakeJobReport(h$V0S0, 1, "CRLF")
dummy$ = DSDetachJob(h$V0S0)
If b$Abandoning Then GoTo L$WAITFORJOB
b$V0S0else = @True
If (job$V0S0$status = DSJS.RUNOK) Then GoSub L$V0S61$START; b$V0S0else = @False
If (job$V0S0$status = DSJS.RUNOK) Then GoSub L$V0S43$START; b$V0S0else = @False
If b$V0S0else Then GoSub L$V0S44$START
GoTo L$WAITFORJOB
**************************************************
L$V0S10$START:
*** Sequencer "Run_Jobs_Seq": wait until inputs ready
seq$V0S10$count += 1
If seq$V0S10$count < 1 Then Return
GoSub L$V0S57$START
Return
**************************************************
L$V0S2$START:
*** Activity "Set_Job_Parameters_Routine": Call routine
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0058\%1 (ROUTINE
%2) started", "Set_Job_Parameters_Routine":@FM:"DSU.SetDSParamsFromFile"))
Call DSLogInfo(DSMakeMsg("ROUTINE - Set_Job_Parameters_Routine", ""),
"@Set_Job_Parameters_Routine")
rtn$ok = DSCheckRoutine("DSU.SetDSParamsFromFile")
If (Not(rtn$ok)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0005\BASIC routine is not cataloged: %1",
"DSU.SetDSParamsFromFile")
msg$id = "@Set_Job_Parameters_Routine"; GoTo L$ERROR
End
p$V0S2$1 = (Parm_File_Comma_Parm)
Call 'DSU.SetDSParamsFromFile'(p$V0S2$1, r$V0S2)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0064\%1 finished,
reply=%2", "Set_Job_Parameters_Routine":@FM:r$V0S2))
IdV0S2%%Result1%%4 = r$V0S2
IdV0S2%%Name%%6 = "DSU.SetDSParamsFromFile"
If (r$V0S2 = 0) Then GoSub L$V0S10$START
Return
**************************************************
L$V0S43$START:
*** Sequencer "Success": wait until inputs ready
seq$V0S43$count += 1
If seq$V0S43$count < 3 Then Return
Call DSLogInfo(DSMakeMsg("SEQUENCER - Control Sequence Reports a SUCCESS on all
Stages", ""), "@Success")
Return
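The Success sequencer above fires only after all three of its input links have completed: each link bumps seq$V0S43$count, and the body runs once the count reaches 3. A minimal Python sketch of this counter-based join (class and names are illustrative, not a DataStage API):

```python
class Sequencer:
    """Counter-based ALL sequencer: fire() is invoked once per input
    link, and the action runs only when every input has reported in."""
    def __init__(self, n_inputs, action):
        self.n_inputs = n_inputs
        self.count = 0
        self.action = action

    def fire(self):
        self.count += 1
        if self.count < self.n_inputs:
            return False  # still waiting on other links, like "If count < 3 Then Return"
        self.action()     # all inputs ready: run the downstream activity
        return True
```

The Fail sequencer uses the complementary ANY pattern: it fires on the first input and the count guard stops it from firing again.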
**************************************************
L$V0S44$START:
*** Sequencer "Fail": wait until inputs ready
If seq$V0S44$count > 0 Then Return
seq$V0S44$count += 1
Call DSLogInfo(DSMakeMsg("SEQUENCER - Control Sequence Reports at least one Stage
FAILED", ""), "@Fail")
GoSub L$V0S72$START
Return
**************************************************
L$V0S57$START:
*** Activity "FR_PARIS_Control_Start_Processing_SAct": Initialize job
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2)
started",
"FR_PARIS_Control_Start_Processing_SAct":@FM:"FR_PARIS_Control_Start_Processing_Seq"))
Call DSLogInfo(DSMakeMsg("SEQUENCE - START Control_Start_Processes_Seq", ""),
"@FR_PARIS_Control_Start_Processing_SAct")
jb$V0S57 = "FR_PARIS_Control_Start_Processing_Seq":'.':(Module_Run_Parm)
h$V0S57 = DSAttachJob(jb$V0S57, DSJ.ERRNONE)
If (Not(h$V0S57)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0001\Error calling DSAttachJob(%1)<L>%2",
jb$V0S57:@FM:DSGetLastErrorMsg())
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
h$V0S57 = DSPrepareJob(h$V0S57)
If (Not(h$V0S57)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0012\Error calling DSPrepareJob(%1)<L>%2",
jb$V0S57:@FM:DSGetLastErrorMsg())
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
L$V0S57$PREPARED:
p$V0S57$1 = (Project_Parm)
err$code = DSSetParam(h$V0S57, "Project_Parm", p$V0S57$1)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Project_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$2 = (Module_Parm)
err$code = DSSetParam(h$V0S57, "Module_Parm", p$V0S57$2)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Module_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$3 = (Run_Parm)
err$code = DSSetParam(h$V0S57, "Run_Parm", p$V0S57$3)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Run_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$4 = (Module_Run_Parm)
err$code = DSSetParam(h$V0S57, "Module_Run_Parm", p$V0S57$4)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Module_Run_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$5 = (Data_Object_Parm)
err$code = DSSetParam(h$V0S57, "Data_Object_Parm", p$V0S57$5)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Data_Object_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$6 = (Interface_Parm)
err$code = DSSetParam(h$V0S57, "Interface_Parm", p$V0S57$6)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Interface_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$7 = (Interface_Root_Path_Parm)
err$code = DSSetParam(h$V0S57, "Interface_Root_Path_Parm", p$V0S57$7)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Interface_Root_Path_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$8 = (Generic_Root_Path_Parm)
err$code = DSSetParam(h$V0S57, "Generic_Root_Path_Parm", p$V0S57$8)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Generic_Root_Path_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$9 = (Business_Process_Parm)
err$code = DSSetParam(h$V0S57, "Business_Process_Parm", p$V0S57$9)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Business_Process_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$10 = (Company_Parm)
err$code = DSSetParam(h$V0S57, "Company_Parm", p$V0S57$10)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Company_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$11 = (Country_Parm)
err$code = DSSetParam(h$V0S57, "Country_Parm", p$V0S57$11)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Country_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$12 = (SAP_Client_Parm)
err$code = DSSetParam(h$V0S57, "SAP_Client_Parm", p$V0S57$12)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"SAP_Client_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$13 = (Source_System_Parm)
err$code = DSSetParam(h$V0S57, "Source_System_Parm", p$V0S57$13)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Source_System_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$14 = (Fileset_Type_Name_Parm)
* Note the word-order difference here: the sequence parameter is named
* Fileset_Type_Name_Parm, while the called job's parameter is "Fileset_Name_Type_Parm".
err$code = DSSetParam(h$V0S57, "Fileset_Name_Type_Parm", p$V0S57$14)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Fileset_Name_Type_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$15 = (Request_Type_Parm)
err$code = DSSetParam(h$V0S57, "Request_Type_Parm", p$V0S57$15)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Request_Type_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$16 = (Timestamp_Parm)
err$code = DSSetParam(h$V0S57, "Timestamp_Parm", p$V0S57$16)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Timestamp_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$17 = (Fileset_Parm)
err$code = DSSetParam(h$V0S57, "Fileset_Parm", p$V0S57$17)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Fileset_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$18 = (File_Parm)
err$code = DSSetParam(h$V0S57, "File_Parm", p$V0S57$18)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"File_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$19 = (File_Name_Parm)
err$code = DSSetParam(h$V0S57, "File_Name_Parm", p$V0S57$19)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"File_Name_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$20 = (ISDB_Database_Parm)
err$code = DSSetParam(h$V0S57, "ISDB_Database_Parm", p$V0S57$20)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_Database_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$21 = (ISDB_User_Parm)
err$code = DSSetParam(h$V0S57, "ISDB_User_Parm", p$V0S57$21)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_User_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$22 = (ISDB_Password_Parm)
err$code = DSSetParam(h$V0S57, "ISDB_Password_Parm", p$V0S57$22)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_Password_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$23 = (CSDB_Database_Parm)
err$code = DSSetParam(h$V0S57, "CSDB_Database_Parm", p$V0S57$23)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_Database_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$24 = (CSDB_User_Parm)
err$code = DSSetParam(h$V0S57, "CSDB_User_Parm", p$V0S57$24)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_User_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$25 = (CSDB_Password_Parm)
err$code = DSSetParam(h$V0S57, "CSDB_Password_Parm", p$V0S57$25)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_Password_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$26 = (Retest_Parm)
err$code = DSSetParam(h$V0S57, "Retest_Parm", p$V0S57$26)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Retest_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$27 = (Versions_Keep_Cnt_Parm)
err$code = DSSetParam(h$V0S57, "Versions_Keep_Cnt_Parm", p$V0S57$27)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Versions_Keep_Cnt_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$28 = (Parm_File_Comma_Parm)
err$code = DSSetParam(h$V0S57, "Parm_File_Comma_Parm", p$V0S57$28)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Parm_File_Comma_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$29 = (FTP_Server)
err$code = DSSetParam(h$V0S57, "FTP_Server", p$V0S57$29)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Server":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$30 = (FTP_Target_Path)
err$code = DSSetParam(h$V0S57, "FTP_Target_Path", p$V0S57$30)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Target_Path":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$31 = (FTP_Port)
err$code = DSSetParam(h$V0S57, "FTP_Port", p$V0S57$31)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Port":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$32 = (FTP_User)
err$code = DSSetParam(h$V0S57, "FTP_User", p$V0S57$32)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_User":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$33 = (FTP_Password)
err$code = DSSetParam(h$V0S57, "FTP_Password", p$V0S57$33)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Password":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$34 = (Abort_Msg_Parm)
err$code = DSSetParam(h$V0S57, "Abort_Msg_Parm", p$V0S57$34)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Abort_Msg_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$35 = (Load_ISDB_Rejects_Parm)
err$code = DSSetParam(h$V0S57, "Load_ISDB_Rejects_Parm", p$V0S57$35)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Load_ISDB_Rejects_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$36 = (Load_ISDB_Source_Parm)
err$code = DSSetParam(h$V0S57, "Load_ISDB_Source_Parm", p$V0S57$36)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Load_ISDB_Source_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$37 = (Source_Delimiter_Parm)
err$code = DSSetParam(h$V0S57, "Source_Delimiter_Parm", p$V0S57$37)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Source_Delimiter_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$38 = (KTGRM_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "KTGRM_Header_Default_Parm", p$V0S57$38)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KTGRM_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$39 = (VTWEG_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "VTWEG_Header_Default_Parm", p$V0S57$39)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VTWEG_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$40 = (SPART_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "SPART_Header_Default_Parm", p$V0S57$40)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"SPART_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$41 = (WAERS_Local_Default_Parm)
err$code = DSSetParam(h$V0S57, "WAERS_Local_Default_Parm", p$V0S57$41)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WAERS_Local_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$42 = (WAERS_Foreign_Default_Parm)
err$code = DSSetParam(h$V0S57, "WAERS_Foreign_Default_Parm", p$V0S57$42)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WAERS_Foreign_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$43 = (KURSK_Default_Parm)
err$code = DSSetParam(h$V0S57, "KURSK_Default_Parm", p$V0S57$43)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KURSK_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$44 = (AUART_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "AUART_Header_Default_Parm", p$V0S57$44)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"AUART_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$45 = (ZLSCH_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "ZLSCH_Header_Default_Parm", p$V0S57$45)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZLSCH_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$46 = (ZTERM_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "ZTERM_Header_Default_Parm", p$V0S57$46)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZTERM_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$47 = (WERKS_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "WERKS_Header_Default_Parm", p$V0S57$47)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WERKS_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$48 = (VKORG_Header_Default_Parm)
err$code = DSSetParam(h$V0S57, "VKORG_Header_Default_Parm", p$V0S57$48)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKORG_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$49 = (TAXM1_Line_Default_Parm)
err$code = DSSetParam(h$V0S57, "TAXM1_Line_Default_Parm", p$V0S57$49)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"TAXM1_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$50 = (VRKME_Line_Default_Parm)
err$code = DSSetParam(h$V0S57, "VRKME_Line_Default_Parm", p$V0S57$50)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VRKME_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$51 = (VKGRP_Line_Default_Parm)
err$code = DSSetParam(h$V0S57, "VKGRP_Line_Default_Parm", p$V0S57$51)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKGRP_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$52 = (VKBUR_Line_Default_Parm)
err$code = DSSetParam(h$V0S57, "VKBUR_Line_Default_Parm", p$V0S57$52)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKBUR_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$53 = (LGORT_Line_Default_Parm)
err$code = DSSetParam(h$V0S57, "LGORT_Line_Default_Parm", p$V0S57$53)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"LGORT_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$54 = (KOSTL_Line_Default_Parm)
err$code = DSSetParam(h$V0S57, "KOSTL_Line_Default_Parm", p$V0S57$54)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KOSTL_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$55 = (ZZTAXCD_Default_Parm)
err$code = DSSetParam(h$V0S57, "ZZTAXCD_Default_Parm", p$V0S57$55)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZZTAXCD_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$56 = (KBETR_Revenue_Default_Parm)
err$code = DSSetParam(h$V0S57, "KBETR_Revenue_Default_Parm", p$V0S57$56)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KBETR_Revenue_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$57 = (KSCHL_Revenue_Default_Parm)
err$code = DSSetParam(h$V0S57, "KSCHL_Revenue_Default_Parm", p$V0S57$57)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KSCHL_Revenue_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$58 = (KSCHL_Surcharge_Default_Parm)
err$code = DSSetParam(h$V0S57, "KSCHL_Surcharge_Default_Parm", p$V0S57$58)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KSCHL_Surcharge_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
p$V0S57$59 = (Invocation_Parm)
err$code = DSSetParam(h$V0S57, "Invocation_Parm", p$V0S57$59)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Invocation_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
err$code = DSRunJob(h$V0S57, DSJ.RUNNORMAL)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0003\Error calling DSRunJob(%1), code=%2[E]",
jb$V0S57:@FM:err$code)
msg$id = "@FR_PARIS_Control_Start_Processing_SAct"; GoTo L$ERROR
End
handle$list<-1> = h$V0S57
id$list<-1> = "V0S57"
Return
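The fifty-nine parameter-setting blocks above are machine-generated, one block per job parameter. Hand-written job control can collapse the pattern into a loop over parallel dynamic arrays. A minimal sketch, assuming the same job handle h$V0S57 and showing only the first three parameters (the name/value arrays are an illustration, not part of the generated code):

* Sketch: set several job parameters in a loop instead of one block each.
* parm$names and parm$values are parallel @FM-delimited dynamic arrays.
parm$names = "Project_Parm":@FM:"Module_Parm":@FM:"Run_Parm"
parm$values = Project_Parm:@FM:Module_Parm:@FM:Run_Parm
parm$cnt = DCount(parm$names, @FM)
For i = 1 To parm$cnt
   err$code = DSSetParam(h$V0S57, parm$names<i>, parm$values<i>)
   If (err$code <> DSJE.NOERROR) Then
      msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]", parm$names<i>:@FM:err$code)
      msg$id = "@FR_PARIS_Control_Start_Processing_SAct"
      GoTo L$ERROR
   End
Next i

The generated code avoids the loop because each value expression can differ; when all values come straight from identically named sequence parameters, the loop form is equivalent and far shorter.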
**************************************************
L$V0S57$FINISHED:
job$V0S57$status = DSGetJobInfo(h$V0S57, DSJ.JOBSTATUS)
job$V0S57$userstatus = DSGetJobInfo(h$V0S57, DSJ.USERSTATUS)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0063\%1 finished, status=%2[E]", "FR_PARIS_Control_Start_Processing_SAct":@FM:job$V0S57$status))
IdV0S57%%Result2%%8 = job$V0S57$userstatus
IdV0S57%%Result1%%9 = job$V0S57$status
IdV0S57%%Name%%10 = "FR_PARIS_Control_Start_Processing_Seq"
rpt$V0S57 = DSMakeJobReport(h$V0S57, 1, "CRLF")
dummy$ = DSDetachJob(h$V0S57)
If b$Abandoning Then GoTo L$WAITFORJOB
b$V0S57else = @True
If (job$V0S57$status = DSJS.RUNOK) Then GoSub L$V0S43$START; b$V0S57else = @False
If (job$V0S57$status = DSJS.RUNOK) Then GoSub L$V0S0$START; b$V0S57else = @False
If b$V0S57else Then GoSub L$V0S44$START
GoTo L$WAITFORJOB
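The finished-handler above reads the exit status, files a job report, detaches the handle, and then dispatches: both success subroutines run when the status is DSJS.RUNOK, otherwise the b$V0S57else flag routes control to the failure branch. A condensed hand-written equivalent of that dispatch (reusing the generated labels, which are assumptions carried over from the code above):

* Sketch of the post-run dispatch: read status, report, detach, then branch.
job$status = DSGetJobInfo(h$V0S57, DSJ.JOBSTATUS)
rpt$ = DSMakeJobReport(h$V0S57, 1, "CRLF")   ;* type-1 text report, CRLF line ends
dummy$ = DSDetachJob(h$V0S57)
If (job$status = DSJS.RUNOK) Then
   GoSub L$V0S43$START   ;* success branches
   GoSub L$V0S0$START
End Else
   GoSub L$V0S44$START   ;* failure branch
End

The generated form uses the b$V0S57else Boolean rather than If/Else so that any number of success links can clear the flag independently before the failure link is tested.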
**************************************************
L$V0S61$START:
*** Activity "FR_PARIS_Control_End_Processing_SAct": Initialize job
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0057\%1 (JOB %2) started", "FR_PARIS_Control_End_Processing_SAct":@FM:"FR_PARIS_Control_End_Processing_Seq"))
Call DSLogInfo(DSMakeMsg("SEQUENCE START - Control_End_Processing_Seq", ""), "@FR_PARIS_Control_End_Processing_SAct")
jb$V0S61 = "FR_PARIS_Control_End_Processing_Seq":'.':(Module_Run_Parm)
h$V0S61 = DSAttachJob(jb$V0S61, DSJ.ERRNONE)
If (Not(h$V0S61)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0001\Error calling DSAttachJob(%1)<L>%2",
jb$V0S61:@FM:DSGetLastErrorMsg())
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
h$V0S61 = DSPrepareJob(h$V0S61)
If (Not(h$V0S61)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0012\Error calling DSPrepareJob(%1)<L>%2",
jb$V0S61:@FM:DSGetLastErrorMsg())
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
L$V0S61$PREPARED:
p$V0S61$1 = (Project_Parm)
err$code = DSSetParam(h$V0S61, "Project_Parm", p$V0S61$1)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Project_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$2 = (Module_Parm)
err$code = DSSetParam(h$V0S61, "Module_Parm", p$V0S61$2)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Module_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$3 = (Run_Parm)
err$code = DSSetParam(h$V0S61, "Run_Parm", p$V0S61$3)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Run_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$4 = (Module_Run_Parm)
err$code = DSSetParam(h$V0S61, "Module_Run_Parm", p$V0S61$4)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Module_Run_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$5 = (Data_Object_Parm)
err$code = DSSetParam(h$V0S61, "Data_Object_Parm", p$V0S61$5)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Data_Object_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$6 = (Interface_Parm)
err$code = DSSetParam(h$V0S61, "Interface_Parm", p$V0S61$6)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Interface_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$7 = (Interface_Root_Path_Parm)
err$code = DSSetParam(h$V0S61, "Interface_Root_Path_Parm", p$V0S61$7)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Interface_Root_Path_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$8 = (Generic_Root_Path_Parm)
err$code = DSSetParam(h$V0S61, "Generic_Root_Path_Parm", p$V0S61$8)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Generic_Root_Path_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$9 = (Business_Process_Parm)
err$code = DSSetParam(h$V0S61, "Business_Process_Parm", p$V0S61$9)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Business_Process_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$10 = (Company_Parm)
err$code = DSSetParam(h$V0S61, "Company_Parm", p$V0S61$10)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Company_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$11 = (Country_Parm)
err$code = DSSetParam(h$V0S61, "Country_Parm", p$V0S61$11)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Country_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$12 = (SAP_Client_Parm)
err$code = DSSetParam(h$V0S61, "SAP_Client_Parm", p$V0S61$12)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"SAP_Client_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$13 = (Source_System_Parm)
err$code = DSSetParam(h$V0S61, "Source_System_Parm", p$V0S61$13)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Source_System_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$14 = (Fileset_Type_Name_Parm)
* Same word-order difference as above: sequence parameter Fileset_Type_Name_Parm
* feeds the called job's parameter "Fileset_Name_Type_Parm".
err$code = DSSetParam(h$V0S61, "Fileset_Name_Type_Parm", p$V0S61$14)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Fileset_Name_Type_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$15 = (Request_Type_Parm)
err$code = DSSetParam(h$V0S61, "Request_Type_Parm", p$V0S61$15)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Request_Type_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$16 = (Timestamp_Parm)
err$code = DSSetParam(h$V0S61, "Timestamp_Parm", p$V0S61$16)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Timestamp_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$17 = (Fileset_Parm)
err$code = DSSetParam(h$V0S61, "Fileset_Parm", p$V0S61$17)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Fileset_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$18 = (File_Parm)
err$code = DSSetParam(h$V0S61, "File_Parm", p$V0S61$18)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"File_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$19 = (File_Name_Parm)
err$code = DSSetParam(h$V0S61, "File_Name_Parm", p$V0S61$19)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"File_Name_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$20 = (ISDB_Database_Parm)
err$code = DSSetParam(h$V0S61, "ISDB_Database_Parm", p$V0S61$20)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_Database_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$21 = (ISDB_User_Parm)
err$code = DSSetParam(h$V0S61, "ISDB_User_Parm", p$V0S61$21)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_User_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$22 = (ISDB_Password_Parm)
err$code = DSSetParam(h$V0S61, "ISDB_Password_Parm", p$V0S61$22)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ISDB_Password_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$23 = (CSDB_Database_Parm)
err$code = DSSetParam(h$V0S61, "CSDB_Database_Parm", p$V0S61$23)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_Database_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$24 = (CSDB_User_Parm)
err$code = DSSetParam(h$V0S61, "CSDB_User_Parm", p$V0S61$24)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_User_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$25 = (CSDB_Password_Parm)
err$code = DSSetParam(h$V0S61, "CSDB_Password_Parm", p$V0S61$25)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"CSDB_Password_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$26 = (Retest_Parm)
err$code = DSSetParam(h$V0S61, "Retest_Parm", p$V0S61$26)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Retest_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$27 = (Versions_Keep_Cnt_Parm)
err$code = DSSetParam(h$V0S61, "Versions_Keep_Cnt_Parm", p$V0S61$27)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Versions_Keep_Cnt_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$28 = (Parm_File_Comma_Parm)
err$code = DSSetParam(h$V0S61, "Parm_File_Comma_Parm", p$V0S61$28)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Parm_File_Comma_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$29 = (FTP_Server)
err$code = DSSetParam(h$V0S61, "FTP_Server", p$V0S61$29)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Server":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$30 = (FTP_Target_Path)
err$code = DSSetParam(h$V0S61, "FTP_Target_Path", p$V0S61$30)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Target_Path":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$31 = (FTP_Port)
err$code = DSSetParam(h$V0S61, "FTP_Port", p$V0S61$31)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Port":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$32 = (FTP_User)
err$code = DSSetParam(h$V0S61, "FTP_User", p$V0S61$32)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_User":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$33 = (FTP_Password)
err$code = DSSetParam(h$V0S61, "FTP_Password", p$V0S61$33)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"FTP_Password":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$34 = (Abort_Msg_Parm)
err$code = DSSetParam(h$V0S61, "Abort_Msg_Parm", p$V0S61$34)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Abort_Msg_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$35 = (Load_ISDB_Rejects_Parm)
err$code = DSSetParam(h$V0S61, "Load_ISDB_Rejects_Parm", p$V0S61$35)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Load_ISDB_Rejects_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$36 = (Load_ISDB_Source_Parm)
err$code = DSSetParam(h$V0S61, "Load_ISDB_Source_Parm", p$V0S61$36)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Load_ISDB_Source_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$37 = (Source_Delimiter_Parm)
err$code = DSSetParam(h$V0S61, "Source_Delimiter_Parm", p$V0S61$37)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Source_Delimiter_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$38 = (KTGRM_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "KTGRM_Header_Default_Parm", p$V0S61$38)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KTGRM_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$39 = (VTWEG_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "VTWEG_Header_Default_Parm", p$V0S61$39)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VTWEG_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$40 = (SPART_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "SPART_Header_Default_Parm", p$V0S61$40)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"SPART_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$41 = (WAERS_Local_Default_Parm)
err$code = DSSetParam(h$V0S61, "WAERS_Local_Default_Parm", p$V0S61$41)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WAERS_Local_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$42 = (WAERS_Foreign_Default_Parm)
err$code = DSSetParam(h$V0S61, "WAERS_Foreign_Default_Parm", p$V0S61$42)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WAERS_Foreign_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$43 = (KURSK_Default_Parm)
err$code = DSSetParam(h$V0S61, "KURSK_Default_Parm", p$V0S61$43)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KURSK_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$44 = (AUART_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "AUART_Header_Default_Parm", p$V0S61$44)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"AUART_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$45 = (ZLSCH_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "ZLSCH_Header_Default_Parm", p$V0S61$45)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZLSCH_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$46 = (ZTERM_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "ZTERM_Header_Default_Parm", p$V0S61$46)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZTERM_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$47 = (WERKS_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "WERKS_Header_Default_Parm", p$V0S61$47)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"WERKS_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$48 = (VKORG_Header_Default_Parm)
err$code = DSSetParam(h$V0S61, "VKORG_Header_Default_Parm", p$V0S61$48)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKORG_Header_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$49 = (TAXM1_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "TAXM1_Line_Default_Parm", p$V0S61$49)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"TAXM1_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$50 = (VRKME_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "VRKME_Line_Default_Parm", p$V0S61$50)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VRKME_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$51 = (VKGRP_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "VKGRP_Line_Default_Parm", p$V0S61$51)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKGRP_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$52 = (VKBUR_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "VKBUR_Line_Default_Parm", p$V0S61$52)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"VKBUR_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$53 = (LGORT_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "LGORT_Line_Default_Parm", p$V0S61$53)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"LGORT_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$54 = (KOSTL_Line_Default_Parm)
err$code = DSSetParam(h$V0S61, "KOSTL_Line_Default_Parm", p$V0S61$54)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KOSTL_Line_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$55 = (ZZTAXCD_Default_Parm)
err$code = DSSetParam(h$V0S61, "ZZTAXCD_Default_Parm", p$V0S61$55)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"ZZTAXCD_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$56 = (KBETR_Revenue_Default_Parm)
err$code = DSSetParam(h$V0S61, "KBETR_Revenue_Default_Parm", p$V0S61$56)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KBETR_Revenue_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$57 = (KSCHL_Revenue_Default_Parm)
err$code = DSSetParam(h$V0S61, "KSCHL_Revenue_Default_Parm", p$V0S61$57)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KSCHL_Revenue_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$58 = (KSCHL_Surcharge_Default_Parm)
err$code = DSSetParam(h$V0S61, "KSCHL_Surcharge_Default_Parm", p$V0S61$58)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"KSCHL_Surcharge_Default_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$59 = (Invocation_Parm)
err$code = DSSetParam(h$V0S61, "Invocation_Parm", p$V0S61$59)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Invocation_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$60 = (Run_Reconciliation_Parm)
err$code = DSSetParam(h$V0S61, "Run_Reconciliation_Parm", p$V0S61$60)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Run_Reconciliation_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$61 = (Run_Error_Mgmt_Parm)
err$code = DSSetParam(h$V0S61, "Run_Error_Mgmt_Parm", p$V0S61$61)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Run_Error_Mgmt_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$62 = (Update_Status_In_DB_Parm)
err$code = DSSetParam(h$V0S61, "Update_Status_In_DB_Parm", p$V0S61$62)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Update_Status_In_DB_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
p$V0S61$63 = (Load_ISDB_Cmn_Fmt_Parm)
err$code = DSSetParam(h$V0S61, "Load_ISDB_Cmn_Fmt_Parm", p$V0S61$63)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0002\Error calling DSSetParam(%1), code=%2[E]",
"Load_ISDB_Cmn_Fmt_Parm":@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
err$code = DSRunJob(h$V0S61, DSJ.RUNNORMAL)
If (err$code <> DSJE.NOERROR) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0003\Error calling DSRunJob(%1), code=%2[E]",
jb$V0S61:@FM:err$code)
msg$id = "@FR_PARIS_Control_End_Processing_SAct"; GoTo L$ERROR
End
handle$list<-1> = h$V0S61
id$list<-1> = "V0S61"
Return
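The generated control code above sets each job parameter with its own DSSetParam call followed by an identical error check, then runs the job with DSRunJob and records the handle. The same set-and-check pattern can be expressed as a single data-driven loop; a minimal Python sketch of that idea (the `set_param` callback and the stub values are illustrations, not DataStage APIs — in real job control the calls remain DSSetParam against an attached job handle):

```python
# Sketch of the repeated "set parameter, check err$code" pattern above,
# expressed as one loop over (name, value) pairs.
# `set_param` stands in for DSSetParam; 0 plays the role of DSJE.NOERROR.

def set_all_params(set_param, params):
    """Apply every parameter in order; stop at the first failure,
    as the generated BASIC does, and report which parameter failed."""
    for name, value in params.items():
        err = set_param(name, value)
        if err != 0:          # DSJE.NOERROR analogue
            return False, name
    return True, None

# Usage with a fake setter that rejects one parameter name:
def fake_setter(name, value):
    return -1 if name == "KURSK_Default_Parm" else 0

ok, failed = set_all_params(fake_setter, {
    "KTGRM_Header_Default_Parm": "X",
    "KURSK_Default_Parm": "EUR",
})
# ok is False, failed is "KURSK_Default_Parm"
```

The generated code unrolls this loop because each parameter value comes from a distinct sequence variable, but the control flow per parameter is identical.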
**************************************************
L$V0S61$FINISHED:
job$V0S61$status = DSGetJobInfo(h$V0S61, DSJ.JOBSTATUS)
job$V0S61$userstatus = DSGetJobInfo(h$V0S61, DSJ.USERSTATUS)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0063\%1 finished,
status=%2[E]", "FR_PARIS_Control_End_Processing_SAct":@FM:job$V0S61$status))
IdV0S61%%Result2%%11 = job$V0S61$userstatus
IdV0S61%%Result1%%12 = job$V0S61$status
IdV0S61%%Name%%13 = "FR_PARIS_Control_End_Processing_Seq"
rpt$V0S61 = DSMakeJobReport(h$V0S61, 1, "CRLF")
dummy$ = DSDetachJob(h$V0S61)
If b$Abandoning Then GoTo L$WAITFORJOB
b$V0S61else = @True
If (job$V0S61$status = DSJS.RUNOK) Then GoSub L$V0S43$START; b$V0S61else = @False
If b$V0S61else Then GoSub L$V0S44$START
GoTo L$WAITFORJOB
**************************************************
L$V0S72$START:
*** Activity "Abort_RAct": Call routine
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0058\%1 (ROUTINE
%2) started", "Abort_RAct":@FM:"DSX.UTILITYABORTTOLOG"))
rtn$ok = DSCheckRoutine("DSX.UTILITYABORTTOLOG")
If (Not(rtn$ok)) Then
msg$ = DSMakeMsg("DSTAGE_JSG_M_0005\BASIC routine is not cataloged: %1",
"DSX.UTILITYABORTTOLOG")
msg$id = "@Abort_RAct"; GoTo L$ERROR
End
p$V0S72$1 = (Abort_Msg_Parm)
Call 'DSX.UTILITYABORTTOLOG'(r$V0S72, p$V0S72$1)
summary$<1,-1> = Time$$:Convert(@VM, " ", DSMakeMsg("DSTAGE_JSG_M_0064\%1 finished,
reply=%2", "Abort_RAct":@FM:r$V0S72))
IdV0S72%%Result1%%14 = r$V0S72
IdV0S72%%Name%%16 = "DSX.UTILITYABORTTOLOG"
Return
**************************************************
L$WAITFORJOB:
If handle$list = "" Then GoTo L$FINISH
handle$ = DSWaitForJob(handle$list)
If handle$ = 0 Then handle$ = handle$list<1>
Locate handle$ In handle$list Setting index$ Then
id$ = id$list<index$>
b$Abandoning = abort$list<index$>
Del id$list<index$>; Del handle$list<index$>; Del abort$list<index$>
Begin Case
Case id$ = "V0S0"
GoTo L$V0S0$FINISHED
Case id$ = "V0S57"
GoTo L$V0S57$FINISHED
Case id$ = "V0S61"
GoTo L$V0S61$FINISHED
End Case
End
* Error if we fall through (handle not found in handle$list)
handle$list = ""
msg$ = DSMakeMsg("DSTAGE_JSG_M_0008\Error calling DSWaitForJob(), code=%1[E]",
handle$)
msg$id = "@Coordinator"; GoTo L$ERROR
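The L$WAITFORJOB block is the sequence coordinator: DSWaitForJob returns the handle of whichever started job finishes first, the handle is located in the parallel handle$list/id$list, the matching entries are deleted, and control branches to that activity's FINISHED label. The same bookkeeping in a hedged Python sketch (names are illustrative; `wait_for_any` stands in for DSWaitForJob):

```python
def dispatch_finished(handle_list, id_list, wait_for_any):
    """Wait for one job to finish, remove it from the parallel lists,
    and return its activity id (the Begin Case dispatch key)."""
    if not handle_list:
        return None                 # nothing running -> L$FINISH
    handle = wait_for_any(handle_list)
    if handle == 0:                 # fall back to the first outstanding handle
        handle = handle_list[0]
    index = handle_list.index(handle)
    job_id = id_list[index]
    del handle_list[index]          # Del handle$list<index$> analogue
    del id_list[index]              # Del id$list<index$> analogue
    return job_id

# Usage: two outstanding jobs; the second one finishes first.
handles, ids = [101, 102], ["V0S57", "V0S61"]
finished = dispatch_finished(handles, ids, lambda hs: 102)
# finished == "V0S61"; handles == [101]; ids == ["V0S57"]
```

Keeping the handle, id, and abort flags in parallel lists indexed together is what lets the generated code dispatch an arbitrary mix of concurrently running activities from one wait point.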
**************************************************
L$ERROR:
Call DSLogWarn(DSMakeMsg("DSTAGE_JSG_M_0009\Controller problem: %1", msg$), msg$id)