2.4 Chapter 2-4. Using SPUFI: Executing SQL from Your Terminal
2.4.1 Step 1. Invoke SPUFI and Allocate an Input Data Set
2.4.2 Step 2. Change SPUFI Defaults (Optional)
2.4.3 Step 3. Enter SQL Statements
2.4.4 Step 4. Process SQL Statements
2.4.5 Step 5. Browse the Output
2.4.6 More SQL Examples
5.5 Chapter 5-5. Programming for the Call Attachment Facility (CAF)
5.5.1 Call Attachment Facility Capabilities and Restrictions
5.5.2 How to Use CAF
5.5.3 Sample Scenarios
5.5.4 Exits from Your Application
5.5.5 Error Messages and DSNTRACE
5.5.6 Return and Reason Codes Generated by CAF
5.5.7 Program Examples
GLOSSARY Glossary
BIBLIOGRAPHY Bibliography
INDEX Index
FIGURES Figures
TABLES Tables
FRONT_1 Notices
(1) Methods for distributed data access are described in "Chapter 4-1.
Designing a DB2 Database Application" in topic 4.1.
1.2.5 Tables

base table    A table created with the SQL CREATE TABLE statement and
              used to hold persistent user data.

result table  A set of rows that DB2 selects or generates from one or
              more base tables.

sample table  One of several tables sent with the DB2 licensed program
              that contains sample data. Many examples in this book
              are based on sample tables. See Appendix A, "DB2 Sample
              Tables" in topic A.0 for a description of the sample
              tables.
1.2.6 Indexes
1.2.7 Keys
A unique key is a key that is constrained so that no two of its values are
equal. The constraint is enforced by DB2 during the execution of the LOAD
utility and the SQL INSERT and UPDATE statements. The mechanism
used to enforce the constraint is a unique index. Thus, every unique key is a
key of a unique index. Such an index is also said to have the UNIQUE
attribute.
In DB2, a storage structure is a set of one or more VSAM data sets that
hold DB2 tables or indexes. A storage structure is also called a page set. A
storage structure can be one of the following:
Table space
A table space can hold one or more base tables. All tables are kept in table
spaces. A table space can be defined using the CREATE TABLESPACE
statement.
Index space
Defining and deleting the data sets of a storage structure can be left to
DB2. If it is left to DB2, the storage structure has an associated storage
group. The storage group is a list of DASD volumes on which DB2 can
allocate data sets for associated storage structures. The association
between a storage structure and its storage group is explicitly or implicitly
defined by the statement that created the storage structure.
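As a sketch of how these definitions fit together (the storage group and
table space names here are illustrative, not taken from this book):

```sql
-- A storage group lists the DASD volumes on which DB2 can allocate
-- data sets (hypothetical volume and catalog names):
CREATE STOGROUP DSN8G310
       VOLUMES (DSNV01)
       VCAT DSNC310;

-- A table space that uses the storage group; DB2 then defines and
-- deletes the underlying VSAM data sets itself:
CREATE TABLESPACE DSN8S31D
       USING STOGROUP DSN8G310;
```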
1.2.10 Databases
In DB2, a database is a set of table spaces and index spaces. These index
spaces contain indexes on the tables in the table spaces of the same
database. Databases are defined using the CREATE DATABASE statement
and are primarily used for administration. Whenever a table space is
created, it is explicitly or implicitly assigned to an existing database.
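The explicit assignment described above can be sketched as follows (the
database and table space names are illustrative):

```sql
-- Create a database, then assign a new table space to it explicitly
-- with the IN clause:
CREATE DATABASE DSN8D31A;

CREATE TABLESPACE DSN8S31E
       IN DSN8D31A;
```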
1.2.11 Catalog
Each DB2 maintains a set of tables containing information about the data
under its control. These tables are collectively known as the catalog. The
catalog tables contain information about DB2 objects such as tables,
views, and indexes. In this book, "catalog" refers to a DB2 catalog unless
otherwise indicated. In contrast, the catalogs maintained by access method
services are known as "integrated catalog facility catalogs."
Tables in the catalog are like any other database tables with respect to
retrieval. If you have authorization, you can use SQL statements to look at
data in the catalog tables in the same way that you retrieve data from any
other table in the system. Each DB2 ensures that the catalog contains
accurate descriptions of the objects that the DB2 controls. For further
details about the DB2 catalog, see Appendix D of SQL Reference.
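If you have the needed authorization, a catalog query looks like any
other SELECT. For example (SYSIBM.SYSTABLES and the NAME, CREATOR,
and DBNAME columns are standard catalog objects, but verify the column
names against the SQL Reference for your release):

```sql
-- List the tables created under the sample authorization ID:
SELECT NAME, CREATOR, DBNAME
  FROM SYSIBM.SYSTABLES
  WHERE CREATOR = 'DSN8310';
```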
1.2.12 Views
Views can be used to control access to a table and make data easier to
use. Access to a view can be granted without granting access to the table.
The view can be defined to show only portions of data in the table. A view
can show summary data for a given table, combine two or more tables in
meaningful ways, or show only the selected rows that are pertinent to the
process using the view.
Example: The following SQL statement defines a view named XYZ. The
view represents a table whose columns are named EMPLOYEE and
WHEN_HIRED. The data in the table comes from the columns EMPNO
and HIREDATE of the sample table DSN8310.EMP. The rows from which
the data is taken are for employees in departments A00 and D11.
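A CREATE VIEW statement matching that description might be written as
follows (a sketch reconstructed from the text above, not necessarily the
book's exact statement):

```sql
-- View XYZ renames EMPNO and HIREDATE and restricts the rows to
-- departments A00 and D11:
CREATE VIEW XYZ (EMPLOYEE, WHEN_HIRED) AS
  SELECT EMPNO, HIREDATE
    FROM DSN8310.EMP
    WHERE WORKDEPT IN ('A00', 'D11');
```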
Read-only views cannot be used for insert, update, and delete operations.
Read-only views are discussed under the CREATE VIEW statement in
Chapter 6 of SQL Reference.
Like all result tables, the table represented by a view is not persistent.
The definition of a view, however, persists in the DB2 catalog. The SQL
DROP VIEW statement drops a view and removes its definition from the
catalog. The definition of a view is also removed from the catalog when
any view or base table on which the view depends is dropped.
DB2 tables are protected. No one can insert, update, delete, or even
select rows of a table without specific authorization to do so. If you use
your organization's operational data, you might have been granted the
privilege to select, for instance, but not to insert, update, or delete
rows in the tables that contain the data. You might have authorization to
insert, update, and delete data in tables that belong to your department.
You might even have been granted the privilege to create tables, in which
case you can be the tables' owner, or you can assign ownership to someone
else. Before you try creating tables, check with your database
administrator (DBA) to see whether you have been granted authority to
create tables.
The data retrieved through SQL is always in the form of a table. In the
DB2 library, this table is called a result table. Like the tables from which
the data is retrieved, a result table has rows and columns. A program fetches
this data one row at a time.
For more detailed information on each data type see Chapter 3 of SQL
Reference.
Table 1 shows whether operands of any two data types are compatible
(Yes) or not (No).

Table 1. Compatibility of Data Types

 Operands          Binary   Decimal  Floating  Character  Graphic  Date  Time  Time-
                   Integer  Number   Point     String     String               stamp
 ----------------------------------------------------------------------------------
 Binary Integer    Yes      Yes      Yes       No         No       No    No    No
 Decimal Number    Yes      Yes      Yes       No         No       No    No    No
 Floating Point    Yes      Yes      Yes       No         No       No    No    No
 Character String  No       No       No        Yes        No       *     *     *
 Graphic String    No       No       No        No         Yes      No    No    No
 Date              No       No       No        *          No       Yes   No    No
 Time              No       No       No        *          No       No    Yes   No
 Timestamp         No       No       No        *          No       No    No    Yes

Note: * The compatibility of datetime values is limited to assignment
and comparison.
You do not need to know the column names to select DB2 data. Use an
asterisk (*) in the SELECT clause to indicate "all columns" from each
selected row of the named table.
SELECT *
FROM DSN8310.DEPT;
The dashes for MGRNO in the fourth row of the result table indicate that
this value is null. It is null because a manager is not identified for
this department. Null values are described under "Selecting Rows with
Null Values" in topic 2.1.5.1.
SELECT * is recommended for dynamic SQL and view definitions. You can
use SELECT * in static SQL, but this is not recommended because columns
are sometimes added to the table to which SELECT * refers. If this
happens, the program might reference columns for which no receiving host
variables are defined. For a definition and description of host
variables, see "Accessing Data Using Host Variables and Host Structures"
in topic 3.1.4.

Select the column or columns you want by naming each column. All columns
appear in the order you specify, not in their order in the table.
000010 A00
000020 B01
000030 C01
------ D01
000050 E01
000060 D11
000070 D21
000090 E11
000100 E21
------ F22
------ G22
------ H22
------ I22
------ J22
Use a WHERE clause to select data from only the rows that meet certain
conditions. A WHERE clause specifies a search condition. A search
condition consists of one or more predicates. A predicate specifies a
test you want DB2 to apply to each table row.
Table 2. Comparison Operators Used in Conditions

 Type of comparison              Specified with...  Example
 -------------------------------------------------------------------------
 Equal to null                   IS NULL            PHONENO IS NULL
 Equal to                        =                  DEPTNO = 'X01'
 Not equal to                    <>                 DEPTNO <> 'X01'
 Less than                       <                  AVG(SALARY) < 30000
 Less than or equal to           <=                 AGE <= 25
 Not less than                   >=                 AGE >= 21
 Greater than                    >                  SALARY > 2000
 Greater than or equal to        >=                 SALARY >= 5000
 Not greater than                <=                 SALARY <= 5000
 Similar to another value        LIKE               NAME LIKE '%SMITH%' or
                                                    STATUS LIKE 'N_'
 At least one of two conditions  OR                 HIREDATE < '1965-01-01'
                                                    OR SALARY < 16000
 Both of two conditions          AND                HIREDATE < '1965-01-01'
                                                    AND SALARY < 16000
 Between two values              BETWEEN            SALARY BETWEEN 20000
                                                    AND 40000
 Equals a value in a set         IN (X, Y, Z)       DEPTNO IN ('B01',
                                                    'C01', 'D01')
You can also search for rows that do not satisfy one of the above
conditions, by using the NOT keyword before the specified condition. See
"Selecting Rows Using the NOT Keyword or Comparison Operators" in topic
2.1.5.4 for more information about using the NOT keyword.
A null value indicates the absence of a column value from a row. A null
value is not the same as zero or all blanks.
A WHERE clause can specify a column that, for some rows, contains a null
value. Normally, values from such a row are not retrieved, because a null
value is neither less than, equal to, nor greater than the value
specified in the condition. To select values from rows that contain null
values, specify a predicate of the form column-name IS NULL. You can
also use a predicate to screen out null values; specify column-name
IS NOT NULL.
Use the equal (=) comparison operator to select data only from the rows
in which the specified column contains a value equal to the value
specified. To select only the rows where the department number is A00,
use WHERE WORKDEPT = 'A00' in your SQL statement. The statement
retrieves the department number and the first and last name of each
employee in department A00.
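A statement of that form might look like this (a sketch; FIRSTNME and
LASTNAME are columns of the sample EMP table):

```sql
SELECT WORKDEPT, FIRSTNME, LASTNAME
  FROM DSN8310.EMP
  WHERE WORKDEPT = 'A00';
```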
The example retrieves the date hired and the name for each employee
hired before 1960.
Comparison operators can be used with string data, in which case the
System/370 (*) EBCDIC collating sequence is used: certain special
characters, then lowercase letters, then uppercase letters, then digits.
The EBCDIC collating sequence is different from the ASCII collating
sequence.
Use the NOT keyword or the comparison operators <>, >=, and <= to select
all rows except the rows identified with the search condition. (2) The
NOT keyword must precede the search condition. To select all managers
whose compensation is not greater than $30,000, use:

The NOT keyword cannot be used with the comparison operators. The
following WHERE clause results in an error:
You also can precede other SQL keywords with NOT: NOT LIKE, NOT IN, and
NOT BETWEEN are all acceptable. For example, the following two clauses
are equivalent:
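For instance, with NOT IN the following pair of clauses select the same
rows (an illustrative pair, not necessarily the book's own example):

```sql
-- NOT placed inside the predicate:
SELECT EMPNO FROM DSN8310.EMP WHERE WORKDEPT NOT IN ('A00');

-- NOT placed before the whole predicate:
SELECT EMPNO FROM DSN8310.EMP WHERE NOT WORKDEPT IN ('A00');
```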
(2) Although the not sign (^) can be used with comparison
operators, it is not recommended. See Chapter 3 of SQL
Reference for more information on basic predicates.
Use LIKE to specify a character string that is similar to the column
value of rows you want to select:

The following SQL statement selects data from each row for employees
with the initials "E. H."

The following SQL statement selects data from each row of the department
table where the department name contains "CENTER" anywhere in its name.
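The "CENTER" query described above might be coded like this (a sketch
using the sample DEPT table's DEPTNAME column):

```sql
-- '%' matches any string of zero or more characters, so the pattern
-- finds CENTER anywhere in the department name:
SELECT DEPTNO, DEPTNAME
  FROM DSN8310.DEPT
  WHERE DEPTNAME LIKE '%CENTER%';
```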
A pattern such as 'E%1' could be specified to select all department
numbers that begin with E and end with 1. If E1 is one of your department
numbers, it is not selected, because the blank that follows 1 is not
taken into account. If the DEPTNO column had been defined as a
three-character varying-length column, department E1 would have been
selected, because varying-length columns can have any number of
characters, up to and including the maximum number specified when the
column was created.
The following SQL statement selects data from each row of the department
table where the department number starts with an E and contains a 1.
'E_1' means "E, followed by any character, followed by 1." (Be careful:
'_' is an underscore character, not a hyphen.) If two-character
department numbers were also allowed, 'E_1' would select only
three-character department numbers that begin with E and end with 1.
The SQL statement below selects data from each row whose four-digit
phone number has the first three digits of 378.
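Assuming the sample EMP table's four-character PHONENO column, the
statement might be:

```sql
-- '_' matches exactly one character, so '378_' matches any four-digit
-- number beginning with 378:
SELECT LASTNAME, PHONENO
  FROM DSN8310.EMP
  WHERE PHONENO LIKE '378_';
```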
This example retrieves the employee number, date hired, and salary for
each employee hired before 1965 and having a salary of less than $16,000
per year.

This example retrieves the employee number, date hired, and salary for
each employee who either was hired before 1965, or has a salary less
than $16,000 per year, or both.
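The two retrievals described above might be coded as follows (sketches;
only the connector differs):

```sql
-- Hired before 1965 AND salary under $16,000:
SELECT EMPNO, HIREDATE, SALARY
  FROM DSN8310.EMP
  WHERE HIREDATE < '1965-01-01' AND SALARY < 16000;

-- Hired before 1965 OR salary under $16,000 (or both):
SELECT EMPNO, HIREDATE, SALARY
  FROM DSN8310.EMP
  WHERE HIREDATE < '1965-01-01' OR SALARY < 16000;
```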
If you use more than two conditions with AND or OR, you can use
parentheses to make sure the intended condition is interpreted correctly.
* The employee was hired before 1965 AND is paid less than $20,000.
* The employee's education level is less than 13.
Based on this WHERE clause, the selected rows are for employees 000290,
000310, and 200310.
With the parentheses moved, however, the meaning of the WHERE clause
changes significantly:
This clause selects the row of each employee that satisfies both of the
following conditions:
Based on this WHERE clause, two rows are selected: one for employee
000310 and one for employee 200310.
SELECT EMPNO
FROM DSN8310.EMP
WHERE (HIREDATE < '1965-01-01' AND SALARY < 20000)
OR (HIREDATE > '1965-01-01' AND SALARY > 20000);
When using the NOT condition with AND and OR, the placement of the
parentheses is important.
In this SQL statement, only the first predicate (SALARY > 50000) is
negated.
This SQL statement retrieves the employee number, education level, and
job title of each employee who satisfies both of the following
conditions:

This SQL statement retrieves the employee number, education level, and
job title of each employee who satisfies at least one of the following
conditions:
You can retrieve data from each row whose column has a value within two
limits; use BETWEEN.

Specify the lower boundary of the BETWEEN condition first, then the
upper boundary. The limits are inclusive. If you specify WHERE
column-name BETWEEN 6 AND 8 (and if the value of the column-name column
is an integer), DB2 selects all rows whose column-name value is 6, 7, or
8. If you specify a range from a larger number to a smaller number (for
example, BETWEEN 8 AND 6), the predicate is always false.
Example

The example retrieves the employee numbers and the salaries for all
employees who either earn less than $40,000 or more than $50,000.
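One way to code that retrieval is with NOT BETWEEN (a sketch):

```sql
-- NOT BETWEEN excludes the inclusive range 40000..50000:
SELECT EMPNO, SALARY
  FROM DSN8310.EMP
  WHERE SALARY NOT BETWEEN 40000 AND 50000;
```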
You can use the BETWEEN predicate when comparing floating point data in
your program with decimal data in the database to define a tolerance
around the two numbers being compared. Assume the following:

Possibly wrong:

SELECT *
FROM MYTABLE
WHERE :HOSTVAR = COL1;

The following embedded SQL statement uses a host variable, FUZZ, that
contains a tolerance factor.

Better:

SELECT *
FROM MYTABLE
WHERE :HOSTVAR BETWEEN (COL1 - :FUZZ) AND (COL1 + :FUZZ);
You can use the IN predicate to retrieve data from each row that has a
column value equal to one of several listed values.

In the values list after IN, the order of the items is not important and
does not affect the ordering of the result. Enclose the entire list in
parentheses, and separate items by commas; the blanks are optional.

Using the IN predicate gives the same results as a much longer set of
conditions separated by the OR keyword. For example, the WHERE clause in
the SELECT statement above could be coded:
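Spelled out with OR, the IN clause shown in Table 2 would read as
follows (the surrounding SELECT is illustrative):

```sql
-- Equivalent to WHERE DEPTNO IN ('B01', 'C01', 'D01'):
SELECT DEPTNO, DEPTNAME
  FROM DSN8310.DEPT
  WHERE DEPTNO = 'B01' OR DEPTNO = 'C01' OR DEPTNO = 'D01';
```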
The SQL statement below finds any sex code not properly entered.
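Such a statement might be written with NOT IN (a sketch; the sample EMP
table stores the sex codes 'F' and 'M'):

```sql
-- Any row whose SEX value is neither 'F' nor 'M' is a bad entry:
SELECT EMPNO, SEX
  FROM DSN8310.EMP
  WHERE SEX NOT IN ('F', 'M');
```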
You can concatenate strings by using the CONCAT keyword, which you can
use in any string expression: for example, to concatenate the last name,
a comma, and the first name of each result row. CONCAT is the preferred
keyword. See Chapter 3 of SQL Reference for more information on
expressions using the concatenation operator.
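The last-name, comma, first-name expression might be coded as (a sketch
using the sample EMP columns):

```sql
SELECT LASTNAME CONCAT ', ' CONCAT FIRSTNME
  FROM DSN8310.EMP;
```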
You can retrieve calculated values, just as you display column values, for
selected rows.
EMPNO

Column names are not shown because columns that are the result of
calculations are unnamed.

DB2 allows two sets of rules for determining the precision and scale of
the result of an operation with decimal numbers. For example, if A and B
are columns of a table with the data type DECIMAL, and if DEC31 rules
are in effect, then SUM(A) / SUM(B) always creates an error.

For another example, if DEC31 rules are in effect, this statement also
causes an error:

In that case, the SALARY column has the data type DECIMAL(9,2).
Converted to a decimal number, 52 has type DECIMAL(5,0). By DEC31
rules, SUM(SALARY) is DECIMAL(31,2) and the scale of the result is
30 - 5 - (31 - 2 + 0) = -4. The negative value of the scale produces an
error that is detected when the statement is bound, for a static
statement, or when it is prepared, for a dynamic statement.

What You Can Do: For static SQL statements, the simplest fix is to
override DEC31 rules by specifying the option DEC(15) when precompiling.
That reduces the probability of errors for statements embedded in the
program, but does nothing for dynamic statements that the program
prepares.

For a dynamic statement, or for a single static statement, use the
scalar function DECIMAL to specify values of the precision and scale of
a result that cause no errors. For example, write
DECIMAL(SUM(SALARY),21,2) / 52. In that example, the function DECIMAL
specifies that the precision and scale of the first operand are 21 and
2, regardless of DEC31 rules. For the division operation, those rules
then say that the scale of the result must be 30 - 5 - (21 - 2 + 0) = 6,
which causes no problem.
If you will be using dates, you should assign datetime data types to all
columns containing dates. This not only allows you to do more with your
table, but it can save you from problems like the following:

Suppose now that at the time the query is executed, one person
represented in YEMP is 27 years, 0 months, and 29 days old but is not
shown in the results. What happens is this:

If you have existing date data that is not stored in datetime columns,
you can use conversions to avoid errors. The following examples
illustrate a few conversion techniques, given the existing type of data:

   DATE(SUBSTR(DIGITS(C2),1,4) CONCAT '-' CONCAT
        SUBSTR(DIGITS(C2),5,2) CONCAT '-' CONCAT
        SUBSTR(DIGITS(C2),7,2))

* For DECIMAL(7,0) values of the form yyyynnn in column C2, use the
  DATE function as follows:

     DATE(DIGITS(C2))

* For DECIMAL(5,0) values of the form yynnn in column C3, you need to
  concatenate 19 to the front of the two-digit year using the DATE
  function as follows:
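One way to build that concatenation (a sketch; DIGITS(C3) yields the
five characters yynnn, so prefixing '19' forms a seven-character
yyyynnn string, which is a valid argument to DATE):

```sql
DATE('19' CONCAT DIGITS(C3))
```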
Two types of built-in functions are available for use with a SELECT
statement: column and scalar.

A column function produces a single value for a group of rows. You can
use the SQL column functions to calculate values based on entire columns
of data. The calculated values are based on selected rows only (all rows
that satisfy the WHERE clause).
The following SQL statement calculates, for department D11, the sum of
employee salaries; the minimum, average, and maximum salary; and the
count of employees in the department:

SELECT SUM(SALARY), MIN(SALARY), AVG(SALARY),
       MAX(SALARY), COUNT(*)
FROM DSN8310.EMP
WHERE WORKDEPT = 'D11';

276620.00  18270.00  25147.27272727  32250.00  11

Column names are not shown because columns that are the result of
functions are unnamed.
DISTINCT can be used with the SUM, AVG, and COUNT functions. DISTINCT
means that the selected function is performed on only the unique values
in a column. The specification of DISTINCT with the MAX and MIN
functions has no effect on the result and is not advised.

SUM and AVG can be applied only to numbers. MIN, MAX, and COUNT can be
applied to values of any type.
SELECT COUNT(*)
FROM DSN8310.EMP;
This SQL statement calculates the average education level of employees
in a set of departments.

SELECT AVG(EDLEVEL)
FROM DSN8310.EMP
WHERE WORKDEPT LIKE '_0_';
The SQL statement below counts the different jobs in the DSN8310.EMP
table.
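That count might be coded as (a sketch):

```sql
-- COUNT(DISTINCT ...) counts each distinct JOB value once:
SELECT COUNT(DISTINCT JOB)
  FROM DSN8310.EMP;
```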
The following SQL statement retrieves the year of hire for each employee
in department A00:

SELECT YEAR(HIREDATE)
FROM DSN8310.EMP
WHERE WORKDEPT = 'A00';
1972
1965
1965
1958
1963
No column name is shown because columns that are the result of functions
are unnamed.

The scalar function YEAR produces a single scalar value for each row of
DSN8310.EMP that satisfies the search condition. In this example, there
are five rows that satisfy the search condition, so YEAR is applied five
times, resulting in five scalar values.
Table 3 shows the scalar functions that can be used. For complete details
on using these functions see Chapter 4 of SQL Reference.
Table 3. Scalar Functions

 Scalar       Returns...                         Example
 Function
 --------------------------------------------------------------------------
 CHAR         a string representation of its     CHAR(HIREDATE)
              first argument.
 DATE         a date derived from its argument.  DATE('1989-03-02')
 DAY          the day part of its argument.      DAY(DATE1 - DATE2)
 DAYS         an integer representation of its   DAYS('1990-01-08') -
              argument.                          DAYS(HIREDATE) + 1
 DECIMAL      a decimal representation of a      DECIMAL(AVG(SALARY),8,2)
              numeric value.
 DIGITS       a character string representation  DIGITS(COLUMNX)
              of its argument.
 FLOAT        a floating-point representation    FLOAT(SALARY)/COMM
              of its argument.
 HEX          a hexadecimal representation of    HEX(BCHARCOL)
              its argument.
 HOUR         the hour part of its argument.     HOUR(TIMECOL) > 12
 INTEGER      an integer representation of its   INTEGER(AVG(SALARY)+.5)
              argument.
 LENGTH       the length of its argument.        LENGTH(ADDRESS)
 MICROSECOND  the microsecond part of its        MICROSECOND(TSTMPCOL) <> 0
              argument.
 MINUTE       the minute part of its argument.   MINUTE(TIMECOL) = 0
 MONTH        the month part of its argument.    MONTH(BIRTHDATE) = 5
 SECOND       the seconds part of its argument.  SECOND(RECEIVED)
 SUBSTR       a substring of a string.           SUBSTR(FIRSTNME,2,3)
 TIME         a time derived from its argument.  TIME(TSTMPCOL) < '13:00:00'
 TIMESTAMP    a timestamp derived from its       TIMESTAMP(DATECOL, TIMECOL)
              argument or arguments.
 VALUE        the first argument that is not     VALUE(SMLLINT1,100) +
              null.                              SMLLINT2 > 1000
 VARGRAPHIC   a graphic string from its          VARGRAPHIC(:MIXEDSTRING)
              argument.
 YEAR         the year part of its argument.     YEAR(BIRTHDATE) = 1956
2.1.9.2.1 Examples

* CHAR

SELECT CHAR(MAX(BIGDECIMAL))
INTO :BIGSTRING
FROM T;

SELECT CHAR(HIREDATE,USA)
FROM DSN8310.EMP
WHERE EMPNO = '000010';

returns 01/01/1965.

* DECIMAL

The DECIMAL function returns a decimal representation of a numeric
value. For example, DECIMAL can transform an integer value so that it
can be used as a duration. Assume that the host variable PERIOD is of
type INTEGER. The following example selects all of the starting dates
(PRSTDATE) from the DSN8310.PROJ table and adds to them a duration
specified in a host variable (PERIOD).

To use the integer value in PERIOD as a duration, it must first be
interpreted as DECIMAL(8,0). For example:

EXEC SQL
  SELECT PRSTDATE + DECIMAL(:PERIOD,8)
  FROM DSN8310.PROJ;
* VALUE

For example, you want to know the month and day of hire for a particular
employee in department E11, and you want the result in USA format.

06/19
28.0

The actual form of the above result depends on the definition of the
host variable to which the result is assigned (in this case,
DECIMAL(3,1)).

For example, you want to know the year in which the last employee in
department A00 was hired.

SELECT YEAR(MAX(HIREDATE))
FROM DSN8310.EMP
WHERE WORKDEPT = 'A00';
ORDER BY lets you specify the order in which rows are retrieved. The
order of the selected rows is based on the column identified in the
ORDER BY clause; this column is the ordering column.

The example retrieves data showing the seniority of employees. Rows are
shown in ascending order, based on each row's HIREDATE column value.
ASC is the default sorting order.

To put the rows in descending order, specify DESC. For example, to
retrieve the department numbers, last names, and employee numbers of
female employees in descending order of department numbers, use the
following SQL statement:
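That statement might be coded as (a sketch; SEX = 'F' selects the female
employees):

```sql
SELECT WORKDEPT, LASTNAME, EMPNO
  FROM DSN8310.EMP
  WHERE SEX = 'F'
  ORDER BY WORKDEPT DESC;
```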
Rows are sorted in the EBCDIC collating sequence. To order the rows by
more than one column's value, use more than one column name in the
ORDER BY clause.
When several rows have the same first ordering column value, those rows
are in an order based on the second column identified in the ORDER BY
clause, and then on the third ordering column, and so on. For example,
there is a difference between the results of the following two SELECT
statements. The first one orders selected rows by job and next by
education level. The second SELECT statement orders selected rows by
education level and next by job.
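The two orderings might be coded as follows (sketches; the selected
columns and the WORKDEPT = 'E21' restriction are inferred from the
result listings below, not stated in the text):

```sql
-- Ordered by job, then education level:
SELECT JOB, EDLEVEL, LASTNAME
  FROM DSN8310.EMP
  WHERE WORKDEPT = 'E21'
  ORDER BY JOB, EDLEVEL;

-- Ordered by education level, then job:
SELECT JOB, EDLEVEL, LASTNAME
  FROM DSN8310.EMP
  WHERE WORKDEPT = 'E21'
  ORDER BY EDLEVEL, JOB;
```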
FIELDREP 14 LEE
FIELDREP 14 WONG
FIELDREP 16 GOUNOT
FIELDREP 16 ALONZO
FIELDREP 16 MEHTA
MANAGER 14 SPENSER
FIELDREP 14 LEE
FIELDREP 14 WONG
MANAGER 14 SPENSER
FIELDREP 16 MEHTA
FIELDREP 16 GOUNOT
FIELDREP 16 ALONZO
You can use the column number even if the ordering column has a name.
The following SQL statement lists names and phone numbers of all
employees in ascending order by LASTNAME.
The DISTINCT keyword removes duplicate rows from your result. Each row
contains unique data.

The following SELECT statement lists the department numbers of the
administrating departments:
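The statement producing the de-duplicated list below might be (a
sketch):

```sql
SELECT DISTINCT ADMRDEPT
  FROM DSN8310.DEPT;
```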
ADMRDEPT
A00
D01
E01
SELECT ADMRDEPT
FROM DSN8310.DEPT;
ADMRDEPT
A00
A00
A00
A00
A00
D01
D01
E01
E01
E01
E01
E01
E01
E01
When the DISTINCT keyword is omitted, the ADMRDEPT column value of each
selected row is returned, even though the result includes several
duplicate rows.
Except for the grouping columns (the columns named in the GROUP BY
clause), any other column value that is selected must be specified as an
operand of one of the column functions.
The following SQL statement lists, for each department, the lowest and
highest education level within that department.
You can use ORDER BY to specify the order in which rows are retrieved.
GROUP BY accumulates column values from the selected rows by group, but
does not order the groups.

The following SQL statement calculates the average salary for each
department:
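A statement of that form might be (a sketch):

```sql
-- One result row per WORKDEPT group:
SELECT WORKDEPT, AVG(SALARY)
  FROM DSN8310.EMP
  GROUP BY WORKDEPT;
```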
A00 40850.00000000
B01 41250.00000000
C01 29722.50000000
D11 25147.27272727
D21 25668.57142857
E01 40175.00000000
E11 21020.00000000
E21 24086.66666666
You can also specify that you want the rows grouped by more than one
column. For example, you can find the average salary for men and women
in departments A00 and C01 with the following SQL statement:
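That statement might be coded as (a sketch; the WHERE clause limits the
result to the two departments shown below):

```sql
SELECT WORKDEPT, SEX, AVG(SALARY)
  FROM DSN8310.EMP
  WHERE WORKDEPT IN ('A00', 'C01')
  GROUP BY WORKDEPT, SEX;
```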
WORKDEPT  SEX
A00 F 49625.00000000
A00 M 35000.00000000
C01 F 29722.50000000
The rows are grouped first by department number and next (within each
department) by sex before DB2 derives the average SALARY value for each
group.

If there are null values in the column you specify in the GROUP BY
clause, DB2 considers those values equal and returns a single-row
result summarizing the data in those rows with null values.

When it is used, the GROUP BY clause follows the FROM clause and any
WHERE clause, and precedes the ORDER BY clause.
The SQL statement below lists, for each job, the number of employees
with that job and their average salary.

WORKDEPT
A00 40850.00000000
C01 29722.50000000
D11 25147.27272727
D21 25668.57142857
E11 21020.00000000
E21 24086.66666666
The HAVING clause tests a property of the group. To specify that you
want the average salary and minimum education level of women in each
department in which all female employees have an education level greater
than or equal to 16, you can use the HAVING clause. Assuming you are
only interested in departments A00 and D11, the following SQL statement
tests the group property, MIN(EDLEVEL):
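Such a statement might be written as (a sketch reconstructed from the
description above):

```sql
-- Keep only groups in which every female employee has EDLEVEL >= 16:
SELECT WORKDEPT, AVG(SALARY), MIN(EDLEVEL)
  FROM DSN8310.EMP
  WHERE SEX = 'F' AND WORKDEPT IN ('A00', 'D11')
  GROUP BY WORKDEPT
  HAVING MIN(EDLEVEL) >= 16;
```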
WORKDEPT
A00 49625.00000000 18
D11 25817.50000000 17
When GROUP BY and HAVING are both specified, the HAVING clause must
follow the GROUP BY clause. A function in a HAVING clause can include
DISTINCT if you have not used DISTINCT anywhere else in the same SELECT
statement.

You can also use multiple predicates in a HAVING clause by connecting
them with AND and OR, and you can use HAVING NOT for any predicate of a
search condition.
Data from two or more tables can be combined, or joined. To join data,
you need to do the following:
The search condition for the join is the link between the tables that
restricts the selection of rows. For example, the search condition could
identify two columns that must be equal (one from each of the tables
being
joined) in order for the rows to be joined and included in the result.
Each row of the result table contains data that has been joined from both
tables (for rows that satisfy the search condition). Each column of the
result table contains data from one, but not both, of the tables.
If you do not specify a search condition in the WHERE clause, the result
table contains all possible combinations of rows for the tables identified
in the FROM clause. If this happens, the number of rows in the result
table is equal to the product of the number of rows in each table
specified.
The following SELECT statement example displays data retrieved from two
tables. The WHERE clause makes the connection between the two tables:
column values are selected only from those rows where MGRNO (in
DSN8310.DEPT) equals EMPNO (in DSN8310.EMP).
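A miniature version of that join can be run outside DB2. This sketch uses
SQLite via Python as a stand-in; the table and column names follow the
sample tables, but the data here is invented, and the join predicate is
exactly the MGRNO = EMPNO link described above.

```python
import sqlite3

# Illustration only: SQLite stand-in; data invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DEPT (DEPTNO TEXT, DEPTNAME TEXT, MGRNO TEXT)")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, LASTNAME TEXT)")
conn.execute("INSERT INTO DEPT VALUES"
             " ('A00', 'SPIFFY COMPUTER SERVICE DIV.', '000010')")
conn.execute("INSERT INTO EMP VALUES ('000010', 'HAAS')")
conn.execute("INSERT INTO EMP VALUES ('000020', 'THOMPSON')")  # not a manager
# Rows are joined only where the manager number matches an employee number.
rows = conn.execute("""
    SELECT D.DEPTNO, D.DEPTNAME, E.LASTNAME
      FROM DEPT D, EMP E
     WHERE D.MGRNO = E.EMPNO""").fetchall()
print(rows)
```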
Using the UNION keyword, you can combine two or more SELECT statements to
form a single result table. When DB2 encounters the UNION keyword, it
processes each SELECT statement to form an interim result table, and then
combines the interim result table of each statement. You use UNION to
merge lists of values from two or more tables. You can use any of the
clauses and techniques you have learned so far when coding SELECT
statements, including ORDER BY.

SELECT EMPNO
  FROM DSN8310.EMP
  WHERE WORKDEPT = 'D11'
UNION
SELECT EMPNO
  FROM DSN8310.EMPPROJACT
  WHERE PROJNO = 'MA2112' OR
        PROJNO = 'MA2113' OR
        PROJNO = 'AD3111'
ORDER BY 1;
000060
000150
000160
000170
000180
000190
000200
000210
000220
000230
000240
200170
200220
Any ORDER BY clause must appear after the last SELECT statement that is
part of the union. In this example, the sequence of the results is based
on the first column of the result table. ORDER BY specifies the order of
the final result table.

To specify the columns by which DB2 is to order the results, use numbers
(in a union, you cannot use column names for this). The number refers to
the position of the column in the result table.
SELECT EMPNO
  FROM DSN8310.EMP
  WHERE WORKDEPT = 'D11'
UNION ALL
SELECT EMPNO
  FROM DSN8310.EMPPROJACT
  WHERE PROJNO = 'MA2112' OR
        PROJNO = 'MA2113' OR
        PROJNO = 'AD3111'
ORDER BY 1;
000060
000150
000150
000150
000160
000160
000170
000170
000170
000170
000180
000180
000190
000190
000190
000200
000210
000210
000210
000220
000230
000230
000230
000230
000230
000240
000240
200170
200220
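The contrast between the two result lists above (UNION eliminates
duplicate rows; UNION ALL keeps them) can be reproduced in miniature.
This sketch uses SQLite via Python as a stand-in for DB2, with a small
invented subset of the employee and project data.

```python
import sqlite3

# Illustration only: SQLite stand-in; data invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, WORKDEPT TEXT)")
conn.execute("CREATE TABLE EMPPROJACT (EMPNO TEXT, PROJNO TEXT)")
conn.executemany("INSERT INTO EMP VALUES (?, ?)",
                 [("000060", "D11"), ("000150", "D11")])
conn.executemany("INSERT INTO EMPPROJACT VALUES (?, ?)",
                 [("000150", "MA2112"), ("000150", "MA2113")])
union = conn.execute("""
    SELECT EMPNO FROM EMP WHERE WORKDEPT = 'D11'
    UNION
    SELECT EMPNO FROM EMPPROJACT
     WHERE PROJNO = 'MA2112' OR PROJNO = 'MA2113'
    ORDER BY 1""").fetchall()
union_all = conn.execute("""
    SELECT EMPNO FROM EMP WHERE WORKDEPT = 'D11'
    UNION ALL
    SELECT EMPNO FROM EMPPROJACT
     WHERE PROJNO = 'MA2112' OR PROJNO = 'MA2113'
    ORDER BY 1""").fetchall()
print(union)      # 000150 appears once
print(union_all)  # 000150 appears three times
```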
Suppose that you want the name, employee number, and department ID of
every employee whose last name ends in "son" in table DSN8310.EMP at
location DALLAS. If you have the appropriate authority, you could run the
following query:

Assume now that the alias SMITH.DALEMP has been defined at your local
subsystem for the table DALLAS.DSN8310.EMP. You could then substitute the
alias for this table name in the previous query. For more on aliases and
their creation, see the description of CREATE ALIAS in SQL Reference.
2.1.18 Using Special Registers
Table 4. Contents of Special Registers for Locally and Remotely Executed
SQL Statements

CURRENT DATE
   Contains: The current date.
   Set by: System clock and MVS TIMEZONE parameter.

CURRENT DEGREE
   Contains: The degree to which parallel I/O processing is used when
   executing dynamic SQL queries. See "Parallel I/O Operations and Query
   Performance" in topic 5.2.5.
   Set by: Connection or sign-on (DEGREE option of BIND PLAN), package
   execution (DEGREE option of BIND PACKAGE), or the SET CURRENT DEGREE
   statement.

CURRENT PACKAGESET
   Contains: The name of a collection of packages to be searched. See
   "Identifying Packages at Run Time" in topic 4.2.4.3.
   Set by: SET CURRENT PACKAGESET statement.

CURRENT SERVER
   Contains: The name of the current application server that executes SQL
   statements; see "Using Application-Directed Access" in topic 4.1.4.4.2.
   Set by: Connection or sign-on (local DB2 or CURRENTSERVER option of
   BIND PLAN), COMMIT after RELEASE (blanks), CONNECT or SET CONNECTION
   statement.

CURRENT SQLID
   Contains: The authorization ID of the process.
   Set by: Connection, sign-on, or SET CURRENT SQLID statement.

CURRENT TIME
   Contains: The current time.
   Set by: System clock and MVS TIMEZONE parameter.

CURRENT TIMESTAMP
   Contains: The current date and time, in the form of a timestamp.
   Set by: System clock and MVS TIMEZONE parameter.

CURRENT TIMEZONE
   Contains: The difference between current time and Greenwich Mean Time
   (GMT).
   Set by: MVS TIMEZONE parameter.

USER
   Contains: The primary authorization ID of the process. (This might be
   a derived value for remotely executed SQL statements.)
   Set by: Connection or sign-on.
Use the CREATE TABLE statement to create a table. The following SQL
statement creates a table named PRODUCT:

* A list of the columns that make up the table. For each column, specify:

  - The data type and length attribute (for example, CHAR(8)). For
    further information about data types, see "Data Types" in
    topic 2.1.2.

  - NOT NULL, when the column cannot contain null values and does not
    have a default value.

  You must separate each column description from the next with a comma
  and enclose the entire list of column descriptions in parentheses.
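The PRODUCT statement itself did not survive in this copy, but a sketch
of a CREATE TABLE of the shape described above follows. The column names
and types here are invented for illustration; the statement is run
against SQLite via Python as a stand-in, which accepts the same basic
form (a parenthesized, comma-separated column list with data types and
optional NOT NULL).

```python
import sqlite3

# Illustration only: SQLite stand-in; column names and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE PRODUCT
      (SERIAL   CHAR(8)      NOT NULL,
       DESCRIPT VARCHAR(60),
       MFGCOST  DECIMAL(8,2),
       MFGDEPT  CHAR(3))""")
conn.execute("INSERT INTO PRODUCT VALUES ('10020403', 'WIDGET', 5.95, 'D11')")
rows = conn.execute("SELECT SERIAL, MFGDEPT FROM PRODUCT").fetchall()
print(rows)
```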
Before executing sample SQL statements that insert, update, and delete
rows, you probably want to create work tables (duplicates of the
DSN8310.EMP and DSN8310.DEPT tables) that you can practice with so the
original sample tables remain intact. This section shows how to create
two work tables and how to fill a work table with the contents of another
table.

Each example shown in this chapter assumes you are logged on using your
own authorization ID. The authorization ID qualifies the name of each
object you create. For example, if your authorization ID is SMITH, and
you create table YDEPT, the name of the table is SMITH.YDEPT. If you want
to access table DSN8310.DEPT, you must refer to it by its complete name.
If you want to access your own table YDEPT, you need only to refer to it
as "YDEPT".
You must use two statements to create YDEPT and its index as shown above.
If you want DEPTNO to be a primary key as in the sample table, you must
explicitly define the key. Use an ALTER TABLE statement:

You can use the following statements to create and fill a new employee
table called YEMP. The CREATE TABLE statement can create tables with
primary keys or foreign keys in order to establish referential
constraints.

Use the CREATE VIEW statement to define a view and give the view a name,
just as you do for a table. This view adds each department manager's name
to the department data in the DSN8310.DEPT table.

When a program accesses the data defined by a view, DB2 uses the view
definition to return a set of rows the program can access with SQL
statements. Now that the view VDEPTM exists, you can manipulate data by
means of it. To see the departments administered by department D01 and
the managers of those departments, execute the following statement:
You can use views to limit access to certain kinds of data, such as
salary information. Views can also be used to do the following:

* Combine data from two or more tables and make the combined data
  available to an application. By using a SELECT statement that matches
  values in one table with those in another table, you can create a view
  that presents data from both tables. However, data defined by this
  type of view can only be selected. You cannot update, delete, or
  insert data into a view that joins two tables.

* The owner of the plan or package that contains the program must be
  authorized to update, delete, or insert rows into the view. You are so
  authorized if you have created that view or have privileges for the
  table on which the view is based. Otherwise, to bind the program you
  have to obtain authorization through a GRANT statement.

* When inserting a row into a table (via a view), the row must have a
  value for each column of the table that does not have a default value.
  If a column in the table on which the view is based is not specified in
  the view's definition, and if the column does not have a default value,
  you cannot insert rows into the table via the view.
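A miniature view of the VDEPTM kind described above can be built and
queried as follows. This sketch uses SQLite via Python as a stand-in for
DB2; the exact column list of the original VDEPTM definition is not
reproduced here, and the data is invented.

```python
import sqlite3

# Illustration only: SQLite stand-in; data invented, column list assumed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DEPT (DEPTNO TEXT, DEPTNAME TEXT,"
             " MGRNO TEXT, ADMRDEPT TEXT)")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, LASTNAME TEXT)")
conn.execute("INSERT INTO DEPT VALUES"
             " ('D11', 'MANUFACTURING SYSTEMS', '000060', 'D01')")
conn.execute("INSERT INTO EMP VALUES ('000060', 'STERN')")
# The view joins each department row with its manager's name.
conn.execute("""
    CREATE VIEW VDEPTM AS
      SELECT D.DEPTNO, D.DEPTNAME, E.LASTNAME AS MGRNAME, D.ADMRDEPT
        FROM DEPT D, EMP E
       WHERE D.MGRNO = E.EMPNO""")
rows = conn.execute(
    "SELECT DEPTNO, MGRNAME FROM VDEPTM WHERE ADMRDEPT = 'D01'").fetchall()
print(rows)
```

Because this view joins two tables, rows can only be selected through it,
not inserted, updated, or deleted, as the first bullet above notes.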
In either case, for every row you insert, you must provide a value for
any column that does not have a default value.

You can name all columns for which you are providing values.
Alternatively, you can omit the column name list; when the program is
bound, DB2 inserts the name of each column of the table or view into the
column name list.

By naming the columns, you need not list the values based on their
position in the table. When you list the column names, supply their
corresponding values in the same order as the listed column names. It is
a good idea to name all columns into which you are inserting values
because:
For example,
SELECT *
FROM YDEPT
WHERE DEPTNO LIKE 'E%'
ORDER BY DEPTNO;
shows you all the new department rows that you have inserted:
* You can copy one table into another, as explained in "Filling a Table
  from Another Table: Mass INSERT" in topic 2.2.2.1.3.

* You can use the DB2 LOAD utility to enter data from other sources. See
  Command and Utility Reference for more information about the LOAD
  utility.

* If the primary index does not currently exist, then define a unique
  index on the primary key.

* Do not insert a null value for any column of the primary key.

* Each non-null value you insert into a foreign key column must be equal
  to some value in the primary key (the primary key is in the parent
  table).

* If any field in the foreign key is null, the entire foreign key is
  considered null.

* If the index enforcing the primary key of the parent table has been
  dropped, the INSERT into either the parent table or dependent table
  fails.
For example, the sample application project table (PROJ) has foreign keys
on the department number (DEPTNO), referencing the department table, and
the employee number (RESPEMP), referencing the employee table. Every row
inserted into the project table must have a value of RESPEMP that is
either equal to some value of EMPNO in the employee table or is null. The
row must also have a value of DEPTNO that is equal to some value of
DEPTNO in the department table. (The null value is not allowed because
DEPTNO in the project table is defined as NOT NULL.)
The following statement also inserts a row into the YEMP table. However,
several column values are not specified. Because the unspecified columns
allow it, null values are inserted into columns not named: PHONENO,
EDLEVEL, SEX, BIRTHDATE, SALARY, BONUS, COMM, and HIREDATE. Since YEMP
has a foreign key WORKDEPT referencing the primary key DEPTNO in YDEPT,
the value being inserted for WORKDEPT (D11) must be a value of DEPTNO in
YDEPT.
INSERT INTO YEMP
(EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, JOB)
VALUES ('000410', 'MILLARD', 'K', 'FILLMORE', 'D11',
'MANAGER');
This statement copies data from DSN8310.EMP into the newly created table:

The two previous statements create and fill a table, TELE, that looks
like this:

The INSERT statement fills the newly created table with data selected
from the DSN8310.EMP table: the names and phone numbers of employees in
Department D21. (The SELECT statement within the INSERT statement
specifies the data you want selected from one table to be inserted into
another table.)
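A mass INSERT of this kind can be sketched as follows. This uses SQLite
via Python as a stand-in for DB2; the TELE and EMP column names follow
the text, but the data is invented.

```python
import sqlite3

# Illustration only: SQLite stand-in; data invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (FIRSTNME TEXT, LASTNAME TEXT,"
             " PHONENO TEXT, WORKDEPT TEXT)")
conn.executemany("INSERT INTO EMP VALUES (?, ?, ?, ?)", [
    ("EVA",  "PULASKI",   "7831", "D21"),
    ("SEAN", "O'CONNELL", "4578", "A00"),   # not in D21: not copied
])
conn.execute("CREATE TABLE TELE (FIRSTNME TEXT, LASTNAME TEXT, PHONENO TEXT)")
# The nested SELECT supplies the rows to be inserted into TELE.
conn.execute("""
    INSERT INTO TELE
      SELECT FIRSTNME, LASTNAME, PHONENO
        FROM EMP
       WHERE WORKDEPT = 'D21'""")
rows = conn.execute("SELECT * FROM TELE").fetchall()
print(rows)
```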
The following SQL statements create a table with character and graphic
columns and then insert mixed data strings into the character column and
graphic data strings into the graphic column:

PL/I uses a different format for graphic data. See Chapter 3 of SQL
Reference for information on graphic string constants for PL/I.
To change the data in a table, use the UPDATE statement. You can also use
the UPDATE statement to delete a value from a row's column (without
removing the row) by changing the column's value to NULL.

UPDATE YEMP
  SET JOB = 'MANAGER ',
      PHONENO = '5678'
  WHERE EMPNO = '000400';

The SET clause names the columns you want updated and provides the values
you want them changed to. The value you specify can be:

A column name. Replace the column's current value with the contents of
another column in the same row.

A null value. Replace the column's current value with a null value. The
column must have been defined as capable of containing a null value when
the table was created or when the column was added, or an error occurs.

A host variable. Replace the column's current value with the contents of
the host variable.
* CURRENT DATE
* CURRENT DEGREE
* CURRENT PACKAGESET
* CURRENT SERVER
* CURRENT SQLID
* CURRENT TIME
* CURRENT TIMESTAMP
* CURRENT TIMEZONE
* USER
An expression. Replace the column's current value with the value that
results from an expression.

* To update several rows, use a WHERE clause that locates only the rows
  you want to update.

If you omit the WHERE clause, DB2 updates every row in the table or view
with the values you supply.

UPDATE YEMP
  SET MIDINIT = 'H', JOB = 'FIELDREP'
  WHERE EMPNO = '000200';

UPDATE YEMP
  SET SALARY = SALARY + 400.00
  WHERE WORKDEPT = 'D11';

If you are updating a dependent table, any non-null foreign key values
that you enter must match the primary key for each relationship in which
the table is a dependent. For example, department numbers in the employee
table depend on the department numbers in the department table; you can
assign no department to an employee, but you cannot assign an employee to
a department that does not exist.
You can use the DELETE statement to remove entire rows from a table. The
DELETE statement removes zero or more rows of a table, depending on how
many rows satisfy the search condition you specified in the WHERE clause.
If you omit a WHERE clause from a DELETE statement, DB2 removes all the
rows from the table or view you have named. The DELETE statement does not
remove specific columns from the row.

This DELETE statement deletes each row in the YEMP table that has an
employee number of 000060. When this statement is executed, DB2 deletes
any row from the YEMP table that meets the search condition.
Delete Rules:
DELETE RESTRICT
The row can be deleted only if no other row depends on it. If a
dependent row exists in the relationship, the DELETE fails.
For example, you cannot delete a department from the department table
if it is still responsible for some project, which is described by a
dependent row in the project table.
DELETE CASCADE
First the named rows are deleted, then the dependent rows are deleted,
honoring the delete rules of their dependents.
For example, you can delete a department by deleting its row in the
department table; that also deletes the rows for all departments that
report to it, all departments that report to them, and so forth.
deletes every row in the YDEPT table. If the statement is executed, the
table continues to exist (that is, you can insert rows into it) but it is
empty. All existing views and authorizations on the table remain intact
when you use DELETE. If you use DROP, all views and authorizations are
dropped, which can invalidate plans and packages. Refer to "Dropping
Tables: DROP" in topic 2.2.3 for a description of the DROP statement.
Use the DROP TABLE statement with care: When a table is dropped, it loses
its data as well as its definition. When you drop a table, all synonyms,
views, indexes, and referential constraints associated with that table
are also dropped. All authorities granted on the table are lost.
Similarly, a view can be dropped using the DROP VIEW statement. This does
not cause a loss of data in the table on which the view was created.

With the proper authorization, you can also drop a database. When a
database is dropped, all tables, views, indexes, and referential
constraints defined on that database are also dropped.

DROP TABLE drops all constraints in which the table is a parent or
dependent. When a table is dropped, all indexes on the table (including
the primary index) are implicitly dropped. If a primary index is
explicitly dropped, the definition of its table is changed to incomplete
and an SQL warning message is issued. If a table is defined as
incomplete, programs cannot use the table.

Dropping a table is NOT equivalent to deleting all its rows. Instead,
when you drop a table you also drop all the relationships in which the
table participates, either as parent or dependent. This can affect
application programs that depend on the existence of a parent table, so
use DROP carefully.
But you cannot go further because the DSN8310.EMP table does not include
project number data. You do not know which employees are working on
project MA2111 without issuing another SELECT statement against the
DSN8310.EMPPROJACT table.

You can nest one SELECT statement within another to solve this problem.
The inner SELECT statement is called a subquery. The SELECT statement
surrounding the subquery is called the outer SELECT.

To better understand what results from this SQL statement, imagine that
DB2 goes through the following process:

   .
   .
   .
   (SELECT EMPNO
      FROM DSN8310.EMPPROJACT
      WHERE PROJNO = 'MA2111');

(from DSN8310.EMPPROJACT)
EMPNO
------
000200
000220

2. The interim result table then serves as a list in the search condition
   of the outer SELECT. Effectively, DB2 executes this statement:
2.3.1.1 Correlation

A case in point is the previous query. Its content is the same for every
row of the table DSN8310.EMP. Subqueries like this are said to be
uncorrelated.

Some subqueries do vary in content from row to row or group to group. The
mechanism that allows this is called correlation, and the subqueries are
said to be correlated. Correlated subqueries are described in topic
2.3.3. All of the information described preceding that section applies to
both correlated and uncorrelated subqueries.
The result table produced by a subquery can have zero or more rows. For
some usages, no more than one row is allowed.

* Basic predicate
* Quantified predicates: ALL, ANY, and SOME
* Using the IN keyword
* Using the EXISTS keyword
* Correlated subqueries

For example, the following SQL statement returns the employee numbers,
names, and salaries for employees whose education level is higher than
the average company-wide education level.
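The statement itself did not survive in this copy of the text, but an
uncorrelated subquery of that shape can be sketched as follows. This uses
SQLite via Python as a stand-in for DB2 with invented data; the inner
SELECT computes one company-wide average and is evaluated only once.

```python
import sqlite3

# Illustration only: SQLite stand-in; data invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, LASTNAME TEXT, EDLEVEL INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?, ?, ?)", [
    ("000010", "HAAS", 18), ("000290", "PARKER", 12), ("000310", "SETRIGHT", 12),
])
# Company-wide average education level here is 14; only HAAS exceeds it.
rows = conn.execute("""
    SELECT EMPNO, LASTNAME
      FROM EMP
     WHERE EDLEVEL >
           (SELECT AVG(EDLEVEL) FROM EMP)""").fetchall()
print(rows)
```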
* Use ALL to indicate that the value you have supplied must compare in
  the indicated way to all the values the subquery returns. For example,
  suppose you use the greater-than comparison operator with ALL:

     .
     .
     .
     WHERE expression > ALL (subquery)

* Use ANY or SOME to indicate that the value you have supplied must
  compare in the indicated way to at least one of the values the subquery
  returns. For example, suppose you use the greater-than comparison
  operator with ANY:

     .
     .
     .
     WHERE expression > ANY (subquery)

The results when a subquery returns one or more null values could in some
cases surprise you. For applicable rules, read the description of
quantified predicates in Chapter 3 of SQL Reference.

You can use IN to say that the value in the expression must be among the
values returned by the subquery. Using IN is equivalent to using "= ANY"
or "= SOME."
In the subqueries presented thus far, DB2 evaluates the subquery and uses
the result as part of the WHERE clause of the outer SELECT. In contrast,
when you use the keyword EXISTS, DB2 simply checks whether the subquery
returns one or more rows. If it does, the condition is satisfied; if it
does not (if it returns no rows), the condition is not satisfied. For
example:

SELECT EMPNO, LASTNAME
  FROM DSN8310.EMP
  WHERE EXISTS
    (SELECT *
       FROM DSN8310.PROJ
       WHERE PRSTDATE > '1986-01-01');

As shown in the example, you do not need to specify column names in the
subquery of an EXISTS clause. Instead, you can code SELECT *. You can
also use the EXISTS keyword with the NOT keyword in order to select rows
when the data or condition you specify does not exist; that is, you can
code
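The EXISTS and NOT EXISTS behavior can be demonstrated in miniature. This
sketch uses SQLite via Python as a stand-in for DB2 with invented data;
note that the subquery's rows are never returned, they are only tested
for existence.

```python
import sqlite3

# Illustration only: SQLite stand-in; data invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, LASTNAME TEXT)")
conn.execute("CREATE TABLE PROJ (PROJNO TEXT, PRSTDATE TEXT)")
conn.execute("INSERT INTO EMP VALUES ('000010', 'HAAS')")
conn.execute("INSERT INTO PROJ VALUES ('AD3100', '1988-01-01')")
exists_rows = conn.execute("""
    SELECT EMPNO, LASTNAME FROM EMP
     WHERE EXISTS (SELECT * FROM PROJ
                    WHERE PRSTDATE > '1986-01-01')""").fetchall()
not_exists_rows = conn.execute("""
    SELECT EMPNO, LASTNAME FROM EMP
     WHERE NOT EXISTS (SELECT * FROM PROJ
                        WHERE PRSTDATE > '1986-01-01')""").fetchall()
print(exists_rows)      # the subquery returns a row, so EXISTS is satisfied
print(not_exists_rows)  # NOT EXISTS is therefore not satisfied: empty
```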
Suppose that you want a list of all the employees whose education levels
are higher than the average education levels in their respective
departments. To get this information, DB2 must search the DSN8310.EMP
table. For each employee in the table, DB2 needs to compare the
employee's education level to the average education level for the
employee's department.

In the subquery, you tell DB2 to compute the average education level for
the department number in the current row. A query that does this follows:

Consider what happens when the subquery is executed for a given row of
DSN8310.EMP. Before it is executed, the occurrence of X.WORKDEPT is
replaced with the value of the WORKDEPT column for that row. Suppose, for
example, that the row is for CHRISTINE HAAS. Her work department is A00,
which is the value of WORKDEPT for this row. The subquery executed for
this row is therefore:

(SELECT AVG(EDLEVEL)
   FROM DSN8310.EMP
   WHERE WORKDEPT = 'A00');

Thus, for the row considered, the subquery produces the average education
level of Christine's department. This is then compared in the outer
statement to Christine's own education level. For some other row for
which WORKDEPT has a different value, that value appears in the subquery
in place of A00. For example, for the row for MICHAEL L THOMPSON, this
value is B01, and the subquery for his row delivers the average education
level for department B01.
The result table produced by the query has the following values:

(from DSN8310.EMP)
        EMPNO   LASTNAME    WORKDEPT  EDLEVEL
Fetch   -------------------------------------
1 -->   000010  HAAS        A00       18
2 -->   000030  KWAN        C01       20
3 -->   000090  HENDERSON   E11       16
4 -->   000110  LUCCHESI    A00       19
          *        *           *         *
          *        *           *         *
          *        *           *         *
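A correlated subquery of the kind just described can be run in miniature.
This sketch uses SQLite via Python as a stand-in for DB2 with invented
data confined to one department; X.WORKDEPT in the inner SELECT is
replaced by the department of each outer row, so the average is computed
per department rather than company-wide.

```python
import sqlite3

# Illustration only: SQLite stand-in; data invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, LASTNAME TEXT,"
             " WORKDEPT TEXT, EDLEVEL INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?, ?, ?, ?)", [
    ("000010", "HAAS",      "A00", 18),
    ("000110", "LUCCHESI",  "A00", 19),
    ("000120", "O'CONNELL", "A00", 14),
])
# Department A00's average education level is 17; two rows exceed it.
rows = conn.execute("""
    SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL
      FROM EMP X
     WHERE EDLEVEL >
           (SELECT AVG(EDLEVEL)
              FROM EMP
             WHERE WORKDEPT = X.WORKDEPT)""").fetchall()
print(rows)
```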
The correlation name is defined in the FROM clause of some query. This
query could be the outer-level SELECT, or any of the subqueries that
contain the reference. Suppose, for example, that a query contains
subqueries A, B, and C, and that A contains B and B contains C. Then a
correlation name used in C could be defined in B, A, or the outer SELECT.

You can define a correlation name for each table name appearing in a FROM
clause. Simply append the correlation name after its table name. Leave
one or more blanks between a table name and its correlation name, and
place a comma after the correlation name if it is followed by another
table name. The following FROM clause, for example, defines the
correlation names TA and TB for the tables TABLEA and TABLEB, and no
correlation name for the table TABLEC.
UPDATE DSN8310.PROJ X
  SET PRIORITY = 1
  WHERE DATE('1995-09-01') >
        (SELECT MAX(ACENDATE)
           FROM DSN8310.PROJACT
           WHERE PROJNO = X.PROJNO);
The result of an operation must not depend on the order in which rows of
a table are accessed. If this statement could be executed, its result
would depend on whether the row for any department was accessed before or
after deleting the rows for the departments it administers. Hence, the
operation is prohibited.

With the referential constraints defined for the sample tables, the
statement causes an error. The delete operation involves the table
referred to in the subquery (DSN8310.EMP is a dependent of DSN8310.DEPT)
and the last delete rule in the path to EMP is SET NULL, not RESTRICT. If
the statement could be executed, its results would again depend on the
order in which rows were accessed.
2.4 Chapter 2-4. Using SPUFI: Executing SQL from Your Terminal

This chapter explains how to enter and execute SQL statements at a TSO
terminal using the SPUFI (SQL processor using file input) facility. You
can execute most of the interactive SQL examples shown in "Section 2.
Using SQL Queries" by following the instructions provided in this chapter
and using the sample tables shown in Appendix A, "DB2 Sample Tables" in
topic A.0.
Data you enter on the SPUFI panel tells DB2 how to process your input
data set, and where to send the output.

When the SPUFI panel is first displayed, enter the name of an input data
set (where you will put SQL statements you want DB2 to execute) and an
output data set (where DB2 will put the results of your queries). You can
also enter new processing option defaults to specify how you want the
next SPUFI processing sequence to proceed.

The next time (and subsequent times) the SPUFI panel is displayed, the
data entry fields on the panel will contain the values that were set on
the panel previously. You can specify data set names and processing
options each time the SPUFI panel displays, as needed. Values you do not
change will remain in effect.
-----------------------------------------------------------------------------------
Enter the output data set name: (Must be a sequential data set)
4 DATA SET NAME..... > RESULT
-----------------------------------------------------------------------------------

* Allocate this data set before you invoke SPUFI. The name must conform
  to standard TSO naming conventions.

* The data set can be empty before you begin the session. You can then
  add the SQL statements by editing the data set from SPUFI.

If you use this panel a second time, the name of the data set you used
previously appears. To create a new member of an existing partitioned
data set, change only the member name. If you need to allocate an input
data set, refer to ISPF/PDF Version 3 for MVS Guide and Reference for
information on ISPF and allocating data sets.
5 Change defaults
   The SPUFI defaults need not be changed for this example. However, if
   you specify Y(YES) you can look at the SPUFI defaults panel. See
   "Step 2. Change SPUFI Defaults (Optional)" in topic 2.4.2 for more
   information about the values you can specify and how they affect SPUFI
   processing and output characteristics.

6 Edit input
   To edit the input data set, leave Y(YES) on line 6. You can use the
   ISPF editor to create a new member of the input data set and enter SQL
   statements in it. (To process a data set that already contains a set
   of SQL statements you want to execute immediately, enter N(NO).
   Specifying N bypasses the step described in "Step 3. Enter SQL
   Statements" in topic 2.4.3.)

7 Execute
   To execute SQL statements contained in the input data set, leave
   Y(YES) on line 7.

8 Autocommit
   To make changes to the DB2 data permanent, leave Y(YES) on line 8.
   Specifying Y makes SPUFI issue COMMIT if all statements execute
   successfully. If all statements do not execute successfully, SPUFI
   issues a ROLLBACK statement, and changes already made to the file
   (back to the last commit point) are deleted. We suggest that you read
   about the COMMIT and the ROLLBACK functions in "The ISOLATION Option"
   in topic 4.1.2.7.2 or Chapter 6 of SQL Reference.

9 Browse output
   To look at the results of your query, leave Y(YES) on line 9. The
   results are saved in the output data set. You can look at them at any
   time, until you delete or write over the data set.

10 Connect Location
   Specify the name of the application server, if applicable, to which
   you want to submit SQL statements for execution. SPUFI will then issue
   a type 1 CONNECT statement to this application server.
When you finish with the SPUFI panel, press the ENTER key. Because you
specified YES on line 5 of the SPUFI panel, the next panel you see is the
SPUFI Defaults panel, as shown in Figure 8.
Default values are provided for each user the first time SPUFI is used.
Defaults are set for all options except the DB2 subsystem name. Any
changes you make to these values remain in effect until the values are
changed again. Initial default values are shown in Figure 8 in
topic 2.4.1.

Specify values for the following options on the CURRENT SPUFI DEFAULTS
panel:

1 Isolation level
   See "The ISOLATION Option" in topic 4.1.2.7.2 for more information.

3 Record length
   The record length must be at least 80 bytes. The default value allows
   a 4092-byte record.

4 Blocksize
   Follow normal block size selection rules. For F, the block size is
   equal to the record length. For FB and FBA, choose a block size that
   is an even multiple of LRECL. For VB and VBA only, the block size must
   be 4 bytes larger than the block size for FB or FBA.

5 Record format
   The record format default is VB (variable-length blocked).

6 Device type
   SYSDA specifies that MVS is to select an appropriate direct access
   storage device.
When you have specified SPUFI options, press the ENTER key to continue.
SPUFI continues by processing the next processing option for which YES
has been specified. If all other processing options are NO, SPUFI
continues by displaying the SPUFI panel.

If you press the END key, you return to the SPUFI panel, but all changes
made on the SPUFI Defaults panel are lost. If you press ENTER, your
changes are saved.

Next, SPUFI lets you edit the input data set. Initially, editing consists
of entering an SQL statement into the input data set. You can also edit
an input data set that contains SQL statements and you can change,
delete, or insert SQL statements.

On the panel, use the ISPF EDIT program to enter SQL statements that you
want to execute, as shown in Figure 9.

Move the cursor to the first input line and enter the first part of an
SQL statement. You can enter the rest of the SQL statement on subsequent
lines, as shown in Figure 9. Line indentation and entry on several lines
are not necessary, although this format is easier to read.

You can put more than one SQL statement in the input data set. You can
put an SQL statement on one line of the input data set or on more than
one line. When the data set is processed, DB2 executes the statements one
after the other. Do not put more than one SQL statement on a single line.
The first one is executed, but other SQL statements on the same line are
ignored.

When using SPUFI, end each SQL statement with a semicolon (;). This tells
SPUFI that the statement is complete.

When you have entered the SQL statements that you want, press the END PF
key to save the file and to begin execution.
Figure 9 shows what the panel looks like if you enter the sample SQL
statement, followed by a SAVE command.

The editing step is bypassed when you reset the EDIT INPUT processing
option by specifying:

You can put comments about SQL statements either on separate lines or on
the same line. In either case, two hyphens (--) are used to begin a
comment. Everything to the right of the two hyphens is ignored by DB2.
SPUFI passes the input data set to DB2 for processing. The SQL
statement
in the input data set EXAMPLES(XMP1) is executed. Output is sent to
the
output data set userid.RESULT.
The DB2 processing step is bypassed when you specify the EXECUTE
processing option:
Your SQL statement might take a long time to execute, depending on how
large a table DB2 has to search, or on how many rows DB2 has to process.
To interrupt DB2's execution, press the PA1 key and respond to the
prompting message that asks you if you really want to stop processing.
This cancels the executing SQL statement and returns you to the ISPF-PDF
menu.
What happens to the output data set? This depends on how far execution
has progressed before you interrupted the execution of the input data set.
DB2 might not have opened the output data set yet, or the output data set
might contain all or part of the results data produced so far.
SPUFI formats and displays the output data set using the ISPF Browse
program. The output from the sample program is shown in Figure 10. An
output data set contains these items for each SQL statement executed by
DB2:
* The SQL statement that was executed, copied from the input data set
* At the end of the data set are summary statistics that describe the
execution of the input data set as a whole.
You can change the amount of data displayed for numeric and character
columns by changing values on the SPUFI Defaults panel, as described
in "Step 2. Change SPUFI Defaults (Optional)" in topic 2.4.2.
- Character values that are too wide are truncated on the right.
- Numeric values that are too wide are replaced by asterisks (*).
- If truncation occurs, a warning message is written to the output
data set.
* An SQLSTATE code.
* What character positions of the input data set were scanned to find
SQL statements. This information helps you check the assumptions
SPUFI made about the location of line numbers (if any) in your input
data set.
* Which columns display data that was truncated because the data was too
wide.
The examples on the next few pages show you how to access the DB2 system
catalog tables to:
The contents of the DB2 system catalog tables can be a useful reference
tool when you begin to develop an SQL statement or an application program.
3. Press the ENTER key to display the EDIT panel. (This assumes that the
   CHANGE DEFAULTS processing option is set to NO.)
4. On the EDIT panel, enter the SQL statement shown in Figure 11.
5. Press the END PF key to execute the contents of the input data set.
1. This example reuses an existing input data set to execute another SQL
   statement. Starting with the SPUFI panel, name an existing input data
   set: userid.EXAMPLES(XMP2). Leave the other choices as you left them
   before.
2. When you press the ENTER key to display the EDIT panel, you see the
   data set you saved from a previous example.
3. When your edit is complete, press the END PF key. The data set is
   passed to DB2 for execution.
4. When the SQL statement has executed, browse the result.
* Declare the tables you use, as described in "Declaring Table and View
Definitions" in topic 3.1.3. (This is optional.)
* Declare the data items used to pass data between DB2 and a host
language, as described in "Accessing Data Using Host Variables and
Host Structures" in topic 3.1.4.
* Code SQL statements to access DB2 data. See "Accessing Data Using
Host Variables and Host Structures" in topic 3.1.4.
Ada. See IBM Ada/370 SQL Module Processor for DB2 Database Manager
User's Guide for more information about writing applications in Ada.
The SQL statements shown in this section use the following conventions:
* The SQL statement is coded as part of a COBOL application program.
Each SQL example is shown on several lines, with each clause of the
statement on a separate line.
* The SQL statements access data in the sample tables that are shipped
as part of DB2. Those tables contain data that a manufacturing
company might keep about its employees and its current projects. They
are described in Appendix A, "DB2 Sample Tables" in topic A.0.
Some of the examples vary from these conventions. Exceptions are noted
where they occur.
In COBOL, an embedded SQL statement begins with EXEC SQL and ends with
END-EXEC:
   EXEC SQL
     an SQL statement
   END-EXEC.
Before your program issues SQL statements that retrieve, update, delete,
or insert data, you can declare the tables and views your program accesses
by including an SQL DECLARE statement in your program.
You do not have to declare tables or views, but there are advantages if
you do. One advantage is documentation; for example, the DECLARE
statement specifies the structure of the table or view you are working
with, and the data type of each column. You can refer to the DECLARE
statement for the column names and data types in the table or view.
Another advantage is that the DB2 precompiler uses your declarations to
make sure you have used correct column names and data types in your SQL
statements. The DB2 precompiler issues a warning message when the column
names and data types do not correspond to the SQL DECLARE statements in
your program.
EXEC SQL
DECLARE DSN8310.DEPT TABLE
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) )
END-EXEC.
You can access data using host variables and host structures.
A host variable is a data item declared in the host language (in this
case, COBOL) for use within an SQL statement. Using host variables, you
can:
* Retrieve data and put it into the host variable for use by the
application program
* Use the data in the host variable to insert into a table or to change
the contents of a row
* Assign null values to columns when the indicator variable for the host
variable is negative
Any valid host variable name can be used in an SQL statement. The name
must be declared in the host program before it is used. (For more
information see the appropriate language section in "Chapter 3-4.
Embedding SQL Statements in Host Languages" in topic 3.4.)
A host variable can be used to represent a data value but cannot be used
to represent a table, view, or column name. (Table, view, or column names
can be specified at execution time using dynamic SQL. See "Chapter 5-1.
Coding Dynamic SQL in Application Programs" in topic 5.1 for more
information.)
Host variables follow the naming conventions of the host language. (In
this chapter, COBOL is assumed to be the host language.) Host variables
used within SQL statements must be preceded by a colon (:) to tell DB2
that the variable is not a column name. (4) Host variables outside of SQL
statements must not be preceded by a colon.
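For example, the colon is required inside the SQL statement but must not
appear in ordinary COBOL statements (a sketch; the host variables EMPID
and CBLNAME are illustrative):

   MOVE '000010' TO EMPID.
   EXEC SQL
     SELECT LASTNAME
       INTO :CBLNAME
       FROM DSN8310.EMP
       WHERE EMPNO = :EMPID
   END-EXEC.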
For more information about declaring host variables, see the appropriate
language section:
For example, suppose you are retrieving the EMPNO, LASTNAME, and WORKDEPT
column values from rows in the DSN8310.EMP table. You can define a data
area in your program to hold each column, then name the data areas with an
INTO clause, as in the following example (where each host variable is
preceded by a colon):
EXEC SQL
SELECT EMPNO, LASTNAME, WORKDEPT
INTO :CBLEMPNO, :CBLNAME, :CBLDEPT
FROM DSN8310.EMP
WHERE EMPNO = :EMPID
END-EXEC.
If the SELECT statement returns more than one row, this is an error, and
any data returned is undefined and unpredictable. To avoid this error see
"Retrieving Multiple Rows of Data."
The results are shown below with column headings that represent the names
of the host variables:
Retrieving Multiple Rows of Data: If you are unsure about the number of
rows that will be returned, or if you expect that more than one row will
be returned, then you must use an alternative to the SELECT ... INTO
statement.
You can set or change a value in a DB2 table to the value of a host
variable. Use the host variable name in the SET clause of UPDATE or the
VALUES clause of INSERT. This example changes an employee's phone number:
EXEC SQL
UPDATE DSN8310.EMP
SET PHONENO = :NEWPHONE
WHERE EMPNO = :EMPID
END-EXEC.
* Verify that the value of a retrieved character string has not been
truncated when it was retrieved
Retrieving Data into Host Variables: If the value for the column you are
retrieving is null, DB2 puts a negative value in the indicator variable.
If it is null because of a numeric or character conversion error, or an
arithmetic expression error, DB2 sets the indicator variable to -2. See
"Handling Arithmetic or Conversion Errors" in topic 3.1.5.3 for more
information.
If you do not use an indicator variable and DB2 retrieves a null value, an
error results.
When DB2 retrieves the value of a column, you can test the indicator
variable. If the indicator variable's value is less than zero, the column
value is null. When the column value is null, DB2 puts nothing into the
host variable; its value is unchanged.
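For example, you might retrieve a phone number and an indicator value
into host variables (a sketch; CBLPHONE and INDNULL are the names used in
the discussion that follows, and EMPID is illustrative):

   EXEC SQL
     SELECT PHONENO
       INTO :CBLPHONE:INDNULL
       FROM DSN8310.EMP
       WHERE EMPNO = :EMPID
   END-EXEC.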
You can then test INDNULL for a negative value. If it is negative, the
corresponding value of PHONENO is null, and you can disregard the contents
of CBLPHONE.
When a column value is fetched using a cursor, you can use the same
technique to determine whether the column value is null or not.
Inserting Null Values into Columns Using Host Variables: You can use an
indicator variable to insert a null value from a host variable into a
column. When DB2 processes INSERT and UPDATE statements, it checks the
indicator variable (if it exists). If the indicator variable is negative,
the column value is set to null. If the indicator variable is greater
than -1, the associated host variable contains a value for the column.
EXEC SQL
UPDATE DSN8310.EMP
SET PHONENO = :NEWPHONE:PHONEIND
WHERE EMPNO = :EMPID
END-EXEC.
MOVE 0 TO PHONEIND.
MOVE -1 TO PHONEIND.
3.1.4.1.5 Considerations
If you transfer data between a DB2 column and a host variable, and the two
do not have the same data type or length attribute, you can expect the
data format to change. Values might be truncated, padded, or rounded. If
you transfer or compare data, see Chapter 3 of SQL Reference for the rules
associated with these operations.
A host structure can be substituted for one or more host variables and,
like host variables, indicator variables (or structures) can be used with
host structures.
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME,
WORKDEPT
INTO :EMPNO, :FIRSTNME, :MIDINIT, :LASTNAME,
:WORKDEPT
FROM DSN8310.VEMP
WHERE EMPNO = :EMPID
END-EXEC.
In this example, if you want to avoid listing host variables, you can
substitute the name of a structure, say :PEMP, that contains :EMPNO,
:FIRSTNME, :MIDINIT, :LASTNAME, and :WORKDEPT. The example then reads:
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME,
WORKDEPT
INTO :PEMP
FROM DSN8310.VEMP
WHERE EMPNO = :EMPID
END-EXEC.
You can declare a host structure yourself, or you can use DCLGEN to
generate a COBOL record description, PL/I structure declaration, or C
structure declaration that corresponds to the columns of a table. For more
details about coding a host structure in your program, see "Chapter 3-4.
Embedding SQL Statements in Host Languages" in topic 3.4. For more
information on using DCLGEN and the restrictions that apply to the C
language, see "Chapter 3-3. Using DCLGEN" in topic 3.3.
01 PEMP-ROW.
10 EMPNO PIC X(6).
10 FIRSTNME.
49 FIRSTNME-LEN PIC S9(4) USAGE COMP.
49 FIRSTNME-TEXT PIC X(12).
10 MIDINIT PIC X(1).
10 LASTNAME.
49 LASTNAME-LEN PIC S9(4) USAGE COMP.
49 LASTNAME-TEXT PIC X(15).
10 WORKDEPT PIC X(3).
10 EMP-BIRTHDATE PIC X(10).
01 INDICATOR-TABLE.
02 EMP-IND PIC S9(4) COMP OCCURS 6 TIMES.
.
.
.
MOVE '000230' TO EMPNO.
.
.
.
EXEC SQL
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME,
WORKDEPT, BIRTHDATE
INTO :PEMP-ROW:EMP-IND
FROM DSN8310.EMP
WHERE EMPNO = :EMPNO
END-EXEC.
Because this example selects rows from the DSN8310.EMP table, some of the
EMP-IND array values are always zero. The first five columns of each row
are defined NOT NULL. In the above example, DB2 selects the values for a
row of data into a host structure. Therefore, you must use a
corresponding structure for the indicator variables to determine which (if
any) selected column values are null. For information on using the IS
NULL keyword phrase in WHERE clauses, see "Chapter 2-1. Retrieving Data"
in topic 2.1.
A program that includes SQL statements needs to have an area set apart for
communication with DB2; this area is called the SQL communication area
(SQLCA). When DB2 processes an SQL statement in your program, it places
return codes in the SQLCODE (5) and SQLSTATE fields of the SQLCA. The
return codes indicate whether the statement you executed succeeded or
failed.
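In COBOL, the usual way to obtain an SQLCA is to include the standard
declaration in the WORKING-STORAGE SECTION:

   EXEC SQL
     INCLUDE SQLCA
   END-EXEC.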
The -911 SQLCODE indicates that a unit of work has been rolled back.
If the impact of this action is not taken into consideration when
writing the code, the data integrity of your system can be
compromised.
The meaning of SQLCODEs other than 0 and 100 varies with the particular
product implementing SQL.
EXEC SQL
WHENEVER condition action
END-EXEC
NOT FOUND Indicates what should be done when DB2 cannot find a row to
          satisfy your SQL statement or when there are no more rows to
          fetch (SQLCODE 100).
CONTINUE
Specifies the next sequential statement of the source program.
GOTO or GO TO host-label
Specifies the statement identified by host-label. For
host-label, substitute a single token, optionally preceded by
a colon. The form of the token depends on the host language.
In COBOL, for example, it can be section-name or an
unqualified paragraph-name.
The WHENEVER statement must precede the first SQL statement it is to
affect. However, if your program checks the SQLCODE directly, the check
must be done after the SQL statement is executed.
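For example, the following statement transfers control to an error
routine on any negative SQLCODE (a sketch; HANDLE-ERROR is an
illustrative paragraph name):

   EXEC SQL
     WHENEVER SQLERROR GO TO HANDLE-ERROR
   END-EXEC.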
For rows in which the error does occur, one or more selected items have no
meaningful value. This error is flagged by a -2 in the indicator variable
for the affected host variable, and an SQLCODE of +802 (SQLSTATE '01519')
in the SQLCA.
You should check for negative SQLCODE values before you commit data, and
handle the errors that they represent. The assembler subroutine DSNTIAR
helps you to obtain a displayable form of the SQLCA and a text message
based on the SQLCODE field of the SQLCA.
You can find the programming language specific syntax and details for
calling DSNTIAR in the following sections:
DSNTIAR takes data from the SQLCA, formats it into a message, and places
the result in a message output area that you provide in your application
program.
Each time you use DSNTIAR, it overwrites any previous messages. To get an
accurate view of the SQLCA, print the messages before using DSNTIAR again
and before the contents of the SQLCA change.
You must define the message output area in VARCHAR format. In this
varying character format, a two-byte length field precedes the data. The
length field tells DSNTIAR how many total bytes are in the output message
area; its minimum value is 240.
Figure 13 shows the format of the message output area, where length is the
two-byte total length field, and the length of each line should match the
logical record length (lrecl) you specify to DSNTIAR.
Figure 13. Format of the Message Output Area (a two-byte length field,
followed by message lines 1 through n, each of length lrecl)
Your application program can have only one message output area.
When you call DSNTIAR, you must name an SQLCA and an output message area
in its parameters. You must also provide the logical record length
(lrecl) as a value between 72 and 240 bytes. DSNTIAR assumes the message
area contains fixed-length records of length lrecl.
DSNTIAR can place up to 10 messages in the message area. If you have more
messages than the message output area can contain, DSNTIAR issues a return
code of 4. A completely blank record marks the end of the message output
area.
The calling program must allocate enough storage in the message output
area to hold the 10 messages. You will probably not need more than 10
lines of 80 bytes each for your message output area. If a message is
longer than the lrecl you specify on DSNTIAR, the output message splits
into several records, on word boundaries, if possible. The split records
are indented. All records begin with a blank character for carriage
control.
Code Meaning
  0  Successful execution.
  4  More data was available than could fit into the provided message
     area.
  8  The logical record length was not between 72 and 240, inclusive.
 12  The message area was not large enough. The message length was 240
     or greater.
 16  Error in TSO message routine.
 20  DSNTIA1 module could not be loaded.
 24  SQLCA data error.
* Linked with other modules that also have the attributes AMODE(31) and
RMODE(ANY).
You can dynamically link (load) and call DSNTIAR directly from a language
that does not handle 31-bit addressing (OS/VS COBOL, for example). To do
this, link a second version of DSNTIAR with the attributes AMODE(24) and
RMODE(24) into another load module library. Or you can write an
intermediate assembler language program that calls DSNTIAR in 31-bit
mode; then call that intermediate program in 24-bit mode from your
application.
For more information on the allowed and default AMODE and RMODE settings
for a particular language, see the application programming guide for that
language. For details on how the attributes AMODE and RMODE of an
application are determined, see the linkage editor and loader user's guide
for the language in which you have written the application.
Suppose you want your DB2 COBOL application to check for deadlocks and
timeouts, and you want to make sure your cursors are closed before
continuing. You use the statement WHENEVER SQLERROR to transfer control
to an error routine when your application receives a negative SQLCODE.
In your error routine, you write a section that checks for a -911 or -913
SQLCODE. You can receive either of these SQLCODEs when there is a
deadlock or timeout. When one of these errors occurs, the error routine
issues an EXEC SQL CLOSE cursor-name statement to close your cursors. An
SQLCODE of 0 or -501 from the close cursor indicates that the close was
successful.
You can use DSNTIAR in the error routine to generate the complete message
text associated with the negative SQLCODEs.
01 ERROR-MESSAGE.
   02 ERROR-LEN  PIC S9(4) COMP VALUE +720.
   02 ERROR-TEXT PIC X(72) OCCURS 10 TIMES
                 INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +72.
3. Make sure you have an SQLCA. For this example, assume the name of the
   SQLCA is SQLCA.
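A call to DSNTIAR with these data areas might then look like the
following (a sketch based on the declarations above):

   CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.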
You can then print the message output area just as you would any other
variable. Your message might look like the following:
DB2 can be used to retrieve and process a set of rows. Each row in the
set satisfies the criteria specified in the search conditions of an SQL
statement. However, when a set of rows is selected by your program, the
program cannot process all the rows at once. The program needs to process
the rows one at a time.
The result table of a cursor is processed much like a sequential data set.
The cursor must be opened (with an OPEN statement) before any rows are
retrieved. A FETCH statement is used to retrieve the cursor's current
row. FETCH can be executed repeatedly until all rows have been
retrieved.
When the end-of-data condition occurs, you must close the cursor with a
CLOSE statement (similar to end-of-file processing).
Your program can have several cursors. Each cursor requires its own:
You can use cursors to fetch, update, or delete a row of a table, but you
cannot use them to insert a row into a table.
-------------------------------------------------------------------------
Table 5. SQL Statements Required to Define and Use a Cursor in a COBOL
Program
------------------------------------------------------------------------
SQL Statement Described in Section
------------------------------------+-----------------------------------
EXEC SQL "Step 1: Define the Cursor" in
DECLARE THISEMP CURSOR FOR topic 3.2.2.1
SELECT EMPNO, LASTNAME,
WORKDEPT, JOB
FROM DSN8310.EMP
WHERE WORKDEPT = 'D11'
FOR UPDATE OF JOB
END-EXEC.
------------------------------------+-----------------------------------
EXEC SQL "Step 2: Open the Cursor" in
OPEN THISEMP topic 3.2.2.2
END-EXEC.
------------------------------------+-----------------------------------
EXEC SQL "Step 3: Specify What to Do When
WHENEVER NOT FOUND End-of-Data Is Reached" in
GO TO CLOSE-THISEMP topic 3.2.2.3
END-EXEC.
------------------------------------+-----------------------------------
EXEC SQL "Step 4: Retrieve a Row Using
FETCH THISEMP the Cursor" in topic 3.2.2.4
INTO :EMP-NUM, :NAME2,
:DEPT, :JOB-NAME
END-EXEC.
------------------------------------+-----------------------------------
... for specific employees "Step 5a: Update the Current
in Department D11, Row" in topic 3.2.2.5
update the JOB value:
EXEC SQL
UPDATE DSN8310.EMP
SET JOB = :NEW-JOB
WHERE CURRENT OF THISEMP
END-EXEC.
EXEC SQL
DELETE FROM DSN8310.EMP
WHERE CURRENT OF THISEMP
END-EXEC.
------------------------------------+-----------------------------------
Branch back to fetch and
process the next row.
------------------------------------+-----------------------------------
CLOSE-THISEMP. "Step 6: Close the Cursor" in
EXEC SQL topic 3.2.2.7
CLOSE THISEMP
END-EXEC.
------------------------------------------------------------------------
EXEC SQL
DECLARE cursor-name CURSOR FOR
SELECT column-name-list
FROM table-name
WHERE search-condition
FOR UPDATE OF column-name
END-EXEC.
The SELECT statement shown here is rather simple. You can code several
other types of clauses in a SELECT statement within a DECLARE CURSOR
statement. Chapter 6 of SQL Reference illustrates several more clauses
that can be used within a SELECT statement.
A column of the identified table can be updated even though it is not part
of the result table. In this case, you do not need to name the column in
the SELECT statement (but do not forget to name it in the FOR UPDATE OF
clause). When the cursor retrieves a row (using FETCH) that contains a
column value you want to update, you can use UPDATE ... WHERE CURRENT OF
to update the row.
For example, assume that each row of the result table includes the EMPNO,
LASTNAME, and WORKDEPT columns from the DSN8310.EMP table. If you want to
update the JOB column (one of the columns in the DSN8310.EMP table), the
DECLARE CURSOR statement must include FOR UPDATE OF JOB even though JOB is
omitted from the SELECT statement.
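A DECLARE CURSOR statement for that case might look like the following
(a sketch; the cursor name C1 is illustrative):

   EXEC SQL
     DECLARE C1 CURSOR FOR
       SELECT EMPNO, LASTNAME, WORKDEPT
         FROM DSN8310.EMP
         FOR UPDATE OF JOB
   END-EXEC.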
To tell DB2 you are ready to process the first row of the result table,
have your program issue the OPEN statement. When this happens, DB2
processes the SELECT statement within the DECLARE CURSOR statement to
identify a set of rows using the current value of any host variables
specified in the SELECT statement. The result table can contain zero,
one, or many rows, depending on the extent to which the search condition
is satisfied. The OPEN statement looks like this:
EXEC SQL
OPEN cursor-name
END-EXEC.
When used with cursors, the CURRENT DATE, CURRENT TIME, and CURRENT
TIMESTAMP special registers are evaluated once when the OPEN statement is
executed; the value returned in the register is then used on all
subsequent FETCH statements.
3.2.2.3 Step 3: Specify What to Do When End-of-Data Is Reached
Test the SQLCODE field for a value of 100 or the SQLSTATE field for a
value of '02000' to determine if the last row of data has been retrieved.
These codes occur when a FETCH statement has retrieved the last row in the
result table and your program issues a subsequent FETCH. For example:
EXEC SQL
WHENEVER NOT FOUND GO TO symbolic-address
END-EXEC.
To move the contents of a selected row into your program host variables,
use the FETCH statement. The SELECT statement within the DECLARE CURSOR
statement identifies rows that contain the column values your program
wants (that is, the result table is defined), but DB2 does not retrieve
any data for your application program until FETCH is issued.
When your program issues the FETCH statement, DB2 uses the cursor to point
to the next row in the result table, making it the current row. DB2 then
moves the current row contents into your program host variables (specified
with the INTO clause). This sequence is repeated each time FETCH is
issued, until you have processed all rows in the result table.
EXEC SQL
FETCH cursor-name
INTO :host variable1, :host variable2
END-EXEC.
When your program has retrieved the current row, you can update its data
by using the UPDATE statement. To do this, issue an UPDATE ... WHERE
CURRENT OF statement; it is intended specifically for use with a cursor.
The UPDATE ... WHERE CURRENT OF statement looks like this:
EXEC SQL
UPDATE table-name
SET column1 = value1, column2 = value2
WHERE CURRENT OF cursor-name
END-EXEC.
When used with a cursor, the UPDATE statement differs from the one
described in "Chapter 2-2. Creating Tables and Modifying Data" in
topic 2.2.
* The WHERE clause identifies the cursor that points to the row to be
updated.
After you have updated a row, the cursor points to the current row until
you issue a FETCH statement for the next row.
Remember that you cannot update a row if your update violates any
referential constraints the table might have. Refer to "Updating Tables
with Referential Constraints" in topic 2.2.2.3 for more information.
When your program has retrieved the current row, you can delete the row by
using the DELETE statement. To do this, you issue a DELETE ... WHERE
CURRENT OF statement; it is intended specifically for use with a cursor.
The DELETE ... WHERE CURRENT OF statement looks like this:
EXEC SQL
DELETE FROM table-name
WHERE CURRENT OF cursor-name
END-EXEC.
When used with a cursor, the DELETE statement differs from the one you
learned in "Chapter 2-2. Creating Tables and Modifying Data" in
topic 2.2.
After you have deleted a row, you cannot update or delete another row
using that cursor until you issue a FETCH statement to position the cursor
on the next row.
"Deleting Rows: DELETE" in topic 2.2.2.4 showed you how to use the DELETE
statement to delete all rows that meet a specific search condition.
Alternatively, you can use the DELETE ... WHERE CURRENT OF statement
repeatedly when you want to obtain a copy of the row, examine it, and then
delete it.
Remember that you cannot delete a row if doing so would violate any
referential constraints the table might have. In the example on page
3.2.2, the employee cannot be deleted from the employee table unless the
employee has already been deleted from the project table and the project
activity table. This is because of the way the referential constraints
have been defined on these tables. Refer to "Updating Tables with
Referential Constraints" in topic 2.2.2.3 for more information.
When you are finished processing the rows of the result table and you want
to use the cursor again, issue a CLOSE statement to close the cursor:
EXEC SQL
CLOSE cursor-name
END-EXEC.
If you are finished processing the rows of the "result table" and you do
not want to use the cursor, you can let DB2 automatically close the cursor
when your program terminates.
To maintain a cursor and its position across commit points, use the WITH
HOLD option of the DECLARE CURSOR statement. The commit process releases
only locks that are not required to maintain cursor position. After the
commit process, open cursors are not closed. A cursor is positioned after
the last row retrieved and before the next logical row of the result table
to be returned.
The following example shows how to use a cursor to fetch data without
writing code to reposition the cursor after a commit point:
EXEC SQL
DECLARE EMPLUPDT CURSOR WITH HOLD FOR
SELECT EMPNO, LASTNAME, PHONENO, JOB, SALARY, WORKDEPT
FROM DSN8310.EMP
WHERE WORKDEPT < 'D11'
ORDER BY EMPNO
END-EXEC.
DECLARE CURSOR WITH HOLD should not be used with the new user signon from
a DB2 attachment facility, because all open cursors are closed.
---------------------------------------------------------------------
You should always close cursors that you no longer need. If you
let DB2 close a CICS attachment cursor, the cursor might not close
until DB2 reuses or terminates the thread.
---------------------------------------------------------------------
DB2 must be active before you can use DCLGEN. You can invoke DCLGEN in
several different ways:
* From ISPF through DB2I. Select the DCLGEN option on the DB2I Primary
  Option Menu panel. Next, fill in the DCLGEN panel with the
  information it needs to build the declarations and press ENTER.
TABLE OWNER
Is the table name qualifier. If you do not specify this
value, and the table is a local table, DB2 assumes that the
table qualifier is your TSO logon ID. If the table is at a
remote location, you must specify this value.
AT LOCATION
Is the location of the table in an interconnected network.
The AT keyword option is used to prefix the table name on the
SQL DECLARE statement as follows:
location_name.owner_id.table_name
for example,
PLAINS_GA.CARTER.CROP_YIELD_89
ACTION
Tells what to do with the output when it is sent to a
partitioned data set. (The option is ignored if the data set
you specified in field 3 is sequential.)
COLUMN LABEL
Includes labels declared on any columns of the table or view
DCLGEN is operating on. If you enter YES in this field, and
if the table or view for which you are creating declarations
includes column labels, those labels are included as comments
in the data declarations DCLGEN produces. Specifying NO
causes DCLGEN to ignore any column labels it encounters.
STRUCTURE NAME
Names the generated data structure. The name can be up to 31
bytes in length. If the name is not a DBCS string, and the
first character is not alphabetic, then the name must be
enclosed in apostrophes. If you leave this field blank,
DCLGEN generates a name that contains the table or view name
with a prefix of DCL. If the language is COBOL or PL/I, and
the table or view name consists of a DBCS string, the prefix
consists of DBCS characters.
If you use the C language, the letters you enter will not be
folded to uppercase letters.
DELIMIT DBCS
Specifies whether to delimit DBCS table names and column names
in the DCLGEN table declaration. If you enter YES, DBCS table
and column names are surrounded by SQL delimiters. YES is the
default.
* The name is a DBCS string, and you have requested that DBCS names be
  delimited.
If you are using an SQL reserved word as an identifier, you must edit the
DCLGEN output in order to add the appropriate SQL delimiters.
EXEC SQL
INCLUDE DECEMP
END-EXEC.
For various reasons, there are times when DCLGEN does not produce the
results you expect. You might need to edit the results, tailoring the
output to your specific needs. For example, DCLGEN does not generate
declarations that include NOT NULL WITH DEFAULT.
Variable names provided by DCLGEN are derived from the source in the
database. In Table 6, var represents variable names that are provided by
DCLGEN when it is necessary to clarify the host language declaration.
-------------------------------------------------------------------------
Table 6. Declarations Generated by DCLGEN
-------------------------------------------------------------------------
SQL Data Type      C(4)                  COBOL(5)                PL/I
-------------------+---------------------+-----------------------+-------
SMALLINT           short int             PIC S9(4)               BIN FIXED(15)
                                         USAGE COMP
-------------------+---------------------+-----------------------+-------
INTEGER            long int              PIC S9(9)               BIN FIXED(31)
                                         USAGE COMP
-------------------+---------------------+-----------------------+-------
DECIMAL(p,s) or    Not generated (no     PIC S9(p-s)V9(s)        DEC FIXED(p,s)
NUMERIC(p,s)       exact equivalent);    USAGE COMP-3            If p>15, a
                   comment replaces      If p>18, a warning      warning is
                   the declaration.      is generated.           generated.
-------------------+---------------------+-----------------------+-------
REAL or FLOAT(n)   float                 USAGE COMP-1            BIN FLOAT(n)
1<n<21
-------------------+---------------------+-----------------------+-------
DOUBLE PRECISION   double                USAGE COMP-2            BIN FLOAT(n)
or FLOAT(n)
-------------------+---------------------+-----------------------+-------
CHAR(1)            char                  PIC X(1)                CHAR(1)
-------------------+---------------------+-----------------------+-------
CHAR(n)            char var[n+1]         PIC X(n)                CHAR(n)
-------------------+---------------------+-----------------------+-------
VARCHAR(n)         struct                10 var.                 CHAR(n) VAR
                   {short int var_len;   49 var_LEN PIC 9(4)
                    char var_data[n];       USAGE COMP.
                   } var;                49 var_TEXT PIC X(n).
-------------------+---------------------+-----------------------+-------
GRAPHIC(n)         Not generated (no     PIC G(n) USAGE          GRAPHIC(n)
                   exact equivalent);    DISPLAY-1.(1)
                   comment replaces      or
                   declaration.          PIC N(n).(1)
-------------------+---------------------+-----------------------+-------
VARGRAPHIC(n)      Not generated (no     10 var.                 GRAPHIC(n) VAR
                   exact equivalent);    49 var_LEN PIC 9(4)
                   comment replaces         USAGE COMP.
                   declaration.          49 var_TEXT PIC G(n)
                                            USAGE DISPLAY-1.(1)
                                         or
                                         10 var.
                                         49 var_LEN PIC 9(4)
                                            USAGE COMP.
                                         49 var_TEXT PIC N(n).(1)
-------------------+---------------------+-----------------------+-------
DATE               char var[11](2)       PIC X(10)(2)            CHAR(10)(2)
-------------------+---------------------+-----------------------+-------
TIME               char var[9](3)        PIC X(8)(3)             CHAR(8)(3)
-------------------+---------------------+-----------------------+-------
TIMESTAMP          char var[27]          PIC X(26)               CHAR(26)
-------------------------------------------------------------------------
Notes:
1. DCLGEN chooses the format based on the character you specify as the
   DBCS symbol on the COBOL Defaults panel.
-------------------------------------------------------------------------
Fill in the COBOL defaults panel as necessary and press Enter to save the
new defaults, if any, and return to the DB2I Primary Option menu.
Figure 16. The COBOL Defaults Panel. Shown only if the application
language is COBOL or COB2.
Select option 2 on the DB2I Primary Option menu, and press Enter to
display the DCLGEN panel.
Fill in the fields as shown in Figure 17, and then press Enter.
DB2 then displays the screen as shown in Figure 19. Press Enter to return
to the DB2I Primary Option menu.
To browse or edit the results, first exit from DB2I by entering X on the
command line of the DB2I Primary Option menu. The ISPF/PDF menu is then
displayed, and you can select either the browse or the edit option to
view the results.
PROGAREA DSECT
EXEC SQL INCLUDE SQLCA
As an alternative you can provide a separate storage area for the SQLCA
and ensure addressability to that area.
All executable SQL statements in the program must be within the scope of
the declaration of the SQLCODE variable or SQLCA variables. See Chapter 6
of SQL Reference for more information about the INCLUDE statement and
Appendix C of SQL Reference for a complete description of SQLCA fields.
(8) The SQLSTATE variable can be used as an alternative to the
SQLCODE variable when analyzing the result of an SQL
statement. Like the SQLCODE variable, the SQLSTATE variable
is defined as part of the SQLCA, and its value is set by DB2
after each SQL statement is executed.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. An SQLDA can be coded in an assembler
program either directly or by using the SQL INCLUDE statement. The SQL
INCLUDE statement requests the inclusion of a standard SQLDA declaration:

SQLDA declarations must be placed before the first SQL statement that
references the data descriptor unless the TWOPASS precompiler option is
specified. See Chapter 6 of SQL Reference for more information about the
INCLUDE statement and Appendix C of SQL Reference for a complete
description of SQLDA fields.
Continuation for SQL Statements: The line continuation rules for SQL
statements are the same as those for assembler statements, except that
EXEC SQL must be specified within one line. Any part of the statement
that does not fit on one line can appear on the next and later lines,
beginning at the continuation margin (column 16, the default). Every line
of the statement, except the last, must have a continuation character (a
nonblank character) immediately after the right margin in column 72.
Names: Any valid assembler name can be used for a host variable.
Assembler names are subject to the following restrictions: external entry
names or access plan names that begin with 'DSN' and host variable names
that begin with 'SQL' are not allowed. These names are reserved for DB2.
--- CICS ----------------------------------------------------------------
DFHEISTG DSECT
DFHEISTG
         EXEC SQL INCLUDE SQLCA
*
         DS    0F
SQDWSREG EQU   R7
SQDWSTOR DS    (SQLDLEN)C      RESERVE STORAGE TO BE USED FOR SQLDSECT
         .
         .
&XPROGRM DFHEIENT CODEREG=R12,EIBREG=R11,DATAREG=R13
*
*
*        SQL WORKING STORAGE
         LA    SQDWSREG,SQDWSTOR  GET ADDRESS OF SQLDSECT
         USING SQLDSECT,SQDWSREG  AND TELL ASSEMBLER ABOUT IT
*
-------------------------------------------------------------------------
* Generated code can include more than two continuations per comment.
  If you have more than two continuations per comment, you receive
  message IF069 from assembler XF (but not from assembler H). You can
  ignore the message.
* If you use any other DATAREG in the DFHEIENT macro, you must provide
  addressability to a save area.
---------------------------------------------------------------------
You must explicitly declare all host variables used in SQL statements. A
host variable used in an SQL statement must be declared prior to the first
use of the host variable in an SQL statement, unless the TWOPASS
precompiler option is specified. If the TWOPASS precompiler option is
specified, the declaration for host variables used in a DECLARE CURSOR
statement must come before the DECLARE CURSOR statement.
The assembler statements that are used to define host variables can be
preceded by a BEGIN DECLARE SECTION statement and followed by an END
DECLARE SECTION statement. The BEGIN DECLARE SECTION and END DECLARE
SECTION statements are required only when the STDSQL(86) precompiler
option is used.

You can declare host variables in normal assembler style (DC or DS),
depending on the data type and the limitations on that data type. You can
include a value specification on DC and DS declarations (for example,
DC H'5'). The DB2 precompiler examines only packed decimal declarations,
for which it identifies the location of the decimal point.

An SQL statement that uses a host variable must be within the scope of the
statement in which the variable was declared.
3.4.1.5 Declaring Host Variables
Numeric Host Variables: The following summarizes the valid forms of
numeric host variable declarations. The nominal value is used to specify
the scale of a packed decimal variable and must include a decimal point.

   variable-name DC|DS [1] H[L2]          halfword integer
   variable-name DC|DS [1] F[L4]          fullword integer
   variable-name DC|DS [1] P[Ln]'value'   packed decimal
   variable-name DC|DS [1] E[L4]          short floating point
   variable-name DC|DS [1] D[L8]          long floating point

-------------------------------------------------------------------------
Character Host Variables: There are two valid forms of character host
variables:
* Fixed-length strings
* Varying-length strings.
--- Fixed-Length Character Strings --------------------------------------

   variable-name DC|DS [1] C[Ln]

-------------------------------------------------------------------------

--- Varying-Length Character Strings ------------------------------------

   variable-name DC|DS [1] H[L2],[1] CLn

-------------------------------------------------------------------------
Graphic Host Variables: There are two valid forms of graphic character
host variables:
* Fixed-length strings
* Varying-length strings.
--- Fixed-Length Graphic Strings ----------------------------------------

   DC|DS G[Ln]['<value>']

-------------------------------------------------------------------------
Table 7. SQL Data Types Generated from Assembler Declarations
------------------------------------------------------------------------
SQLLEN of
Assembler Data SQLTYPE of Host
Type Host Variable Variable SQL Data Type
-----------------+---------------+--------------+-----------------------
DS HL2 500 2 SMALLINT
-----------------+---------------+--------------+-----------------------
DS FL4 496 4 INTEGER
-----------------+---------------+--------------+-----------------------
# DS P'value'       484            p in byte 1,   DECIMAL(p,s)
# DS PLn'value'                    s in byte 2    See the description
# or                                              for DECIMAL(p,s) in
# DS PLn                                          Table 8.
# 1<n<16
-----------------+---------------+--------------+-----------------------
DS EL4 480 4 REAL or FLOAT (n)
1<n<21
-----------------+---------------+--------------+-----------------------
DS DL8 480 8 DOUBLE PRECISION or
FLOAT (n)
22<n<53
-----------------+---------------+--------------+-----------------------
DS CLn 452 n CHAR(n)
1<n<254
-----------------+---------------+--------------+-----------------------
DS HL2,CLn 448 n VARCHAR(n)
1<n<254
-----------------+---------------+--------------+-----------------------
DS HL2,CLn 456 n VARCHAR(n)
n>254
-----------------+---------------+--------------+-----------------------
DS GLn 468 n GRAPHIC(n)(2)
2<x<254(1)
-----------------+---------------+--------------+-----------------------
DS HL2,GLn 464 n VARGRAPHIC(n)(2)
2<x<254(1)
-----------------+---------------+--------------+-----------------------
DS HL2,GLn 472 n VARGRAPHIC(n)(2)
x>254(1)
---------------------------------------------------------------------
Notes:
1. x is expressed in bytes.
2. n is the number of double-byte characters.
-------------------------------------------------------------------------
-------------------------------------------------------------------------
Table 8. SQL Data Types Mapped to Typical Assembler Declarations
------------------------------------------------------------------------
SQL Data Type Assembler Notes
Equivalent
------------------+--------------------+--------------------------------
SMALLINT DS HL2
------------------+--------------------+--------------------------------
INTEGER DS FL4
------------------+--------------------+--------------------------------
# DECIMAL(p,s) or   DS P'value'          p is precision; s is scale.
# NUMERIC(p,s)      DS PLn'value'        1<p<31 and 0<s<p.
#                   DS PLn               1<n<16. value is a literal
#                                        value that includes a decimal
#                                        point. You must use Ln,
#                                        value, or both. We recommend
#                                        that you use only value.
Host Graphic Data Type: The assembler host graphic data type is supported
via the GRAPHIC/NOGRAPHIC option. However, assembler DBCS literals are
not supported.
CLSCD DS CL7
DAY DS HL2
BGN DS CL8
END DS CL8
DAYIND DS HL2 INDICATOR VARIABLE FOR DAY
BGNIND DS HL2 INDICATOR VARIABLE FOR BGN
ENDIND DS HL2 INDICATOR VARIABLE FOR END
The following figure shows the syntax for a valid indicator variable.
   variable-name DC|DS [1] H[L2]
-------------------------------------------------------------------------
-------------------------------------------------------------------------
# The output lines of text, each line being the length specified in
# lrecl, are put into this area. For example, you could specify the
# format of the output area as:
# LINES EQU 10
# LRECL EQU 132
#
# .
# .
# .
# MESSAGE DS H,CL(LINES*LRECL)
# ORG MESSAGE
# MESSAGEL DC AL2(LINES*LRECL)
# MESSAGE1 DS CL(LRECL) text line 1
# MESSAGE2 DS CL(LRECL) text line 2
#
# .
# .
# .
# MESSAGEn DS CL(LRECL) text line n
#
# .
# .
# .
#          CALL DSNTIAR,(SQLCA,MESSAGE,LRECL),MF=(E,PARM)
# If your CICS application requires CICS storage handling, you must use
# the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following
# syntax:
#          CALL DSNTIAC,(eib,commarea,sqlca,msg,lrecl),MF=(E,PARM)
# DSNTIAC has extra parameters, which you must use for calls to routines
# that use CICS commands.
# eib
# commarea
#        communication area
# The assembler source code for DSNTIAC and job DSNTEJ5A, which
# assembles and link-edits DSNTIAC, are in the data set DSN310.SDSNSAMP.
-------------------------------------------------------------------------
# This section helps you with the programming techniques that are unique
# to coding SQL statements within a C or C++ program. Except where noted
# otherwise, references to C imply C or C++.
All executable SQL statements in the program must be within the scope of
the declaration of the SQLCODE variable or SQLCA variables.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. An SQLDA can be coded in a C program
either directly or by using the SQL INCLUDE statement. Using the SQL
INCLUDE statement requests the inclusion of a standard declaration:
Each SQL statement in a C program must begin with EXEC SQL and end with a
semicolon (;). The EXEC and SQL keywords must appear on one line, but the
remainder of the statement can appear on the next and subsequent lines.
All SQL words must be entered in uppercase letters, because C is case
sensitive.
EXEC SQL
UPDATE DSN8310.DEPT
SET MGRNO = :mgr_num
WHERE DEPTNO = :int_dept;
Comments: You can include C comments (/* ... */) within SQL statements
wherever a blank is allowed, except between the keywords EXEC and SQL.
You cannot use mixed data in comments. Comments cannot be nested.
Single-line comments (starting with //) are not permitted. You can
include SQL comments in any static SQL statement if the STDSQL(86)
precompiler option is specified.
Names: Any valid C name can be used for a host variable. C names are
subject to the following restrictions:
NULLs and NULs: C and SQL differ in the way they use the word null.
The C language has a null character (NUL), a null pointer (NULL), and a
null statement (just a semicolon). The C NUL is a single character which
compares equal to 0. The C NULL is a special reserved pointer value that
does not point to any valid data object. The SQL null value is a special
value that is distinct from all non-null values and denotes the absence of
a (non-null) value. For this specification, NUL is the null character in
C and NULL is the SQL null value. All other uses are explicit.
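The distinction can be made concrete with a small stand-alone C sketch (no DB2 involved; the helper names and the indicator-variable convention shown here are illustrative, not part of any DB2 API):

```c
#include <stddef.h>

/* The C NUL: a character that compares equal to 0. */
int is_nul(char c) { return c == '\0'; }

/* The C NULL: a reserved pointer value that points to no object. */
int is_null_ptr(const char *p) { return p == NULL; }

/* The SQL null value has no C literal at all; embedded SQL programs
 * detect it through an indicator variable that DB2 sets negative.
 * (is_sql_null is a hypothetical helper.) */
int is_sql_null(short ind) { return ind < 0; }
```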
Trigraphs: Some characters from the C character set are not available on
all keyboards. These characters can be entered into a C source program
using a sequence of three characters called a trigraph. The trigraphs
supported are the same as those supported by the C/370 compiler.
Special C Considerations
* Use of the C/370 multi-tasking facility, where multiple tasks execute
SQL statements, causes unpredictable results.
* You cannot use the #include preprocessor directive to include the main
procedure.
You must explicitly declare all host variables used in SQL statements. A
host variable used in an SQL statement must be declared prior to the first
use of the host variable in an SQL statement. If the TWOPASS precompiler
option is specified, the declaration for a host variable used in the
DECLARE CURSOR statement must precede the DECLARE CURSOR statement.
The C statements that are used to define the host variables must be
preceded by a BEGIN DECLARE SECTION statement and followed by an END
DECLARE SECTION statement. You can have more than one host variable
declaration section in your program.

The names of host variables must be unique within the program, even if
the host variables are in different blocks or procedures.

An SQL statement that uses a host variable must be within the scope of
the statement in which the variable was declared.
Numeric Host Variables: A valid numeric host variable declaration has
these parts, in order:

1. An optional storage class: auto, extern, or static
2. An optional type qualifier: const or volatile
3. A data type: float, double, int, long [int], short [int], or
   decimal(integer[,integer])
4. One or more variable names, each optionally followed by
   = expression, separated by commas and ended with a semicolon

-------------------------------------------------------------------------
Character Host Variables: There are three valid forms for character host
variables:
* Single-character form
* NUL-terminated character form
* VARCHAR structured form.
--- Single-Character Form -----------------------------------------------

   [auto|extern|static] [const|volatile] [unsigned] char
       variable-name [, ...] ;

-------------------------------------------------------------------------

--- NUL-Terminated Character Form ---------------------------------------

   [auto|extern|static] [const|volatile] [unsigned] char
       variable-name[length] [, ...] ;

-------------------------------------------------------------------------
--- VARCHAR Structured Form ---------------------------------------------

   [auto|extern|static] [const|volatile] struct [tag]
       { short [int] var-1; [unsigned] char var-2[length]; }
       variable-name [, ...] ;

-------------------------------------------------------------------------
Note: var-1 and var-2 must be simple variable references and cannot be
used as host variables.
Example
struct VARCHAR {
    short len;
    char s[10];
} vstring;
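Before using such a structure as an input host variable, the program must set both the length field and the data. A minimal C sketch (varchar_set is a hypothetical helper, not part of DB2):

```c
#include <string.h>

/* VARCHAR structured form, as in the vstring example above. */
struct VARCHAR {
    short len;      /* number of data bytes actually used  */
    char  s[10];    /* data; no NUL terminator is required */
};

/* Copy a C string into the structure, truncating to the declared
 * size and recording the resulting length. */
void varchar_set(struct VARCHAR *v, const char *src) {
    size_t n = strlen(src);
    if (n > sizeof(v->s))
        n = sizeof(v->s);
    memcpy(v->s, src, n);
    v->len = (short)n;
}
```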
The following figure shows the syntax for valid host structures.
   [auto|extern|static] [const|volatile] [_Packed] struct [tag] {

       one or more members, each of the form:
           float | double | [long|short] [int] | varchar-structure
               var-1 ;
       or
           [unsigned] char var-2 [ [length] ] ;

   } variable-name ;

   where a varchar structure is:

   struct [tag] { short [int] var-3;
                  [unsigned] char var-4[length]; }

-------------------------------------------------------------------------
-------------------------------------------------------------------------
Table 9. SQL Data Types Generated from C Declarations
------------------------------------------------------------------------
C Data Type SQLTYPE of SQLLEN of SQL Data Type
Host Variable Host
Variable
--------------------+---------------+--------------+--------------------
short int 500 2 SMALLINT
--------------------+---------------+--------------+--------------------
long int 496 4 INTEGER
--------------------+---------------+--------------+--------------------
# decimal(p,s)(1) 484 p in byte 1, DECIMAL(p,s)(1)
# s in byte 2
--------------------+---------------+--------------+--------------------
float 480 4 FLOAT (single
precision)
--------------------+---------------+--------------+--------------------
double 480 8 FLOAT (double
precision)
--------------------+---------------+--------------+--------------------
single-character 452 1 CHAR(1)
form
--------------------+---------------+--------------+--------------------
NUL-terminated 460 n VARCHAR (n-1)
character form
--------------------+---------------+--------------+--------------------
VARCHAR structured 448 n VARCHAR(n)
form 1<n<254
--------------------+---------------+--------------+--------------------
VARCHAR structured 456 n VARCHAR(n)
form n>254
----------------------------------------------------------------------
-------------------------------------------------------------------------
Table 10. SQL Data Types Mapped to Typical C Declarations
------------------------------------------------------------------------
SQL Data Type C Data Type Notes
-----------------------+------------------------+-----------------------
SMALLINT short int
-----------------------+------------------------+-----------------------
INTEGER long int
-----------------------+------------------------+-----------------------
# DECIMAL(p,s) or decimal You can use the
# NUMERIC(p,s) double data type if
# your C compiler does
# not have a decimal
# data type; however,
# double is not an
# exact equivalent.
-----------------------+------------------------+-----------------------
REAL or FLOAT(n) float 1<n<21
-----------------------+------------------------+-----------------------
DOUBLE PRECISION or double 22<n<53
FLOAT(n)
-----------------------+------------------------+-----------------------
CHAR(1) single-character form
-----------------------+------------------------+-----------------------
CHAR(n) no exact equivalent If n>1, use
NUL-terminated
character form
-----------------------+------------------------+-----------------------
# VARCHAR(n) or NUL-terminated If data can contain
# LONG VARCHAR character form character NULs (\0),
# use VARCHAR
# structured form.
# Allow at least n+1 to
# accommodate the
# NUL-terminator.
# For LONG VARCHAR
# columns, run DCLGEN
# against the table to
# determine the length
# that DB2 assigns to
# the column. Use that
# length to choose an
# appropriate value for
# n.
------------------------+-----------------------
# VARCHAR structured For LONG VARCHAR
# form columns, run DCLGEN
# against the table to
# determine the length
# that DB2 assigns to
# the column. Use that
# length to choose an
# appropriate length
# for var-2 in the host
# variable structure.
-----------------------+------------------------+-----------------------
GRAPHIC(n) not supported
-----------------------+------------------------+-----------------------
# VARGRAPHIC(n) or not supported
# LONG VARGRAPHIC
-----------------------+------------------------+-----------------------
DATE NUL-terminated If you are using a
character form date exit routine,
the length is
determined by that
routine. Otherwise,
allow at least 11
characters to
accommodate the
NUL-terminator.
------------------------+-----------------------
VARCHAR structured If you are using a
form date exit routine,
the length is
determined by that
routine. Otherwise,
allow at least 10
characters.
-----------------------+------------------------+-----------------------
TIME NUL-terminated If you are using a
character form time exit routine,
the length is
determined by that
routine. Otherwise,
the length must be at
least 7; to include
seconds, the length
must be at least 9 to
accommodate the
NUL-terminator.
------------------------+-----------------------
VARCHAR structured If you are using a
form time exit routine,
the length is
determined by that
routine. Otherwise,
the length must be at
least 6; to include
seconds, the length
must be at least 8.
-----------------------+------------------------+-----------------------
TIMESTAMP NUL-terminated The length must be at
character form least 20. To include
microseconds, the
length must be 27.
If the length is less
than 27, truncation
occurs on the
microseconds part.
------------------------+-----------------------
VARCHAR structured The length must be at
form least 19. To include
microseconds, the
length must be 26.
If the length is less
than 26, truncation
occurs on the
microseconds part.
-----------------------------------------------------------------------
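The minimum sizes in the DATE, TIME, and TIMESTAMP rows can be captured as C buffer declarations (a sketch assuming no date or time exit routine is in use; the names are ours):

```c
/* Minimum NUL-terminated host variable sizes implied by Table 10:
 * one byte is added to each column length for the NUL terminator. */
enum {
    DATE_BUF      = 10 + 1,  /* e.g. yyyy-mm-dd             */
    TIME_BUF      =  8 + 1,  /* hh.mm.ss, including seconds */
    TIMESTAMP_BUF = 26 + 1   /* yyyy-mm-dd-hh.mm.ss.nnnnnn  */
};

char date_hv[DATE_BUF];
char time_hv[TIME_BUF];
char ts_hv[TIMESTAMP_BUF];
```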
C Data Types with No SQL Equivalent: C supports some data types and
storage classes with no SQL equivalents: for example, the register
storage class, typedef, wchar_t, and pointers.
If a host variable is declared as

   char hostvar[n];

and the host variable is used as input to SQL, the equivalent SQL data
type is VARCHAR with length (n-1) or less, where n is a positive integer.
If no NUL is found up to length n, SQLCODE -302 is returned.
If the host variable is used as output, then if the length of the source
string is:

* less than or equal to n, then DB2 inserts the characters into the host
  variable as long as the characters fit up to length (n-1) and appends
  a NUL at the end of the string. SQLWARN[1] is set to W and, if an
  indicator variable is provided, it is set to the original length of
  the source string.
* greater than n+1, then the rules depend on whether the source string
is a value of a fixed-length string column or a varying-length string
column:
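The first rule above can be modeled in plain C. This sketch imitates, outside DB2, how a source string is moved into an n-byte NUL-terminated host variable (fetch_into is a hypothetical helper):

```c
#include <string.h>

/* Model of retrieval into a NUL-terminated host variable of n bytes:
 * as many characters as fit up to length (n-1) are copied, and a NUL
 * is appended. Returns the number of data bytes stored. */
size_t fetch_into(char *hostvar, size_t n, const char *src, size_t src_len) {
    size_t copied = (src_len < n) ? src_len : n - 1;  /* room for NUL */
    memcpy(hostvar, src, copied);
    hostvar[copied] = '\0';
    return copied;
}
```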
Quotes and Apostrophes: In SQL, quotes are used to delimit identifiers
and apostrophes are used to delimit string constants. The following
examples illustrate the use of apostrophes and quotes in SQL.

Quotes
   SELECT "COL#1" FROM TBL1;

Apostrophes
   SELECT COL1 FROM TBL1 WHERE COL2 = 'value';
C host variables used in SQL statements must be type compatible with the
columns with which they are to be used:
Use the VARCHAR structured form for varying length BIT data. Some C
string manipulation functions process NUL-terminated strings and others
process strings that are not NUL-terminated. The C string manipulation
functions that process NUL-terminated strings cannot handle bit data
because the data can contain the NUL character, which would be
misinterpreted as a NUL terminator.
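A short C sketch shows why: strlen() reports only the bytes before the first embedded NUL, while a length-prefixed VARCHAR-style structure (the struct name is illustrative) carries the true length:

```c
#include <string.h>

/* Bit data can legitimately contain the NUL character. */
struct bitvar {
    short len;       /* true length of the data        */
    char  data[8];   /* may contain embedded NUL bytes */
};

/* What a NUL-terminated string function would see. */
size_t apparent_len(const struct bitvar *b) {
    return strlen(b->data);
}
```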
Indicator variables are declared in the same way as host variables. The
declarations of the two types of variables can be mixed in any way that
seems appropriate to you. For more information about indicator variables,
see "Using Indicator Variables with Host Variables" in topic 3.1.4.1.4.
The following figure shows the syntax for a valid indicator variable.
   [auto|extern|static] [const|volatile] short [int]
       variable-name [, ...] ;

-------------------------------------------------------------------------
The following figure shows the syntax for a valid indicator array.
   [auto|extern|static] [const|volatile] short [int]
       variable-name[length] [, ...] ;

-------------------------------------------------------------------------
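After a statement executes, the program tests the indicator. A minimal C sketch of the usual convention (indicator_state is a hypothetical helper; DB2 sets the indicator negative for a null column value and positive for a truncated one):

```c
/* Classify an indicator variable: -1 for SQL null, 1 for a truncated
 * value (the indicator then holds the original length), 0 for a
 * normally retrieved value. */
int indicator_state(short ind) {
    if (ind < 0) return -1;   /* column value is the SQL null value */
    if (ind > 0) return 1;    /* value was truncated                */
    return 0;                 /* host variable holds the full value */
}
```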
# You can use the subroutine DSNTIAR to convert an SQL return code into a
# text message. DSNTIAR takes data from the SQLCA, formats it into a
# message, and places the result in a message output area that you provide
# in your application program. For concepts and more information on the
# behavior of DSNTIAR, see "Handling SQL Error Return Codes" in
# topic 3.1.5.4.
-------------------------------------------------------------------------
# &message
#        An output area, in VARCHAR format, in which DSNTIAR places the
#        message text. The first halfword contains the length of the
#        remaining area; its minimum value is 240.
#
#        The output lines of text, each line being the length specified in
#        &lrecl, are put into this area. For example, you could specify
#        the format of the output area as:
# For C, include:
# If your CICS application requires CICS storage handling, you must use
# the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following
# syntax:
# DSNTIAC has extra parameters, which you must use for calls to routines
# that use CICS commands.
# &eib
# &commarea
#        communication area
# The assembler source code for DSNTIAC and job DSNTEJ5A, which
# assembles and link-edits DSNTIAC, are in the data set DSN310.SDSNSAMP.
-------------------------------------------------------------------------
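The &message output area described above can be sketched in C terms (the struct and macro names are ours; 10 lines of 132 bytes mirrors the earlier assembler example):

```c
#define MSG_LINES 10
#define MSG_LRECL 132

/* DSNTIAR-style message area in VARCHAR format: a halfword length
 * followed by the text lines. */
struct dsntiar_message {
    short len;                            /* length of the text area */
    char  text[MSG_LINES * MSG_LRECL];    /* output lines            */
};

/* Before the call, the first halfword must hold the length of the
 * remaining area; its minimum value is 240. */
void init_message(struct dsntiar_message *m) {
    m->len = (short)sizeof(m->text);
}
```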
# Using C++ data types as host variables: You can use class members (also
# called static data members) as host variables. Any SQL statement within
# the same class is within the scope of a class member used as a host
# variable.
# Host structures: You can use host structures in C++ programs, just as
# you can in C, but unlike C, you cannot nest one host structure within
# another. For example, suppose a C++ program contains the following host
# variable declaration:
# Language Restrictions: The C++ compiler does not support the decimal
# type. Therefore, you cannot declare decimal host variables in a C++
# program. If your program accesses tables that contain columns of type
# DECIMAL, use host variables of type double to send data to or receive
# data from those columns.
All executable SQL statements in the program must be within the scope of
the declaration of the SQLCODE variable or SQLCA variables. See Chapter 6
of SQL Reference for more information about the INCLUDE statement and
Appendix C of SQL Reference for a complete description of SQLCA fields.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. The DB2 SQL INCLUDE statement does not
provide an SQLDA mapping for COBOL. The SQLDA can be defined using one of
the following two methods:
-------------------------------------------------------------------------
Table 11. Allowable SQL Statements for COBOL Program Sections
-------------------------------------------------------------------------
SQL Statement            Program Section
-------------------------+-----------------------------------------------
BEGIN DECLARE SECTION    WORKING-STORAGE SECTION or
END DECLARE SECTION      LINKAGE SECTION
-------------------------+-----------------------------------------------
# INCLUDE SQLCA          WORKING-STORAGE SECTION or
                         LINKAGE SECTION
-------------------------+-----------------------------------------------
INCLUDE text-file-name   PROCEDURE DIVISION or DATA DIVISION (1)
-------------------------+-----------------------------------------------
DECLARE TABLE            DATA DIVISION or PROCEDURE DIVISION
DECLARE CURSOR
-------------------------+-----------------------------------------------
Other                    PROCEDURE DIVISION
-------------------------------------------------------------------------
Note: (1) When including host variable declarations, the INCLUDE
statement must be in the WORKING-STORAGE SECTION or the LINKAGE SECTION.
-------------------------------------------------------------------------
Each SQL statement in a COBOL program must begin with EXEC SQL and end
with END-EXEC. If the SQL statement appears between two COBOL statements,
the period is optional and might not be appropriate. If the statement is
nested in an IF...THEN set of COBOL statements, leave off the ending
period to avoid inadvertently ending the IF statement. The EXEC and SQL
keywords must appear on one line, but the remainder of the statement can
appear on the next and subsequent lines.
    EXEC SQL
      UPDATE DSN8310.DEPT
        SET MGRNO = :MGR-NUM
        WHERE DEPTNO = :INT-DEPT
    END-EXEC.
Declaring Tables and Views: Your COBOL program should also include a
DECLARE TABLE statement to describe each table and view the program
accesses. The DECLARE TABLE statements can be generated by the DB2
declarations generator (DCLGEN). The DCLGEN members should be included in
the DATA DIVISION. For details, see "Chapter 3-3. Using DCLGEN" in
topic 3.3.
Names: Any valid COBOL name can be used for a host variable. COBOL names
are subject to the following restrictions: external entry names or access
plan names that begin with 'DSN' and host variable names that begin with
'SQL' are not allowed. These names are reserved for DB2.
# IMS and DB2 share a common alias name, DSNHLI, for the language
# interface module. You must do the following when you concatenate your
# libraries:
#   - TRUNC(OPT) if you are certain that the data being moved to each
#     binary variable by the application does not have a larger precision
#     than defined in the PICTURE clause of the binary variable.
# * Observe the rules in SQL Reference when you name SQL identifiers. Do
#   not use hyphens in an SQL identifier or begin the identifier with a
#   digit.
* The precompiler inserts some lines of code right after the procedure
division statement. Using section headers in conjunction with this
generated code causes the COBOL compiler to generate the warning
messages IGYPS2023-I and IGYPS2015-I, which you can ignore.
* If CCSID 930 or 5026 is specified for the Coded Character Set install
  option, no folding of the source is performed, and you must use
  uppercase letters for all COBOL and SQL statements in the program.
  Otherwise, you can use lowercase letters in COBOL words (including
  user-defined words, system names, and reserved words), in DB2
  keywords, in COBOL statements, and in SQL statements.

  In general, lowercase letters are folded to uppercase letters unless
  they are enclosed in apostrophes or quotation marks. The precompiler
  retains the original case of the COBOL source listing and output
  source when the SOURCE option is specified.
If host variables with address changes are passed into a program more than
once, then the called program must reset SQL-INIT-FLAG. Resetting this
flag indicates that the storage must be initialized when the next SQL
statement is executed. To reset the flag, insert the statement MOVE ZERO
TO SQL-INIT-FLAG in the called program's PROCEDURE DIVISION, prior to any
executable SQL statements that use the host variables.
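For illustration, a called program might reset the flag as follows (all
names except SQL-INIT-FLAG are hypothetical; DSN8310.EMP is a DB2 sample
table):

    PROCEDURE DIVISION USING EMP-NO EMP-NAME.
   *    Force reinitialization of SQL storage for the passed
   *    host variables before the first SQL statement runs.
        MOVE ZERO TO SQL-INIT-FLAG.
        EXEC SQL
          SELECT LASTNAME INTO :EMP-NAME
            FROM DSN8310.EMP
            WHERE EMPNO = :EMP-NO
        END-EXEC.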
If you are using VS COBOL II Release 3 and the NOCMPR2 option, then the
following additional restrictions apply:

* All SQL statements and any host variables they reference must be
  within the first program when using nested programs or batch
  compilation.
You must explicitly declare all host variables used in SQL statements in
the WORKING-STORAGE SECTION or LINKAGE SECTION of your program's DATA
DIVISION. A host variable used in an SQL statement must be declared prior
to the first use of the host variable in an SQL statement.
The COBOL statements that are used to define the host variables can be
preceded by a BEGIN DECLARE SECTION statement and followed by an END
DECLARE SECTION statement. The BEGIN DECLARE SECTION and END DECLARE
SECTION statements are only required when the STDSQL(86) precompiler
option is used.
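A minimal sketch of such a declare section (the variable names are
hypothetical):

    EXEC SQL BEGIN DECLARE SECTION END-EXEC.
    01 EMP-NO    PIC X(6).
    01 EMP-NAME  PIC X(15).
    EXEC SQL END DECLARE SECTION END-EXEC.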
The names of host variables should be unique within the program, even if
the host variables are in different blocks or procedures.
Numeric Host Variables: The following figures show the syntax for valid
numeric host variable declarations.
-------------------------------------------------------------------------
>>----01----variable-name----PICTURE------------picture-string------->
-77-- -PIC------ -IS--
>-------------------------------------------------------------------->
-USAGE----------
-IS--
>------BINARY-------------------------------------------------------->
-COMPUTATIONAL-4-
-COMP-4----------
-COMPUTATIONAL---
-COMP-------------
---PACKED-DECIMAL--------------------------------------
-COMPUTATIONAL-3-
-COMP-3-----------
-DISPLAY SIGN----------LEADING SEPARATE-----------------
-IS-- -CHARACTER--
>--------------------------------------- . -------------------------><
-VALUE----------numeric-constant--
-IS--
-------------------------------------------------------------------------
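As a sketch, declarations this syntax permits include the following
(names, pictures, and values are illustrative):

    77 ROW-COUNT  PIC S9(9) BINARY.
    01 SALARY     PIC S9(6)V9(2) PACKED-DECIMAL VALUE ZERO.
    01 NET-PAY    PIC S9(5)V99 DISPLAY SIGN LEADING SEPARATE.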
Character Host Variables: There are two valid forms of character host
variables:
* Fixed-length strings
* Varying-length strings.
>>----01----variable-name----PICTURE------------picture-string------->
-77-- -PIC------ -IS--
>-------------------------------------------------------------------->
--------------------DISPLAY--
-USAGE----------
-IS--
>----------------------------------------- . -----------------------><
-VALUE----------character-constant--
-IS--
-------------------------------------------------------------------------
>>--01--variable-name-- . ------------------------------------------><
>--49--var-1----PICTURE--------------S9(4)--------------------------->
-PIC------ -IS-- -S9999-- -USAGE----------
-IS--
>----BINARY-------------------------------------------------- . ----->
-COMPUTATIONAL-4- -VALUE----------numeric-constant--
-COMP-4----------- -IS--
>--49--var-2----PICTURE------------picture-string-------------------->
-PIC------ -IS--
>-------------------------------------------------------------------->
--------------------DISPLAY--
-USAGE----------
-IS--
>----------------------------------------- . -----------------------><
-VALUE----------character-constant--
-IS--
-------------------------------------------------------------------------
Notes
DB2 uses the full length of the S9(4) variable even though SAA COBOL
only recognizes values up to 9999. This can cause data truncation
errors when COBOL statements are being executed and might effectively
limit the maximum length of variable-length character strings to 9999.
Consider using the TRUNC(OPT) or NOTRUNC COBOL compiler option
(whichever is appropriate) to avoid data truncation.
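For example, a fixed-length and a varying-length character host variable
might be declared as follows (names and lengths are illustrative):

    77 DEPT-NAME  PIC X(36).
    01 EMP-RESUME.
       49 EMP-RESUME-LEN   PIC S9(4) BINARY.
       49 EMP-RESUME-TEXT  PIC X(500).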
Graphic Character Host Variables: There are two valid forms for graphic
character host variables:
* Fixed-length strings
* Varying-length strings.
>>----01----variable-name----PICTURE------------picture-string------><
-77-- -PIC------ -IS--
>--USAGE----------DISPLAY-1------------------------------------------>
-IS--
>--------------------------------------- . -------------------------><
-VALUE----------graphic-constant--
-IS--
-------------------------------------------------------------------------
>>--01--variable-name-- . --49--var-1----PICTURE--------------------->
-PIC------ -IS--
>----S9(4)-------------------------BINARY---------------------------->
-S9999-- -USAGE---------- -COMPUTATIONAL-4-
-IS-- -COMP-4-----------
>--------------------------------------- . -------------------------><
-VALUE----------numeric-constant--
-IS--
>--49--var-2----PICTURE------------picture-string--USAGE------------->
-PIC------ -IS-- -IS--
>--DISPLAY-1--------------------------------------- . --------------><
-VALUE----------graphic-constant--
-IS--
-------------------------------------------------------------------------
Notes
1. The picture-string associated with these forms must be G(m) (or
   GG...G, with m instances of G), with 1<m<127 for fixed-length
   strings. N can be used in place of G for COBOL graphic variable
   declarations. If you use N for graphic variable declarations, USAGE
   DISPLAY-1 is optional. For varying-length strings, m cannot be
   greater than the maximum size of a varying-length graphic string.

2. DB2 uses the full size of the S9(4) variable even though some COBOL
   implementations restrict the maximum length of varying-length graphic
   strings to 9999. This can cause data truncation errors when COBOL
   statements are being executed and might effectively limit the maximum
   length of varying-length graphic strings to 9999. Consider using the
   TRUNC(OPT) or NOTRUNC COBOL compiler option (whichever is
   appropriate) to avoid data truncation.
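As an illustration, fixed-length and varying-length graphic host
variables might be declared like this (names and lengths are
hypothetical):

    01 JOB-TITLE  PIC G(10) USAGE DISPLAY-1.
    01 VAR-TITLE.
       49 VAR-TITLE-LEN   PIC S9(4) BINARY.
       49 VAR-TITLE-TEXT  PIC G(10) USAGE DISPLAY-1.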
Host Structures: The following example shows a two-level host structure:

    01 A
       02 B
          03 C1 PICTURE ...
          03 C2 PICTURE ...
When writing an SQL statement using a qualified host variable name (for
example, to identify a field within a structure), use the name of the
structure followed by a period and the name of the field. For example,
specify B.C1 rather than C1 OF B or C1 IN B.
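For instance, using the structure above, a statement referring to the
qualified fields might look like this (the WHERE value and the use of
the DB2 sample table DSN8310.DEPT are illustrative):

    EXEC SQL
      SELECT DEPTNO, DEPTNAME
        INTO :B.C1, :B.C2
        FROM DSN8310.DEPT
        WHERE DEPTNO = 'A00'
    END-EXEC.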
The following figure shows the syntax for valid host structures.
>>--level-1--variable-name-- . --------------------------------------->

     <-,--------------------------------------------------------------
>------level-2--variable-name------PICTURE------------picture-string-->
                                   -PIC------  -IS--
>--------usage-clause---------------------------- . -----------------><
     ----varstring-- . --

usage-clause:

>------------------------COMPUTATIONAL-1------------------------------>
   -USAGE----------      -COMP-1-----------
           -IS--         -COMPUTATIONAL-2-
                         -COMP-2-----------
                         -BINARY-----------
                         -COMPUTATIONAL-4-
                         -COMP-4-----------
                         -COMPUTATIONAL----
                         -COMP-------------
                         -PACKED-DECIMAL---
                         -COMPUTATIONAL-3-
                         -COMP-3-----------
                         -DISPLAY SIGN----------LEADING SEPARATE------
                                        -IS--            -CHARACTER--
>-------------------------------------------------------------------->
   -VALUE----------constant--
           -IS--

varstring:

>--49--var-1----PICTURE--------------S9(4)---------------------------->
               -PIC------  -IS--    -S9999--
>-----------------------BINARY---------------------------------------->
   -USAGE----------      -COMPUTATIONAL-4-
           -IS--         -COMP-4-----------
                         -COMPUTATIONAL----
                         -COMP-------------
>--------------------------------------- . --------------------------->
   -VALUE----------numeric-constant--
           -IS--
>--49--var-2----PICTURE------------picture-string--------------------->
               -PIC------  -IS--
>--------------------DISPLAY------------------------------------------>
   -USAGE----------  -DISPLAY-1--
           -IS--
>----------------------------------------- . ------------------------><
   -VALUE----------constant----------
           -IS--   -graphic-constant--
-------------------------------------------------------------------------
--------------------------------------------------------------------------
Table 12. SQL Data Types Generated from COBOL Declarations
-------------------------------------------------------------------------
COBOL Data Type       SQLTYPE of      SQLLEN of      SQL Data Type
                      Host Variable   Host Variable
---------------------+---------------+--------------+--------------------
COMP-1 480 4 REAL or FLOAT (n)
1<n<21
---------------------+---------------+--------------+--------------------
COMP-2 480 8 DOUBLE PRECISION
or FLOAT (n)
22<n<53
---------------------+---------------+--------------+--------------------
S9(i)V9(d) COMP-3 484 i+d in byte DECIMAL(i+d,d)
or 1, NUMERIC(i+d,d)
S9(i)V9(d) d in byte 2
PACKED-DECIMAL
---------------------+---------------+--------------+--------------------
S9(i)V9(d) 504 i+d in byte No exact
DISPLAY SIGN 1, equivalent.
LEADING SEPARATE d in byte 2 Use DECIMAL(i+d,d)
or NUMERIC(i+d,d)
---------------------+---------------+--------------+--------------------
S9(4) COMP-4 or 500 2 SMALLINT
BINARY
---------------------+---------------+--------------+--------------------
S9(9) COMP-4 or 496 4 INTEGER
BINARY
---------------------+---------------+--------------+--------------------
Fixed-length 452 m CHAR(m)
character data
---------------------+---------------+--------------+--------------------
Varying-length 448 m VARCHAR(m)
character data
1<m<254
---------------------+---------------+--------------+--------------------
Varying-length 456 m VARCHAR(m)
character data
m>254
---------------------+---------------+--------------+--------------------
Fixed-length 468 m GRAPHIC(m)
graphic data
---------------------+---------------+--------------+--------------------
Varying-length 464 m VARGRAPHIC(m)
graphic data
1<m<127
---------------------+---------------+--------------+--------------------
Varying-length 472 m VARGRAPHIC(m)
graphic data
m>127
-----------------------------------------------------------------------
Table 13 can be used to determine the COBOL data type that is equivalent
to a given SQL data type.
-------------------------------------------------------------------------
Table 13. SQL Data Types Mapped to Typical COBOL Declarations
------------------------------------------------------------------------
SQL Data Type COBOL Data Type Notes
------------------+--------------------------+--------------------------
SMALLINT S9(4) COMP-4 or BINARY
------------------+--------------------------+--------------------------
INTEGER S9(9) COMP-4 or BINARY
------------------+--------------------------+--------------------------
# DECIMAL(p,s) or  If p<19:                  p is precision; s is
# NUMERIC(p,s)     S9(p-s)V9(s) COMP-3 or    scale. 0<s<p<18. If
#                  S9(p-s)V9(s)              s=0, S9(p)V or S9(p)
#                  PACKED-DECIMAL or         should be used. If s=p,
#                  S9(p-s)V9(s)              SV9(s) should be used.
#                  DISPLAY SIGN LEADING      If p>18: there is no
#                  SEPARATE                  exact equivalent.
#                                            Use COMP-2.
------------------+--------------------------+--------------------------
REAL or FLOAT COMP-1 1<n<21
(n)
------------------+--------------------------+--------------------------
DOUBLE PRECISION COMP-2 22<n<53
or FLOAT (n)
------------------+--------------------------+--------------------------
CHAR(n) fixed-length character 1<n<254
string
------------------+--------------------------+--------------------------
# VARCHAR(n) or varying-length character For LONG VARCHAR
# LONG VARCHAR string columns, run DCLGEN
# against the table to
# determine the length
# that DB2 assigns to the
# column. Use that length
# to choose an appropriate
# value for the
# varying-length character
# string.
------------------+--------------------------+--------------------------
GRAPHIC(n) fixed-length graphic n refers to the number
string of double-byte
characters, not to the
number of bytes.
1<n<127
------------------+--------------------------+--------------------------
# VARGRAPHIC(n) or varying-length graphic n refers to the number
# LONG VARGRAPHIC string of double-byte
# characters, not to the
# number of bytes.
# For LONG VARGRAPHIC
# columns, run DCLGEN
# against the table to
# determine the length
# that DB2 assigns to the
# column. Use that length
# to choose an appropriate
# value for the
# varying-length graphic
# string.
------------------+--------------------------+--------------------------
DATE fixed-length character If you are using a date
string of length n exit routine, n is
determined by that
routine. Otherwise, n
must be at least 10.
------------------+--------------------------+--------------------------
TIME               fixed-length character     If you are using a time
                   string of length n         exit routine, n is
                                              determined by that
                                              routine. Otherwise, n
                                              must be at least 6; to
                                              include seconds, n must
                                              be at least 8.
------------------+--------------------------+--------------------------
TIMESTAMP fixed-length character n must be at least 19.
string To include microseconds,
n must be 26; if n is
less than 26, truncation
occurs on the
microseconds part.
-----------------------------------------------------------------------
SQL Data Types with No COBOL Equivalent: COBOL does not provide an
equivalent for the decimal data type when the precision is greater than
18. You can use:
For small integers that can exceed 9999, use S9(5). For large integers
that can exceed 999,999,999, use S9(10) COMP-3 to obtain the decimal data
type. If you use COBOL for integers that exceed the COBOL PICTURE, then
we recommend specifying the column as decimal to ensure that the data
types match and perform well.
Indicator variables are declared in the same way as host variables. The
declarations of the two types of variables can be mixed in any way that
seems appropriate to you. Indicator variables can be defined as a scalar
variable, as array elements in a structure form, or as an array variable
using a single level OCCURS clause. For more information about indicator
variables, see "Using Indicator Variables with Host Variables" in
topic 3.1.4.1.4.
The following figure shows the syntax for a valid indicator variable.
>>----01----variable-name----PICTURE--------------S9(4)-------------->
-77-- -PIC------ -IS-- -S9999--
>-----------------------BINARY-------------.------------------------><
-USAGE---------- -COMPUTATIONAL-4-
-IS-- -COMP-4----------
-COMPUTATIONAL---
-COMP-------------
-------------------------------------------------------------------------
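For illustration, an indicator variable declared with this syntax might
be used as follows (host variable names are hypothetical; DSN8310.EMP is
a DB2 sample table):

    77 PHONE-IND  PIC S9(4) BINARY.

    EXEC SQL
      SELECT PHONENO INTO :EMP-PHONE:PHONE-IND
        FROM DSN8310.EMP
        WHERE EMPNO = :EMP-NO
    END-EXEC.
    IF PHONE-IND < 0
       DISPLAY 'PHONE NUMBER IS NULL'.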
The following figure shows the syntax for a valid indicator structure.
>>----01----variable-name----PICTURE--------------S9(4)-------------->
-77-- -PIC------ -IS-- -S9999--
>-----------------------BINARY-------------OCCURS--integer----------->
-USAGE---------- -COMPUTATIONAL-4-
-IS-- -COMP-4----------
-COMPUTATIONAL---
-COMP-------------
>-------------.-----------------------------------------------------><
-TIMES--
-------------------------------------------------------------------------
# You can use the subroutine DSNTIAR to convert an SQL return code into a
# text message. DSNTIAR takes data from the SQLCA, formats it into a
# message, and places the result in a message output area that you provide
# in your application program. For concepts and more information on the
# behavior of DSNTIAR, see "Handling SQL Error Return Codes" in
# topic 3.1.5.4.
-------------------------------------------------------------------------
# The output lines of text, each line being the length specified in
# lrecl, are put into this area. For example, you could specify the
# format of the output area as:

#   01 ERROR-MESSAGE.
#      02 ERROR-LEN   PIC S9(4) COMP VALUE +1320.
#      02 ERROR-TEXT  PIC X(132) OCCURS 10 TIMES
#                     INDEXED BY ERROR-INDEX.
#   77 ERROR-TEXT-LEN PIC S9(9) COMP VALUE +132.
#      .
#      .
#      .
#   CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
# If your CICS application requires CICS storage handling, you must use
# the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following
# syntax:

#   CALL 'DSNTIAC' USING eib commarea sqlca msg lrecl.

# DSNTIAC has extra parameters, which you must use for calls to routines
# that use CICS commands.

# eib        EXEC interface block
# commarea   communication area

# The assembler source code for DSNTIAC and job DSNTEJ5A, which
# assembles and link-edits DSNTIAC, are in the data set DSN310.SDSNSAMP.
-------------------------------------------------------------------------
# An example of calling DSNTIAR from an application appears in the DB2
# sample COBOL program DSN8BC3, contained in the library
# DSN8310.SDSNSAMP. See Appendix B, "Sample Applications" in topic B.0 for
# instructions on how to access and print the source code for the sample
# program.
# When you use object-oriented extensions in an IBM COBOL for MVS & VM
# application, be aware of the following:

# Where to place the SQLCA, SQLDA, and data declarations: You can put the
# SQLCA, SQLDA, and SQL data declarations in the WORKING-STORAGE SECTION
# of a program, class, or method. An SQLCA or SQLDA in a class
# WORKING-STORAGE SECTION is global to all the methods of the class. An
# SQLCA or SQLDA in a method WORKING-STORAGE SECTION is local to that
# method only.

# If a class contains SQL statements, and a second class inherits the
# first class, both classes can contain SQL because there are two
# instances of an SQLCA, one for each class.
# Rules for host variables: You can declare COBOL variables that are used
# as host variables in the WORKING-STORAGE SECTION or LINKAGE SECTION of
# a program, class, or method. You can also declare host variables in the
# LOCAL-STORAGE SECTION of a method. The scope of a host variable is the
# method, class, or program within which it is defined. A host variable
# definition must be unique within a class, even if that host variable is
# defined within a method.
All executable SQL statements in the program must be within the scope of
the declaration of the SQLCOD variable or SQLCA variables. See Chapter 6
of SQL Reference for more information about the INCLUDE statement and
Appendix C of SQL Reference for a complete description of SQLCA fields.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. The INCLUDE SQLDA statement is not
supported within a FORTRAN program. If present, an error message results.
The DB2 precompiler does not support the exclamation point (!) as a
comment recognition character in FORTRAN programs.
Continuation for SQL Statements: The line continuation rules for SQL
statements are the same as those for FORTRAN statements, except that EXEC
SQL must be specified within one line. The SQL examples in this section
have Cs in the sixth column to indicate that they are continuations of
the EXEC SQL statement.

Margins: Code the SQL statements (starting with EXEC SQL) in columns 7
through 72, inclusive. If EXEC SQL starts before the specified margins,
the DB2 precompiler does not recognize the SQL statement.
* DEBUG
* FUNCTION
* IMPLICIT
* PROGRAM
* SUBROUTINE
WHENEVER Statement: The target for the GOTO clause in the SQL WHENEVER
statement must be a label in the FORTRAN source and must reference a
statement in the same subprogram. A WHENEVER statement only applies to
SQL statements in the same subprogram.

* You cannot use the SQL INCLUDE statement to include the following
  statements: PROGRAM, SUBROUTINE, BLOCK, FUNCTION, or IMPLICIT.
* The byte data type is not supported within embedded SQL. Byte is not
a recognizable host data type.
You must explicitly declare all host variables used in SQL statements.
Implicit declarations of host variables via default typing or by the
IMPLICIT statement are not supported. A host variable used in an SQL
statement must be declared prior to the first use of the host variable in
an SQL statement.
The FORTRAN statements that are used to define the host variables can be
preceded by a BEGIN DECLARE SECTION statement and followed by an END
DECLARE SECTION statement. The BEGIN DECLARE SECTION and END DECLARE
SECTION statements are only required when the STDSQL(86) precompiler
option is used.
All host variables within an SQL statement must be preceded with a colon
(:).
The names of host variables should be unique within the program, even if
the host variables are in different blocks, functions, or subroutines.
The declaration for a character host variable must not use an expression
to define the length of the character variable. You can use a character
host variable with an undefined length (for example, CHARACTER *(*)). The
length of any such variable is determined when its associated SQL
statement is executed.
An SQL statement that uses a host variable must be within the scope of
the statement in which the variable was declared.
You must be careful when calling subroutines that might change the
attributes of a host variable. Such alteration can cause SQLCODE -804
(SQLSTATE '51001') to be issued while the program is running. See
Appendix C of SQL Reference for more information.
Numeric Host Variables: The following figure shows the syntax for valid
numeric host variable declarations.
>>----INTEGER*2------------------------------------------------------>
-INTEGER----------
-*4--
-REAL-------------
-*4--
-REAL*8-----------
-DOUBLE PRECISION--
<-,------------------------------------------
>----variable-name-------------------------------------------------><
-/--numeric-constant--/--
-------------------------------------------------------------------------
Character Host Variables: The following figure shows the syntax for valid
character host variable declarations.
>>--CHARACTER-------------------------------------------------------->
-*n--
<-,----------------------------------------------------
>----variable-name-------------------------------------------------><
-*n-- -/--character-constant--/--
-------------------------------------------------------------------------
--------------------------------------------------------------------------
Table 14. SQL Data Types Generated from FORTRAN Declarations
-------------------------------------------------------------------------
FORTRAN Data Type     SQLTYPE of      SQLLEN of      SQL Data Type
                      Host Variable   Host Variable
---------------------+---------------+--------------+--------------------
INTEGER*2 500 2 SMALLINT
---------------------+---------------+--------------+--------------------
INTEGER*4 496 4 INTEGER
---------------------+---------------+--------------+--------------------
REAL*4 480 4 FLOAT (single
precision)
---------------------+---------------+--------------+--------------------
REAL*8 480 8 FLOAT (double
precision)
---------------------+---------------+--------------+--------------------
CHARACTER*n 452 n CHAR(n)
-----------------------------------------------------------------------
-------------------------------------------------------------------------
Table 15. SQL Data Types Mapped to Typical FORTRAN Declarations
------------------------------------------------------------------------
SQL Data Type FORTRAN Equivalent Notes
---------------+---------------------+----------------------------------
SMALLINT INTEGER*2
---------------+---------------------+----------------------------------
INTEGER INTEGER*4
---------------+---------------------+----------------------------------
DECIMAL(p,s) no exact equivalent Use REAL*8
or
NUMERIC(p,s)
---------------+---------------------+----------------------------------
FLOAT(n) REAL*4 1<n<21
single
precision
---------------+---------------------+----------------------------------
FLOAT(n) REAL*8 22<n<53
double
precision
---------------+---------------------+----------------------------------
CHAR(n) CHARACTER*n 1<n<254
---------------+---------------------+----------------------------------
# VARCHAR(n) or no exact equivalent Use a character host variable
# LONG VARCHAR large enough to contain the
# largest expected VARCHAR value.
# For LONG VARCHAR columns, run
# DCLGEN against the table, using
# any supported language, to
# determine the length that DB2
# assigns to the column. Use that
# length to choose an appropriate
# length for the character host
# variable.
---------------+---------------------+----------------------------------
GRAPHIC(n) not supported
---------------+---------------------+----------------------------------
VARGRAPHIC(n) not supported
---------------+---------------------+----------------------------------
DATE CHARACTER*n If you are using a date exit
routine, n is determined by that
routine; otherwise, n must be at
least 10.
---------------+---------------------+----------------------------------
TIME CHARACTER*n If you are using a time exit
routine, n is determined by that
routine. Otherwise, n must be
at least 6; to include seconds,
n must be at least 8.
---------------+---------------------+----------------------------------
TIMESTAMP CHARACTER*n n must be at least 19. To
include microseconds, n must be
26; if n is less than 26,
truncation occurs on the
microseconds part.
-----------------------------------------------------------------------
Host variables must be type compatible with the column values with which
they are to be used.

* Numeric data types are compatible with each other. For example, if a
  column value is INTEGER, the host variable must be declared INTEGER*2,
  INTEGER*4, REAL, REAL*4, REAL*8, or DOUBLE PRECISION.
CHARACTER*7 CLSCD
INTEGER*2 DAY
CHARACTER*8 BGN, END
INTEGER*2 DAYIND, BGNIND, ENDIND
The following figure shows the syntax for a valid indicator variable.
>>--INTEGER*2--variable-name----------------------------------------><
-/--numeric-constant--/--
-------------------------------------------------------------------------
# You can use the subroutine DSNTIR to convert an SQL return code into a
# text message. DSNTIR makes some preparations, builds a parameter list,
# and calls DSNTIAR for you. DSNTIAR takes data from the SQLCA, formats
# it into a message, and places the result in a message output area that
# you provide in your application program. For concepts and more
# information on the behavior of DSNTIAR, see "Handling SQL Error Return
# Codes" in topic 3.1.5.4.
-------------------------------------------------------------------------
# error-length
#   The total length of the message output area. The output lines of
#   text are put into this area. For example, you could specify the
#   format of the output area as:

# return-code
#   Accepts a return code from DSNTIAR.
All executable SQL statements in the program must be within the scope of
the declaration of the SQLCODE variable or SQLCA variables. See Chapter 6
of SQL Reference for more information about the INCLUDE statement and
Appendix C of SQL Reference for a complete description of SQLCA fields.
Unlike the SQLCA, there can be more than one SQLDA in a program, and an
SQLDA can have any valid name. An SQLDA can be coded in a PL/I program
either directly or by using the SQL INCLUDE statement. Using the SQL
INCLUDE statement requests the inclusion of a standard SQLDA declaration:

SQLDA declarations must be placed before the first SQL statement that
references that data descriptor. See Chapter 6 of SQL Reference for more
information about the INCLUDE statement and Appendix C of SQL Reference
for a complete description of SQLDA fields.
Each SQL statement in a PL/I program must begin with EXEC SQL and end
with a semicolon (;). The EXEC and SQL keywords must appear all on one
line, but the remainder of the statement can appear on the next and
subsequent lines.
Declaring Tables and Views: Your PL/I program should also include a
DECLARE statement to describe each table and view the program accesses.
The DECLARE statements can be generated by the DB2 declarations generator
(DCLGEN). For details, see "Chapter 3-3. Using DCLGEN" in topic 3.3.
Names: Any valid PL/I name can be used for a host variable. PL/I names
are subject to the following restrictions: external entry names or access
plan names that begin with 'DSN' and host variable names that begin with
'SQL' are not allowed. These names are reserved for DB2.
Sequence Numbers: The source statements generated by the DB2 precompiler
do not include sequence numbers. IEL0378 messages from the PL/I compiler
identify lines of code without sequence numbers. You can ignore these
messages.
If you use the PL/I macro processor, do not use the PL/I *PROCESS
statement in the source to pass options to the PL/I compiler. You can
specify the needed options on the option parameter of the DSNH command or
the PARM.PLI*LANGLIST option of the EXEC statement in the DSNHPLI
procedure.
* If DBCS is used in the PL/I source, then DB2 rules for the following
language elements apply:
- graphic strings
- graphic string constants
- host identifiers
- mixed data in character strings
- MIXED DATA option
For example:
You must explicitly declare all host variables used in SQL statements. A
host variable used in an SQL statement must be declared prior to the first
use of the host variable in an SQL statement, unless the TWOPASS
precompiler option is specified. If the TWOPASS precompiler option is
specified, the declaration for host variables used in a DECLARE
CURSOR
statement must come before the DECLARE CURSOR statement.
The PL/I statements that are used to define the host variables can be
preceded by a BEGIN DECLARE SECTION statement and followed by
an END
DECLARE SECTION statement. The BEGIN DECLARE SECTION and
END DECLARE
SECTION statements are only required when the STDSQL(86)
precompiler
option is used.
The names of host variables should be unique within the program, even if
the host variables are in different blocks or procedures, unless you
qualify them with a structure name to make them unique.
An SQL statement that uses a host variable must be within the scope of
the
statement in which the variable was declared.
Numeric Host Variables: The following figure shows the syntax for valid
numeric host variable declarations.
>>----DECLARE------variable-name------------------------------------->
-DCL------ <-,--------------
-(----variable-name---)--
>------BINARY-------FIXED-------------------------------------------->
-BIN----- -(--precision--------------)--
-DECIMAL- -,scale--
-DEC------ -FLOAT--(--precision--)------------------
>-------------------------------------------------------------------><
-Alignment and/or Scope and/or Storage--
-------------------------------------------------------------------------
Character Host Variables: The following figure shows the syntax for
valid
character host variables.
>>----DECLARE------variable-name------------------------------------->
-DCL------ <-,--------------
-(----variable-name---)--
>----CHARACTER----(--length--)--------------------------------------->
-CHAR------- -VARYING-
-VAR------
>-------------------------------------------------------------------><
-Alignment and/or Scope and/or Storage--
-------------------------------------------------------------------------
Graphic Host Variables: The following figure shows the syntax for valid
graphic host variables.
>>----DECLARE------variable-name------------------------------------->
-DCL------ <-,--------------
-(----variable-name---)--
>--GRAPHIC--(--length--)--------------------------------------------->
-VARYING-
-VAR------
>-------------------------------------------------------------------><
-Alignment and/or Scope and/or Storage--
-------------------------------------------------------------------------
DCL 1 A,
2 B,
3 C1 CHAR(...),
3 C2 CHAR(...);
You can use the structure name as shorthand notation for a list of
scalars. You can qualify a host variable with a structure name (for
example, STRUCTURE.FIELD). Host structures are limited to two
levels. A
host structure for DB2 data can be thought of as a named collection of
host variables.
You must terminate the host structure variable by ending the declaration
with a semicolon. For example:
DCL 1 A,
2 B CHAR,
2 (C, D) CHAR;
DCL (E, F) CHAR;
Host variable attributes can be specified in any order acceptable to PL/I.
For example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are
all
acceptable.
The following figure shows the syntax for valid host structures.
>>----DECLARE----level--variable-name--,------------------------------>
      -DCL------                        -Scope and/or storage--
   <-,-----------------------------------------------------------------
>----level----var-1-------------------------------------------------->
             <-,------
              -(----var-2---)--
>------BINARY-------FIXED-----------------------------------;--------><
       -BIN-----          -(--precision--------------)--
       -DECIMAL-                       -,scale--
       -DEC------   -FLOAT--(--precision--)--
       -CHARACTER-------------------------------------
       -CHAR-------   -(--integer--)--   -VARYING-
                                         -VARY-----
       -GRAPHIC---------------------------------------
                      -(--integer--)--   -VARYING-
                                         -VARY-----
-------------------------------------------------------------------------
Table 16. SQL Data Types Generated from PL/I Declarations
------------------------------------------------------------------------
PL/I Data Type SQLTYPE of SQLLEN of SQL Data Type
Host Variable Host
Variable
--------------------+---------------+--------------+--------------------
BIN FIXED(n)         500            2              SMALLINT
1<=n<=15
--------------------+---------------+--------------+--------------------
BIN FIXED(n)         496            4              INTEGER
16<=n<=31
--------------------+---------------+--------------+--------------------
DEC FIXED(p,s)       484            p in byte 1,   DECIMAL(p,s)
1<=p<=15 and                        s in byte 2
0<=s<=p                             See note.
--------------------+---------------+--------------+--------------------
BIN FLOAT(p)         480            4              REAL or FLOAT(n)
1<=p<=21                                           1<=n<=21
--------------------+---------------+--------------+--------------------
BIN FLOAT(p)         480            8              DOUBLE PRECISION
22<=p<=53                                          or FLOAT(n)
                                                   22<=n<=53
--------------------+---------------+--------------+--------------------
DEC FLOAT(m)         480            4              FLOAT (single
1<=m<=6                                            precision)
--------------------+---------------+--------------+--------------------
DEC FLOAT(m)         480            8              FLOAT (double
7<=m<=16                                           precision)
--------------------+---------------+--------------+--------------------
CHAR(n)              452            n              CHAR(n)
--------------------+---------------+--------------+--------------------
CHAR(n) VARYING      448            n              VARCHAR(n)
1<=n<=254
--------------------+---------------+--------------+--------------------
CHAR(n) VARYING      456            n              VARCHAR(n)
n>254
--------------------+---------------+--------------+--------------------
GRAPHIC(n)           468            n              GRAPHIC(n)
--------------------+---------------+--------------+--------------------
GRAPHIC(n) VARYING   464            n              VARGRAPHIC(n)
1<=n<=127
--------------------+---------------+--------------+--------------------
GRAPHIC(n) VARYING   472            n              VARGRAPHIC(n)
n>127
---------------------------------------------------------------------
Note:
   For DECIMAL host variables, SQLLEN contains the precision in byte 1
   and the scale in byte 2.
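The mapping in Table 16 and the SQLLEN packing for DECIMAL can be restated in a small Python sketch (Python is not part of this manual; the names below are illustrative):

```python
# A few rows of Table 16: PL/I declaration -> (SQLTYPE, SQL data type).
SQLTYPE = {
    "BIN FIXED(15)":   (500, "SMALLINT"),
    "BIN FIXED(31)":   (496, "INTEGER"),
    "DEC FIXED(p,s)":  (484, "DECIMAL(p,s)"),
    "CHAR(n)":         (452, "CHAR(n)"),
    "CHAR(n) VARYING": (448, "VARCHAR(n)"),   # 456 when n > 254
    "GRAPHIC(n)":      (468, "GRAPHIC(n)"),
}

def decimal_sqllen(precision: int, scale: int) -> int:
    """SQLLEN for a DECIMAL host variable, viewed as a halfword:
    precision in byte 1, scale in byte 2 (see the note to Table 16)."""
    return (precision << 8) | scale
```

For example, a DEC FIXED(7,2) host variable carries SQLTYPE 484 and an SQLLEN whose two bytes hold 7 and 2.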
Table 17 can be used to determine the PL/I data type that is equivalent to
a given SQL data type.
-------------------------------------------------------------------------
Table 17. SQL Data Types Mapped to Typical PL/I Declarations
------------------------------------------------------------------------
SQL Data Type PL/I Equivalent Notes
------------------+------------------------+----------------------------
SMALLINT           BIN FIXED(n)             1<=n<=15
------------------+------------------------+----------------------------
INTEGER            BIN FIXED(n)             16<=n<=31
------------------+------------------------+----------------------------
DECIMAL(p,s) or    If p<=15:                p is precision; s is
NUMERIC(p,s)       DEC FIXED(p) or          scale. 1<=p<=31 and
                   DEC FIXED(p,s) or        0<=s<=p.
                   PICTURE                  There is no exact
                   picture-string           equivalent for p>15. (See
                                            "PL/I Data Types with No
                                            SQL Equivalent" in
                                            topic 3.4.5.7.1 for more
                                            information.)
------------------+------------------------+----------------------------
REAL or            BIN FLOAT(p) or          1<=n<=21, 1<=p<=21 and
FLOAT(n)           DEC FLOAT(m)             1<=m<=6
------------------+------------------------+----------------------------
DOUBLE PRECISION   BIN FLOAT(p) or          22<=n<=53, 22<=p<=53 and
or FLOAT(n)        DEC FLOAT(m)             7<=m<=16
------------------+------------------------+----------------------------
CHAR(n)            CHAR(n)                  1<=n<=254
------------------+------------------------+----------------------------
# VARCHAR(n) or      CHAR(n) VAR              For LONG VARCHAR columns,
# LONG VARCHAR                                run DCLGEN against the
#                                             table to determine the
#                                             length that DB2 assigns
#                                             to the column. Use that
#                                             length to choose an
#                                             appropriate value for n.
------------------+------------------------+----------------------------
GRAPHIC(n)         GRAPHIC(n)               n refers to the number of
                                            double-byte characters,
                                            not to the number of
                                            bytes.
                                            1<=n<=127
------------------+------------------------+----------------------------
# VARGRAPHIC(n) or   GRAPHIC(n) VAR           n refers to the number of
# LONG VARGRAPHIC                             double-byte characters,
#                                             not to the number of
#                                             bytes.
#                                             For LONG VARGRAPHIC
#                                             columns, run DCLGEN
#                                             against the table to
#                                             determine the length that
#                                             DB2 assigns to the
#                                             column. Use that length
#                                             to choose an appropriate
#                                             value for n.
------------------+------------------------+----------------------------
DATE CHAR(n) If you are using a date
exit routine, n is
determined by that
routine; otherwise, n must
be at least 10.
------------------+------------------------+----------------------------
TIME CHAR(n) If you are using a time
exit routine, n is
determined by that
routine. Otherwise, n
must be at least 6; to
include seconds, n must be
at least 8.
------------------+------------------------+----------------------------
TIMESTAMP CHAR(n) n must be at least 19. To
include microseconds, n
must be 26; if n is less
than 26, truncation occurs
on the microseconds part.
-----------------------------------------------------------------------
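The datetime rows of Table 17 amount to a minimum-length rule for the CHAR(n) host variable. The following Python sketch (illustrative only; it assumes no date or time exit routine is in use) restates those minimums:

```python
def min_char_length(sql_type: str, with_fraction: bool = False) -> int:
    """Minimum n for a CHAR(n) host variable per Table 17, with no
    exit routine: DATE needs 10; TIME needs 6, or 8 to include
    seconds; TIMESTAMP needs 19, or 26 to keep microseconds (a
    shorter n truncates the microseconds part)."""
    if sql_type == "DATE":
        return 10
    if sql_type == "TIME":
        return 8 if with_fraction else 6
    if sql_type == "TIMESTAMP":
        return 26 if with_fraction else 19
    raise ValueError("not a datetime type: " + sql_type)
```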
You should be aware of the following when you declare PL/I variables.
PL/I Data Types with No SQL Equivalent: PL/I supports some data types
with no SQL equivalent (COMPLEX and BIT variables, for example). In
most
cases, PL/I statements can be used to convert between the unsupported
PL/I
data types and the data types supported by SQL.
SQL Data Types with No PL/I Equivalent: PL/I does not provide an
equivalent for the decimal data type when the precision is greater than
15. You can use:
PL/I Scoping Rules: The precompiler does not support PL/I scoping
rules.
PL/I host variables used in SQL statements must be type compatible with
the columns with which they are to be used.
Indicator variables are declared in the same way as host variables. The
declarations of the two types of variables can be mixed in any way that
seems appropriate to you. For more information about indicator variables,
see "Using Indicator Variables with Host Variables" in topic 3.1.4.1.4.
The following figure shows the syntax for a valid indicator variable.
-------------------------------------------------------------------------
>>----DECLARE------variable-name------------------------------------->
      -DCL------  <-,--------------
                   -(----variable-name---)--
>----BINARY----FIXED(15)--;-----------------------------------------><
     -BIN-----
-------------------------------------------------------------------------
The following figure shows the syntax for a valid indicator array.
>>----DECLARE------variable-name--(--dimension--)-------------------->
-DCL------ <-,-------------------------------
-(----variable-name--(--dimension--)---)--
>----BINARY----FIXED(15)--;-----------------------------------------><
-BIN-----
-------------------------------------------------------------------------
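After a FETCH, the application tests the indicator: a negative value means the column is null and the host variable contents are undefined. A minimal Python sketch of that convention (the helper name is invented for illustration):

```python
def column_value(host_value, indicator: int):
    """Return the fetched value, or None when the indicator
    variable reports a null column (indicator < 0)."""
    if indicator < 0:
        return None
    return host_value
```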
# You can use the subroutine DSNTIAR to convert an SQL return code
into a
# text message. DSNTIAR takes data from the SQLCA, formats it into a
# message, and places the result in a message output area that you provide
# in your application program. For concepts and more information on the
# behavior of DSNTIAR, see "Handling SQL Error Return Codes" in
# topic 3.1.5.4.
-------------------------------------------------------------------------
# The DSNTIAR parameters have the following meanings:
# The output lines of text, each line being the length specified in
# lrecl, are put into this area. For example, you could specify the
# format of the output area as:
# If your CICS application requires CICS storage handling, you must use
# the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the
following
# syntax:
# DSNTIAC has extra parameters, which you must use for calls to routines
# that use CICS commands.
#   eib          EXEC interface block
#   commarea     communication area
-------------------------------------------------------------------------
As with any application, you must plan and design programs to meet the
application's requirements. Designing a DB2 database application
requires
that you also plan for binding, locking, recovery, and perhaps for using
distributed data. This chapter includes the following subjects:
After you have precompiled your source program, you create a load
module,
possibly one or more packages, and an application plan. It does not
matter which you do first. Creating a load module is similar to compiling
and link-editing an application containing no SQL statements. Creating a
package or an application plan, a process unique to DB2, involves binding
one or more DBRMs.
The DB2 precompiler provides many options. Most of the options do not
affect the way you design or code the program. They allow you to tell the
precompiler what you have already done--for example, what host
language
you use or what value you depend on for the maximum precision of a
decimal
number. Or, they tell the precompiler what you want it to do--how many
lines per page in the listing or whether you want a cross-reference
report. In many cases, you may want to accept the default value provided.
A few options, however, can affect the way you code. For example, you
need to know if you are using NOFOR or STDSQL(86) before you begin
coding.
Before you begin coding, please review the list of options in Table 23 in
topic 4.2.2.4.
# Depending upon how you design your DB2 application, you might bind
all
# your DBRMs in one operation, creating only a single application plan.
Or,
# you might bind some or all of your DBRMs into separate packages in
# separate operations. After that, you must still bind the entire
# application as a single plan, listing the included packages or collections
# and binding any DBRMs not already bound into packages. Regardless of
what
# the plan contains, you must bind a plan before the application can run.
# * Determine how you want to bind the DBRMs. You can bind them
into
# packages, directly into plans, or use a combination of both methods.
# * Develop a naming convention and strategy for the most effective and
# efficient use of your plans and packages.
# Input to binding the plan can include DBRMs only, a package list only,
or
# a combination of the two. When choosing one of those alternatives for
# your application, consider the impact of rebinding and see "Planning for
# Changes to Your Application" in topic 4.1.1.3.
# Binding with a Package List Only: At one extreme, you can bind each
DBRM
# into its own package. Input to binding a package is a single DBRM only.
# A one-to-one correspondence between programs and packages might
easily
# allow you to keep track of each. However, your application could consist
# of too many packages to track easily.
# Binding All DBRMs to a Plan: At the other extreme, you can bind all
your
# DBRMs to a single plan. This approach has the disadvantage that a
change
# to even one DBRM requires rebinding the entire plan, even though most
# DBRMs are unchanged.
# Binding all DBRMs to a plan is suitable for small applications that are
# unlikely to change or that require all resources to be acquired when the
# plan is allocated rather than when your program first uses them.
# Ease of Maintenance: When you use packages, you do not need to bind
the
# entire plan again when you change one SQL statement. You need to bind
# only the package associated with the changed SQL statement.
# Flexibility in Using Name Qualifiers: You can use a bind option to name
a
# qualifier for the unqualified object names in SQL statements in a plan or
# package. By using packages, you can use different qualifiers for SQL
# statements in different parts of your application. By rebinding, you can
# redirect your SQL statements, for example, from a test table to a
# production table.
# With packages, you probably do not need dynamic plan selection and its
# accompanying exit routine. A package listed within a plan is not
# accessed until it is executed. However, it is possible to use dynamic
# plan selection and packages together. Doing so can reduce the number
# of plans in an application, and hence the effort needed to maintain the
# dynamic plan exit routine. See "Using Packages with Dynamic Plan
# Selection" in topic 4.2.4.4 for information on using packages with
# dynamic plan selection.
-------------------------------------------------------------------------
# As you design your application, consider what will happen to your plans
# and packages when you make changes to your application.
# If you want to change the bind options in effect when the plan or package
# runs, review the descriptions of those options in Chapter 2 of Command
and
# Utility Reference. Not all options of BIND are also available on
REBIND.
# A plan or package can also become invalid for reasons that do not depend
# on operations in your program: for example, if an index is dropped that is
# used as an access path by one of your queries. In those cases, DB2 might
# rebind the plan or package automatically, the next time it is used. (For
# details about that operation, see "Automatic Rebinding" in topic 4.1.1.4.)
-------------------------------------------------------------------------
# Table 18. Changes Requiring BIND or REBIND
------------------------------------------------------------------------
# Change made: Minimum action necessary:
----------------------------+-------------------------------------------
# Drop a table, index or None required; automatic rebind is
# other object, and recreate attempted at the next run.
# the object
----------------------------+-------------------------------------------
# Revoke an authorization to None required; automatic rebind is
# use an object attempted at the next run. Automatic
# rebind fails if authorization is still
# not available; then you must issue REBIND
# for the package or plan.
----------------------------+-------------------------------------------
# Run RUNSTATS to update Issue REBIND for the package or plan to
# catalog statistics possibly change the access path chosen.
----------------------------+-------------------------------------------
# Add an index to a table Issue REBIND for the package or plan to
# use the index.
----------------------------+-------------------------------------------
# Change bind options Issue REBIND for the package or plan, or
# issue BIND with ACTION(REPLACE) if the
# option you want is not available on
# REBIND.
----------------------------+-------------------------------------------
# Change statements in host Precompile, compile, and link the
# language and SQL application program. Issue BIND with
# statements ACTION(REPLACE) for the package or plan.
------------------------------------------------------------------------
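Table 18 can be read as a simple lookup from the kind of change to the minimum action required. The sketch below restates it in Python (keys and wording are ours, not DB2 syntax):

```python
# Minimum action after each kind of change (condensed from Table 18).
REBIND_ACTION = {
    "drop and recreate object": "none; automatic rebind at next run",
    "revoke authorization":     "none; automatic rebind at next run",
    "run RUNSTATS":             "REBIND to possibly change the access path",
    "add index":                "REBIND to let DB2 consider the index",
    "change bind options":      "REBIND, or BIND ACTION(REPLACE) if the "
                                "option is not available on REBIND",
    "change SQL statements":    "precompile, compile, link; then BIND "
                                "ACTION(REPLACE)",
}
```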
# If you drop an object that a package depends on, the following occurs:
# In all cases, the plan does not become invalid unless it has a DBRM
# referencing the dropped object. If the package or plan becomes invalid,
# automatic rebind occurs the next time the package or plan is allocated.
REBIND PACKAGE(SNTRESA.GROUP1.PROGA.(V1))
ENABLE(CICS,REMOTE)
REBIND PACKAGE(*)
If DBRMs for an application are bound directly to a plan, then the entire
plan must be recreated using BIND with the REPLACE option.
If the NOPKLIST keyword is used, any package list specified when the
plan
was bound is deleted.
The following example shows the options for rebinding a plan named
PLANA
and changing the package list.
REBIND PLAN(PLANA) PKLIST(GROUP1.*) MEMBER(ABC)
The following example shows the option for rebinding the plan and
dropping the entire package list.
REBIND PLAN(PLANA) NOPKLIST
DB2 allows more than one application process to access the same data at
essentially the same time. This is known as concurrency, and it cannot be
provided without cost. To control such undesirable effects as lost
updates, access to uncommitted data, and unrepeatable reads, concurrency
must be controlled.
Drain locks are used to control access by DB2 utilities and commands.
A means called draining allows utilities and commands to acquire
partial or full control of a needed object with least interruption to
concurrent access. For information on concurrency control by
utilities and commands, see Section 7 (Volume 3) of Administration
Guide.
The object of a lock is the resource being locked. You have most control
over locks on user data in tables. But there are also locks on DB2
internal objects. Most of those you are never aware of, but sometimes you
have to consider locks on:
(Schematic: a lock on a table, with individual page locks held on the
pages of that table.)
In a simple table space, a single page can contain rows from more than
one
table. A lock on the page or on the entire table space locks all the data
in the page or table space, no matter what table the data belongs to. A
lock needed to access data from one table can make data from other tables
temporarily unavailable.
The duration of a lock is the length of time the lock is held. It varies
according to when the lock is acquired and when it is released.
Duration of Table and Table Space Locks: Table and table space locks
can
be acquired when a plan is first allocated, or you can delay acquiring
them until the resource they lock is first used. For the principal means
of controlling the duration of these locks, see: "The ACQUIRE and
RELEASE
Options" in topic 4.1.2.7.1.
Duration of Page Locks: Page locks are acquired when the page is first
accessed. When they are released depends on many factors. For details
on
controlling the duration of page locks, see "The ISOLATION Option" in
topic 4.1.2.7.2.
The mode (sometimes state) of a lock tells what access to the locked
object is permitted to the lock owner and to any concurrent application
processes.
Figure 23 shows the possible modes for page locks; Figure 24 shows the
modes for table and table space locks. When a page is locked, the table
or table space containing it is also locked. In that case, the table or
table space lock has one of the intent modes: IS, IX, or SIX. (In
contrast, the modes S, U, and X of table and table space locks are
sometimes called gross modes.)
-------------------------------------------------------------------------
Modes and their effects are listed in the order of increasing control
over resources.
S (SHARE)
The lock owner and any concurrent processes can read, but not
change, the locked page. Concurrent processes can acquire S or
U locks on the page or might read data without acquiring a page
lock.
U (UPDATE)
The lock owner can read, but not change, the locked page;
however, the owner can promote the lock to an X lock and then
can change the page. Processes concurrent with the U lock can
acquire S locks and read the page, but no concurrent process can
acquire a U lock.
X (EXCLUSIVE)
The lock owner can read or change the locked page. No
concurrent process can access the page.
-------------------------------------------------------------------------
Figure 23. Modes of Page Locks
-------------------------------------------------------------------------
Modes and their effects are listed in the order of increasing control
over resources.
IS (INTENT SHARE)
The lock owner can read data in the table or table space, but
not change it. Concurrent processes can both read and change
the data. The lock owner might acquire a page lock on any data
it reads.
IX (INTENT EXCLUSIVE)
The lock owner and concurrent processes can read and change data
in the table or table space. The lock owner must acquire a page
lock on any data it reads or changes.
S (SHARE)
The lock owner and any concurrent processes can read, but not
change, data in the table or table space. The lock owner does
not need page locks on data it reads.
U (UPDATE)
The lock owner can read, but not change, the locked data;
however, the owner can promote the lock to an X lock and then
can change the data. Processes concurrent with the U lock can
acquire S locks and read the data, but no concurrent process can
acquire a U lock. The lock owner does not need page locks.
X (EXCLUSIVE)
The lock owner can read or change data in the table or table
space. No concurrent process can access the data. The lock
owner does not need page locks.
-------------------------------------------------------------------------
Figure 24. Modes of Table and Table Space Locks
If the two locks are not compatible, B cannot proceed. It must wait until
A releases its lock. (And, in fact, it must wait until all existing
incompatible locks are released.)
-------------------------------
Table 19. Compatibility of
Page Lock Modes
------------------------------
Page
Lock Mode S U X
------------+-----+-----+-----
S Yes Yes No
------------+-----+-----+-----
U Yes No No
------------+-----+-----+-----
X No No No
----------------------------
-------------------------------------------------
Table 20. Compatibility of Table and Table
Space Lock Modes
------------------------------------------------
Lock Mode IS IX S U SIX X
------------+-----+-----+-----+-----+-----+-----
IS Yes Yes Yes Yes Yes No
------------+-----+-----+-----+-----+-----+-----
IX Yes Yes No No No No
------------+-----+-----+-----+-----+-----+-----
S Yes No Yes Yes No No
------------+-----+-----+-----+-----+-----+-----
U Yes No Yes No No No
------------+-----+-----+-----+-----+-----+-----
SIX Yes No No No No No
------------+-----+-----+-----+-----+-----+-----
X No No No No No No
-------------------------------------------
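Tables 19 and 20 can be expressed directly as data. The Python sketch below (illustrative only; DB2 itself performs this check in IRLM) maps each held mode to the set of modes a concurrent request may hold alongside it:

```python
# Table 19: page lock compatibility (held mode -> compatible requests).
PAGE = {"S": {"S", "U"}, "U": {"S"}, "X": set()}

# Table 20: table and table space lock compatibility.
TABLE = {
    "IS":  {"IS", "IX", "S", "U", "SIX"},
    "IX":  {"IS", "IX"},
    "S":   {"IS", "S", "U"},
    "U":   {"IS", "S"},
    "SIX": {"IS"},
    "X":   set(),
}

def compatible(matrix, held, requested):
    """True if a new request for `requested` can be granted while
    another process holds `held` on the same object."""
    return requested in matrix[held]
```

For example, two processes can hold IS and IX on the same table space at once, but a SIX lock shuts out everything except IS.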
-------------------------------------------------------------------------
*****************************
*****************************
* * * ------------------- *
* * * Table N *
* ------------------- * (3) * ------------------ *
* Job EMPLJCHG ----------------> 000010 Page A *
* ------------------- * Suspend * ------------------ *
* (1) OK * * *
* v * * *
* ------------------- * * ------------------- *
* Table M * * Ø *
* * * (2) OK *
* ------------------ * (4) * ------------------- *
* 000300 Page B <----------------- Job PROJNCHG *
* ------------------ * Suspend * ------------------- *
* ------------------- * * *
*****************************
*****************************
-------------------------------------------------------------------------
Figure 25. A Deadlock Example
The IMS Resource Lock Manager (IRLM) is responsible for managing all
requests for locks and for controlling access to DB2 databases. The
effects of locks that you want to minimize are, therefore, known by IRLM
terms: suspension, timeout, and deadlock.
Incoming lock requests are queued. Requests for lock promotion, and
requests for a lock by an application process that already holds a lock on
the same object, precede requests for locks by new applications. Within
those groups, the request order is "first in, first out."
-------------------------------------------------------------------------
-------------------------------------------------------------------------
# * IMS issues a pseudo-abend and backs out all changes. For an MPP
# or IFP region, IMS reschedules the transaction. The application
# does not receive an SQLCODE.
-------------------------------------------------------------------------
-------------------------------------------------------------------------
The ACQUIRE and RELEASE options of the bind process for plans or
packages
help determine when table and table space locks are acquired and
released,
and so they affect the durations of those locks. The ISOLATION option
helps determine when page locks are released. But the release of data
locks used by a cursor can also be affected by using the WITH HOLD
clause
in the DECLARE CURSOR statement. For details on those possibilities,
see
the following topics:
ACQUIRE(ALLOCATE) / RELEASE(DEALLOCATE)
This option avoids deadlocks by locking your resources at the
beginning. Use this option if processing efficiency is more
important than concurrency. Generally, do not use it if your
application contains many SQL statements that are often not executed.
* All tables or table spaces are locked when the plan is allocated.
* All tables or table spaces are unlocked only when the plan
terminates.
* The locks used are the most restrictive needed to execute all SQL
statements in the plan regardless of whether the statements are
actually executed.
* Restrictive states for page sets are not checked until the page
set is accessed. Locking when the plan is allocated ensures that
the job is compatible with other SQL jobs. Waiting until the
first access to check restrictive states provides greater
availability; however, it is possible that an SQL transaction
could:
ACQUIRE(USE) / RELEASE(DEALLOCATE)
* All tables or table spaces are unlocked only when the plan
terminates.
ACQUIRE(USE) / RELEASE(COMMIT)
This is the default option and provides the greatest concurrency.
* Locks held by cursors that are defined WITH HOLD are kept past
commit points. Except for those, all tables and table spaces are
unlocked when:
----------------------------------------------------------------
----------------------------------------------------------------
----------------------------------------------------------------
The RELEASE value for a package might not be the same as for a plan
that
uses the package. The RELEASE value for the package takes precedence,
unless both plan and package access the same object; in that case, the
value DEALLOCATE always overrides COMMIT.
If the application process does need to use the update statement, it needs
an IX lock then. DB2 exchanges the IS lock for an IX lock (first waiting
until any incompatible locks held by other processes are released), and
the lock is said to be promoted. When locks are promoted, it is in the
direction of increasing control over resources: for page locks, from S or
U to X; for table or table space locks, from IS to either IX or S, or
directly to U, SIX, or X.
(Schematic: with cursor stability, DB2 issues an UNLOCK for a page as
soon as the application requests a row on another page; with repeatable
read, the LOCK on every page is held until COMMIT.)
Figure 26. Page Locking for Cursor Stability and Repeatable Read
-------------------------------------------------------------------------
At Your At that time, pages While locks are
time application are locked that released on pages
process .. contain ... containing ...
-----+-----------------+------------------------+-----------------------
T1 Retrieves row A1 --
A1
-----+-----------------+------------------------+-----------------------
T2 Retrieves row A2 A1
A2
-----+-----------------+------------------------+-----------------------
T3 Retrieves row A3 A2
A3
-----+-----------------+------------------------+-----------------------
T4 Updates row A3 A3(X) --
-----+-----------------+------------------------+-----------------------
T5 Inserts row A7 A3(X), A7(X) --
-----+-----------------+------------------------+-----------------------
T6 Retrieves row A3(X), A7(X), A4 --
A4
-----+-----------------+------------------------+-----------------------
T7 Deletes row A4 A3(X), A7(X), A4(X) --
-----+-----------------+------------------------+-----------------------
T8 Retrieves row A3(X), A7(X), A4(X), --
A5 A5
-----+-----------------+------------------------+-----------------------
T9 Reaches end of -- A3, A7, A4, A5
the unit of
work
----------------------------------------------------------------------
Figure 27. How Pages are Locked and Released with Cursor Stability.
(X)
denotes pages that are locked under exclusive control; no other
applications can access a row within that page in any way until
the unit of work completes.
-------------------------------------------------------------------------
At Your At that time, pages While locks are
time application are locked that released on pages
process ... contain ... containing ...
-----+-----------------+------------------------+-----------------------
T1 Retrieves row A1 --
A1
-----+-----------------+------------------------+-----------------------
T2 Retrieves row A1, A2 --
A2
-----+-----------------+------------------------+-----------------------
T3 Retrieves row A1, A2, A3 --
A3
-----+-----------------+------------------------+-----------------------
T4 Updates row A3 A1, A2, A3(X) --
-----+-----------------+------------------------+-----------------------
T5 Inserts row A7 A1, A2, A3(X), A7(X) --
-----+-----------------+------------------------+-----------------------
T6 Retrieves row A1, A2, A3(X), A7(X), --
A4 A4
-----+-----------------+------------------------+-----------------------
T7 Deletes row A4 A1, A2, A3(X), A7(X), --
A4(X)
-----+-----------------+------------------------+-----------------------
T8 Retrieves row A1, A2, A3(X), A7(X), --
A5 A4(X), A5
-----+-----------------+------------------------+-----------------------
T9 Reaches end of -- A1, A2, A3, A7, A4,
the unit of A5
work
----------------------------------------------------------------------
Figure 28. How Pages are Locked and Released with Repeatable Read.
(X)
denotes pages that are locked under exclusive control; no other
applications can access a row within that page in any way until
the unit of work completes. All other pages are locked under
shared control; other applications can retrieve a row from that
page, but cannot update or delete it until the lock is
released.
RR means repeatable read: page locks are held at least until the next
commit point. With this option, if the application process returns to
the same page and reads the same row again, the data cannot have
changed.
Figure 26 shows the difference between the ways that DB2 handles locks
for cursor stability and for repeatable read.
Cursor Stability: Cursor stability means that the page containing the row
you retrieve (for example, row A1) might not be locked at all. If it is
locked, the lock is held only until you retrieve another row on another
page. Or, if the SQL statement is SELECT...INTO, the lock is released as
soon as the statement is executed.
If the page is not locked, another process might change or delete the row
while your cursor still points to it. If your application depends on
keeping the row unchanged until you retrieve another row, you can ensure
that the page is locked by binding with the option CURRENTDATA(YES).
Because your application process can update rows, U (update) locks are
acquired on all other pages when they are retrieved; other applications
can retrieve a row from that page, but cannot update or delete it until
the lock is released.
Repeatable Read: Repeatable read means that the pages containing all the
rows you have retrieved, as well as the page containing your current row,
are locked until the end of the unit of work. No other application
process can update or delete one of those rows until the unit of work
completes. But, even though other processes are prevented from updating
and deleting those rows, your application can retrieve and operate on the
rows as many times as necessary until the unit of work completes. For an
example, see Figure 28.
Use repeatable read when your application has to retrieve the same rows
several times and cannot tolerate different data each time.
Figure 29. The Difference between CS and RR. A cursor is opened twice
within one unit of work.
# A plan bound with one set of options can include packages in its package
# list that were bound with different sets of options. In general,
# statements in a DBRM bound as a package use the options that the package
# was bound with, and statements in DBRMs bound to a plan use the options
# that the plan was bound with.
# The rules are slightly different for the bind options that affect locking,
# RELEASE and ISOLATION. The values of those two options are set when the
# lock on the resource is acquired and usually stay in effect until the lock
# is released. But a conflict can occur if a statement that is bound with
# one pair of values requests a lock on a resource that is already locked by
# a statement that is bound with a different pair of values. DB2 resolves
# the conflict by resetting each option to the available value that causes
# the lock to be held for the greatest duration.
# As a result:
Page Locks: All necessary locks are held past the commit point. However,
X and U page locks are demoted to S locks at that time. (Because changes
have been committed, exclusive control is no longer needed.) After the
commit point, the locks are released according to the isolation level at
which they were acquired: for CS, when all cursors on the page are moved
or closed; for RR, at the next commit point.
Table, Table Space, and DBD Locks: All necessary locks are held past the
commit point. After that, they are released according to the RELEASE
option under which they were acquired: for COMMIT, at the next commit
point after the cursor is closed; for DEALLOCATE, when the application is
deallocated.
Claims: Any claims are held past the commit point. They are released at
the next commit point after all held cursors have moved off the object or
have been closed.
You can override DB2 rules for choosing initial lock attributes by using
the SQL statement LOCK TABLE in an application process. The statement has
two forms:
For segmented table spaces: When you lock a table in share mode, the
table space is locked with an IS lock, and the table with an S lock. With
exclusive mode, the table space is locked with an IX lock, and the table
with an X lock. While these locks exist, all concurrent application
programs that attempt to change data in the table are locked out, and
applications attempting read-only processing are locked out if the LOCK
TABLE statement acquired an X lock. This type of locking can be
appropriate for a particularly high-priority application. If the table
space was created with LOCKSIZE ANY or LOCKSIZE PAGE, other tables in the
table space continue to acquire page locks.
For nonsegmented table spaces: The LOCK TABLE statement locks the entire
table space, with either an S or an X lock. If the table space contains
more than one table, all tables in the table space are locked implicitly.
The effects of the LOCK TABLE statement are shown in Table 21. If an SQL
query or application program accesses remote tables, it acquires locks for
those tables implicitly, by the rules of the server.
-------------------------------------------------------------------------
Table 21. Impact of the LOCK TABLE Statement
------------------------------------------------------------------------
Release Segmented Table Space Nonsegmented Table Space
Parameter
--------------+----------------------------+----------------------------
RELEASE An S or X table lock is An S or X table space lock
(COMMIT) acquired. If the table is acquired. If the table
space has LOCKSIZE ANY or space has LOCKSIZE ANY or
PAGE, page locking resumes PAGE, page locking resumes
after commit. after commit.
--------------+----------------------------+----------------------------
RELEASE A table lock is acquired A table space lock is
(DEALLOCATE) and held until acquired and held until
deallocation. deallocation.
-----------------------------------------------------------------------
* Bind the application plan with cursor stability and have the process
issue the LOCK TABLE statement to the one sensitive table. (For
nonsegmented table spaces, this assumes the table being locked is the
only table in its table space. Otherwise, all other tables in the
table space are also locked by the LOCK TABLE statement.) In this
way, only one table (or table space) is isolated at the level of
repeatable read. All the other tables in other table spaces are
shared for concurrent update.
Caution When Using the LOCK TABLE Statement: A LOCK TABLE statement locks
all tables in a nonsegmented table space, even if they are not all named
in the statement. The locks are held until a commit point or until
deallocation occurs. No other process can update the table space for that
duration. If the lock is in exclusive mode, no other process can read the
table space.
The access path used can affect the mode, size, and even the object of a
lock. For example, an UPDATE statement using a table space scan might
need a lock on the entire table space. If rows were located through an
index, DB2 might choose to lock only individual pages of the table space.
When an index is used, its pages might also be locked. If you use the
EXPLAIN statement to investigate the access path chosen for an SQL
statement, then check the lock mode in the TSLOCKMODE column of the
resulting PLAN_TABLE. If the table resides in a nonsegmented table space,
or is defined with LOCKSIZE TABLESPACE, the mode shown is that of the
table space lock. Otherwise, the mode is that of the table lock.
* DB2 ensures that your program does not retrieve uncommitted data.
-----------------------------------------------------------------
-----------------------------------------------------------------
A SYNCPOINT command.
-----------------------------------------------------------------
* Set ISOLATION (RR or CS) when you bind the plan or package.
-- With RR (repeatable read), all accessed pages are locked until
the next commit point (longer, for locks held by cursors
defined WITH HOLD).
EXEC SQL
LOCK TABLE table-name IN SHARE MODE
END-EXEC
EXEC SQL
LOCK TABLE table-name IN EXCLUSIVE MODE
END-EXEC
-------------------------------------------------------------------------
Figure 30. Summary of DB2 Locks
During recovery, when a DB2 database is being restored to its most recent
consistent state, any changes to data that were not committed prior to the
program abend or system failure must be backed out without interfering
with other system activities.
When a unit of work completes, all locks implicitly acquired by that unit
of work are released, allowing a new unit of work to begin. A unit of
work starts when the first update to a DB2 object occurs.
Before you can connect to another DBMS, you must issue a COMMIT
statement. If the system fails at this point, DB2 cannot know that your
transaction is complete. In this case, as in the case of a failure during
a one-phase commit operation for a single subsystem, you must make your
own provision for maintaining data integrity.
If your program abends or a system failure occurs, DB2 backs out data
changes that have not yet been committed. Changed data is restored to its
original condition without interfering with other system activities.
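The backout behavior described above can be sketched with any transactional
database. The following Python fragment uses SQLite as a stand-in for DB2;
the table and values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acct (id INTEGER PRIMARY KEY, bal INTEGER)")
conn.execute("INSERT INTO acct VALUES (1, 100)")
conn.commit()                                  # commit point: bal = 100

# An uncommitted change, then a backout: the data returns to its
# state as of the last commit point.
conn.execute("UPDATE acct SET bal = 0 WHERE id = 1")
conn.rollback()                                # back out the change

bal = conn.execute("SELECT bal FROM acct WHERE id = 1").fetchone()[0]
print(bal)  # the committed value, 100
```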
The SQL COMMIT and ROLLBACK statements are not valid in a CICS
environment. CICS functions used in programs can be coordinated with DB2
so that DB2 and non-DB2 data are consistent.
If a system failure occurs, DB2 backs out data changes that have not yet
been committed. Changed data is restored to its original condition
without interfering with other system activities. Sometimes, DB2 data is
not restored to a consistent state immediately. DB2 does not process
indoubt data (data from units of work that are neither committed nor
backed out) until the CICS attachment facility is also restarted. To
ensure that DB2 and CICS are synchronized, restart both DB2 and the CICS
attachment facility.
A commit point can occur in a program as the result of any one of the
following four events:
* The program issues a SYNC call. The SYNC call is a Fast Path system
  service call to request commit point processing. You can use a SYNC
  call only in a nonmessage-driven Fast Path program.
* For a program that processes messages as its input, a commit point can
  occur when the program retrieves a new message. IMS considers a new
  message the start of a new unit of work in the program. Unless the
  transaction has been defined as single- or multiple-mode on the
  TRANSACT statement of the APPLCTN macro for the program, retrieving a
  new message does not signal a commit point. For more information
  about the APPLCTN macro, see the IMS/VS Version 2 Installation Guide.
* IMS and DB2 release any locks that the program has held on data since
the last commit point. This makes the data available to other
application programs and users. (The case of a cursor defined as WITH
HOLD in a BMP program is an exception. DB2 holds those locks until
the cursor is closed or the program ends.)
* DB2 closes any open cursors that the program has been using. Your
  program must issue CLOSE CURSOR statements before a checkpoint call or
  a GU to the message queue, not after.
* IMS and DB2 make the program's changes to the data base permanent.
* IMS deletes any output messages that the program has produced since
the last commit point (for nonexpress PCBs).
The SQL COMMIT and ROLLBACK statements are not valid in an IMS
environment.
If a system failure occurs, DB2 backs out data changes that have not yet
been committed. Changed data is restored to its original state without
interfering with other system activities. Sometimes DB2 data is not
restored to a consistent state immediately. Data in an indoubt state is
not processed by DB2 until IMS is also restarted. To ensure that DB2 and
IMS are synchronized, you must restart both DB2 and IMS.
Symbolic checkpoint calls indicate to IMS that the program has reached a
SYNC point. Such calls also establish places in the program from which
the program can be restarted.
The restart call (XRST), which you must use with symbolic checkpoints,
provides a method for restarting a program after an abend. It restores
the program's data areas to the way they were when the program terminated
abnormally, and it restarts the program from the last checkpoint call that
the program issued before terminating abnormally.
* Inform DB2 that the changes your program has made to the database can
  be made permanent. DB2 makes the changes to DB2 data permanent, and
  IMS makes the changes to IMS data permanent.
* Return the next input message to the program's I/O area if the program
processes input messages. In MPPs and transaction-oriented BMPs, a
checkpoint call acts like a call for a new message.
* Multiple-mode programs
* Batch-oriented BMPs
* Single-mode programs
* Programs that access the database in read-only mode (defined with the
  processing option GO during a PSBGEN) and are short enough to restart
  from the beginning
* Programs that by their nature must have exclusive use of the database.
* How long it takes to back out and recover that unit of processing.
The program must issue checkpoints frequently enough to make the
program easy to back out and recover.
Checkpoints also close all open cursors, which means you will have to
reopen the cursors you want and re-establish positioning.
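The frequency guidance above, checkpoint often enough to keep backout and
recovery cheap but not so often that the overhead dominates, is commonly
implemented with an operation counter, a timer, or both. A rough Python
sketch, with SQLite's commit() standing in for the checkpoint call; the
table name and both thresholds are invented:

```python
import sqlite3
import time

OPS_PER_COMMIT = 100        # invented operation-count threshold
SECONDS_PER_COMMIT = 5.0    # invented elapsed-time threshold

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (n INTEGER)")

ops_since_commit = 0
last_commit = time.monotonic()
commits = 0
for n in range(250):
    conn.execute("INSERT INTO log VALUES (?)", (n,))
    ops_since_commit += 1
    # "Checkpoint" when either threshold is reached.
    if (ops_since_commit >= OPS_PER_COMMIT
            or time.monotonic() - last_commit >= SECONDS_PER_COMMIT):
        conn.commit()              # work saved, locks released
        ops_since_commit = 0
        last_commit = time.monotonic()
        commits += 1
conn.commit()                      # final commit for the remaining tail
print(commits)
```

In a real IMS program the commit would be a CHKP call, and, as the text
notes, the program would then have to reopen its cursors and re-establish
position.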
When a DL/I CHKP call is issued from an application program using DB2
databases, IMS processes the CHKP call for all DL/I databases, and DB2
commits all the DB2 database resources. No checkpoint information is
recorded for DB2 databases in the IMS log or the DB2 log. It is the
application program's responsibility to record relevant information about
DB2 databases for a checkpoint, if necessary.
* Use a counter in your program to keep track of elapsed time and issue
a checkpoint call after a certain time interval.
* DB2 and DL/I changes are committed as the result of IMS CHKP calls.
However, the application program database positioning is lost in DL/I.
In addition, the program database positioning in DB2 can be affected
as follows:
- If the WITH HOLD option is not specified for a cursor, then the
position of that cursor is lost.
* IMS rollback calls, ROLL and ROLB, can be used to back out DB2 and
DL/I changes to the last commit point. The main difference between
the calls is that when you issue a ROLL call, DL/I terminates your
program with an abend, whereas, when you issue a ROLB call, DL/I
returns control to your program after the call.
- A ROLL call with tape logging (BKO specification does not matter),
or disk logging and BKONO specified. DL/I updates are not backed
out and U0778 abend occurs. DB2 updates are backed out to the
previous checkpoint.
- A ROLB call with tape logging (BKO specification does not matter),
or disk logging and BKONO specified. DL/I updates are not backed
out and an AL status code is returned in the PCB. DB2 updates are
backed out to the previous checkpoint. The DB2 DL/I support
causes the application program to abend due to the ROLB failure.
Using ROLL: Issuing a ROLL call causes IMS to terminate the program with
a user abend code U0778. This terminates the program without a storage
dump.
When you issue a ROLL call, the only option you supply is the call
function, ROLL.
Using ROLB: The advantage of using ROLB is that IMS returns control to
the program after executing ROLB, so the program can continue
processing. The options for ROLB are:
In an online IMS system, recovery and restart are part of the IMS system.
For a batch region, recovery and restart are controlled by operational
procedures developed by the location. For more information, refer to
IMS/VS Version 2 Application Programming.
Before you choose which method to use, we recommend that you first read
about system-directed access, application-directed access, and related
topics such as connection management. The scenario in "Coding an
Application with Either Application-Directed Access or System-Directed
Access" in topic 4.1.4.4 illustrates the differences in coding between the
two by showing how to code the same application using each method. See
Chapter 2 of SQL Reference for information on connection management.
For application servers that support the two-phase commit process, both
methods allow for updating data at several remote locations within the
same unit of work. To obtain the more restrictive level of function
available at DB2 Version 2 Release 3, refer to "Restricted Function in
Earlier Versions of DB2" in topic 4.1.4.2 for system-directed access and
"Update Restrictions at Servers that Do Not Support Two-Phase Commit" in
topic 4.1.4.3 for application-directed access. Table 22 summarizes the
main differences between application-directed access and system-directed
access.
-------------------------------------------------------------------------
Table 22. Differences Between Application-Directed Access and
System-Directed Access
------------------------------------------------------------------------
Item Application-Directed System-Directed Access
Access
------------------+--------------------------+--------------------------
Program Requires a remote BIND A remote BIND is not
preparation of packages applicable
------------------+--------------------------+--------------------------
Plan members Packages only Packages or DBRMs bound
directly to the plan
------------------+--------------------------+--------------------------
Method of Embedded statements are All statements are
execution executed as static executed dynamically.
statements. Embedded statements are
dynamically bound at the
server when they are
first executed in a unit
of work.
------------------+--------------------------+--------------------------
Servers Can use any server that Can use DB2 servers only
uses the DRDA protocols
------------------+--------------------------+--------------------------
SQL statements Can use any SQL Limited to SQL INSERT,
statement supported by UPDATE, and DELETE
the system which statements, and to
executes the statement statements supporting
SELECT
------------------+--------------------------+--------------------------
Connection The CONNECT statement is Three-part names and
management used to connect an aliases are used to
application process to a refer to objects at
server. another server.
-----------------------------------------------------------------------
location-name.aaaaaa.ssssss
Alias names have the same allowable forms as table or view names. The
name can refer to a table or view at the current server or to a table or
view elsewhere. For more on aliases, see the description of CREATE ALIAS
in Chapter 6 of SQL Reference. For more on three-part names, and on SQL
naming conventions in general, see Chapter 3 of SQL Reference.
-------------------------------------------------------------------------
-------------------------------------------------------------------------
A system that uses DRDA can request the execution of SQL statements at any
DB2. Preparing DB2 for incoming SQL requests is discussed in Section 3
(Volume 1) of Administration Guide.
Your application can update only one system within a unit of work.
---------------------------------------------------------------------
---------------------------------------------------------------------
Suppose that Spiffy Computer has a master project table that supplies
information about all projects currently active throughout the company.
Spiffy has several branches in various locations across the United States,
each a DB2 location maintaining a copy of this project table named
DSN8310.PROJ. Occasional inserts are made to all the copies of the table
from the main branch location. The application that makes the inserts
uses a table of location names. For each row inserted, the application
executes an INSERT statement in DSN8310.PROJ for each location.
DB2 Version 3 lets you update many remote relational databases within a
logical unit of work for all DB2 application environments and for either
access method. System-directed access uses three-part names and aliases
that are subject to the update restrictions discussed previously.
Application-directed access uses new bind options and extensions to SQL
(CONNECT, RELEASE, and SET CONNECTION) to perform operations at multiple
sites. Both application-directed access and system-directed access use a
two-phase commit process to guarantee the uniform commitment of the
actions of a logical unit of work at all sites. That is to say, either
all of the data changes (updates, inserts, deletes) persist or none of the
changes persist, despite component, system, or communications failures.
The application assigns the character string to the variable INSERTX and
then executes these statements:
EXEC SQL
PREPARE STMT1 FROM :INSERTX;
EXEC SQL
EXECUTE STMT1 USING :PROJNO, :PROJNAME, :DEPTNO,
:RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ;
The host variables for Spiffy's project table are declared as they are for
the sample project table in "Project Table (DSN8310.PROJ)" in topic A.4.
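The PREPARE/EXECUTE ... USING pattern corresponds, in call-level APIs, to
preparing a statement text with parameter markers once and supplying the
host-variable values at each execution. A hedged Python/SQLite sketch,
with abbreviated columns and invented data, not the manual's own example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PROJ (PROJNO TEXT, PROJNAME TEXT, DEPTNO TEXT)")

# The statement text plays the role of :INSERTX in the PREPARE;
# the tuple supplied at execute() plays the role of the USING clause.
insertx = "INSERT INTO PROJ VALUES (?, ?, ?)"
for row in [("MA2100", "WELD LINE AUTOMATION", "D01"),
            ("MA2110", "W L PROGRAMMING", "D11")]:
    conn.execute(insertx, row)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM PROJ").fetchone()[0]
print(count)
```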
Committing the work only when the loop has executed for all locations
keeps the data consistent at all locations. Either every location has
committed the insert or, if a failure has prevented any location from
inserting, all other locations have rolled back the insert. (If a failure
occurs during the commit process, the entire unit of work can be left
indoubt.)
EXEC SQL
CONNECT TO :LOCATION_NAME;
EXEC SQL
INSERT INTO DSN8310.PROJ VALUES (:PROJNO,
:PROJNAME, :DEPTNO, :RESPEMP,
:PRSTAFF, :PRSTDATE, :PRENDATE,
:MAJPROJ);
Committing the work only when the loop has executed for all locations
keeps the data consistent at all locations. Either every location has
committed the insert or, if a failure has prevented any location from
inserting, all other locations have rolled back the insert. (If a failure
occurs during the commit process, the entire unit of work can be left
indoubt.)
The host variables for Spiffy's project table are declared as they are for
the sample project table in "Project Table (DSN8310.PROJ)" in topic A.4.
LOCATION_NAME is a character-string variable of length 16.
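The multi-location loop can be sketched outside DB2 as follows, with
separate SQLite connections standing in for remote servers. Note the
caveat: this sketch merely defers every local commit until all inserts
have succeeded; it does not provide the two-phase commit guarantee that
DB2 supplies, and all names are invented.

```python
import sqlite3

# Each "location" is an independent database, a stand-in for a remote DB2.
locations = {name: sqlite3.connect(":memory:")
             for name in ("NY", "SF", "STL")}
for conn in locations.values():
    conn.execute("CREATE TABLE PROJ (PROJNO TEXT, PROJNAME TEXT)")

row = ("AD3100", "ADMIN SERVICES")
try:
    for name, conn in locations.items():   # like CONNECT TO :LOCATION_NAME
        conn.execute("INSERT INTO PROJ VALUES (?, ?)", row)
    for conn in locations.values():        # commit only after every insert
        conn.commit()
except sqlite3.Error:
    for conn in locations.values():        # any failure: back out everywhere
        conn.rollback()

counts = [c.execute("SELECT COUNT(*) FROM PROJ").fetchone()[0]
          for c in locations.values()]
print(counts)
```

With two-phase commit, the window in which some sites have committed and
others have not is managed by the coordinator; this client-side loop still
has that window between the two commit passes.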
Closing Cursors with CONNECT (Type 1): The SQL CONNECT (Type 1) statement
implicitly closes open cursors. However, CONNECT (Type 1) does not always
release all locks held at the current server if the current server is the
local DB2 subsystem. We do not recommend that you retain these locks
while processing at a remote location. Instead, we recommend the
following steps before processing at a remote location:
1. Explicitly close all open cursors, using the SQL CLOSE CURSOR
statement.
2. COMMIT
3. CONNECT TO :location-name
* The options available for the program preparation process are not
  identical on all DBMSs. See Appendix F, "Program Preparation Options
  for Remote Packages" in topic F.0 for more information on program
  preparation options.
* The available clauses and statements of SQL are not identical on all
DBMSs.
DB2 provides CONNECT precompiler options that let you specify the type of
CONNECT. CONNECT(1) or CT(1) specifies CONNECT (Type 1) rules, and
CONNECT(2) or CT(2) specifies CONNECT (Type 2) rules. These options can
be specified in the Other Options field on the DB2I Precompile panel. See
Chapter 6 of SQL Reference for more information on the differences between
CONNECT (Type 1) and CONNECT (Type 2).
If you do not specify the CONNECT option when you precompile your program,
the rules of the CONNECT (Type 2) statement apply. However, if you use
the CURRENTSERVER bind option, then the rules of CONNECT (Type 1) are
implicit.
The SQLRULES option of the BIND PLAN subcommand lets you specify whether a
CONNECT (Type 2) statement is executed according to DB2 rules or the
ANSI/ISO SQL standard of 1992. This option applies to any application
process that uses the plan and executes CONNECT (Type 2) statements.
Checking BIND Options: If DB2 is the requesting system, and you are
binding a package to run on a non-DB2 server, you can request only the DB2
bind options that are supported by the server. You will have to specify
these options at the requester using DB2's bind syntax. To find out which
options are supported by a specific server DBMS, check the documentation
provided for that server. If the server recognizes the option by a
different name, the table of generic descriptions in Appendix F, "Program
Preparation Options for Remote Packages" in topic F.0 might help to
identify it.
If DB2 is the server, and you are binding a package to run on DB2, there
are two considerations: how will you specify the bind options to the
requester, and what does the server do as a result of receiving the bind
option?
For the syntax of DB2 BIND and REBIND subcommands, see Chapter 2 of
Command and Utility Reference.
For a list of DB2 bind options in generic terms, including options you
cannot request from DB2 but can use if you request from a non-DB2
server, see Appendix F, "Program Preparation Options for Remote
Packages" in topic F.0.
Generally, your program can use the SQL statements and clauses that are
supported by a remote DBMS on which they are to execute, even though they
are not supported on the local DBMS. To find what SQL is supported at the
requester and the server, check the appropriate documentation for the DBMS
product. For DB2, see SQL Reference.
* All statements that are supported by a server can be issued from any
DBMS.
* Some statements are not sent to a remote server but are processed
completely by the requesting system. You can use those statements
only on a DBMS that supports them.
* Release a specific connection that will not be used in the next unit
of work:
* Release the current SQL connection that will not be used in the next
unit of work:
How to Ensure Block Fetching: To use either type of block fetch, DB2
needs to know that the cursor is not used for update or delete. You can
indicate that in your program by adding the phrase FOR FETCH ONLY to the
query contained within the DECLARE CURSOR statement. If you do not use
FOR FETCH ONLY, DB2 still uses block fetch for the query if:
* The result table of the cursor is read-only. (See SQL Reference for a
  description of read-only tables.)
* The result table of the cursor is not read-only, but the cursor is not
  referenced in an UPDATE or a DELETE statement, and the program does
  not contain PREPARE or EXECUTE IMMEDIATE statements.
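Block fetch itself is internal to DB2, but the access pattern it optimizes,
moving rows in blocks rather than one network message per row, can be
illustrated with fetchmany() in Python's sqlite3 module (table, rows, and
block size are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?)", [(i,) for i in range(10)])

# A read-only query, the kind that qualifies for block fetch.
cur = conn.execute("SELECT id FROM emp ORDER BY id")
blocks = 0
rows = []
while True:
    block = cur.fetchmany(4)   # each "message" carries up to 4 rows
    if not block:
        break
    blocks += 1
    rows.extend(r[0] for r in block)
print(blocks, len(rows))
```

Ten rows arrive in three blocks of up to four rows each, rather than in ten
separate single-row fetches.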
# * Declare the cursor with the clause FOR UPDATE OF. This is the
# simplest method.
# * In the package or plan containing the cursor where you prefer to fetch
# single rows, do the following steps:
# - The first FROM clause identifies more than one table or view
# - The first SELECT clause specifies the keyword DISTINCT
# - The outer subselect contains a GROUP BY clause
# - The outer subselect contains a HAVING clause
# - The first SELECT clause contains a column function
# - It contains a subquery such that the base object of the outer
# subselect, and of the subquery, is the same table
# - The first FROM clause identifies a read-only view
# - The first FROM clause identifies a catalog table with no
# updateable columns
# - A UNION or UNION ALL operator is present
# - An ORDER BY clause is present
# - A FOR FETCH ONLY clause is present.
The Best Cases: To gain the greatest efficiency when accessing remote
subsystems, compared to that on similar tables at the local subsystem, try
to write queries that send few messages over the network. To achieve
that, try to:
* Use a small answer set, relative to the total data. For example, you
can scan a million rows, even if you intend to retrieve or update only
a dozen of them. You can update thousands of them, if you can make
the same update to a few large sets. (Use a clause like WHERE
EDLEVEL>15, so each set of updates requires only one message.)
Answer: To copy a table from one location to another, you must either
write your own application program or use Data Extract Facility (DXT (*)).
DXT does not create an exact image copy.
Question: What can I do with local objects that I cannot do with remote
objects?
Answer: No, there are exceptions. See "SQL Statements and Distributed
Data" in topic 4.1.4.5.3 for information about these exceptions.
This chapter details the steps to prepare your application program for
running. It includes instructions for the five main steps required to
produce an application program, additional steps that are required by some
users, and steps for rebinding.
There are five basic steps that you need to perform to run an application
containing programs with embedded SQL statements. The following sections
provide details on each step:
"Step 1: Precompile the Application" in topic 4.2.2
"Step 2: Bind the DBRM to a Package" in topic 4.2.3
"Step 3: Bind an Application Plan" in topic 4.2.4
"Step 4: Compile (or Assemble) and Link-Edit the Application" in
topic 4.2.5
"Step 5: Run the Application" in topic 4.2.6.
PICTURE 3
Figure 31. The Program Preparation Process. DB2 is required for the bind
process and for running the program, but not for precompilation.
-------------------------------------------------------------------------
# * a C++ program
# * a COBOL program that you compile with IBM COBOL for MVS & VM
Input to and output from the precompiler are the same regardless of which
of these methods you choose.
You can invoke the precompiler at any time to process a program with
embedded SQL statements. DB2 does not have to be active, because the
precompiler does not refer to DB2 catalog tables. For this reason, the
names of tables and columns used in SQL statements are not validated
against current DB2 databases, though they are checked against any SQL
DECLARE TABLE statements present in the program. Therefore, you should
use DCLGEN to obtain accurate SQL DECLARE TABLE statements.
The primary input for the DB2 precompiler consists of statements in the
host programming language and embedded SQL statements. Host language
statements and SQL statements must be written using the same margins, as
specified in the MARGINS precompiler option.
The input data set, SYSIN, must have the attributes RECFM F or FB, LRECL
80.
The SQL INCLUDE statement can be used to get secondary input from the
include library, SYSLIB. The SQL INCLUDE statement reads input from the
specified member of SYSLIB until the end of the member is reached.
Include library input cannot contain other precompiler INCLUDE statements,
but can contain both host language and SQL statements. SYSLIB must be a
partitioned data set, with attributes RECFM F or FB, LRECL 80.
Listing Output: The output data set, SYSPRINT, used to print output from
the DB2 precompiler has an LRECL of 133 and a RECFM of FBA. Statement
numbers in the output of the precompiler listing are always displayed as
they appear in the listing. However, statement numbers greater than
32,767 are stored as 0 in the DBRM.
The data set requires space to hold all the SQL statements plus space for
each host variable name and some header information. The header
information alone requires approximately two records for each DBRM, 20
bytes for each SQL record, and 6 bytes for each host variable. For an
exact format of the DBRM, see the DBRM mapping macro, DSNXDBRM, in library
DSN310.SDSNMACS. The DCB attributes of the data set are RECFM FB, LRECL
80. The precompiler sets the characteristics. You can use IEBCOPY,
IEHPROGM, TSO commands COPY and DELETE, or other PDS management tools for
maintaining these data sets.
On the other hand, you must bind and run on the same DB2 subsystem. The
selection of access paths during binding depends on characteristics of a
particular subsystem.
Table of Options: Table 23 shows the options you can specify when
invoking the precompiler, and abbreviations for those options if they are
available. A vertical bar (|) is used to separate mutually exclusive
options. Brackets ([ ]) indicate that the enclosed option can sometimes
be omitted.
--------------------------------------------------------------------------
Table 23. DB2 Precompiler Options
-------------------------------------------------------------------------
Option Keyword Meaning
---------------------------+---------------------------------------------
APOST Recognize the apostrophe (') as the string
delimiter within host language statements
The option is not available in all
languages. See Table 24.
--------------------------------------------------------------------------
Table 24. Language-dependent Precompiler Options and Defaults
-------------------------------------------------------------------------
Language Defaults
-----------------+-------------------------------------------------------
Assembler APOST(1), APOSTSQL(1), PERIOD(1), TWOPASS,
MARGINS(1,71,16)
-----------------+-------------------------------------------------------
# C or C++ APOST(1), APOSTSQL(1), PERIOD(1), ONEPASS,
# MARGINS(1,72)
-----------------+-------------------------------------------------------
# COBOL, COB2, QUOTE(2), QUOTESQL(2), PERIOD, ONEPASS(1),
# or IBMCOB MARGINS(8,72)(1)
-----------------+-------------------------------------------------------
FORTRAN APOST(1), APOSTSQL(1), PERIOD(1), ONEPASS(1),
MARGINS(1,72)(1)
-----------------+-------------------------------------------------------
PL/I APOST(1), APOSTSQL(1), PERIOD(1), ONEPASS,
MARGINS(2,72)
------------------------------------------------------------------------
Notes:
-------------------------------------------------------------------------
You can run CICS applications only from CICS address spaces. This
restriction applies to the RUN option on the second program DSN
command processor. All of those possibilities occur in TSO.
You can append JCL from a job created by the DB2 Program Preparation
panels to the CICS translator JCL to prepare an application program.
To run the prepared program under CICS, you might need to update the
RCT and define programs and transactions to CICS. Your system
programmer must make the appropriate resource control table (RCT) and
CICS resource or table entries. For information on the required
resource entries, see Section 2 (Volume 1) of Administration Guide,
CICS/MVS Resource Definition (Macro), CICS/MVS Resource Definition
(Online), CICS/OS/VS Resource Definition (Macro), and CICS/OS/VS
Resource Definition (Online).
* DB2 precompilation
* Link-edit
* Bind
-------------------------------------------------------------------------
(*) Trademark of the IBM Corporation.
You do not have to bind a DBRM whose SQL statements all come from this
list:
CONNECT
COMMIT
ROLLBACK
DESCRIBE TABLE
RELEASE
SET CONNECTION
-------------------------------------------------------------------------
In DB2 you can run a plan by naming it in the RUN subcommand, but you
cannot run a package directly. You must include the package in a plan
and then run the plan.
When you bind a package, you specify the collection to which the package
belongs. (You must have the CREATE IN privilege on the collection, or
else have SYSCTRL or SYSADM authority.) The collection is not a physical
entity, and you do not create it: the collection name is merely a
convenient way of referring to a group of packages.
-------------------------------------------------------------------------
Table 25. BIND PACKAGE Subcommand Keywords and Parameters
------------------------------------------------------------------------
Keyword (parameters) Meaning
---------------------------------+--------------------------------------
(location-name.collection-id) The location where the package is to
be bound (Default: the local DBMS)
and the collection to which it will
belong.
---------------------------------+--------------------------------------
OWNER (authorization-id) The authorization ID of the package
owner. Default: primary
authorization ID of the package
creator.
---------------------------------+--------------------------------------
QUALIFIER (qualifier-name) The qualifier for all unqualified
table names, views, indexes, and
aliases contained in the package.
Default: owner's authorization ID.
---------------------------------+--------------------------------------
MEMBER (dbrm-member-name) The name of the DBRM used to create
the package. If you do not use
MEMBER, you must use COPY.
---------------------------------+--------------------------------------
LIBRARY (dbrm-pds-name) The partitioned dataset (library)
containing the DBRM named by MEMBER.
---------------------------------+--------------------------------------
COPY (collection-id.package-id) An existing package used to create
the new package. If you do not use
COPY, you must use MEMBER.
---------------------------------+--------------------------------------
COPYVER (version-id) The version of the package to be
copied. Default: empty string if
COPYVER is omitted.
---------------------------------+--------------------------------------
ACTION Whether the package is an addition
or a replacement.
To bind a package at a remote DB2 system, you must have all the
privileges or authority there that you would need to bind the package on
your local system. To bind a package at another type of system, such as
SQL/DS, you need any privileges that system requires to execute its SQL
statements and use its data objects.
The bind process for a remote package is the same as that for a local
package, except that the location name used must be recognized by the
local communications database as resolving to a remote location.
To bind the DBRM PROGA at the location PARIS, in the collection GROUP1,
use:
BIND PACKAGE(PARIS.GROUP1)
MEMBER(PROGA)
Then, include the remote package in the package list of a local plan, say
PLANB, by using:
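One plausible form of that BIND PLAN subcommand, sketched from the
PKLIST syntax described later in this chapter (it is not the manual's
original example, and other options such as ACTION are omitted):

```
BIND PLAN(PLANB) -
     PKLIST(PARIS.GROUP1.PROGA)
```

The three-part name gives the location, the collection, and the package,
so the plan can resolve PROGA at the remote site PARIS.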
If you have used DB2 before, you might have an existing application that
you want to run at a remote location, using remote unit of work. To do
that, you need to have the DBRMs that are bound to the application plan
bound again as packages at the remote location. You also need a new plan
that includes those remote packages in its package list.
4. Bind a new application plan at your local DB2, using these options:
PKLIST (location-name.REMOTE1.*)
CURRENTSERVER (location-name)
When you now run the existing application at your local DB2, using the
new application plan, these things happen:
The ENABLE or DISABLE option lists the system connections that are
enabled or disabled for use. Minimize the list associated with the
ENABLE or DISABLE keyword on the BIND PACKAGE subcommand. Each time the
plan or package is allocated, the list must be validated; a long list
requires more time to validate.
* All static SQL statements must be fully validated by the bind process
You must bind a plan before your DB2 application will run. A plan can
contain DBRMs, a package list specifying packages or collections of
packages, or a combination of DBRMs and a package list. Plans are bound
locally, whether or not they contain packages that will be run remotely.
Packages that will be run at remote locations, however, must be bound at
those locations.
To include packages in the package list of a plan, list them after the
PKLIST keyword of BIND PLAN. To include an entire collection of packages
in the list, use an asterisk after the collection name. For example,
PKLIST(GROUP1.*)
To bind DBRMs directly to the plan, and also include packages in the
package list, use both MEMBER and PKLIST. The example below includes:
MEMBER(PROG1,PROG2)
PKLIST(TEST2.*,GROUP1.PROGA,GROUP1.PROGC)
Table 26 summarizes BIND PLAN keywords and their meanings. For complete
descriptions, see Chapter 2 of Command and Utility Reference.
-----------------------------------------------------------------------------
Table 26. BIND PLAN Subcommand Keywords and Parameters
----------------------------------------------------------------------------
Keyword (Parameters) Meaning
---------------------------------+------------------------------------------
OWNER (authorization-id) The authorization id of the plan owner.
Default: primary authorization ID of the
process doing the bind.
---------------------------------+------------------------------------------
QUALIFIER (qualifier-name) The qualifier for all unqualified table
names, views, indexes, and aliases
contained in the plan. Default: owner's
authorization ID.
---------------------------------+------------------------------------------
MEMBER (dbrm-member-name) The names of the DBRMs that are bound
directly to the plan. If you do not use
MEMBER, you must use PKLIST.
---------------------------------+------------------------------------------
LIBRARY (dbrm-pds-name,...) The partitioned dataset libraries that
contain the DBRMs named after MEMBER.
---------------------------------+------------------------------------------
PKLIST The packages to be included in the
(location.collection.package) package list for the plan. If you do
not use PKLIST, you must use MEMBER.
You can use an asterisk (*) for
location, collection, or package, to
specify that the value is determined at
run time. For the effect, see
"Identifying Packages at Run Time" in
topic 4.2.4.3.
---------------------------------+------------------------------------------
ACQUIRE Whether resources are acquired when
first accessed, or when the plan is
allocated.
(USE) Default: Open table spaces
and acquire locks only as they are
used.
(ALLOCATE) Open all table spaces
and acquire all table and table
space locks when the plan is
allocated. If you use
ACQUIRE(ALLOCATE), you must also use
RELEASE(DEALLOCATE).
---------------------------------+------------------------------------------
ACTION Whether the plan is an addition or a
replacement.
But you need other identifiers also. The consistency token alone uniquely
identifies a DBRM that is bound directly to a plan, but it does not
necessarily identify a unique package. When you bind DBRMs directly to a
particular plan, you bind each one only once. But you can bind the same
DBRM to many packages, at different locations and in different
collections, and then you can include all those packages in the package
list of the same plan. All those packages will have the same consistency
token. As you might expect, there are ways to specify a particular
location or a particular collection at run time.
When your program executes an SQL statement, DB2 uses the value in the
CURRENT SERVER special register to determine the location of the
necessary package or DBRM. If the current server is your local DB2 and
it does not have a location name, the value is blank.
You can change the value of CURRENT SERVER by using the SQL CONNECT
statement in your program. If you do not use CONNECT, the value of
CURRENT SERVER is the location name of your local DB2 (or blank, if your
DB2 has no location name).
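For example, to make the location PARIS the current server, you might
code the following (a sketch; the statement delimiters shown are
COBOL-style, and the location name is the one used in the bind examples
above):

```sql
EXEC SQL CONNECT TO PARIS END-EXEC.
```

After this statement executes, DB2 looks for the packages your program
needs at PARIS rather than at the local subsystem.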
When your program executes an SQL statement, DB2 uses the value in the
CURRENT PACKAGESET special register as the collection name for a
necessary package. To set or change that value within your program, use
the SQL SET CURRENT PACKAGESET statement.
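For example, to direct the package search to the collection COL5 (the
collection name used in the search example below; the delimiters are
COBOL-style and illustrative):

```sql
EXEC SQL SET CURRENT PACKAGESET = 'COL5' END-EXEC.
```

From then on, DB2 looks only in collection COL5 for the packages the
program needs, instead of searching the plan's package list in order.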
The order in which you specify packages in a package list can affect
run-time performance. Searching for a specific package involves
searching the DB2 directory, which can be costly. When you use
collection-id.* with the PKLIST keyword, list first the collections in
which a package is most likely to be found.
For example, suppose you performed the following bind:
BIND PLAN (PLAN1) PKLIST (COL1.*, COL2.*, COL3.*, COL4.*)
If you then executed program PROG1, DB2 does the following:
2. All packages that have already been allocated as the plan has run.
# If you use the BIND PLAN option DEFER(PREPARE), DB2 does not search
# all collections in the package list. See "Deferring Remote PREPARE
# for Better Performance" in topic 4.2.4.8 for more information.
If the CURRENT PACKAGESET register is set, DB2 skips the check for
programs that are part of the plan and uses the value of the CURRENT
PACKAGESET special register as the collection. For example, if the
CURRENT PACKAGESET special register is set to COL5, then DB2 uses
COL5.PROG1.timestamp for the search.
PKLIST (collection.*)
# You can add packages to the collection even after binding the plan. DB2
# lets you bind packages having the same package name into the same
# collection, but only if their version IDs are unique; you cannot bind
# two packages with the same name and version into the same collection.
PKLIST (*.collection.*)
Sometimes, however, you want to have more than one package with the same
name available to your plan. The VERSION option of the precompiler makes
that possible. Using VERSION identifies your program with a specific
version of a package. If you bind the plan with PKLIST (COLLECT.*), then
you can do this:
-------------------------------------------------------------------------
For Version 1 For Version 2
------------------------------------+-----------------------------------
1. Precompile program 1, using 1. Precompile program 2, using
VERSION(1). VERSION(2).
------------------------------------+-----------------------------------
2. Bind the DBRM with the 2. Bind the DBRM with the
collection name COLLECT and your collection name COLLECT and
chosen package name (say, PACKA). package name PACKA.
------------------------------------+-----------------------------------
3. Link-edit program 1 into your 3. Link-edit program 2 into your
application. application.
------------------------------------+-----------------------------------
4. Run the application; it uses 4. Run the application; it uses
program 1 and PACKA, VERSION 1. program 2 and PACKA, VERSION 2.
------------------------------------------------------------------------
You can do that with many versions of the program, without having to
rebind the application plan. Neither do you have to rename the plan or
change any RUN subcommands that use it.
1. Change the source code (but not the SQL statements) in the
precompiler output of a bound program.
2. Compile and link-edit the changed program.
3. Run the application without rebinding a plan or package.
Packages and dynamic plan selection can be used together. The only
restriction is that the CURRENT PACKAGESET special register must be
blank when dynamic plan switching is required.
PROGA PLANA
PROGB PKGB
PROGC PLANC
you could create packages and plans using the following bind
statements:
-------------------------------------------------------------------------
11. EXEC CICS SYNCPOINT The dynamic plan exit is not called
when the next SQL statement is executed
because the CURRENT SQLID special
register has been modified.
13. EXEC SQL SELECT... SQLCODE -815 occurs because the plan
is currently PLANC and the program is
PROGB.
-------------------------------------------------------------------------
The CACHESIZE keyword (optional) allows you to specify the size of the
cache to be acquired for the plan. This cache is used for caching
authorization IDs of those users executing a plan. DB2 uses the
CACHESIZE value specified to determine the amount of storage to acquire
for the authorization cache. Storage is acquired from the EDM storage
pool. The default CACHESIZE value is 1024.
The size of the cache you specify depends on the number of individual
authorization IDs actively using the plan. Each authorization ID takes
up 8 bytes of storage. The minimum cache size is 256 bytes (enough for
28 entries and overhead information) and the maximum is 4096 bytes
(enough for 508 entries and overhead information). You should specify
the size in multiples of 256 bytes; otherwise, the specified value is
rounded up to the next highest value that is a multiple of 256. If you
specify 0, no cache is created when the plan is executed.
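As a worked example, derived from the figures above (256 bytes holds 28
entries, and 28 x 8 = 224, which implies about 32 bytes of overhead):

```
100 authorization IDs x 8 bytes + 32 bytes overhead = 832 bytes
832 is not a multiple of 256, so CACHESIZE(832) is rounded up to 1024
```

Specifying CACHESIZE(1024) directly avoids the rounding.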
Any plan that you execute repeatedly is a good candidate for tuning using
the CACHESIZE keyword. Also, if you have a plan that is executed
concurrently by a large number of users, you might want to use a larger
CACHESIZE. When DB2 determines that you have the EXECUTE privilege on
the plan, it caches your name so that if you execute the plan again, DB2
can check your authorization more quickly.
You can change the server location name in the CURRENT SERVER special
register by:
* All static SQL statements must be fully validated by the bind process
* Validity and/or authorization checking is to be performed at execution
time for the SQL statements that:
For sample catalog queries to identify plans that could cause performance
problems, see Section 7 (Volume 3) of Administration Guide.
# PKLIST(LOCB.COLLA.*, LOCB.COLLB.*)
# PKLIST(LOCB.COLLB.*, LOCB.COLLA.*)
* DB2I panels
* The DSNH command procedure (a TSO CLIST)
* JCL procedures supplied with DB2
# * JCL procedures you create. This is the only method available for
# programs you compile with the IBM COBOL for MVS & VM compiler or
# for C++ programs.
The purpose of the link-edit step is to produce an executable load
module. To enable your application to interface with the DB2 subsystem,
you must use a link-edit procedure that builds a load module that
satisfies these requirements:
-------------------------------------------------------------------------
-------------------------------------------------------------------------
You can link DSNCLI with your program in either 24-bit or 31-bit
addressing mode. If you have written an application program that is to
run in 31-bit addressing mode, link-edit the DSNCLI stub to your
application with the attributes AMODE=31 and RMODE=ANY so that your
application can run above the 16M line. For more information on
compiling and link-editing CICS application programs, see the
appropriate CICS manual.
-------------------------------------------------------------------------
The size of the executable load module produced by the link-edit step may
vary depending on the values that the DB2 precompiler inserts into the
source code of the program.
After you have completed all the previous steps, you are ready to run
your application. At this time, DB2 verifies that the information in the
application plan and its associated packages is consistent with the
corresponding information in the DB2 system catalog. If any changes
occur (either to the data structures your application accesses or to the
binder's authorization to access those data structures), DB2
automatically rebinds packages or the plan as needed.
This sequence also works in ISPF option 6. You can package this sequence
in a CLIST. DB2 does not support access to multiple DB2 subsystems from
a single address space.
The PARMS keyword of the RUN subcommand allows you to pass parameters to
the run-time processor and to your application program. The slash (/) is
important, indicating where you want to pass the parameters. See your
host language references for more details.
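For example, to run a TSO program and pass it two parameters, you might
issue the following under the DSN command processor. The program name
SAMPLEPG and plan name PLANA come from examples elsewhere in this
chapter; the parameters P1 and P2 are illustrative only, and for a COBOL
program the values after the slash go to the application (run-time
options would precede the slash):

```
DSN SYSTEM(DSN)
RUN PROGRAM(SAMPLEPG) PLAN(PLANA) PARMS('/P1,P2')
END
```

Check your host language reference for the exact placement rules, since
they differ by language.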
Most application programs written for the batch environment run under
the TSO Terminal Monitor Program (TMP) in background mode. The
following example shows the JCL statements you need in order to start
such a job. The list that follows explains each statement.
Usage Notes
For more information on invoking the TSO TMP in batch mode, see OS/VS2
TSO Terminal User's Guide.
-------------------------------------------------------------------------
To Run a Program
First, ensure that the corresponding entries in the RCT, SNT, and
RACF* control areas allow run authorization for your application.
These functions are the responsibility of the system administrator and
are described in Section 5 (Volume 2) of Administration Guide.
Also, ensure that the transaction code assigned to your program and
the program itself are defined to CICS.
Make a New Copy of the Program
Issue the NEWCOPY command if CICS has not been reinitialized since the
program was last compiled and bound.
-------------------------------------------------------------------------
(*) Trademark of the IBM Corporation.
* Use DB2 interactive (DB2I) panels, which lead you step by step through
the preparation process.
# * Use JCL procedures you create. This is the only method available if
# your application is written in C++ or if you use the IBM COBOL for
# MVS & VM compiler.
--- CICS
------------------------------------------------------------------------------------------
* Bind the application plan. This can be done any time after DB2
precompilation. Binding can be done either on line by the DB2I panels
or as a batch step in this or another MVS job.
//TESTC01  JOB
//*
//*********************************************************
//*  DB2 PRECOMPILE THE COBOL PROGRAM
//*********************************************************
(1) //PC       EXEC PGM=DSNHPC,
(1) //         PARM='HOST(COB2),XREF,SOURCE,FLAG(I),APOST'
(1) //STEPLIB  DD DISP=SHR,DSN=DSN310.SDSNEXIT
(1) //         DD DISP=SHR,DSN=DSN310.SDSNLOAD
(1) //DBRMLIB  DD DISP=OLD,DSN=USER.DBRMLIB.DATA(TESTC01)
(1) //SYSCIN   DD DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
(1) //         SPACE=(800,(500,500))
(1) //SYSLIB   DD DISP=SHR,DSN=USER.SRCLIB.DATA
(1) //SYSPRINT DD SYSOUT=*
(1) //SYSTERM  DD SYSOUT=*
(1) //SYSUDUMP DD SYSOUT=*
(1) //SYSUT1   DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
(1) //SYSUT2   DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
(1) //SYSIN    DD DISP=SHR,DSN=USER.SRCLIB.DATA(TESTC01)
(1) //*
-----------------------------------------------------------------------------------------
----------
Step 4. The output of the DB2 precompiler becomes the input to the CICS
command language translator.
Step 5. Reflect an application load library in the data set name of the
SYSLMOD DD statement. The name of this load library must be included in
the DFHRPL DD statement of the CICS execution JCL.
Step 6. Name the DB2 load library that contains the module DSNCLI.
Step 7. Direct the linkage editor to include the DB2 language interface
module for CICS (DSNCLI). In this example, the order of the various
control sections (CSECTs) is of no concern, because the structure of the
procedure automatically satisfies any order requirements.
This chapter describes the first two methods: using DB2I panels and
using JCL. For information on the other methods listed above, see
Chapter 2 of Command and Utility Reference. For a general overview of
the DB2 program preparation process as performed by the DSNH CLIST, see
Figure 31 in topic 4.2.1.
If you develop programs using TSO and ISPF, you can prepare them to run
by using the Program Preparation panels supplied with DB2. These panels
guide you step by step through the process of preparing your application
to run. There are other ways to prepare a program to run, but using DB2I
is the easiest, as it leads you automatically from task to task.
# You must use a JCL procedure to prepare these programs. See "Using
# JCL to Prepare a C++ or IBM COBOL for MVS & VM Program" in
# topic 4.2.7.3 for information.
-------------------------------------------------------------------------
As you proceed, you can use the DB2I help panels for additional
information. Help is available for:
To display the main HELP panel, press PF key 1 (HELP) (14). Use the help
instructions to become familiar with the commands and PF keys you can
use with the panels. Then use the panels to perform your DB2 tasks.
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
Figure 32. Initiating Program Preparation via DB2I. Specify Program
Preparation on the DB2I Primary Option Menu.
The following explains the functions on the DB2I Primary Option Menu.
1 SPUFI
Allows you to develop and execute one or more SQL statements
interactively. For further information, see "Chapter 2-4. Using
SPUFI: Executing SQL from Your Terminal" in topic 2.4.
2 DCLGEN
Allows you to generate C, COBOL or PL/I data declarations of
tables. For further information, see "Chapter 3-3. Using
DCLGEN" in topic 3.3.
3 PROGRAM PREPARATION
Allows you to prepare an application program to run, and then runs it.
For more information, see "The Program Preparation: Compile,
Link, and Run Panel" in topic 4.2.7.1.14.
4 PRECOMPILE
Allows you to convert embedded SQL statements into statements
that your host language can process. For further information,
see "The Precompile Panel" in topic 4.2.7.1.7.
5 BIND/REBIND/FREE
Allows you to bind or free a package or application plan. For
more information, see "The BIND PLAN Panel" in topic 4.2.7.1.11
and "The BIND PACKAGE Panel" in topic 4.2.7.1.8.
6 RUN
Allows you to run an application program in a TSO or batch
environment.
7 DB2 COMMANDS
Allows you to issue DB2 commands. For more information about
DB2 commands, see Chapter 2 of Command and Utility Reference.
8 UTILITIES
Allows you to invoke DB2 utility programs. For more
information, see Chapter 3 of Command and Utility Reference.
D DB2I DEFAULTS
Allows you to set DB2I defaults.
X EXIT
Allows you to exit DB2I.
* Defaults for BIND PLAN--the panel used to change your defaults for the
bind plan operation. See page 4.2.7.1.12.
* Program Preparation: Compile, Link, and Run--a panel that allows you
to control the compiler or assembler and the linkage editor. See page
4.2.7.1.14.
---------------------------------------------------------------------
The following sections lead you panel by panel through the program
preparation process, describing the various options you can specify on
each of these program preparation panels. For the purposes of describing
the process, the program being prepared is assumed to be written in
COBOL and to run under TSO.
On the DB2 Program Preparation panel, shown in Figure 33, enter the name
of the source program data set (in this case SAMPLEPG.COBOL) and specify
the other options you want to include. When finished, press ENTER to
view the next panel.
-----------------------------------------------------------------------------------
DSNEPP01 DB2 PROGRAM PREPARATION
SSID: DSN
COMMAND >_
-----------------------------------------------------------------------------------
Figure 33. The DB2 Program Preparation Panel. Enter the source program
data set name and other options.
# For programs that are prepared in the background or use the EDITJCL
# PREPARATION ENVIRONMENT option, a data set named
# tsoprefix.qualifier.CNTL is created to contain the program
# preparation JCL. tsoprefix represents the prefix TSO assigns, and
# qualifier represents the value entered in the DATA SET NAME QUALIFIER
# field. If a data set with this name already exists, then it is
# deleted.
3 PREPARATION ENVIRONMENT
Allows you to specify whether program preparation is to occur in
FOREGROUND or BACKGROUND. You can also specify EDITJCL, in which
case you can edit and then submit the job.
If you want to make changes, however, you can ask for any of the panels.
DB2I then displays each of the panels that you requested. After all the
panels are displayed, DB2 proceeds with the actual processing involved in
preparing your program for running.
6 CHANGE DEFAULTS
Allows you to specify whether to change the DB2I defaults.
Enter Y in the Display Panel field for this option; otherwise
enter N. Minimally, you will want to specify your subsystem
identifier and programming language on the defaults panel.
8 PRECOMPILE
Allows you to specify whether to display the Precompile panel.
To see this panel enter Y in the Display Panel column next to
this option; otherwise enter N.
9 CICS COMMAND TRANSLATION
Allows you to specify whether to use the CICS command
translator. This field is only applicable to programs written
for a CICS environment.
If you run under TSO or IMS, ignore this step; this will
allow the Perform function field to default to N.
-------------------------------------------------------------
-------------------------------------------------------------
10 BIND PACKAGE
Allows you to specify whether to display the BIND PACKAGE
panel. To see it, enter Y in the Display panel field next to
this option; otherwise, enter N.
11 BIND PLAN
Allows you to specify whether to display the BIND PLAN panel.
To see it, enter Y in the Display panel field next to this
option; otherwise, enter N.
12 COMPILE OR ASSEMBLE
Allows you to specify whether to display the Program Prep:
Compile, Link, and Run panel. To see this panel enter Y in
the Display Panel field next to this option; otherwise, enter
N.
13 PRELINK
Allows you to invoke the IBM C/370 Compiler Prelink Utility to
make your program reentrant. This utility concatenates
compile-time initialization information from one or more text
decks into a single initialization unit. To invoke enter Y in
the Display Panel field next to this option; otherwise, enter
N. If this step is requested, then it must follow the compile
step, and precede the link-edit step.
14 LINK
Allows you to specify whether to display the Program Prep:
Compile, Link, and Run panel. Enter Y in the Display Panel
field next to this option; otherwise, enter N. If you
specified Y in the Display Panel field for the COMPILE OR
ASSEMBLE option, you do not need to make any changes to this
field: the panel displayed for COMPILE OR ASSEMBLE is the
same as the panel displayed for LINK. You can make the
changes you want to affect the link-editing of your
application at the same time you make your compilation
changes.
15 RUN
Allows you to specify whether to run your program. The RUN
option is only for TSO programs.
IMS and CICS programs cannot run using DB2I. If you are
using IMS or CICS, use N in these fields.
-------------------------------------------------------------
--- TSO and Batch -------------------------------------------
If you are using TSO and want to run your program, you
must enter Y in the Perform function column next to this
option. You can also use this panel to specify options
and values you want to affect the actual running of the
program. In this situation, you do not ask to see the RUN
panel in field 15.
-------------------------------------------------------------
Pressing ENTER takes you to the first panel in the series you specified,
in this case to the DB2I Defaults panel. If, at any point in your
progress from panel to panel, you press the END key, you return to this
first panel from which you can change your processing specifications.
Asterisks, (*), in the Display Panel column of rows 7 through 14 indicate
which panels you have already examined. You can see a panel again by
writing a Y over an asterisk.
The DB2I Defaults panel allows you to change various system defaults
that
were set at the time your DB2 system was installed. Figure 34 shows the
fields that affect the processing of the other DB2I panels.
-----------------------------------------------------------------------------------------
----
-----------------------------------------------------------------------------------------
----
1 DB2 NAME
Allows you to specify the DB2 subsystem that processes your DB2I
requests. If you specify a different DB2 subsystem, that subsystem
is displayed in the SSID (subsystem identifier) field located at
the top, right-hand side of your screen.
2 DB2 CONNECTION RETRIES
Allows you to specify the number of additional times connection to
DB2 should be attempted if DB2 is not up when the DSN command is
issued. This option is not used for the program preparation process.
3 APPLICATION LANGUAGE
Allows you to specify your default programming language. If you
specify COB or COB2, DB2 displays the COBOL Defaults panel next.
4 LINES/PAGE OF LISTING
Allows you to specify the number of lines printed on each page of
listing output.
5 MESSAGE LEVEL
Allows you to specify the lowest level of message returned to you
during the BIND phase of the preparation process.
7 DECIMAL POINT
Allows you to specify whether decimal points are represented in your
source program by a comma or a period.
9 NUMBER OF ROWS
Allows you to specify the default number of input entry rows to be
generated on the initial display of the panel for ISPF panels only.
The number of rows displayed on later invocations is based on the
actual number of rows with non-blank entries.
10 DB2I JOB STATEMENT
Allows you to change your default job card.
-----------------------------------------------------------------------------------
You can leave both fields blank to accept both default values.
Pressing ENTER takes you to the next panel you specified on the DB2
Program Preparation panel, in this case, to the Precompile panel.
-----------------------------------------------------------------------------------
DSNETP01 PRECOMPILE SSID: DSN
COMMAND >_
-----------------------------------------------------------------------------------
Figure 36. The Precompile Panel. Specify the include library, if any, to
be used by your program, and any other options you need.
The following explains the functions on the Precompile panel and how to
fill in the fields to prepare for precompiling.
1 INPUT DATA SET
If you reached this panel directly from the DB2I Primary Option Menu,
you must enter the data set name of the program you want to
precompile.
2 INCLUDE LIBRARY
Allows you to enter the name of a library containing members to be
included by the precompiler. These members can contain output from
the DCLGEN operation. Here you enter our library name, SRCLIB.DATA,
because you are including the COBOL structure definitions produced by
DCLGEN processing.
3 DSNAME QUALIFIER
If you reached this panel by means of the DB2 Program Preparation
panel, this field is initialized to the data set name qualifier
specified there.
If you reached this panel directly from the DB2I Primary Option Menu,
you can either specify a DSNAME QUALIFIER or allow the field to take
its default value.
For IMS and TSO programs, the precompiled source statements (to be
passed to the compile or assemble step) are stored in a data set
named tsoprefix.qualifier.suffix. The precompiler print listing is
stored in a data set named tsoprefix.qualifier.PCLIST.
4 DBRM DATA SET
If you reached this panel from the DB2 Program Preparation panel,
this field is again blank. In that case, when you press the ENTER
key, the value of field 2 of the DB2 Program Preparation panel, DATA
SET NAME QUALIFIER, concatenated with DBRM, is used as the
specification for this field.
You can enter another data set name in this field, but if you do,
remember that this data set must have been previously allocated and
cataloged. This is true even if the data set name that you enter
corresponds to what is otherwise the default value of this field.
5 WHERE TO PRECOMPILE
If you reached this panel from the DB2 Program Preparation panel, the
field contains the preparation environment specified there.
If you reached this panel directly from the DB2I Primary Option Menu,
you can either specify a processing environment or allow this field
to take its default value.
6 VERSION
Allows you to specify the version of the program and its DBRM. See
"Identifying a Package Version" in topic 4.2.4.3.5 for more
information about this option.
7 OTHER OPTIONS
Allows you to enter any option that the DSNH CLIST accepts; this
gives you great flexibility in controlling your program. For an
explanation of DSNH options, see Chapter 2 of Command and Utility
Reference.
When you request this option, the panel displayed is the first of two BIND
PACKAGE panels. The BIND PACKAGE panel can be reached either directly
from the DB2I Primary Option Menu, or as a part of the program preparation
process. If you enter the BIND PACKAGE panel from the Program Preparation
panel, many of the BIND PACKAGE entries are initialized from values in the
Primary and Precompile panels. See Figure 37.
-----------------------------------------------------------------------------------
 PRESS:  ENTER to process    END to save and exit    HELP for more information
-----------------------------------------------------------------------------------
Figure 37. The BIND PACKAGE Panel
1 LOCATION NAME
Allows you to specify where the package is to be bound.
2 COLLECTION-ID
Allows you to specify the collection the package is in.
3 DBRM: COPY:
Allows you to specify whether you are creating a new package (DBRM)
or making a copy of a package that already exists (COPY).
4 MEMBER or COLLECTION-ID
MEMBER: If you are creating a new package, this option allows you to
specify the DBRM itself. Its default name depends on the input data
set.
5 PASSWORD or PACKAGE-ID
PASSWORD: If you are creating a new DBRM, this allows you to enter
passwords for the library you listed in field 6. You can use this
field only if you reached the BIND PACKAGE panel directly from the
DB2 Primary Option Menu.
6 LIBRARY or VERSION
LIBRARY: If you are creating a new package, this allows you to
specify the name of the library that contains the DBRM for the bind
process.
8 ENABLE/DISABLE CONNECTIONS?
Placing YES in this field displays a panel (shown in Figure 39 in
topic 4.2.7.1.10) that allows you to specify whether various system
connections are valid for this application. This is valid only if
the location name names your local DB2 system.
11 ACTION ON PACKAGE
Allows you to specify whether to replace an existing package or
create a new one. If the package named in field 5 already exists,
you must specify REPLACE. If the package named in field 5 is a new
package, you can specify either ADD or REPLACE, because REPLACE adds
a package if a package of the same name does not exist. The default
value of this field is REPLACE.
12 REPLACE VERSION
Allows you to specify whether to replace a specific version of an
existing package or create a new one. If the package and the version
named in fields 5 and 6 already exist, you must specify REPLACE.
On this panel, enter new defaults for the bind package operation.
-----------------------------------------------------------------------------------
This panel allows you to change your defaults for options of the BIND
PACKAGE operation.
1 ISOLATION LEVEL
Use RR for "repeatable read" or CS for "cursor stability." For a
description of the effects of those values, see "The ISOLATION
Option" in topic 4.1.2.7.2.
2 VALIDATION TIME
Use RUN or BIND to tell whether authorization will be checked at run
time or at bind time. See "Validating SQL Statements" in
topic 4.2.3.5 for more information about this option.
5 DATA CURRENCY
Use YES or NO to tell whether or not data currency is required for
ambiguous cursors opened at remote locations. For a description of
data currency, see "Cursor Stability and Data Currency" in
topic 4.1.4.6.2.
7 SQLERROR PROCESSING
Use CONTINUE or NOPACKAGE to tell whether or not to continue
processing to produce a package after finding SQL errors.
-----------------------------------------------------------------------------------
For each connection type that has a second arrow, under SPECIFY
CONNECTION NAMES?, enter Y if you want to list specific connection
names of that type. Leave N (the default) if you do not. If you use
Y in any of those fields, you see another panel on which you can
enter the connection names.
If you enter the BIND PLAN panel from the Program Preparation panel, many
of the BIND PLAN entries are initialized from values in the Primary and
Precompile panels. See Figure 40.
-----------------------------------------------------------------------------------
 PRESS:  ENTER to process    END to save and exit    HELP for more information
-----------------------------------------------------------------------------------
The following explains the functions on the BIND PLAN panel and how to
fill in the necessary fields to bind your program.
1 MEMBER
Allows you to specify the DBRMs. Its default name depends on the
input data set.
If you reached this panel directly from the DB2I Primary Option Menu, you
must provide values for fields 1 and 3. If you plan to use more than one
DBRM in the process, you can include the library name and member name of
each DBRM in fields 1 and 3, separating entries with commas. You can also
specify more DBRMs by using field 4 on this panel.
2 PASSWORD
Allows you to enter passwords for the libraries you listed in field
3. You can use this field only if you reached the BIND PLAN panel
directly from the DB2 Primary Option Menu.
3 LIBRARY
Allows you to specify the name of the library or libraries that
contain the DBRMs you will use for the bind process.
4 ADDITIONAL DBRMS?
If you need more room for DBRM entries, or if you reached this panel
as part of the program preparation process, you can include more
DBRMs by entering YES in this field. You are then taken to a
separate panel, where you can enter more DBRM library and member
names.
5 PLAN NAME
Allows you to name the application plan that is created during BIND
processing. If you began with the DB2 Program Preparation panel, the
default value entered in this field is determined by the value you
entered in field 1 of that panel, INPUT DATA SET NAME. The default
name depends on the input data set.
If you reached this panel directly from the DB2I Primary Option Menu,
you must include a PLAN NAME if you want the BIND process to create
an application plan.
7 ENABLE/DISABLE CONNECTIONS?
Placing YES in this field displays a panel (shown in Figure 39 in
topic 4.2.7.1.10) that allows you to specify whether various system
connections are valid for this application. This is valid only if
the location name names your local DB2 system.
10 QUALIFIER
Allows you to specify the qualifier for tables, views and aliases.
If this is left blank, the default qualifier is the owner of the
plan.
11 CACHESIZE
Allows you to specify the size of the authorization cache. Valid
values are in the range 0 to 4096. Values that are not multiples of
256 are rounded up to the next multiple of 256. A value of 0 means
that DB2 does not use an authorization cache. See "Determining
Optimal Cache Size" in topic 4.2.4.5 for more information about this
option. For each concurrent user of a plan, 8 bytes of storage are
required, with an additional 32 bytes for overhead.
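As a rough illustration of the CACHESIZE arithmetic above, the following
Python sketch derives a cache size from the number of concurrent users of a
plan. The function name is hypothetical and this is not a DB2 API; it simply
restates the storage and rounding rules as code:

```python
def authorization_cache_size(concurrent_users):
    """Estimate a CACHESIZE value for BIND PLAN.

    Each concurrent user of the plan needs 8 bytes, plus 32 bytes of
    overhead.  A CACHESIZE that is not a multiple of 256 is rounded up
    to the next multiple of 256, and the valid range is 0 to 4096.
    """
    needed = 32 + 8 * concurrent_users      # storage actually required
    rounded = -(-needed // 256) * 256       # round up to a multiple of 256
    return min(rounded, 4096)               # the panel accepts at most 4096
```

For example, 100 concurrent users need 832 bytes, which rounds up to a
CACHESIZE of 1024.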
12 ACTION ON PLAN
Allows you to specify whether this is a new or changed application
plan. If the plan named in field 5 already exists, you must specify
REPLACE. If the plan named in field 5 is a new plan, you can specify
either ADD or REPLACE, because REPLACE adds a plan if a plan of the
same name does not exist. The default value of this field is
REPLACE.
14 CURRENT SERVER
Allows you to specify the initial server. If you specify a remote
server, DB2 connects to that server when the first SQL statement is
executed. If you do not specify a location name, the initial server
is your local DB2. The default is the name of the local DB2
subsystem. See "Identifying the Server at Run Time" in topic 4.2.4.6
for more information about this option.
When you finish making changes to this panel, press ENTER to go to the
second of the program preparation panels, Program Prep: Compile, Link, and
Run.
On this panel, enter new defaults for the bind plan operation.
-----------------------------------------------------------------------------------
 PRESS:  ENTER to process    END to save and exit    HELP for more information
-----------------------------------------------------------------------------------
This panel allows you to change your defaults for options of the bind plan
operation.
1 ISOLATION LEVEL
Use RR for "repeatable read" or CS for "cursor stability." For a
description of the effects of those values, see "The ISOLATION
Option" in topic 4.1.2.7.2.
2 VALIDATION TIME
Use RUN or BIND to tell whether authorization will be checked at run
time or at bind time. See "Validating SQL Statements" in
topic 4.2.3.5 for more information about this option.
5 DATA CURRENCY
Use YES or NO to tell whether or not data currency is required for
ambiguous cursors opened at remote locations. For a description of
data currency, see "Cursor Stability and Data Currency" in
topic 4.1.4.6.2.
8 DEFER PREPARE
Use YES or NO to tell whether or not to defer sending a PREPARE
statement to a remote location until the first OPEN, DESCRIBE, or
EXECUTE is encountered that refers to the statement to be prepared.
For guidance in using this option, see "Deferring Remote PREPARE for
Better Performance" in topic 5.1.3.2.
9 SQLRULES
Use DB2 or STD to specify whether a CONNECT (Type 2) statement is
executed according to DB2 rules (DB2) or the ANSI/ISO SQL standard of
1992 (STD).
10 DISCONNECT
Use EXPLICIT to end connections in the released state only, AUTOMATIC
to end all remote connections, or CONDITIONAL to end remote
connections that have no open cursors WITH HOLD associated with them.
See "BIND Options" in topic 4.1.4.5.2 for more information about this
option.
-----------------------------------------------------------------------------------
The second of the program preparation panels (Figure 43) allows you to
accomplish the last two steps in the program preparation process,
compilation and link-editing, as well as the PL/I MACRO PHASE for programs
requiring this option. For TSO programs, the panel also allows you to run
programs.
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
Figure 43. The Program Preparation: Compile, Link, and Run Panel
4, 5, 6 INCLUDE LIBRARY
Fields 4, 5, and 6 allow you to enter the names of up to three
libraries containing members to be included by the linkage editor.
In this example, one COBOL library, SAMPLIB.COBOL, is included.
7 LOAD LIBRARY
Allows you to specify the name of the library that holds the load
module. The default value is RUNLIB.LOAD. If the load library
specified is a PDS, and the input data set is a PDS, the member name
specified in field 1 of the program preparation panel is the load
module name. If the input data set is sequential, the second
qualifier of the input data set is the load module name.
8 PRELINK OPTIONS
Allows you to enter a list of prelink-edit options, separating items
in the list with commas, blanks, or both.
9 LINK OPTIONS
Allows you to enter a list of link-edit options, separating items in
the list with commas, blanks, or both.
To prepare a program that uses 31-bit addressing and runs above the
16-megabyte line, specify the following link-edit options: AMODE=31,
RMODE=ANY.
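The load module naming rules described for field 7, LOAD LIBRARY, can be
sketched as a small helper. This is an illustrative Python sketch, not part
of DB2I; the function name and the example data set names are hypothetical:

```python
def load_module_name(input_dsn):
    """Derive the load module name from the input data set name,
    following the LOAD LIBRARY rules: if the load library and the
    input data set are both partitioned, the member name given on the
    program preparation panel names the load module; if the input data
    set is sequential, its second qualifier names the load module."""
    if "(" in input_dsn:
        # PDS input, e.g. USER.SRCLIB.DATA(TESTPGM) -> TESTPGM
        return input_dsn[input_dsn.index("(") + 1 : input_dsn.index(")")]
    # Sequential input, e.g. USER.TESTPGM.COBOL -> TESTPGM
    return input_dsn.split(".")[1]
```

For example, both `USER.SRCLIB.DATA(TESTPGM)` and the sequential data set
`USER.TESTPGM.COBOL` yield the load module name TESTPGM.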
If you are preparing an IMS or CICS program, you must leave fields 10
through 12 blank; IMS and CICS programs cannot be run using DB2I.
Instead, press ENTER to run the phases of the program preparation process
(precompiling, binding, compiling, and link-editing) that you have
specified. Your program is then ready to be run from the appropriate
environment.
If you are preparing a TSO program, you can specify the following:
10 PARAMETERS
Allows you to name a list of parameters you want to pass either to
your host language run-time processor or to your application.
Options for your run-time processor are separated from those for
your program by a slash (/). For PL/I, run-time processor
parameters must appear to the left of the slash, and application
parameters to the right. For COBOL, this order is reversed. In our
example, we want to pass parameters to our COBOL application
program.
12 SYSPRINT DS
Allows you to specify the names of a SYSPRINT (or in FORTRAN,
FT06F001) data set for your application program, if it needs one.
The default for this field is also TERM.
Your application could require other data sets besides SYSIN and
SYSPRINT. If so, remember that they must be cataloged and allocated
before you run your program.
-------------------------------------------------------------------------
# For C++ and IBM COBOL for MVS & VM, you must create a procedure yourself.
# See "Using JCL to Prepare a C++ or IBM COBOL for MVS & VM Program" in
# topic 4.2.7.3 for instructions. (*)
-------------------------------------------------------------------------
Table 27. Precompilation Procedures
-------------------------------------------------------------------------
 Program                  Procedure       Invocation Included in...
------------------------+----------------+-------------------------------
 Assembler XF             DSNHASM          DSNTEJ2A (see note)
------------------------+----------------+-------------------------------
 Assembler H              DSNHASMH         DSNTEJ2A
------------------------+----------------+-------------------------------
 High Level Assembler     DSNHASMH         DSNTEJ2A (see note)
------------------------+----------------+-------------------------------
 C                        DSNHC            DSNTEJ2D
------------------------+----------------+-------------------------------
 COBOL                    DSNHCOB          DSNTEJ2C
------------------------+----------------+-------------------------------
 SAA AD/Cycle COBOL/370*  DSNHCOB2         DSNTEJ2C (see note)
------------------------+----------------+-------------------------------
 VS COBOL II              DSNHCOB2         DSNTEJ2C (see note)
------------------------+----------------+-------------------------------
 FORTRAN                  DSNHFOR          DSNTEJ2F
------------------------+----------------+-------------------------------
 PL/I                     DSNHPLI          DSNTEJ2P
-------------------------------------------------------------------------
 Note: The invocations for these programs must be adapted to invoke the
 procedures listed in this table. For information on how to adapt these
 invocations, see Section 2 (Volume 1) of Administration Guide.
-------------------------------------------------------------------------
If the PL/I macro processor is used, the PL/I *PROCESS statement must not
be used in the source to pass options to the PL/I compiler. Specify the
needed options on the PARM.PLI parameter of the EXEC statement in the
DSNHPLI procedure.
To include the proper interface code when invoking the JCL procedures, use
one of the sets of statements shown below in your JCL; or, if you are
using the call attachment facility, follow the instructions given in
"Accessing the CAF Language Interface" in topic 5.5.2.3.
//LKED.SYSIN DD *
INCLUDE SYSLIB(member)
/*
-------------------------------------------------------------------------
--- IMS -----------------------------------------------------------------
//LKED.SYSIN DD *
INCLUDE SYSLIB(DFSLI000)
ENTRY (specification)
/*
--- CICS ----------------------------------------------------------------
//LKED.SYSIN DD *
INCLUDE SYSLIB(DSNCLI)
/*
-------------------------------------------------------------------------
Precompiler Option List Format: The option list must begin on a 2-byte
boundary. The first 2 bytes contain a binary count of the number of bytes
in the list (excluding the count field). The remainder of the list is
EBCDIC and can contain precompiler option keywords, separated by one or
more blanks, a comma, or both.
DDname List Format: The ddname list must begin on a 2-byte boundary. The
first 2 bytes contain a binary count of the number of bytes in the list
(excluding the count field). Each entry in the list is an 8-byte field,
left-justified, and padded with blanks if needed.
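The two list layouts above can be modeled as follows. This is an
illustrative Python sketch only: the real lists are EBCDIC and must start on
a 2-byte (halfword) boundary, which Python byte strings do not capture, and
the function names are hypothetical:

```python
import struct

def build_option_list(*options):
    """Pack a precompiler-style option list: a 2-byte binary count of
    the bytes that follow (excluding the count field itself), then the
    option keywords separated by blanks."""
    body = b" ".join(opt.encode() for opt in options)
    return struct.pack(">H", len(body)) + body

def build_ddname_list(*ddnames):
    """Pack a ddname list: a 2-byte binary count of the bytes that
    follow, then one 8-byte, left-justified, blank-padded field per
    ddname."""
    body = b"".join(name.ljust(8).encode() for name in ddnames)
    return struct.pack(">H", len(body)) + body
```

For example, `build_ddname_list("SYSIN", "SYSPRINT")` yields a 2-byte count
of 16 followed by two 8-byte fields, `SYSIN` padded with three blanks and
`SYSPRINT` exactly filling its field.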
The precompiler adds 1 to the last page number used in the precompiler
listing and puts this value into the page-number field before returning
control to the invoking routine. Thus, if the precompiler is reinvoked,
page numbering is continuous.
4.2.7.3 # Using JCL to Prepare a C++ or IBM COBOL for MVS & VM Program
# To prepare a C++ or IBM COBOL for MVS & VM program, you need to create a
# JCL procedure. The way you prepare your program depends on the number of
# compilation units in the program and whether the application uses
# object-oriented extensions.
# JCL procedures for IBM COBOL for MVS & VM programs: If your program
# contains no object-oriented extensions, you can prepare it using a JCL
# procedure like the one in Figure 44.
# If the program consists of more than one compilation unit, you can
# precompile, compile, prelink, and link-edit each compilation unit
# separately, then link-edit all the separate modules to build the program.
# //********************************************************************
# //* DSNHICOB - PRECOMPILE, COMPILE, PRELINK, AND LINK A COBOL       *
# //* PROGRAM.                                                        *
# //********************************************************************
# //DSNHICOB PROC WSPC=500,MEM=TEMPNAME,
# //         USER=USER
# //********************************************************************
# //* PRECOMPILE THE COBOL CLIENT PROGRAM.                            *
# //********************************************************************
# //PC       EXEC PGM=DSNHPC,PARM='HOST(IBMCOB)',REGION=4096K
# //DBRMLIB  DD  DISP=OLD,DSN=&USER..DBRMLIB.DATA(&MEM)
# //STEPLIB  DD  DSN=&DSNVR0..SDSNEXIT,DISP=SHR
# //         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
# //SYSIN    DD  DUMMY
# //SYSCIN   DD  DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
# //             SPACE=(800,(&WSPC,&WSPC))
# //SYSLIB   DD  DISP=SHR,DSN=&USER..SRCLIB.DATA
# //SYSPRINT DD  SYSOUT=*
# //SYSTERM  DD  SYSOUT=*
# //SYSUDUMP DD  SYSOUT=*
# //SYSUT1   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT2   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //********************************************************************
# //* COMPILE THE COBOL PROGRAM IF THE RETURN CODE FROM               *
# //* THE PRECOMPILE IS 4 OR LESS.                                    *
# //********************************************************************
# //COB      EXEC PGM=IGYCRCTL,REGION=2048K,COND=(4,LT,PC)
# //STEPLIB  DD  DSN=IGY.V1R2M0.SIGYCOMP,DISP=SHR
# //SYSPRINT DD  SYSOUT=*
# //SYSTERM  DD  SYSOUT=*
# //SYSLIN   DD  DSN=&&LOADSET,DISP=(MOD,PASS),UNIT=SYSDA,
# //             SPACE=(800,(&WSPC,&WSPC))
# //SYSIN    DD  DSN=&&DSNHOUT,DISP=(OLD,DELETE)
# //SYSUT1   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT2   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT3   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT4   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT5   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT6   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT7   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //********************************************************************
# //* PRELINK IF THE RETURN CODE FROM THE COMPILE STEP                *
# //* IS 4 OR LESS.                                                   *
# //********************************************************************
# //PLKED    EXEC PGM=EDCPRLK,REGION=2048K,COND=(4,LT,COB)
# //STEPLIB  DD  DSNAME=CEE.V1R5M0.SCEERUN,DISP=SHR
# //SYSMSGS  DD  DSNAME=CEE.V1R5M0.SCEEMSGP(EDCPMSGE),DISP=SHR
# //SYSLIB   DD  DISP=SHR,DSNAME=CEC.SOMMVS.V1R1M0.SGOSPLKD
# //SYSIN    DD  DSN=&&LOADSET,DISP=(OLD,DELETE)
# //SYSMOD   DD  DSNAME=&&PLKSET,UNIT=SYSDA,DISP=(MOD,PASS),
# //             SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
# //SYSDEFSD DD  DUMMY
# //SYSOUT   DD  SYSOUT=*
# //SYSPRINT DD  SYSOUT=*
# //SYSTERM  DD  SYSOUT=*
# //********************************************************************
# //* LINKEDIT IF THE PRELINK RETURN CODE FOR THE PRELINK             *
# //* STEP IS 4 OR LESS.                                              *
# //********************************************************************
# //LKED     EXEC PGM=IEWL,PARM='MAP',
# //         COND=(4,LT,PLKED)
# //SYSLIB   DD  DSN=CEE.V1R5M0.SCEELKED,DISP=SHR
# //         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
# //SYSLIN   DD  DSN=&&PLKSET,DISP=(OLD,DELETE)
# //         DD  DDNAME=SYSIN
# //SYSLMOD  DD  DSN=&USER..RUNLIB.LOAD(&MEM),DISP=SHR
# //SYSPRINT DD  SYSOUT=*
# //SYSUT1   DD  SPACE=(1024,(50,50)),UNIT=SYSDA
# //*DSNHICOB PEND          REMOVE * FOR USE AS AN INSTREAM PROCEDURE
# //********************************************************************
# Figure 44. JCL Procedure for Preparing a COBOL Program with No Classes
# For IBM COBOL for MVS & VM applications that use object-oriented
# extensions, you must use a different technique. You need to precompile
# each of the class definitions and their client programs separately, then
# input all of the precompiler output to the compiler together.
# For example, you can use a JCL procedure similar to this one to prepare
# an IBM COBOL for MVS & VM program that consists of one client program and
# one class:
# //********************************************************************
# //* DSNHOCOB - PRECOMPILE, COMPILE, PRELINK, AND LINK A COBOL       *
# //* CLIENT AND CLASS.                                               *
# //********************************************************************
# //DSNHOCOB PROC WSPC=500,MEM1=TEMPN1,MEM2=TEMPN2,
# //         USER=USER
# //********************************************************************
# //* PRECOMPILE THE COBOL CLIENT PROGRAM.                            *
# //********************************************************************
# //PC1      EXEC PGM=DSNHPC,PARM='HOST(IBMCOB)',REGION=4096K
# //DBRMLIB  DD  DISP=OLD,DSN=&USER..DBRMLIB.DATA(&MEM1)
# //STEPLIB  DD  DSN=&DSNVR0..SDSNEXIT,DISP=SHR
# //         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
# //SYSIN    DD  DUMMY
# //SYSCIN   DD  DSN=&&DSNHOT1,DISP=(MOD,PASS),UNIT=SYSDA,
# //             SPACE=(800,(&WSPC,&WSPC))
# //SYSLIB   DD  DISP=SHR,DSN=&USER..SRCLIB.DATA
# //SYSPRINT DD  SYSOUT=*
# //SYSTERM  DD  SYSOUT=*
# //SYSUDUMP DD  SYSOUT=*
# //SYSUT1   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT2   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //********************************************************************
# //* PRECOMPILE THE COBOL CLASS DEFINITION.                          *
# //********************************************************************
# //PC2      EXEC PGM=DSNHPC,PARM='HOST(IBMCOB)',
# //         REGION=4096K
# //DBRMLIB  DD  DISP=OLD,DSN=&USER..DBRMLIB.DATA(&MEM2)
# //STEPLIB  DD  DSN=&DSNVR0..SDSNEXIT,DISP=SHR
# //         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
# //SYSIN    DD  DUMMY
# //SYSCIN   DD  DSN=&&DSNHOT2,DISP=(MOD,PASS),UNIT=SYSDA,
# //             SPACE=(800,(&WSPC,&WSPC))
# //SYSLIB   DD  DISP=SHR,DSN=&USER..SRCLIB.DATA
# //SYSPRINT DD  SYSOUT=*
# //SYSTERM  DD  SYSOUT=*
# //SYSUDUMP DD  SYSOUT=*
# //SYSUT1   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT2   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //********************************************************************
# //* COMPILE THE COBOL CLIENT PROGRAM AND CLASS IF THE               *
# //* RETURN CODES FOR THE PRECOMPILES ARE 4 OR LESS.                 *
# //********************************************************************
# //COB      EXEC PGM=IGYCRCTL,REGION=2048K,COND=((4,LT,PC1),(4,LT,PC2))
# //STEPLIB  DD  DSN=IGY.V1R2M0.SIGYCOMP,DISP=SHR
# //SYSPRINT DD  SYSOUT=*
# //SYSTERM  DD  SYSOUT=*
# //SYSLIN   DD  DSN=&&LOADSET,DISP=(MOD,PASS),UNIT=SYSDA,
# //             SPACE=(800,(&WSPC,&WSPC))
# //SYSIN    DD  DSN=&&DSNHOT1,DISP=(OLD,DELETE)
# //         DD  DSN=&&DSNHOT2,DISP=(OLD,DELETE)
# //SYSUT1   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT2   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT3   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT4   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT5   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT6   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT7   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //********************************************************************
# //* PRELINK IF THE RETURN CODE FROM THE COMPILE STEP                *
# //* IS 4 OR LESS.                                                   *
# //********************************************************************
# //PLKED    EXEC PGM=EDCPRLK,REGION=2048K,COND=(4,LT,COB)
# //STEPLIB  DD  DSNAME=CEE.V1R5M0.SCEERUN,DISP=SHR
# //SYSMSGS  DD  DSNAME=CEE.V1R5M0.SCEEMSGP(EDCPMSGE),DISP=SHR
# //SYSLIB   DD  DISP=SHR,DSNAME=CEC.SOMMVS.V1R1M0.SGOSPLKD
# //SYSIN    DD  DSN=&&LOADSET,DISP=(OLD,DELETE)
# //SYSMOD   DD  DSNAME=&&PLKSET,UNIT=SYSDA,DISP=(MOD,PASS),
# //             SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
# //SYSDEFSD DD  DUMMY
# //SYSOUT   DD  SYSOUT=*
# //SYSPRINT DD  SYSOUT=*
# //SYSTERM  DD  SYSOUT=*
# //********************************************************************
# //* LINKEDIT IF THE PRELINK RETURN CODE FOR THE PRELINK             *
# //* STEP IS 4 OR LESS.                                              *
# //********************************************************************
# //LKED     EXEC PGM=IEWL,PARM='MAP',
# //         COND=(4,LT,PLKED)
# //SYSLIB   DD  DSN=CEE.V1R5M0.SCEELKED,DISP=SHR
# //         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
# //SYSLIN   DD  DSN=&&PLKSET,DISP=(OLD,DELETE)
# //         DD  DDNAME=SYSIN
# //SYSLMOD  DD  DSN=&USER..RUNLIB.LOAD(&MEM1),DISP=SHR
# //SYSPRINT DD  SYSOUT=*
# //SYSUT1   DD  SPACE=(1024,(50,50)),UNIT=SYSDA
# //*DSNHOCOB PEND          REMOVE * FOR USE AS AN INSTREAM PROCEDURE
# //********************************************************************
# Figure 45. JCL Procedure for Preparing a COBOL Program with a Class
# JCL procedures for C++ programs: You can use a procedure like the one in
# Figure 46 to prepare a C++ program that contains no object-oriented
# extensions. If the program consists of more than one compilation unit,
# you can precompile, compile, prelink, and link-edit each compilation unit
# separately, then link-edit all the separate modules to build the program.
# //********************************************************************
# //* DSNHCPP - PRECOMPILE, COMPILE, PRELINK, AND LINK A C++          *
# //* PROGRAM.                                                        *
# //********************************************************************
# //DSNHCPP  PROC WSPC=500,MEM=TEMPNAME,
# //         USER=USER
# //********************************************************************
# //* PRECOMPILE THE C++ PROGRAM.                                     *
# //********************************************************************
# //PC       EXEC PGM=DSNHPC,PARM='HOST(CPP)',REGION=4096K
# //DBRMLIB  DD  DISP=OLD,DSN=&USER..DBRMLIB.DATA(&MEM)
# //STEPLIB  DD  DSN=&DSNVR0..SDSNEXIT,DISP=SHR
# //         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
# //SYSIN    DD  DUMMY
# //SYSCIN   DD  DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
# //             SPACE=(800,(&WSPC,&WSPC))
# //SYSLIB   DD  DISP=SHR,DSN=&USER..SRCLIB.DATA
# //SYSPRINT DD  SYSOUT=*
# //SYSTERM  DD  SYSOUT=*
# //SYSUDUMP DD  SYSOUT=*
# //SYSUT1   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //SYSUT2   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
# //********************************************************************
# //* COMPILE THE C++ PROGRAM IF THE RETURN CODE                      *
# //* FOR THE PRECOMPILE IS 4 OR LESS.                                *
# //********************************************************************
# //CP       EXEC PGM=CBC310PP,COND=(4,LT,PC),
# //         REGION=4096K
# //STEPLIB  DD  DSN=CEE.V1R5M0.SCEERUN,DISP=SHR
# //SYSMSGS  DD  DSN=CBC.V3R1M1.SCBC3MSG(EDCMSGE),DISP=SHR
# //SYSXMSGS DD  DSN=CBC.V3R1M1.SCBC3MSG(CBCMSGE),DISP=SHR
# //SYSLIB   DD  DSN=CBC.V3R1M1.SCLB3H.H,DISP=SHR
# //         DD  DSN=CEE.V1R5M0.SCEEH.H,DISP=SHR
# //SYSLIN   DD  DSN=&&LOADSET,DISP=(MOD,PASS),UNIT=SYSDA,
# //             SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
# //SYSPRINT DD  SYSOUT=*
# //SYSCPRT  DD  SYSOUT=*
# //SYSOUT   DD  SYSOUT=*
# //SYSIN    DD  DSN=&&DSNHOUT,DISP=(OLD,DELETE)
# //SYSUT1   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
# //SYSUT4   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
# //SYSUT5   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
# //SYSUT6   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
# //SYSUT7   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
# //SYSUT8   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
# //SYSUT9   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
# //             DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
# //SYSUT10  DD  SYSOUT=*
# //SYSUT14  DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
# //SYSUT15  DD  SYSOUT=*
# //********************************************************************
# //* PRELINK IF THE RETURN CODE FROM THE COMPILE STEP                *
# //* IS 4 OR LESS.                                                   *
# //********************************************************************
# //PLKED    EXEC PGM=EDCPRLK,REGION=2048K,COND=(4,LT,CP)
# //STEPLIB  DD  DSN=CEE.V1R5M0.SCEERUN,DISP=SHR
# //SYSMSGS  DD  DSN=CEE.V1R5M0.SCEEMSGP(EDCPMSGE),DISP=SHR
# //SYSLIB   DD  DSN=CEE.V1R5M0.SCEECPP,DISP=SHR
# //         DD  DSN=CBC.V3R1M1.SCLB3CPP,DISP=SHR
# //SYSIN    DD  DSN=&&LOADSET,DISP=(OLD,DELETE)
# //SYSMOD   DD  DSN=&&PLKSET,UNIT=SYSDA,DISP=(MOD,PASS),
# //             SPACE=(32000,(30,30)),
# //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
# //SYSOUT   DD  SYSOUT=*
# //SYSPRINT DD  SYSOUT=*
# //********************************************************************
# //* LINKEDIT IF THE PRELINK RETURN CODE FOR THE PRELINK             *
# //* STEP IS 4 OR LESS.                                              *
# //********************************************************************
# //LKED     EXEC PGM=IEWL,PARM='MAP',
# //         COND=(4,LT,PLKED)
# //SYSLIB   DD  DSN=CEE.V1R5M0.SCEELKED,DISP=SHR
# //         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
# //SYSLIN   DD  DSN=&&PLKSET,DISP=(OLD,DELETE)
# //         DD  DDNAME=SYSIN
# //SYSLMOD  DD  DSN=&USER..RUNLIB.LOAD(&MEM),DISP=SHR
# //SYSPRINT DD  SYSOUT=*
# //SYSUT1   DD  SPACE=(32000,(30,30)),UNIT=SYSDA
# //*DSNHCPP PEND          REMOVE * FOR USE AS AN INSTREAM PROCEDURE
# //********************************************************************
# Figure 46. JCL Procedure for Preparing a C++ Program with No Classes
DB2 allows you to precompile C++ functions and classes in the same
compilation unit. Therefore, if your application contains object-oriented
extensions but consists of a single compilation unit, you can use the
procedure above.

If, on the other hand, your classes and associated client programs are in
separate compilation units, you must precompile each of the compilation
units separately, then pass all of the precompiler output to the compiler
together.
//*********************************************************************
//*  DSNHOCPP - PRECOMPILE, COMPILE, PRELINK, AND LINK A C++         *
//*  CLIENT AND CLASS.                                               *
//*********************************************************************
//DSNHOCPP PROC WSPC=500,MEM1=TEMPN1,MEM2=TEMPN2,
//         USER=USER
//*********************************************************************
//*  PRECOMPILE THE C++ CLIENT PROGRAM.                              *
//*********************************************************************
//PC1      EXEC PGM=DSNHPC,PARM='HOST(CPP)',REGION=4096K
//DBRMLIB  DD  DISP=OLD,DSN=&USER..DBRMLIB.DATA(&MEM1)
//STEPLIB  DD  DSN=&DSNVR0..SDSNEXIT,DISP=SHR
//         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
//SYSIN    DD  DUMMY
//SYSCIN   DD  DSN=&&DSNHOT1,DISP=(MOD,PASS),UNIT=SYSDA,
//             SPACE=(800,(&WSPC,&WSPC))
//SYSLIB   DD  DISP=SHR,DSN=&USER..SRCLIB.DATA
//SYSPRINT DD  SYSOUT=*
//SYSTERM  DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSUT1   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
//SYSUT2   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
//*********************************************************************
//*  PRECOMPILE THE C++ CLASS DEFINITION.                            *
//*********************************************************************
//PC2      EXEC PGM=DSNHPC,PARM='HOST(CPP)',
//         REGION=4096K
//DBRMLIB  DD  DISP=OLD,DSN=&USER..DBRMLIB.DATA(&MEM2)
//STEPLIB  DD  DSN=&DSNVR0..SDSNEXIT,DISP=SHR
//         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
//SYSIN    DD  DUMMY
//SYSCIN   DD  DSN=&&DSNHOT2,DISP=(MOD,PASS),UNIT=SYSDA,
//             SPACE=(800,(&WSPC,&WSPC))
//SYSLIB   DD  DISP=SHR,DSN=&USER..SRCLIB.DATA
//SYSPRINT DD  SYSOUT=*
//SYSTERM  DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSUT1   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
//SYSUT2   DD  SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=SYSDA
//*********************************************************************
//*  COMPILE THE C++ CLIENT PROGRAM AND CLASS IF THE                 *
//*  RETURN CODES FOR THE PRECOMPILES ARE 4 OR LESS.                 *
//*********************************************************************
//CP       EXEC PGM=CBC310PP,COND=((4,LT,PC1),(4,LT,PC2)),
//         REGION=4096K
//STEPLIB  DD  DSN=CEE.V1R5M0.SCEERUN,DISP=SHR
//SYSMSGS  DD  DSN=CBC.V3R1M1.SCBC3MSG(EDCMSGE),DISP=SHR
//SYSXMSGS DD  DSN=CBC.V3R1M1.SCBC3MSG(CBCMSGE),DISP=SHR
//SYSLIB   DD  DSN=CBC.V3R1M1.SCLB3H.H,DISP=SHR
//         DD  DSN=CEE.V1R5M0.SCEEH.H,DISP=SHR
//SYSLIN   DD  DSN=&&LOADSET,DISP=(MOD,PASS),UNIT=SYSDA,
//             SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SYSPRINT DD  SYSOUT=*
//SYSCPRT  DD  SYSOUT=*
//SYSOUT   DD  SYSOUT=*
//SYSIN    DD  DSN=&&DSNHOT1,DISP=(OLD,DELETE)
//         DD  DSN=&&DSNHOT2,DISP=(OLD,DELETE)
//SYSUT1   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SYSUT4   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SYSUT5   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
//SYSUT6   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
//SYSUT7   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
//SYSUT8   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
//SYSUT9   DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
//             DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
//SYSUT10  DD  SYSOUT=*
//SYSUT14  DD  UNIT=SYSDA,SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=3200,BLKSIZE=12800)
//SYSUT15  DD  SYSOUT=*
//*********************************************************************
//*  PRELINK IF THE RETURN CODE FROM THE COMPILE STEP                *
//*  IS 4 OR LESS.                                                   *
//*********************************************************************
//PLKED    EXEC PGM=EDCPRLK,REGION=2048K,COND=(4,LT,CP)
//STEPLIB  DD  DSN=CEE.V1R5M0.SCEERUN,DISP=SHR
//SYSMSGS  DD  DSN=CEE.V1R5M0.SCEEMSGP(EDCPMSGE),DISP=SHR
//SYSLIB   DD  DSN=CEE.V1R5M0.SCEECPP,DISP=SHR
//         DD  DSN=CBC.V3R1M1.SCLB3CPP,DISP=SHR
//SYSIN    DD  DSN=&&LOADSET,DISP=(OLD,DELETE)
//SYSMOD   DD  DSN=&&PLKSET,UNIT=SYSDA,DISP=(MOD,PASS),
//             SPACE=(32000,(30,30)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SYSOUT   DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//*********************************************************************
//*  LINKEDIT IF THE PRELINK RETURN CODE FOR THE PRELINK             *
//*  STEP IS 4 OR LESS.                                              *
//*********************************************************************
//LKED     EXEC PGM=IEWL,PARM='MAP',
//         COND=(4,LT,PLKED)
//SYSLIB   DD  DSN=CEE.V1R5M0.SCEELKED,DISP=SHR
//         DD  DSN=&DSNVR0..SDSNLOAD,DISP=SHR
//SYSLIN   DD  DSN=&&PLKSET,DISP=(OLD,DELETE)
//         DD  DDNAME=SYSIN
//SYSLMOD  DD  DSN=&USER..RUNLIB.LOAD(&MEM1),DISP=SHR
//SYSPRINT DD  SYSOUT=*
//SYSUT1   DD  SPACE=(32000,(30,30)),UNIT=SYSDA
//*DSNHOCPP PEND       REMOVE * FOR USE AS AN INSTREAM PROCEDURE
//*********************************************************************

Figure 47. JCL Procedure for Preparing a C++ Program with a Class
This section describes how to design a test data structure and how to fill
tables with test data.
-------------------------------------------------------------------------
When you test an application that accesses DB2 data, you will want to have
DB2 data available for testing. To do this, you can create test tables
and views.
Test Tables.
To create a test table, you need a database and table space. Talk
with your DBA to make sure that a database and table spaces are
available for your use.
If the data you want to change already exists in a table, consider using
the LIKE clause of CREATE TABLE. If you have one or more secondary
authorization IDs, as well as a primary authorization ID, you might want
to specify a secondary ID as the owner of the table, in case you want
others besides yourself to have ownership of this table for test purposes.
You can do this with the SET CURRENT SQLID statement, which is described
in more detail in Chapter 6 of SQL Reference. See Section 5 (Volume 2) of
Administration Guide for more information on authorization IDs.
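The steps above can be sketched in SQL. The statements below are
illustrative only: the database name DBTEST, the table space name TESTTS,
and the secondary authorization ID TEST are assumptions, not names defined
by this book.

   SET CURRENT SQLID = 'TEST';

   CREATE TABLE TEST.EMP
     LIKE DSN8310.EMP
     IN DBTEST.TESTTS;

The LIKE clause copies the column definitions of DSN8310.EMP into the new
test table but none of its data, indexes, or views; because CURRENT SQLID
was set first, the secondary ID TEST owns the table.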
If your location has a separate DB2 system for testing, you can create the
test tables and views on the test system, then test your program
thoroughly on that system. This chapter assumes all testing is done on a
separate system, and that the person who created the test tables and views
has an authorization ID of TEST. The table names are TEST.EMP,
TEST.PROJ
and TEST.DEPT.
To design test tables and views, first analyze your application's data
needs.
1. List the data your application accesses and describe how each data
   item is accessed. For example, suppose you are testing an application
   that accesses the DSN8310.EMP, DSN8310.DEPT, and DSN8310.PROJ tables.
   You might record the information about the data as shown in Table 29.
--------------------------------------------------------------------------
Table 29. Description of the Application's Data
--------------------------------------------------------------------------
Table or          Insert   Delete   Column     Data Type       Update
View Name         Rows?    Rows?    Name                       Access?
------------------+--------+--------+----------+---------------+---------
DSN8310.EMP       No       No       EMPNO      CHAR(6)
                                    LASTNAME   VARCHAR(15)
                                    WORKDEPT   CHAR(3)         Yes
                                    PHONENO    CHAR(4)         Yes
                                    JOB        DECIMAL(3)      Yes
------------------+--------+--------+----------+---------------+---------
DSN8310.DEPT      No       No       DEPTNO     CHAR(3)
                                    MGRNO      CHAR(6)
------------------+--------+--------+----------+---------------+---------
DSN8310.PROJ      Yes      Yes      PROJNO     CHAR(6)
                                    DEPTNO     CHAR(3)         Yes
                                    RESPEMP    CHAR(6)         Yes
                                    PRSTAFF    DECIMAL(5,2)    Yes
                                    PRSTDATE   DECIMAL(6)      Yes
                                    PRENDATE   DECIMAL(6)      Yes
--------------------------------------------------------------------------
2. Determine the test tables and views you need to test your application.
   For example, the TEST.EMP test table has the following format:
-----------------------------------------------------------------
EMPNO        LASTNAME     WORKDEPT     PHONENO      JOB
------------+------------+------------+------------+------------
.            .            .            .            .
.            .            .            .            .
.            .            .            .            .
-----------------------------------------------------------------
To support the example, you will also want to create a test view of
the DSN8310.DEPT table. Because the application does not change any
data in the DSN8310.DEPT table, the view can be based on the table
itself (rather than on a test table). However, it is safer to have a
complete set of test tables and to test the program thoroughly using
only test data. The TEST.DEPT view has the following format:
---------------------------------------------------------------------
DEPTNO MGRNO
----------------------------------+---------------------------------
. .
. .
. .
--------------------------------------------------------------------
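A view like TEST.DEPT can be created with a statement along the following
lines. This is a sketch, and it assumes the current SQLID has been set to
the secondary authorization ID TEST as described earlier:

   CREATE VIEW TEST.DEPT (DEPTNO, MGRNO) AS
     SELECT DEPTNO, MGRNO
       FROM DSN8310.DEPT;

Because the view exposes only the two columns the application reads, your
tests cannot accidentally touch other columns of the department table.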
4.3.1.1.2 Authorization
Before you can create a table, you need to be authorized to create tables
and to use the table space in which the table is to reside. In addition,
you must have authority to bind and run programs you want to test. Your
DBA can grant you the authorization needed to create and access tables
and to bind and run programs.
If you intend to use existing tables and views (either directly or as the
basis for a view), you need to be authorized to access those tables and
views. Your DBA can authorize that access.

If you want to create a view, you must have authorization for each table
and view on which the view is based. You then have the same privileges
over the view that you have over the tables and views on which the view is
based. Before trying the examples, have your DBA issue GRANT statements
on your behalf to ensure your authorization to create new tables and views
and to access existing tables. Obtain the names of tables and views you
are authorized to access (as well as the privileges you have for each
table) from your DBA. See "Chapter 2-2. Creating Tables and Modifying
Data" in topic 2.2 for more information on creating tables and views.
For details about each CREATE statement, see SQL Reference or Section 4
(Volume 2) of Administration Guide.
You can put test data into a table in any of the following ways:
* INSERT ... VALUES (an SQL statement) puts one row into a table each
  time the statement is executed. The INSERT statement is described in
  "Inserting a Row: INSERT" in topic 2.2.2.1.
* INSERT ... SELECT (an SQL statement) obtains data from an existing
  table (based on a SELECT clause) and puts it into the table identified
  with the INSERT statement. This technique is described in "Filling a
  Table from Another Table: Mass INSERT" in topic 2.2.2.1.3.
* The LOAD utility obtains data from a sequential file (a non-DB2 file),
  formats it for a table, and puts it into a table. For more details
  about the LOAD utility, see Section 4 (Volume 2) of Administration
  Guide or Chapter 3 of Command and Utility Reference.
For example, INSERT statements can be issued via SPUFI to load the
test table with data:
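The statements below are hedged examples rather than required input; the
row values are invented, and the column list for TEST.PROJ assumes that
table was created with only the six columns listed in Table 29:

   INSERT INTO TEST.PROJ
          (PROJNO, DEPTNO, RESPEMP, PRSTAFF, PRSTDATE, PRENDATE)
   VALUES ('MA2100', 'D11', '000060', 6.5, 820101, 821231);

A mass INSERT can copy qualifying rows from a sample table instead,
assuming TEST.EMP has the same columns as DSN8310.EMP (as it does when
created with the LIKE clause):

   INSERT INTO TEST.EMP
     SELECT * FROM DSN8310.EMP
      WHERE WORKDEPT = 'D11';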
To use SPUFI, you must:
* Name an input data set to hold the SQL statements to be passed to DB2
  for execution
* Name an output data set to contain the results of executing the SQL
  statements

SQL statements executed via SPUFI operate on actual tables (in this case,
the tables you have created for testing). Consequently, before you access
DB2 data, make sure that all tables and views your SQL statements refer
to exist. If they do not exist, create them (or have your database
administrator create them); you can use SPUFI to issue the CREATE
statements needed to create the tables and views you use for testing.
For more details about how to use SPUFI, see "Chapter 2-4. Using SPUFI:
Executing SQL from Your Terminal" in topic 2.4.
Most sites have guidelines regarding what must be done if your program
abends. The following suggestions are some common ones.
Documenting the errors returned from test will help you investigate and
correct problems in the program. The following information can be
useful:
PROC 0
TEST 'DSN310.SDSNLOAD(DSN)' CP
DSN SYSTEM(DB4)
AT MYPROG.MYPROG.+0 DEFER
GO
RUN PROGRAM(MYPROG)
LIBRARY('L186331.RUNLIB.LOAD(MYPROG)')
For more information about the TEST command, see OS/VS2 TSO Terminal
User's Guide.
ISPF Dialog Test is another option to help you in the task of debugging.
Documenting the errors returned from test will help you investigate and
correct problems in the program. The following information can be
useful:
* The contents of the PCB that was referred to in the call that was
executing
* If a DL/I database call was executing, the SSAs, if any, that the
call used
* The abend completion code, abend reason code, and any dump error
messages.
When your program encounters an error, it can pass all the required error
information to a standard error routine. Online programs can also send an
error message to the originating logical terminal.
Some sites run a BMP at the end of the day to list all the errors that
occurred during the day. If your location does this, you can send a
message using an express PCB that has its destination set for that BMP.
Documenting the errors returned from test will help you investigate and
correct problems in the program. The following information can be
useful:
Using CICS facilities, you can have a printed error record; you can also
print the SQLCA (and SQLDA) contents.
For more details about each of these topics, see CICS/OS/VS Application
Programmer's Reference Manual (Command Level).
This is the type of SQL statement being executed. The SQL statement
can be any valid SQL statement, such as COMMIT, DROP TABLE,
EXPLAIN,
FETCH, or OPEN.
* DBRM=dbrm name
* STMT=statement number
* SECT=section number
SQL Statements Containing Input Host Variables: The IVAR (input host
variables) section and its attendant fields appear only when the
statement being executed contains input host variables.
The host variables used for input to DB2 include the variables from
predicates, the values used for insertion or updating, and the text of SQL
statements being prepared. The address of the input variable is AT
X'nnnnnnnn'.
Additional Host Variable Information:
* TYPE=data type
  Specifies the data type for this host variable. The basic data types
  include character string, graphic string, binary integer,
  floating-point, decimal, date, time, and timestamp. For additional
  information refer to Chapter 6 of SQL Reference.
* LEN=length
-----------------------------------------------------------------------------------
 TRANSACTION: XP03   PROGRAM: TESTP03   TASK NUMBER: 0000054   DISPLAY: 00
 STATUS: COMMAND EXECUTION COMPLETE
 CALL TO RESOURCE MANAGER DSNCSQL
 EXEC SQL UPDATE          P.AUTH=GROUP1A, S.AUTH=GROUP
 PLAN=TESTP03, DBRM=TESTP03, STMT=00108, SECT=00001
 SQL COMMUNICATION AREA:
   SQLCABC      = 136                              AT X'001B5D1C'
   SQLCODE      = 000                              AT X'001B5D20'
   SQLERRML     = 000                              AT X'001B5D24'
   SQLERRMC     = ''                               AT X'001B5D26'
   SQLERRP      = 'DSN'                            AT X'001B5D6C'
   SQLERRD(1-6) = 000, 000, 00000, -1, 00000, 000  AT X'001B5D74'
   SQLWARN(0-A) = '_ _ _ _ _ _ _ _ _ _ _'          AT X'001B5D8C'
   SQLSTATE     = 00000                            AT X'001B5D97'
-----------------------------------------------------------------------------------
* P.AUTH=primary authorization ID
  The primary DB2 authorization ID, which is the CICS sign-on user ID
  (USERID).
* S.AUTH=secondary authorization ID
  If the RACF list of group options is not active, DB2 uses the
  connected group name supplied by the CICS attachment facility as the
  secondary authorization ID. If the RACF list of group options is
  active, the connected group name supplied by the CICS attachment
  facility is ignored, but the value still appears in the DB2 list of
  secondary authorization IDs.
* PLAN=plan name
  The name of the plan that is currently being run. The PLAN represents
  the control structure produced during the bind process and used by DB2
  to process SQL statements encountered while the application is running.
Plus signs (+) on the left of the screen indicate that additional EDF
output can be seen by using PF keys to scroll the screen forward or back.
Figure 50 and Figure 51 contain the rest of the EDF output for our
example.
-----------------------------------------------------------------------------------
 TRANSACTION: XP03   PROGRAM: TESTP03   TASK NUMBER: 0000054   DISPLAY: 00
 STATUS: COMMAND EXECUTION COMPLETE
 CALL TO RESOURCE MANAGER DSNCSQL
 + OVAR 002: TYPE=VARCHAR, LEN=00012            AT X'001B5DA3'
       DATA=X'0006C5C9D3C5C5D5'
   OVAR 003: TYPE=CHAR, LEN=00001               AT X'001B5DB1'
       DATA=X'E6'
   OVAR 004: TYPE=VARCHAR, LEN=00015            AT X'001B5DB2'
       DATA=X'0009C8C5D5C4C5D9E2D6D5'
   OVAR 005: TYPE=CHAR, LEN=00003               AT X'001B5DC3'
       DATA=X'C5F1F1'
   OVAR 006: TYPE=CHAR, LEN=00004               AT X'001B5DC6'
       DATA=X'F4F5F3F1'
   OVAR 007: TYPE=CHAR, LEN=00010               AT X'001B5DCA'
       DATA=X'F1F9F7F060F0F860F1F5'
 + OVAR 008: TYPE=CHAR, LEN=00008               AT X'001B5DD4'
 OFFSET:X'0004F4'   LINE:UNKNOWN   EIBFN=X'0000'
 RESPONSE:
 REPLY:
 ENTER:  CONTINUE
 PF1 : UNDEFINED           PF2 : UNDEFINED         PF3 : END EDF SESSION
 PF4 : SUPPRESS DISPLAYS   PF5 : WORKING STORAGE   PF6 : USER DISPLAY
 PF7 : SCROLL BACK         PF8 : SCROLL FORWARD    PF9 : STOP CONDITIONS
 PF10: PREVIOUS DISPLAY    PF11: UNDEFINED         PF12: ABEND USER TASK
-----------------------------------------------------------------------------------
Figure 50. EDF Screen after a DB2 SQL Statement, Continued
If your program does not run correctly, you need to isolate the problem.
If the program's application plan was not invalidated, start by checking
the following items:
* Output from the compiler or assembler. Ensure that all error messages
are resolved.
- Have all necessary modules been included and are they in the
correct order?
- Did you specify a plan name? If not, the bind process assumes you
want to process the DBRM for diagnostic purposes, but do not want
to produce an application plan.
- Have you specified all the DBRMs and packages associated with the
programs that make up the application and their partitioned data
set (PDS) names in a single application plan?
* Your JCL.
  - IMS users: Have you included the DL/I option statement in the
    correct format?
  - Have you included the region size parameter in the EXEC statement?
    Does it specify a region size large enough for the storage
    required for the DB2 interface, the TSO, IMS, or CICS system, and
    your program?
  - Have you included the names of all data sets (DB2 and non-DB2)
    required by the program?
* Your program.
  You can also use dumps to help localize problems in your program. For
  example, one of the more common error situations occurs when your
  program is running and you receive a message that it has abended. In
  this instance, your test procedure might be to capture a TSO dump. To
  do so, you must allocate a SYSUDUMP or SYSABEND dump data set before
  invoking DB2. When you press the ENTER key (after the error message
  and READY message), the system requests a dump. You then need to FREE
  the dump data set.
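For example, from TSO you might allocate the dump data set before invoking
DB2 and free it afterward. The commands below are a sketch, and the SYSOUT
class is an assumption:

   ALLOC DDNAME(SYSUDUMP) SYSOUT(A)
    ...
   FREE DDNAME(SYSUDUMP)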
For more information about warning messages, see the following host
language sections:
-------------------------------------------------------------------------------------------
1. Error message.
SYSPRINT output is what the DB2 precompiler provides when you use a JCL
procedure to precompile your program. See Table 27 in topic 4.2.7.2 for a
list of JCL procedures provided by DB2.

When you use the Program Preparation panels to prepare and run your
program, DB2 allocates SYSPRINT according to the TERM option you
specify (on line 12 of the PROGRAM PREPARATION: COMPILE, PRELINK, LINK,
AND RUN panel). As an alternative, when you use the DSNH command
procedure (CLIST), you can obtain SYSPRINT output at your terminal by
specifying PRINT(TERM), or you can specify PRINT(qualifier) to place the
SYSPRINT output into a data set named authorizationid.qualifier.PCLIST.
Assuming that you do not specify PRINT as LEAVE, NONE, or TERM, DB2
issues a message when precompilation is complete, telling you where to
find your precompiler listings. This helps you locate your diagnostics
quickly and easily.

The SYSPRINT output includes:
* A list of the DB2 precompiler options (Figure 53) in effect during the
  precompilation (unless NOOPTIONS is specified).
-----------------------------------------------------------------------------------
1. This section lists the options specified by the user when the DB2
precompiler was invoked. This list will not appear if the
NOOPTIONS option was specified at the invocation of the
precompiler.
-------------------------------------------------------------------------------------------
     1      TMN5P40:PROCEDURE OPTIONS(MAIN) ;                      00000100
     2      /*************************************************     00000200
     3       * < program description and prologue >               00000300
      ...
  1324      /*************************************************/    00132400
  1325      /* GET INFORMATION ABOUT THE PROJECT FROM THE    */    00132500
  1326      /* PROJECT TABLE.                                */    00132600
  1327      /*************************************************/    00132700
  1328      EXEC SQL SELECT ACTNO, PREQPROJ, PREQACT              00132800
  1329               INTO PROJ_DATA                                00132900
  1330               FROM TPREREQ                                  00133000
  1331               WHERE PROJNO = :PROJ_NO;                      00133100
  1332                                                             00133200
  1333      /*************************************************/    00133300
  1334      /* PROJECT IS FINISHED.  DELETE IT.              */    00133400
  1335      /*************************************************/    00133500
  1336                                                             00133600
  1337      EXEC SQL DELETE FROM PROJ                              00133700
  1338               WHERE PROJNO = :PROJ_NO;                      00133800
-------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
...
-----------------------------------------------------------------------------------
DEFN
specifies the line number at which the name is defined (that is,
the line number generated by the DB2 precompiler). **** means
that the object was not defined or the precompiler did not
recognize the declarations.
REFERENCE
contains two types of information: the definition of the symbolic
name, and where it is referred to. If the symbolic name refers to
a valid host variable, the data type or STRUCTURE is also
identified.
-----------------------------------------------------------------------------------
SOURCE STATISTICS
SOURCE LINES READ: 1523(1)
NUMBER OF SYMBOLS: 128(2)
SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 6432(3)
-----------------------------------------------------------------------------------
8. Error messages (in this example, only one error was detected).
"Features and Functions of DB2 DL/I Batch Support," below, tells what you
can do in a DL/I batch program. "Requirements for Using DB2 in a DL/I
Batch Job" in topic 4.4.1.2 tells, in general, what you must do to make it
happen.
By using the DB2 DL/I batch support, a batch DL/I program can issue:
* Any IMS batch call, except ROLS, SETS, and SYNC calls. ROLS and SETS
  calls provide intermediate backout point processing, which is not
  supported by DB2. The SYNC call provides commit point processing
  without identifying the commit point with a value. IMS does not allow
  a SYNC call in batch, and neither does the DB2 DL/I batch support.
* GSAM calls.
The restart capabilities for DB2 and IMS databases, as well as for
sequential data sets accessed through GSAM, are available through the IMS
Checkpoint and Restart facility.
The DB2 DL/I batch support allows access to both DB2 and DL/I data
through the use of:
Using DB2 in a DL/I batch job requires the following changes to the
application program and the job step JCL:
* To run the application program, you need access to the DB2 load
  library from JOBLIB, STEPLIB, or LINKLIST, to allow the DB2 modules
  to be loaded.
* You must either specify an input data set using the DDITV02 DD
  statement, or specify a subsystem member using the SSM parameter on
  the DL/I batch invocation procedure. Either of these defines
  information about the batch job step and DB2. For detailed
  information, see "DB2 DL/I Batch Input" in topic 4.4.3.1.
When the batch application tries to execute the first SQL statement, DB2
checks whether the authorization ID has the EXECUTE privilege for the
plan. The same ID is used for later authorization checks and also
identifies records from the accounting and performance traces.
Using DB2 DL/I batch support can affect your application design and
programming in the areas described below.
A DL/I batch region is independent of both the IMS control region and the
CICS address space. The DL/I batch region loads the DL/I code into the
application region along with the application program.
4.4.2.2 Commits
Commit IMS batch applications frequently so that you do not tie up
resources for an extended time. If you do need coordinated commits for
recovery, see Section 6 (Volume 2) of Administration Guide.
Write your program with SQL statements and DL/I calls, and use checkpoint
calls. All checkpoints issued by a batch application program must be
unique. The frequency of checkpoints depends on the application design.
At a checkpoint, DL/I positioning is lost, DB2 cursors are closed (with
the possible exception of cursors defined as WITH HOLD), commit duration
locks are freed (again with some exceptions), and database changes are
considered permanent to both IMS and DB2.
A problem can arise if a failure occurs after the program terminates but
before the job step ends. At program termination, if a checkpoint call
has not been made, DB2 commits the unit of work without involving IMS. It
is possible for DB2 to commit the data, but if a failure occurs before
DL/I commits the data, then the DB2 data is out of synchronization with
the DL/I changes. DB2 could be indoubt if the failure occurs during DB2
commit processing.
If an XRST call is used, the DB2 DL/I batch support assumes that any
checkpoint issued is a symbolic checkpoint. The options of the symbolic
checkpoint call differ from the options of a basic checkpoint call. Using
the incorrect form of the checkpoint call can cause problems.
If an XRST call is not used, then the DB2 DL/I batch support assumes that
any checkpoint call issued is a basic checkpoint.
Checkpoint IDs must be EBCDIC characters to make restart easier.
The DB2 DL/I batch support uses the IMS attachment facility. You can
provide the required input in one of two ways:
* Use a DDITV02 DD statement that refers to a file that has DCB options
  of LRECL=80 and RECFM=F or FB.
The input record contains the following fields, separated by commas:

SSN,LIT,ESMT,RTT,ERR,CRC,CONNECTION_NAME,PLAN,PROG

Field    Content
ERR      If the application program is using the XRST call, the error
         option is ignored if coordinated recovery is required on the
         XRST call. In that case, the application program is terminated
         abnormally if DB2 is not operational.

CRC      Since DB2 commands are not supported in this environment, the
         command recognition character is not used at this time.

CONNECTION_NAME
         DB2 requires unique connection names for the DB2 DL/I batch
         support. If two applications try to connect with the same
         connection name, the second application program is not allowed
         to connect to DB2.

PLAN     The DB2 plan name can be specified. If the plan name is not
         specified, the application program module name is checked
         against the optional resource translation table. If a match is
         found in the resource translation table, the translated name is
         used as the DB2 plan name. If no match is found, the
         application program module name is used as the plan name.
DSN,SYS1,DSNMIN10,,R,-,BATCH001,DB2PLAN,PROGA
You might want to save and print the data set, as the information is
useful for diagnostic purposes. You can use the IMS module DFSERA10 to
print the variable-length data set records in both hexadecimal and
character format.
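For instance, the input record shown earlier could be supplied as instream
data on the DDITV02 DD statement. This JCL fragment is a sketch; the
DDOTV02 data set name and attributes are assumptions:

   //DDITV02  DD  *
   DSN,SYS1,DSNMIN10,,R,-,BATCH001,DB2PLAN,PROGA
   /*
   //DDOTV02  DD  DSN=MYID.DDOTV02.DATA,DISP=(MOD,CATLG),
   //             UNIT=SYSDA,SPACE=(TRK,(1,1)),
   //             DCB=(RECFM=VB,LRECL=4092,BLKSIZE=4096)

The DDOTV02 data set receives the variable-length output records that you
can later print with DFSERA10.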
4.4.4.1 Precompiling
When SQL statements are added to an application program, you must
precompile the application program and bind the resulting DBRM into a
plan or package, as described in "Chapter 4-2. Preparing an Application
Program to Run" in topic 4.2.
4.4.4.2 Binding
The owner of the plan or package must have all the privileges required to
execute the SQL statements embedded in it. Before a batch program can
issue SQL statements, a DB2 plan must exist.
The plan name can be specified to the DB2 DL/I batch support in one of the
following ways:
* By default; the plan name is then the application load module name
  specified in DDITV02
The DB2 DL/I batch support passes the plan name to the IMS attach
package. If a resource translation table (RTT) does not exist, or the
name is not in the RTT, the passed name is used as the plan name. If the
name exists in the RTT, the name is translated to the specified RTT DB2
plan name.
The recommended approach is to give the DB2 plan the same name as that of
the application load module, which is the IMS attach default. The plan
name is then the same as the program name.
4.4.4.3 Link-Editing
DB2 has language interface routines for each unique supported
environment. The DB2 DL/I batch support requires the IMS language
interface routine, DFSLI000, which must be link-edited with the
application program.
To run a program using DB2, you need a DB2 plan, which is created by the
bind process. DB2 first verifies whether the DL/I batch job step can
connect to DB2, and then whether the application program can access DB2;
this enforces user identification of batch jobs that access DB2.
* The DL/I batch procedure can run your application program without
using module DSNMTV01. To accomplish this, do the following:
* The second step shows how to use the DFSERA10 IMS program to print the
  contents of the DDOTV02 output data set.
To restart a batch program that updates data, you must first run the IMS
batch backout utility, followed by a restart job indicating the last
successful checkpoint ID.
The skeleton JCL example that follows illustrates a batch backout for
PSB=IVP8CA.
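A hedged sketch of such a backout job step follows. All data set names
are illustrative, and your IMS system libraries will differ; the PARM
shown follows the usual batch backout form (region controller DFSRRC00
running the Batch Backout utility DFSBBO00 against the named PSB):

   //BACKOUT  EXEC PGM=DFSRRC00,
   //             PARM='DLI,DFSBBO00,IVP8CA'
   //STEPLIB  DD  DSN=IMS.RESLIB,DISP=SHR
   //IMS      DD  DSN=IMS.PSBLIB,DISP=SHR
   //         DD  DSN=IMS.DBDLIB,DISP=SHR
   //IMSLOGR  DD  DSN=IMS.BATCH.LOG,DISP=OLD

IMSLOGR points at the log data set produced by the failing batch run,
which batch backout reads to undo the DL/I changes.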
Restarting a DL/I batch job step for an application program using IMS
XRST and symbolic CHKP calls is accomplished by operational procedures.
* Look at the SYSOUT listing for the job step to find message DFS0540I,
  which contains the checkpoint IDs issued. Use the last checkpoint ID
  listed.
* Submit the IMS Batch Backout utility to back out the DL/I databases to
  the last (default) checkpoint ID. Upon completion of batch backout,
  message DFS395I provides the last valid IMS checkpoint ID. Use this
  checkpoint ID on restart.
In dynamic SQL, the SQL statements are prepared and executed within a
program while the program is running. The SQL source is contained in host
language variables rather than being written into the application program,
and the SQL statement can change several times while the program is
running. "Dynamic SQL: Pro and Con" in topic 5.1.1 suggests some reasons
for choosing, or not choosing, dynamic SQL.
In the remainder of the chapter, instructions for using dynamic SQL are
given for three types of statements, in order of increasing complexity.
Of the following, read all categories up to and including the one you
need.
You know the format of a static SQL statement when you write your
program, and running the program does not change the statement.
However, you might not know the full format of an SQL statement when you
write your program; some part of it, perhaps the whole statement, is
generated while the program is running.
Static SQL allows your program some flexibility. For example, static SQL
statements can include host variables that your program can change, as in
the example that follows (written in COBOL):
   MOVE LEVEL-5 TO NEW-SALARY.
   MOVE '000240' TO EMPID.
   .
   .
   .
   EXEC SQL
     UPDATE DSN8310.EMP
       SET SALARY = :NEW-SALARY
       WHERE EMPNO = :EMPID
   END-EXEC.
We recommend that you use dynamic SQL only if you really require its
flexibility. Using a few dynamic SQL statements in a program that runs
for a long time will not make a significant difference, but using dynamic
SQL for the same number of statements in a small transaction is much
more
noticeable.
The resource limit facility (RLF, or governor) is used to limit the amount
of CPU time an SQL statement can take, thus preventing SQL statements
from
making excessive requests. The governor controls only the dynamic SQL
manipulative statements SELECT, UPDATE, DELETE, and INSERT.
If there is no cursor associated with the failed SQL statement, then all
changes made by that statement are undone before the error code is
returned to the application. The application can either issue another SQL
statement or commit all work done so far.
Programs that provide for dynamic SQL are usually written in PL/I,
assembler, VS COBOL II, or C. Non-SELECT and fixed-list SELECT
statements
can be written in any of the DB2 supported languages. Keep in mind that
a
program containing a varying-list SELECT statement is more difficult to
write in OS/VS COBOL or FORTRAN, because the program cannot run
without
the help of a subroutine to manage address variables (pointers) and
storage allocation.
2. Load the input SQL statement into a data area. The procedure for
building or reading the input SQL statement is not discussed here; the
statement depends on your environment and sources of information.
You
can read in complete SQL statements, or you can get information to
build the statement from data sets, a user at a terminal,
previously-set program variables, or tables in the database.
If you attempt to execute an SQL statement dynamically that DB2 does
not allow, you get an SQL error.
3. Execute the statement. Two methods you can use are discussed below:
4. Handle any errors that might result. The requirements are the same as
those for static SQL statements. The return code from the most
recently executed SQL statement appears in the SQLCODE and
SQLSTATE
fields of the SQLCA. See "Checking the Execution of SQL
Statements"
in topic 3.1.5 for information on the SQLCA and the fields it
contains.
If you know in advance that you will use only the DELETE statement and
only the table DSN8310.EMP, you can use static SQL, the more efficient
method. Suppose instead that there are several different tables with
rows identified by employee numbers, and that users enter a table name
as well as a list of employee numbers to be deleted. Though the
employee numbers can be represented by variables, the table name
cannot, so the entire statement must be constructed and executed
dynamically.
Your program must now do these things differently:
Using the PREPARE statement, you can take a character string like this
and prepare from it an SQL statement that can be executed using
EXECUTE. For example, let the variable :DSTRING have the value "DELETE
FROM DSN8310.EMP WHERE EMPNO = ?". To prepare an SQL statement from
that string and assign it the name S1, write:

   EXEC SQL
     PREPARE S1 FROM :DSTRING;
The prepared statement still contains a parameter marker, for which you
must supply a value when the statement is executed. After the statement
is prepared, the table name is fixed, but the parameter marker allows you
to execute the same statement many times with different values of the
employee number.
The example began with a DO loop that executed a static SQL statement
repeatedly:
You can now write an equivalent example for a dynamic SQL statement:
The PREPARE statement prepares the SQL statement and calls it S1. The
EXECUTE statement executes S1 repeatedly, using different values for
EMP.
The prepared statement (S1 in the example) can contain more than one
parameter marker. If it does, the USING clause of EXECUTE specifies a
list of variables or a host structure. The variables must contain values
that match the number and data types of parameters in S1 in the proper
order. You must know the number and types of parameters in advance
and
declare the variables in your program, or you can use an SQLDA (SQL
descriptor area).
The term "fixed-list" does not imply that you must know in advance how
many rows of data will be returned; however, you must know the number
of
columns and the data types of those columns. A fixed-list SELECT
statement returns a result table which can contain any number of rows;
your program looks at those rows one at a time, using the FETCH
statement.
Each successive fetch returns the same number of values as the last, and
the values have the same data types each time. Hence, you can specify
host variables as you do for static SQL.
1. Include an SQLCA
6. Fetch rows from the result table, as described in "Fetch Rows from the
Result Table" in topic 5.1.3.1.4.
8. Handle any resulting errors. This step is the same as for static SQL,
except for the number and types of errors that can result. In
particular, there are errors associated with violating the key feature
described in "Fetch Rows from the Result Table" in topic 5.1.3.1.4.
Assume that your program retrieves last names and phone numbers by
dynamically executing SELECT statements of this general form:
The statements are read from a terminal, and the WHERE clause is
determined by the user.
Dynamic SELECT statements cannot use INTO; hence, you must use a
cursor to
put the results into the host variables you name. In declaring the
cursor, use the statement name (call it STMT), and give the cursor itself
a name (for example, C1):
The key feature of this statement is the use of a list of host variables
to receive the values returned by FETCH. The list has a known number of
items (two: :NAME and :PHONE) of known data types (both are character
strings, of lengths 15 and 4, respectively).
It is possible to use this list in the FETCH statement only because the
program has been planned to execute only fixed-list SELECTs. Every
row
that cursor C1 points to must contain exactly two character values of
appropriate length. If the program is to handle anything else, it must
use the techniques described under "Dynamic SQL for Varying-List
SELECT
Statements."
This step does not differ from what is required for static SQL. For
example, a WHENEVER NOT FOUND statement in your program can
name a routine
that contains this statement:
1. Include an SQLCA
3. Prepare and execute the statement. This step is more complex than for
fixed-list SELECTs, and is described in detail under "Preparing a
Varying-List SELECT Statement" in topic 5.1.4.2 and "Executing a
Varying-List SELECT Statement Dynamically" in topic 5.1.4.3. It
involves the following steps:
Assume that your program dynamically executes SQL statements, but this
time without any limitation on their form. The statements are read from a
terminal, and nothing about them is known in advance. They need not
even
be SELECT statements.
Now there is a new wrinkle. The program must find out whether the
statement is a SELECT. If it is, the program must also find out how many
values are retrieved in each row, and what their data types are. The
information comes from an SQL descriptor area (SQLDA).
* Provide the largest SQLDA that could ever be needed. The maximum
number of columns in a result table is 750, and an SQLDA with 750
occurrences of SQLVAR occupies 33016 bytes. But most SELECTs
do not
retrieve 750 columns, so most of the time the majority of that space
is not used.
How many columns should you allow for? You must choose a number
that
is large enough for most of your SELECT statements, but not too
wasteful of space; 40 is a good compromise. In our case, we want to
illustrate what you must do for statements that return more columns
than you allowed for, so our example uses a minimum SQLDA that
allows
for 100 columns.
EXEC SQL
DECLARE C1 CURSOR FOR STMT;
Equivalently, you can use the INTO clause in the PREPARE statement:
EXEC SQL
PREPARE STMT INTO :MINSQLDA FROM :DSTRING;
Do not use the USING clause in either of these examples. At the moment,
only the minimum SQLDA is being used.
[Figure: the minimum SQLDA, MINSQLDA: a 16-byte fixed header containing
the fields SQLDAID, SQLDABC, SQLN, and SQLD.]
In other words, if you leave too little room, or if SQLN says there is too
little room, you do not lose merely the overflow: you lose all of the
SQLVAR information, and you have to do it over. But you do find out
whether the statement was a SELECT and, if so, how many columns it
returns.
Now your program can query the SQLD field in MINSQLDA. If the field
contains 0, the statement was not a SELECT, and your program can
proceed
to execute it. The statement is already prepared. If there are no
parameter markers in the statement, you can use:
(If the statement does contain parameter markers, you must use an SQL
descriptor area; for instructions, see "Executing Arbitrary Statements
with Parameter Markers" in topic 5.1.4.4.)
If the statement was a SELECT, then the SQLD field contains the number
of
columns in the result table.
Now you can allocate storage for a second, full-size SQLDA; call it
FULSQLDA. Its structure is shown schematically in Figure 58.
[Figure 58. The structure of FULSQLDA: a 16-byte fixed header (SQLDAID,
SQLDABC, SQLN, SQLD) followed by 44-byte SQLVAR elements, each
containing the fields SQLTYPE, SQLLEN, SQLDATA, SQLIND, and SQLNAME.]
[Figure 60. The contents of FULSQLDA after executing DESCRIBE:
   Header:            SQLDAID='SQLDA'  SQLDABC=8816  SQLN=200  SQLD=200
   SQLVAR element 1 (44 bytes):  SQLTYPE=452  SQLLEN=3
                                 SQLDATA=undefined  SQLIND=undefined
                                 SQLNAME=8 'WORKDEPT'
   SQLVAR element 2 (44 bytes):  SQLTYPE=453  SQLLEN=4
                                 SQLDATA=undefined  SQLIND=undefined
                                 SQLNAME=7 'PHONENO']
If the SQLTYPE field indicates that the value can be null, the program
must also put the address of an indicator variable in the SQLIND field.
Figure 60, Figure 61, and Figure 62 illustrate the situation with the
values listed in Table 30. In Figure 60, all the values except one have
been inserted by the DESCRIBE statement. The exception is the first
occurrence of the number 200, which was inserted by the program before
executing DESCRIBE to tell how many occurrences of SQLVAR there is room
for. If the result table of the SELECT has more columns than this,
nothing is described in the SQLVAR fields.
The next set of five values, the first occurrence of SQLVAR, pertains to
the first column of the result table (the WORKDEPT column). It
indicates that the column contains fixed-length character strings and
does not allow null values (SQLTYPE=452); the length attribute is 3.
[Figure 61. The SQLDA before any rows are fetched: SQLVAR element 1
(SQLTYPE=452, SQLLEN=3) now holds the address of FLDA, CHAR(3), and the
address of its halfword indicator variable FLDAI; SQLVAR element 2
(SQLTYPE=453, SQLLEN=4) holds the addresses of FLDB, CHAR(4), and its
indicator variable FLDBI. FLDA, FLDB, and the indicator variables are
still empty.]
[Figure 62. The SQLDA after the first row is fetched: FLDA contains
'E11', FLDB contains '4502', and the indicator variables FLDAI and
FLDBI both contain 0.]
Table 30. Values to be Inserted in the SQLDA

  Value      Field      Description
  ---------  ---------  ------------------------------------------------
  SQLDA      SQLDAID    An "eye-catcher"
  8816       SQLDABC    The size of the SQLDA in bytes (16 + 44 * 200)
  200        SQLN       The number of occurrences of SQLVAR, set by the
                        program
  200        SQLD       The number of occurrences of SQLVAR actually
                        used by the DESCRIBE statement
  452        SQLTYPE    The value of SQLTYPE in the first occurrence of
                        SQLVAR. It indicates that the first column
                        contains fixed-length character strings, and
                        does not allow nulls
  3          SQLLEN     The length attribute of the column
  Undefined  SQLDATA    Bytes 3 and 4 contain the CCSID of a string
  or CCSID              column. Undefined for other types of columns
  value
  Undefined  SQLIND
  8          SQLNAME    The number of characters in the column name
  WORKDEPT   SQLNAME+2  The column name of the first column
The situation, before the program has obtained any rows of the result
table, is shown schematically in Figure 61 in topic 5.1.4.2.9. Addresses
of fields and indicator variables have been inserted in SQLVAR.
EXEC SQL
DESCRIBE STMT INTO FULSQLDA USING LABELS;
You can also write USING BOTH to obtain both the name and the label
when
both exist, but this introduces a slight complication. If you want to
obtain both names and labels, you need a second set of occurrences of
SQLVAR in FULSQLDA. The first set contains descriptions of all the
columns using names, the second set contains descriptions using labels.
This means that you must allocate a longer SQLDA for the second
DESCRIBE statement (16 + SQLD * 88 bytes instead of 16 + SQLD * 44).
You must also put double the number of columns (SQLD * 2) in the SQLN
field of the second SQLDA. Otherwise, if there is not enough space
available, DESCRIBE does not enter descriptions of any of the columns.
Retrieving rows of the result table is very easy. The statements differ
only a little from those for the fixed-list example.
For cases when there are parameter markers, see "Executing Arbitrary
Statements with Parameter Markers" in topic 5.1.4.4 below.
This statement differs from the corresponding one for the case of a
fixed-list select. Write:
EXEC SQL
FETCH C1 USING DESCRIPTOR FULSQLDA;
This step is the same as for the fixed-list case. When there are no more
rows to process, execute the following statement:
The only new element in the example is that the number of parameter
markers, and perhaps the kinds of parameter they stand for, is not known
in advance. Techniques described previously will do if the number and
types of parameter are known, as in the following examples:
In both cases, the number and types of host variables named must agree
with the number of parameter markers in STMT and the types of
parameter
they stand for. The first variable (VAR1 in the examples) must have the
type expected for the first parameter marker in the statement, the second
variable must have the type expected for the second marker, and so on.
There must be at least as many variables as parameter markers.
5.1.4.4.2 When the Number and Types of Parameters Are Not Known
When the number and types of parameters are not known, you can use a
new
adaptation of a familiar device: an SQL descriptor area. There is no
limit to the number of SQLDAs your program can include, and they need
not
all be used for the same purpose. We now introduce one that describes a
set of parameters. Its name is arbitrary; call it DPARM.
The structure of this SQLDA is the same as that of any other SQLDA.
The
number of occurrences of SQLVAR can vary, as in previous examples. In
this case, there must be one for every parameter marker. Each occurrence
of SQLVAR describes one host variable that replaces one parameter
marker
at execution time, either when a non-SELECT statement is executed or
when
a cursor is opened for a SELECT statement.
SQLTYPE The code for the type of variable, and whether it allows nulls
SQLLEN The description of the length of the host variable
SQLDATA The address of the host variable
SQLIND The address of an indicator variable, if one is needed
SQLNAME Ignored
All forms of dynamic SQL can be used in VS COBOL II, but not in
OS/VS
COBOL. OS/VS COBOL programs using an SQLDA require an
assembler
subroutine to manage address variables (pointers) and storage allocation.
In VS COBOL II, non-SELECT statements and fixed-list SELECT
statements are
easier to process, but the varying-list SELECT statements can be
processed
as well.
The steps for VS COBOL II are the same as for assembler and PL/I, but
some
of the coding techniques differ. The same SQL statements are used:
PREPARE, DESCRIBE, OPEN, CLOSE, and FETCH.
     *****************************************************
     * SQL DESCRIPTOR AREA IN VS COBOL II
     *****************************************************
     01  SQLDA.
         02  SQLDAID  PIC X(8)  VALUE 'SQLDA   '.
         02  SQLDABC  PIC S9(8) COMPUTATIONAL VALUE 13216.
         02  SQLN     PIC S9(4) COMPUTATIONAL VALUE 300.
         02  SQLD     PIC S9(4) COMPUTATIONAL VALUE 0.
         02  SQLVAR   OCCURS 1 TO 300 TIMES
                      DEPENDING ON SQLN.
             03  SQLTYPE PIC S9(4) COMPUTATIONAL.
             03  SQLLEN  PIC S9(4) COMPUTATIONAL.
             03  SQLDATA POINTER.
             03  SQLIND  POINTER.
             03  SQLNAME.
                 49  SQLNAMEL PIC S9(4) COMPUTATIONAL.
                 49  SQLNAMEC PIC X(30).
PROGRAM ONE...
WORKING-STORAGE SECTION.
01 WORKAREA-IND.
02 WORKIND PIC S9(4) COMP OCCURS 300 TIMES.
01 RECWORK.
02 RECWORK-LEN PIC S9(8) COMP VALUE 32700.
02 RECWORK-CHAR PIC X(1) OCCURS 32700 TIMES.
PROCEDURE DIVISION.
PROGRAM TWO...
WORKING-STORAGE SECTION.
01  RECPTR  POINTER.
01  RECNUM  REDEFINES RECPTR  PICTURE S9(8) COMPUTATIONAL.
01  IRECPTR POINTER.
01  IRECNUM REDEFINES IRECPTR PICTURE S9(8) COMPUTATIONAL.
01 WORKAREA2.
02 WORKINDPTR POINTER OCCURS 300 TIMES.
LINKAGE SECTION.
01 LINKAREA-IND.
02 IND PIC S9(4) COMP OCCURS 300 TIMES.
01 LINKAREA-REC.
02 REC1-LEN PIC S9(8) COMP.
02 REC1-CHAR PIC X(1) OCCURS 1 TO 32700 TIMES
DEPENDING ON REC1-LEN.
01 LINKAREA-QMARK.
02 INDREC PIC X(1).
This chapter helps you use the results from EXPLAIN to improve the
data-access performance of your SQL statements. The access paths
shown
for the example queries in this chapter are intended only to illustrate
those examples. If you try running the queries in this chapter, it is
possible that the chosen access paths could be different.
EXPLAIN informs you about the access paths DB2 chooses. Use
EXPLAIN to:
For each single table access, EXPLAIN tells you if an index access or
table space scan is used.
You can also use EXPLAIN to determine if DB2 plans to use multiple
concurrent I/O streams to access your data.
Although you can use EXPLAIN for SELECT, UPDATE, and DELETE
statements,
the primary use of EXPLAIN is to observe the access paths for the
SELECT
parts of your statements. You can also use EXPLAIN on UPDATE and
DELETE
WHERE CURRENT OF CURSOR statements, and INSERT statements,
but you only
receive limited information in your PLAN_TABLE output.
[Figure 63. Format of PLAN_TABLE]
Before you can use EXPLAIN, you must create a table called PLAN_TABLE
to hold the results of EXPLAIN. A copy of the statements needed to
create the table is in the DB2 sample library, under the member name
DSNTESC.
The format of PLAN_TABLE is shown in Figure 63 in topic 5.2.1.
Figure 64
shows the content of each column of PLAN_TABLE.
When you bind a package with EXPLAIN(YES), you can use the 25-column,
28-column, 30-column, or 34-column format; the 34-column format is
recommended because it provides the most information. If you alter an
existing PLAN_TABLE to add new columns, specify the columns as NOT NULL
WITH DEFAULT, so that default values are included for the rows already
in the table. However, the columns ACCESS_DEGREE, ACCESS_PGROUP_ID,
JOIN_DEGREE, and JOIN_PGROUP_ID allow nulls. Do not specify those four
columns as NOT NULL WITH DEFAULT.
APPLNAME The name of the application plan for the row. Applies
only
to embedded EXPLAIN statements executed from a plan or to
statements explained when binding a plan. Blank if not
applicable.
CREATOR The creator of the new table accessed in this step; blank
if METHOD is 3.
TNAME The name of the new table (or materialized view) that is
accessed in this step; blank if METHOD is 3.
TSLOCKMODE The lock mode of the table space that contains the new
           table if the table space is not segmented or if the
           LOCKSIZE is TABLESPACE. Otherwise, the lock mode of the
           table. IS=intent share; IX=intent exclusive; S=share;
           U=update; X=exclusive; SIX=share with intent exclusive.
REMARKS A field into which you can insert any character string of
254 or fewer characters.
Regrouping the rows in PLAN_TABLE can help you understand how the
access path executes. A query like the following improves the row
order:

   SELECT * FROM PLAN_TABLE
     ORDER BY QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;

All rows with the same non-zero value for QUERYNO relate to the same
SQL statement.
All rows with the same value for QBLOCKNO and the same value for
QUERYNO
relate to a step within the query. QBLOCKNOs are not necessarily
executed
in the order shown in PLAN_TABLE. But within a QBLOCKNO, the
PLANNO
column gives the substeps in the order they execute.
For each substep, the TNAME column identifies the table accessed. Sorts
can be shown as part of a table access or as a separate step.
[Figure: join methods. (Method 1) the composite table T4 and the new
table T3 feed a nested loop join. (Method 2) the sorted composite work
file and the new table T1 feed a merge scan join, producing the
result.]
METHOD  TNAME  TABNO  ACCESS-  MATCH-  ACCESS-  INDEX-  TSLOCK-
                      TYPE     COLS    NAME     ONLY    MODE
------  -----  -----  -------  ------  -------  ------  -------
   0     T4      1       I        1     T4X1      N       IS
   1     T3      4       I        1     T3X1      N       IS
   2     T1      3       I        0     T1X1      Y       S
   2     T5      5       R        0                N       S
   1     T2      2       I        1     T2X1      N       IS
   3             0                0                N

SORTN  SORTN  SORTN    SORTN    SORTC  SORTC  SORTC    SORTC
UNIQ   JOIN   ORDERBY  GROUPBY  UNIQ   JOIN   ORDERBY  GROUPBY
-----  -----  -------  -------  -----  -----  -------  -------
  N      N      N        N        N      N      N        N
  N      N      N        N        N      N      N        N
  N      Y      N        N        N      Y      N        N
  N      Y      N        N        N      Y      N        N
  N      N      N        N        N      N      N        N
  N      N      N        N        N      N      Y        N

Figure 65. Join Methods as Displayed in PLAN_TABLE
Using the literal '10' would likely produce a different filter factor and
maybe a different access path from the original static SQL. (A filter
factor is the proportion of rows that remain after a predicate has
"filtered out" the rows that do not satisfy it. For more information on
filter factors, see "Predicate Filter Factor" in topic 5.2.7.2.) The
parameter marker behaves just like a host variable, in that the predicate
is assigned a default filter factor.
Even when using parameter markers, you could see different access paths
for your static and dynamic queries. With parameter markers, DB2
assumes that the host variable value has the same length and precision
as the column, so there are no additional implications for whether a
predicate is indexable or stage 1. However, if a host variable
definition does not match the column definition, then the predicate may
become a stage 2 predicate and, hence, non-indexable.
The following are examples of when the host variable definition does not
match the column definition:
* The length of the host variable is greater than the column's length.
* The data type of the host variable is not compatible with the column.
For example, a host variable with a data type of DECIMAL is not
compatible with a column that has a data type of SMALLINT. A host
variable with a data type of SMALLINT is compatible with a column
that has a data type of INT or DECIMAL.
5.2.2 Understanding How Your SQL Queries Are Accessing Your Tables
The first step is to retrieve all the qualifying row identifiers (RIDs)
where C1 = 1, using index IX1. The next step is to retrieve all the
qualifying RIDs where C2 = 1, using index IX2. The intersection of
these lists is the final set of RIDs, which is used to access the data
pages needed to retrieve the qualified rows.
SELECT * FROM T
  WHERE C1 = 1 AND C2 = 1;

TNAME  ACCESS-  MATCH-  ACCESS-  INDEX-  PREFETCH  MIXOP-
       TYPE     COLS    NAME     ONLY              SEQ
-----  -------  ------  -------  ------  --------  ------
  T       M        0               N        L        0
  T       MX       1      IX1      Y                 1
  T       MX       1      IX2      Y                 2
  T       MI       0               N                 3

Figure 66. PLAN_TABLE Output for Example with Intersection Operator
The same index can be used more than once in a multiple index access,
because more than one predicate could be matching. The following
example
illustrates this:
SELECT * FROM T
  WHERE C1 BETWEEN 100 AND 199 OR
        C1 BETWEEN 500 AND 599;

TNAME  ACCESS-  MATCH-  ACCESS-  INDEX-  PREFETCH  MIXOP-
       TYPE     COLS    NAME     ONLY              SEQ
-----  -------  ------  -------  ------  --------  ------
  T       M        0               N        L        0
  T       MX       1      IX1      Y                 1
  T       MX       1      IX1      Y                 2
  T       MU       0               N                 3

Figure 67. PLAN_TABLE Output for Example with Union Operator
The first step is to retrieve all the qualifying RIDs (C1 is between 100
and 199) using index IX1. The next step is to retrieve all the qualifying
RIDs where C1 is between 500 and 599. This retrieval uses the same
index.
The union of these lists gives the final set of RIDs. The final RID list
is used to retrieve the qualified rows.
The index XEMP5 is the chosen access path for this query, with
MATCHCOLS = 3. There are two equal predicates on the first two columns
and a range predicate on the third column. Though there are four
columns in the index, only three of them can be considered matching
columns.
If all the columns needed for the SQL statement can be found in the
index, then the INDEXONLY (index-only access) column shows the value Y,
and there is no need to access the table. There is a restriction on
index-only access: if one or more of the columns needed by the query is
defined as VARCHAR, then the table itself must be accessed. You do not
see the index-only access path for queries involving varying-length
character data.
Any step that uses list prefetch (described under "What Kind of
Prefetching Will Be Done?" in topic 5.2.2.6) is also not index only,
because list prefetch always involves reading some data pages. For the
MX
steps, INDEXONLY is Y, because the data pages are not actually
accessed
until all of the intersection (MI) or union (MU) steps take place.
If DB2 uses sequential prefetch and the table is partitioned, your queries
could use parallel I/O streams to fetch the data. The ACCESS_DEGREE
and
JOIN_DEGREE columns of the PLAN_TABLE can tell you if DB2
planned to use
parallel I/O streams to access data for a query.
When the column functions (SUM, AVG, MAX, MIN, COUNT) are evaluated
depends on the access path chosen for the SQL statement.
The following sections describe all the different access paths you could
see in PLAN_TABLE. They also discuss some of the statistics described
in
this section, and how these statistics can influence the different access
path choices.
ACCESSTYPE=R; PREFETCH=S
Table space scan is most often used for one of the following reasons:
* The indexes that have matching predicates have low cluster ratios and
are therefore efficient only for small amounts of data.
The principal statistics used for calculating the cost of a table space
scan are NACTIVE from SYSTABLESPACE and NPAGES from
SYSTABLES.
For table space scans of segmented table spaces, DB2 might not actually
perform sequential prefetch at execution time if it can be determined that
fewer than four data pages need to be accessed. For guidance on
monitoring sequential prefetch, see "Sequential Prefetch and Query
Performance" in topic 5.2.4.
When data has been compressed by DB2, the extra processing needed to
decompress it makes a table space scan a less likely choice for access
path.
MATCHCOLS>0
In the general case, the rules for determining the number of matching
columns are simple, although there are a few exceptions to the rule.
* Look at the index columns from leading to trailing. For each index
column, if there is at least one indexable Boolean term predicate on
that column, it is a match column. (See "Properties of Predicates" in
topic 5.2.7.1 for a definition of Boolean term.)
SELECT * FROM T
WHERE C1=1 AND C2>1
AND C3=1;
There are two matching columns in this example. The first one comes from
the predicate C1=1, and the second one comes from C2>1. The range
predicate on C2 prevents C3 from becoming a matching column.
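As an informal sketch of this rule (a simplified model, not DB2's actual algorithm; the predicate classification here is illustrative), the matching-column count can be modeled as a walk over the index columns:

```python
def matching_columns(index_columns, predicates):
    """Count MATCHCOLS by walking index columns from leading to trailing.

    `predicates` maps a column name to its predicate type ('equal',
    'in', or 'range').  A column with no indexable Boolean term
    predicate stops the matching; a range predicate is itself the
    last matching column.
    """
    matchcols = 0
    for col in index_columns:
        pred = predicates.get(col)
        if pred is None:
            break                      # gap in the leading columns: stop
        matchcols += 1
        if pred == "range":
            break                      # a range predicate ends the match
    return matchcols

# The example from the text: index on (C1, C2, C3) with
# WHERE C1 = 1 AND C2 > 1 AND C3 = 1 gives MATCHCOLS = 2.
```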
In index screening, predicates are specified on index key columns but are
not part of the matching columns. These index screening predicates
improve the index access by reducing the rows that qualify while
searching
the index. For example, with an index on T(C1,C2,C3,C4):
SELECT * FROM T
WHERE C1 = 1
AND C3 > 0 AND C4 = 2
AND C5 = 8;
C3>0 and C4=2 are index screening predicates. They can be applied on the
index, but they are not matching predicates. C5=8 is not an index
screening predicate, and it must be evaluated when data is retrieved. The
MATCHCOLS value in PLAN_TABLE would be 1.
Index screening is not performed when list prefetch is used or during the
MX phases of multiple index access. EXPLAIN does not directly tell
when
index screening is being used. However, you can tell from the query, the
indexes used, and the MATCHCOLS value in PLAN_TABLE.
ACCESSTYPE=I; MATCHCOLS=0
Index screening provides filtering so that not all of the data pages
are accessed.
ACCESSTYPE=N
You can regard the IN-list index scan as a series of matching index scans
with the values in the IN predicate being used for each matching index
scan. The following example has an index on (C1,C2,C3,C4) and might
use
an IN-list index scan:
SELECT * FROM T
WHERE C1=1 AND C2 IN (1,2,3)
AND C3>0 AND C4<100;
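The IN-list index scan can be pictured as one matching scan per IN-list value. A minimal sketch (illustrative names, not a DB2 interface):

```python
def in_list_scans(equal_preds, in_col, in_values):
    """Model ACCESSTYPE=N: one matching index scan per IN-list value,
    each combining the equal predicates with one value from the list."""
    return [dict(equal_preds, **{in_col: v}) for v in in_values]

# WHERE C1 = 1 AND C2 IN (1, 2, 3) behaves like three matching scans,
# one for each of C2 = 1, C2 = 2, and C2 = 3.
scans = in_list_scans({"C1": 1}, "C2", [1, 2, 3])
```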
Multiple index access uses more than one index to access a table.
Multiple index access is a good access path when:
DB2 chooses a multiple index access path for the following query:
Value Meaning
M Start of multiple index access processing
MX Indexes are to be scanned for later union or intersection
MI An intersection (AND) is performed
MU A union (OR) is performed
The following steps relate to the previous query and the PLAN_TABLE
output
shown in Table 31:
2. Index EMPX1, with matching predicate AGE = 40, also provides a set
of candidates for the result of the query. This is MIXOPSEQ 2.
3. Index EMPX2, with matching predicate JOB = 'MANAGER', also provides
a set of candidates for the result of the query. This is MIXOPSEQ 3.
-------------------------------------------------------------------------
Table 31. PLAN_TABLE Output for Example Query that Uses Multiple
          Indexes. Depending on the filter factors of the predicates,
          the multiple index access steps can appear in a different
          order.
------------------------------------------------------------------------
PLAN-       TNAME       ACCESS-   MATCH-   ACCESS-   PREFETCH   MIXOP-
NO                      TYPE      COLS     NAME                 SEQ
-----------+-----------+---------+--------+---------+----------+--------
1           EMP         M         0                  L          0
-----------+-----------+---------+--------+---------+----------+--------
1           EMP         MX        1        EMPX1                1
-----------+-----------+---------+--------+---------+----------+--------
1           EMP         MX        1        EMPX1                2
-----------+-----------+---------+--------+---------+----------+--------
1           EMP         MX        1        EMPX2                3
-----------+-----------+---------+--------+---------+----------+--------
1           EMP         MI        0                             4
-----------+-----------+---------+--------+---------+----------+--------
1           EMP         MU        0                             5
-------------------------------------------------------------------
In this example, the steps in the multiple index access follow the
physical sequence of the predicates in the query. This is not always the
case. The multiple index steps are arranged in an order that uses RID
pool storage most efficiently and for the least amount of time.
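The MX/MI/MU plumbing can be modeled with set operations on RID lists. The RID values below are invented for illustration; only the step semantics follow the text:

```python
def multiple_index_access(rid_lists, ops):
    """Model the MX/MI/MU steps: each MX step pushes a RID list onto a
    stack, MI intersects (AND) and MU unions (OR) the two most recent
    lists.  The final list identifies the data rows to fetch."""
    stack = []
    for step, name in ops:
        if step == "MX":
            stack.append(set(rid_lists[name]))
        elif step == "MI":
            b, a = stack.pop(), stack.pop()
            stack.append(a & b)        # intersection (AND)
        elif step == "MU":
            b, a = stack.pop(), stack.pop()
            stack.append(a | b)        # union (OR)
    return sorted(stack.pop())

# Hypothetical RID lists for the three MX steps of the example:
rids = {"mx1": [1, 2, 3, 9], "mx2": [2, 3, 5, 9], "mx3": [3, 4, 9]}
plan = [("MX", "mx1"), ("MX", "mx2"), ("MX", "mx3"),
        ("MI", None),                  # intersect the mx2 and mx3 lists
        ("MU", None)]                  # union the result with mx1
```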
ACCESSTYPE=I1
Queries Using One-fetch Index Access: The following queries use a
one-fetch index scan with an index existing on T(C1,C2 DESC,C3):
INDEXONLY=Y
With index-only access, the access path does not require any data pages
because the access information is available in the index. Conversely,
when an SQL statement requests a column that is not in the index, updates
any column in the table, or deletes a row, DB2 has to access the
associated data pages. Because the index is almost always smaller than
the table itself, an index-only access path usually processes the data
efficiently.
Any access to VARCHAR data cannot be index only, because the actual
length
of the VARCHAR column is not stored in the index.
To use a matching index scan to update an index whose key columns are
being updated, the following conditions must be met:
With list prefetch or multiple index access, any index or indexes can be
used in an UPDATE operation. Of course, these access paths must
provide
efficient access to the data in order to be chosen.
Statistics from the DB2 catalog help determine the most economical
access
path. The statistics, listed below, used to determine index access cost
are found in the corresponding columns of the SYSIBM.SYSINDEXES
catalog
table. Monitoring these statistics can help you determine if your indexes
are working effectively.
[Figure: A clustered and a nonclustered index scan. Each diagram shows
an index with a root page, intermediate pages, and leaf pages above the
data pages of the table space. In the clustered index scan, the leaf
entries point to rows stored in the same order as the index keys, so
consecutive entries reference the same or adjacent data pages. In the
nonclustered index scan, the leaf entries point to rows scattered
across the data pages.]
* CLUSTERRATIO only affects the cost of data access and not the
actual
index access.
When the number of matching columns is greater than 1 and less than the
number of index key columns, the filtering of the index is located between
1/FIRSTKEYCARD and 1/FULLKEYCARD.
NLEAF: This is the number of active leaf pages in the index. NLEAF is
a
portion of the cost to traverse the index. The smaller the number is, the
less the cost. It is also less when the filtering of the index is high,
which comes from FIRSTKEYCARD, FULLKEYCARD, and other
indexable
predicates.
Although indexes provide selective access to data, they can also order
data, eliminating the need for a sort. If the index provides ordering for
ORDER BY, GROUP BY, join, or DISTINCT in a column function,
some sorts can
be avoided.
When list sequential prefetch is used, the index does not provide useful
ordering, and a sort of the data might take place to satisfy desired data
ordering.
Not all sorts are inefficient. For example, if the index that provides
ordering is not an efficient one and many rows qualify, retrieving the
data through another access path and then sorting it could be more
efficient than using the inefficient ordering index.
PREFETCH=S
1. On the inner table of a nested loop join when the data is being
accessed sequentially. This is the most common use of sequential
detection.
Sequential detection does not occur on data pages for UPDATE and
DELETE statements when cursor control is not used and the selection
criteria are based solely on indexed columns. For INSERT statements,
sequential detection can only occur on index leaf pages.
3. When DB2 does not turn on sequential prefetch at bind time due to an
inaccurate estimate of the number of pages to be accessed.
Sequential detection does not occur for operations that have referential
constraints, except when you run a utility against an object that has
referential constraints.
Page access patterns are tracked when the application scans DB2 data
through an index. Tracking is done to detect situations where the access
pattern that develops is sequential or nearly sequential.
* Let A be the page being requested. RUN1 is defined as the page range
of length P/2 pages starting at A.
For example, assuming page A is 10 and P is 32 pages, DB2 calculates
the following page ranges:

          A               B               C
          Run1            Run2            Run3
Page #    10              26              42
Length    16 (P/2)        16 (P/2)        32 (P)
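A sketch of the run calculation, assuming the rule above that RUN1 and RUN2 are each P/2 pages long and RUN3 is P pages long (illustrative code, not DB2's implementation):

```python
def detection_runs(page_a, p):
    """Return the (first, last) page of RUN1, RUN2, and RUN3 for a
    request of page A with prefetch quantity P: RUN1 and RUN2 are
    each P/2 pages long, and RUN3 is P pages long."""
    run1 = (page_a, page_a + p // 2 - 1)
    run2 = (run1[1] + 1, run1[1] + p // 2)
    run3 = (run2[1] + 1, run2[1] + p)
    return run1, run2, run3

# Page A = 10 and P = 32 gives runs starting at pages 10, 26, and 42,
# matching the figure above.
```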
For subsequent page requests, if the page is page sequential and data
access sequential is still in effect, prefetch is requested as follows:
# When DB2 plans to use sequential prefetch to access data from a table in
a
partitioned table space, it could initiate multiple parallel I/O
operations to access the table. For queries whose response time is
dominated by I/O operations, the response time can be significantly
reduced.
If these steps are taken, when DB2 estimates that a given query's I/O
processing cost is much higher than its processor cost, it will activate
multiple parallel I/O streams. Queries that can take advantage of
parallel I/O processing are those in which DB2 spends most of the time
fetching pages and very little time actually processing rows. Those
include:
* Queries with intensive data scans and high selectivity. These queries
involve large volumes of data to be scanned but relatively few rows
that meet the search criteria.
* Queries accessing long data rows. These queries access tables with
long data rows, and the ratio of rows per page is very low (one row
per page, for example).
To understand how DB2 uses parallel I/O operations, and how the contents
of the PLAN_TABLE columns relate to these I/O operations, consider the
following examples. The columns mentioned in these examples are
described
in Figure 64 in topic 5.2.1.1.
All steps with the same value for ACCESS_PGROUP_ID or
JOIN_PGROUP_ID
indicate that a set of operations are in the same parallel group.
Usually, the set of operations involves various types of join methods and
sort operations. For a complete description of join methods, see "Join
Methods" in topic 5.2.8.
Assume that DB2 decides at bind time to initiate three concurrent I/O
requests to retrieve data from table T1. Part of the PLAN_TABLE
appears below. If DB2 decides not to use parallel I/O operations for
# a step, ACCESS_DEGREE and ACCESS_PGROUP_ID contain null
values.
-------------------------------------------------------------------------
TNAME                METHOD   ACCESS_   ACCESS_     JOIN_     JOIN_
                              DEGREE    PGROUP_ID   DEGREE    PGROUP_ID
-------------------+--------+---------+-----------+---------+-----------
# T1                 0        3         1           (null)    (null)
--------------------------------------------------------------------
-------------------------------------------------------------------------
TNAME                METHOD   ACCESS_   ACCESS_     JOIN_     JOIN_
                              DEGREE    PGROUP_ID   DEGREE    PGROUP_ID
-------------------+--------+---------+-----------+---------+-----------
# T1                 0        3         1           (null)    (null)
-------------------+--------+---------+-----------+---------+-----------
T2                   1        3         1           3         1
-------------------+--------+---------+-----------+---------+-----------
T3                   1        3         1           3         1
--------------------------------------------------------------------
Consider a query that causes a merge scan join between two tables, T1
and T2. DB2 decides at bind time to initiate three concurrent I/O
requests for T1 and six concurrent I/O requests for T2. Because the
retrieval of T1 is used as the input for sort, it is in a parallel
group by itself; the same is true for T2. This is because the sort
process terminates a parallel group. Furthermore, the merging phase
can potentially initiate three concurrent I/O requests on each
temporary sorted table; the merging phase is in a different parallel
group from the table retrievals. Part of the PLAN_TABLE appears as
follows:
-------------------------------------------------------------------------
TNAME                METHOD   ACCESS_   ACCESS_     JOIN_     JOIN_
                              DEGREE    PGROUP_ID   DEGREE    PGROUP_ID
-------------------+--------+---------+-----------+---------+-----------
# T1                 0        3         1           (null)    (null)
-------------------+--------+---------+-----------+---------+-----------
T2                   2        6         2           3         3
--------------------------------------------------------------------
-------------------------------------------------------------------------
TNAME                METHOD   ACCESS_   ACCESS_     JOIN_     JOIN_
                              DEGREE    PGROUP_ID   DEGREE    PGROUP_ID
-------------------+--------+---------+-----------+---------+-----------
T1                   0        3         1
-------------------+--------+---------+-----------+---------+-----------
T2                   4        6         2           6         2
--------------------------------------------------------------------
At query execution time, the following factors influence the actual number
of parallel I/O streams used to access the data:
Specify FOR FETCH ONLY for read-only cursors. For cursors that can
be
updated or deleted, queries might not use parallel I/O streams to
access the data.
To determine the actual number of parallel I/O streams used after buffer
negotiation, refer to field QW0221AD in IFCID 0221, as mapped by
macro
DSNDQW03. IFCID 0222, which is mapped by macro DSNDQW03,
contains the
elapsed time information for each parallel group in each SQL query.
To increase the number of parallel I/O streams that DB2 uses to access
data, check the following factors:
Put data partitions on separate physical devices and enable I/O streams to
operate on separate channel paths to minimize I/O contention.
PREFETCH=L
The list of RIDs is then sorted in the RID pool in ascending page
number order.
The needed data pages are accessed in order using the sorted RID list.
The data pages are not read one at a time. They are read in
quantities equal to the sequential prefetch quantity (usually 32
pages), which depends on buffer pool size.
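The sort-then-batch behavior above can be sketched as follows. The 32-page default is the usual quantity mentioned in the text, and the (page, slot) pair is an invented RID representation:

```python
def list_prefetch_batches(rids, quantity=32):
    """Sort RIDs into ascending page-number order, then group the
    distinct pages into prefetch requests of `quantity` pages each,
    so data pages are read in batches rather than one at a time."""
    pages = sorted({page for page, _slot in rids})
    return [pages[i:i + quantity] for i in range(0, len(pages), quantity)]
```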
Unlike a regular index with no list prefetch, an index scan with list
prefetch does not preserve the data ordering given by the index. Because
the RIDs are sorted in page number order before accessing the data, the
data is not in order by any column after retrieval. If the data needs to
be ordered because of an ORDER BY clause or any other reason, an
additional sort of the data is needed.
In some cases where list sequential prefetch is used for hybrid join, the
page numbers might not be sorted before accessing the data. This occurs
when the index is highly clustered.
All predicates that can normally be matching predicates for an index scan
can be matching predicates for an index scan using list prefetch, except
for IN-list predicates. IN-list predicates cannot be used as matching
predicates when list prefetch is used.
4. List prefetch is always used to access data from the inner table
during a hybrid join.
At bind time, DB2 does not consider list prefetch if the estimated number
of RIDs to be processed would take more than 50% of the RID pool. RID
storage is virtual storage acquired from the RID storage pool at the time
the query is executed. You can modify the size of the RID pool by
updating the RID POOL SIZE field on installation panel DSNTIPC. The
maximum size of a RID pool is 1000MB. The size of a single RID list is
limited to approximately 16 million RIDs. For information on calculating
RID pool size, see Section 7 (Volume 3) of Administration Guide.
* If the access path step is the index access portion of multiple index
ANDing, that step terminates and processing continues with the next
multiple index access step. If there is no remaining step, and no RID
list has been accumulated, DB2 falls back to a table space scan.
During multiple index ANDing, if any RID list contains 32 or fewer RIDs,
further intersection of RID lists stops, and that RID list is used to
access the data.
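The early-out rule for RID-list ANDing can be sketched like this (a simplified model using Python sets; DB2's RID pool mechanics are not modeled, and the 32-RID threshold is the value stated in the text):

```python
def and_rid_lists(rid_lists, threshold=32):
    """Intersect RID lists (multiple index ANDing), stopping early:
    once any input or intermediate list holds `threshold` or fewer
    RIDs, that list is used directly to access the data."""
    result = None
    for rids in rid_lists:
        rids = set(rids)
        if len(rids) <= threshold:
            return sorted(rids)        # small input list: use it directly
        result = rids if result is None else result & rids
        if len(result) <= threshold:
            return sorted(result)      # small intersection: stop ANDing
    return sorted(result)
```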
SELECT * FROM T1
WHERE C1 = 10 AND
C2 BETWEEN 10 AND 20 AND
C3 NOT LIKE 'A%'
Simple or Compound
Any predicate with an 'AND' or 'OR' is a compound predicate. Others
are simple predicates.
Operational type
A simple predicate is described by the type of operator it uses:
subquery, equal, in-list, range, or not.
Local or Join
A predicate that references more than one table or correlated
reference is a join predicate. Others are local predicates.
Indexable or Non-indexable
Indexable predicates can be applied to index entries; others cannot.
Stage 1 or Stage 2
Stage 1 predicates are evaluated at the time the data rows are
retrieved. Stage 2 predicates are evaluated after data retrieval.
Boolean Term
Any predicate that is not contained by a compound OR predicate
structure is a Boolean term. If a Boolean term is evaluated false
for a particular row, the whole WHERE clause is evaluated false for
that row.
Table 32 in topic 5.2.7.2 lists all DB2 simple predicates and tells
whether each is indexable or stage 1.
Type Description
Equal Any predicate that is not a subquery predicate and has an equal
operator and no NOT operator. Also included are predicates of
the form C1 IS NULL. Example: C1 = 100
The following two examples show how the predicate type can influence
DB2's
choice of an access path. In each one, assume that a unique index I1 (C1)
exists on table T1 (C1, C2), and that all values of C1 are positive
integers.
The query,
has a range predicate. However, the predicate does not filter any rows of
T1. Therefore, it could be determined during bind that a table space scan
is more efficient than the index scan.
The query,
has an equal predicate. DB2 chooses the index access in this case,
because only one scan is needed to return the result.
Join predicates involve more than one table or correlated reference. They
can be used to perform the join at execution time.
Some DB2 predicate types can match index entries; others cannot. In
other
words, DB2 can apply some predicates to the index entries, depending on
the predicate type. Such predicates are called indexable. Indexable
predicates might not become matching predicates of an index; it depends
on
the indexes that are available and the access path chosen at bind time.
Evaluating rows at the first stage leaves fewer predicates to evaluate
at the second stage. For example, predicates containing arithmetic operations
and scalar functions are stage 2. Because stage 1 predicates eliminate
rows that would otherwise be passed on to stage 2, there is a definite
performance advantage in using stage 1 predicates whenever possible.
Because indexable predicates are evaluated at the index page, they are
also stage 1. The reverse is not necessarily true. For example, the
predicate C1 LIKE '%BC' is not indexable but can still be evaluated at
stage 1.
# For example, in the predicate P1 AND (P2 OR P3):
* P1 is a simple BT predicate.
* P2 and P3 are simple non-BT predicates.
* P2 OR P3 is a compound BT predicate.
* P1 AND (P2 OR P3) is a compound BT predicate.
Boolean term join predicates are more efficient than non-Boolean term
join predicates because they allow early rejection of a given row.
In single index processing, only Boolean term predicates are chosen as
matching predicates. This means that only indexable Boolean term
predicates are candidates for matching index scans. In order to take advantage of
index columns being matched by predicates that are not Boolean terms,
DB2
considers multiple index access.
Both predicates are stage 1, but only the first matches the index. A
matching index scan could be used with C1 as a matching column.
Both predicates are stage 1 but not Boolean terms. The compound is
not
indexable and cannot be considered for matching index access.
(However, it is indexable through index ORing).
Both predicates are stage 2 and not indexable. The index is not
considered for matching index access, and both predicates are
evaluated at stage 2.
The first two predicates only are stage 1 and indexable. The index is
considered for matching index access, and all rows satisfying those
two predicates are passed to stage 2 to evaluate the third predicate.
3. When possible, try to write queries that evaluate the most restrictive
predicates first. In other words, when predicates with a high
filtering factor are processed first, unnecessary rows are screened as
early as possible, which can reduce processing cost at a later stage.
However, a predicate's restrictiveness is only effective among
predicates within a predicate type class and predicate evaluation
stage.
The second set of rules describes the order of predicate evaluation within
each of the above stages:
Then, after both sets of rules are applied, the predicates are evaluated
in the order in which they appear in the query.
The filter factor of a predicate is a number between zero and one that
estimates the proportion of rows that the predicate rejects and the
proportion that it qualifies. Filter factors strongly affect the
estimated costs of access paths because they estimate the percentage of
rows returned after a set of predicates is evaluated.
If P1 is a predicate and FF1 is its filter factor, then the filter factor
of the predicate NOT P1 is (1 - FF1).
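The NOT rule above, together with the standard independence assumptions for combining filter factors (the AND/OR formulas are a common textbook model, not stated in this text), can be written as:

```python
def ff_not(ff1):
    """Filter factor of NOT P1, given filter factor FF1 for P1."""
    return 1.0 - ff1

def ff_and(ff1, ff2):
    """Filter factor of P1 AND P2, assuming independent predicates."""
    return ff1 * ff2

def ff_or(ff1, ff2):
    """Filter factor of P1 OR P2, assuming independent predicates."""
    return ff1 + ff2 - ff1 * ff2
```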
The most relevant filter factors are those for compound predicates that:
-------------------------------------------------------------------------
Predicate Type Index- Stage Notes
able? 1?
-----------------------------------------+---------+----------+---------
COL = value Y Y
-----------------------------------------+---------+----------+---------
COL IS NULL Y Y
-----------------------------------------+---------+----------+---------
COL op value Y Y
-----------------------------------------+---------+----------+---------
COL BETWEEN value1 Y Y
AND value2
-----------------------------------------+---------+----------+---------
# value BETWEEN COL1 N N
# AND COL2
-----------------------------------------+---------+----------+---------
COL BETWEEN COL N N
AND COL
-----------------------------------------+---------+----------+---------
COL LIKE 'pattern' Y Y 6
-----------------------------------------+---------+----------+---------
COL IN (list) Y Y
-----------------------------------------+---------+----------+---------
COL <> value N Y
-----------------------------------------+---------+----------+---------
COL IS NOT NULL N Y
-----------------------------------------+---------+----------+---------
COL NOT BETWEEN value1 N Y
AND value2
-----------------------------------------+---------+----------+---------
# value NOT BETWEEN N N
# COL1 AND COL2
-----------------------------------------+---------+----------+---------
COL NOT IN (list) N Y
-----------------------------------------+---------+----------+---------
COL NOT LIKE 'char' N Y 6
-----------------------------------------+---------+----------+---------
COL LIKE '%char' N Y 1, 6
-----------------------------------------+---------+----------+---------
COL LIKE '_char' N Y 1, 6
-----------------------------------------+---------+----------+---------
COL LIKE host variable Y Y 2, 6
-----------------------------------------+---------+----------+---------
# T1.COL = T2.COL Y Y 3
-----------------------------------------+---------+----------+---------
T1.COL op T2.COL Y Y 3
-----------------------------------------+---------+----------+---------
T1.COL <> T2.COL N Y 3
-----------------------------------------+---------+----------+---------
# T1.COL1 = T1.COL2 N N 4
-----------------------------------------+---------+----------+---------
T1.COL1 op T1.COL2 N N 4
-----------------------------------------+---------+----------+---------
T1.COL1 <> T1.COL2 N N 4
-----------------------------------------+---------+----------+---------
# COL = (non subq) Y Y
-----------------------------------------+---------+----------+---------
# COL = ANY (non subq) N N
-----------------------------------------+---------+----------+---------
COL = ALL (non subq) N N
-----------------------------------------+---------+----------+---------
# COL op (non subq) Y Y
-----------------------------------------+---------+----------+---------
COL op ANY (non subq) Y Y
-----------------------------------------+---------+----------+---------
COL op ALL (non subq) Y Y
-----------------------------------------+---------+----------+---------
COL <> (non subq) N Y
-----------------------------------------+---------+----------+---------
COL <> ANY (non subq) N N
-----------------------------------------+---------+----------+---------
# COL <> ALL (non subq) N N
-----------------------------------------+---------+----------+---------
# COL IN (non subq) N N
-----------------------------------------+---------+----------+---------
# COL NOT IN (non subq) N N
-----------------------------------------+---------+----------+---------
COL = (cor subq) N N 5
-----------------------------------------+---------+----------+---------
COL = ANY (cor subq) N N
-----------------------------------------+---------+----------+---------
COL = ALL (cor subq) N N
-----------------------------------------+---------+----------+---------
COL op (cor subq) N N 5
-----------------------------------------+---------+----------+---------
COL op ANY (cor subq) N N
-----------------------------------------+---------+----------+---------
COL op ALL (cor subq) N N
-----------------------------------------+---------+----------+---------
COL <> (cor subq) N N 5
-----------------------------------------+---------+----------+---------
COL <> ANY (cor subq) N N
-----------------------------------------+---------+----------+---------
COL <> ALL (cor subq) N N
-----------------------------------------+---------+----------+---------
COL IN (cor subq) N N
-----------------------------------------+---------+----------+---------
COL NOT IN (cor subq) N N
-----------------------------------------+---------+----------+---------
# EXISTS (subq) N N
-----------------------------------------+---------+----------+---------
# NOT EXISTS (subq) N N
-----------------------------------------+---------+----------+---------
# COL = expression N N 7
-----------------------------------------+---------+----------+---------
expression = value N N
-----------------------------------------+---------+----------+---------
expression <> value N N
-----------------------------------------+---------+----------+---------
expression op value N N
-----------------------------------------+---------+----------+---------
expression op (subquery) N N
---------------------------------------------------------------------
Note: 'non subq' means a non-correlated subquery. 'cor subq' means a
correlated subquery.
3. Within each statement, the columns are of the same type. Examples
of different column types include:
4. If both COL1 and COL2 are from the same table, access through an
index on either one is not considered for these predicates.
However, the following query is an exception:
Default statistics
Uniform distribution
Additional distribution
Default Filter Factor for Simple Predicate: Defaults are used when no
statistics exist, because either no RUNSTATS jobs were run and the
COLCARD
entry for that column was not updated, or the COLCARD value of the
column
was updated to '-1'.
-------------------------------------------------------------------------
Table 33. DB2 Default Filter Factors by Predicate Type
------------------------------------------------------------------------
Predicate Type Filter Factor
-------------------------------------------+----------------------------
Col = literal 1/25
-------------------------------------------+----------------------------
Col IS NULL 1/25
-------------------------------------------+----------------------------
Col IN (literal list) number of literals /25
-------------------------------------------+----------------------------
Col Op literal 1/3
-------------------------------------------+----------------------------
Col LIKE literal 1/10
-------------------------------------------+----------------------------
Col BETWEEN literal1 and literal2 1/10
-----------------------------------------------------------------------
Note:
-------------------------------------------------------------------------
Table 34. DB2 Uniform Filter Factors by Predicate Type
------------------------------------------------------------------------
Predicate Type Filter Factor
-------------------------------------------+----------------------------
Col = literal 1/COLCARD
-------------------------------------------+----------------------------
Col IS NULL 1/COLCARD
-------------------------------------------+----------------------------
Col IN (literal list) number of literals /COLCARD
-------------------------------------------+----------------------------
Col Op1 literal interpolation formula
-------------------------------------------+----------------------------
Col Op2 literal interpolation formula
-------------------------------------------+----------------------------
Col LIKE literal interpolation formula
-------------------------------------------+----------------------------
Col BETWEEN literal1 and literal2 interpolation formula
-----------------------------------------------------------------------
Note:
The interpolation formulas are based on the ratio of matching values over
the full range of column values. They are also subject to additional
lower and upper bounds. The following are rough interpolation formulas:
* For the operators < and <=, where the literal is not a host variable:
(Literal value - LOW2KEY value) / (HIGH2KEY value - LOW2KEY value)
* For the operators > and >=, where the literal is not a host variable:
(HIGH2KEY value - Literal value) / (HIGH2KEY value - LOW2KEY value)
DB2 might not interpolate in some cases. In those cases, the default
filter factor or a filter factor based on the COLCARD value of the
predicate column is used as explained below.
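A rough sketch of the interpolation, clamped to the legal filter factor range. The > / >= direction uses the symmetric form of the formula given for < / <=, which is an assumption here, and DB2's additional lower and upper bounds are not modeled:

```python
def interpolation_ff(op, literal, low2key, high2key):
    """Rough interpolation filter factor over the column's value range.

    '<' / '<=' : (literal - LOW2KEY) / (HIGH2KEY - LOW2KEY)
    '>' / '>=' : the symmetric form (an assumption in this sketch)
    """
    if high2key == low2key:
        return 1.0                     # degenerate range: no filtering info
    frac = (literal - low2key) / (high2key - low2key)
    ff = frac if op in ("<", "<=") else 1.0 - frac
    return min(max(ff, 0.0), 1.0)      # clamp to [0, 1]
```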
The interpolation default filter factors for the various operators (<, <=,
>, >=), LIKE, and BETWEEN are shown in Table 35.
-------------------------------------------------------------------------
Table 35. Filter Factor Defaults for the Interpolation Formulas
-------------------------------------------------------------------------
COLCARD                 Factor for Op            Factor for LIKE
                                                 or BETWEEN
-----------------------+------------------------+-----------------------
>=100,000,000           1/10,000                 3/100,000
-----------------------+------------------------+-----------------------
>=10,000,000            1/3,000                  1/10,000
-----------------------+------------------------+-----------------------
>=1,000,000             1/1,000                  3/10,000
-----------------------+------------------------+-----------------------
>=100,000               1/300                    1/1,000
-----------------------+------------------------+-----------------------
>=10,000                1/100                    3/1,000
-----------------------+------------------------+-----------------------
>=1,000                 1/30                     1/100
-----------------------+------------------------+-----------------------
>=100                   1/10                     3/100
-----------------------+------------------------+-----------------------
>=0                     1/3                      1/10
-------------------------------------------------------------------------
Note: Op is one of these operators: <, <=, >, >=.
-------------------------------------------------------------------------
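The interpolation and the default fallback can be sketched as follows. This is an illustrative Python model, not DB2 code; the clamping to 0.0 and 1.0 stands in for the additional lower and upper bounds mentioned in the note.

```python
# Hypothetical sketch of filter factor estimation for a range predicate,
# falling back to a COLCARD-based default (mirroring Table 35) when
# interpolation is not possible.

def default_factor(colcard, like_or_between=False):
    """Default filter factors by COLCARD tier, as in Table 35."""
    tiers = [
        (100_000_000, 1 / 10_000, 3 / 100_000),
        (10_000_000, 1 / 3_000, 1 / 10_000),
        (1_000_000, 1 / 1_000, 3 / 10_000),
        (100_000, 1 / 300, 1 / 1_000),
        (10_000, 1 / 100, 3 / 1_000),
        (1_000, 1 / 30, 1 / 100),
        (100, 1 / 10, 3 / 100),
        (0, 1 / 3, 1 / 10),
    ]
    for bound, op_f, like_f in tiers:
        if colcard >= bound:
            return like_f if like_or_between else op_f
    return 1 / 3

def interpolated_factor(literal, low2key, high2key, colcard):
    """Rough filter factor for 'COL <= literal' over the column's range."""
    if high2key == low2key:          # no usable range: use the default
        return default_factor(colcard)
    ff = (literal - low2key) / (high2key - low2key)
    return min(max(ff, 0.0), 1.0)    # subject to lower and upper bounds

print(interpolated_factor(400, low2key=1, high2key=1000, colcard=50))
```

For a column ranging from 1 to 1000, the predicate COL <= 400 is estimated to qualify roughly 40% of the rows.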
The additional term is evaluated based on the data provided for the
predicate column in the SYSCOLDIST table. It is estimated as the sum of
all the provided percentages of the additional data that would make the
predicate evaluation TRUE. For example, for the predicate C1 IN
('3','5'), with these values in SYSCOLDIST:
   COLVALUE   FREQUENCY
   '3'        0153
   '5'        0859
   '8'        0627
the filter factor is estimated as 0.0153 + 0.0859 = 0.1012.
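The estimate can be illustrated with a small sketch. This is hypothetical Python, assuming FREQUENCY is stored as a percentage scaled by 100 (so 0153 means 1.53%).

```python
# Hypothetical sketch: estimate the filter factor of C1 IN ('3','5') as
# the sum of the SYSCOLDIST frequencies for the matching values.
# Assumption: FREQUENCY is a percentage times 100 (0153 -> 1.53%).

syscoldist = {'3': 153, '5': 859, '8': 627}   # COLVALUE -> FREQUENCY

def in_list_factor(values, freqs):
    # Values absent from SYSCOLDIST contribute nothing to this term.
    return sum(freqs.get(v, 0) for v in values) / 10_000

print(in_list_factor(['3', '5'], syscoldist))   # 0.1012
```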
If an IN-list predicate has only one item in its list, the predicate
becomes an EQUAL predicate.
A set of simple, Boolean term, equal predicates on the same column that
are connected by OR predicates can be converted into an IN-list predicate.
For example: C1=5 OR C1=10 OR C1=15 converts to C1 IN (5,10,15).
When the set of predicates that belong to a query logically imply extra
predicates, DB2 generates additional predicates. The purpose is to
provide more information during access path selection.
1. The query has an equal type predicate: C1 = C2. This could be a join
   predicate or a local predicate.
2. The query has another equal or range type predicate on one of the
   columns in the first predicate: C1 BETWEEN 3 AND 5. This predicate
   cannot be a LIKE predicate and must be a Boolean term predicate.
When these conditions are met, DB2 generates a new predicate, whether or
not it already exists in the WHERE clause. In the above case, the
predicate C2 BETWEEN 3 AND 5 is generated.
Extra join predicates are not generated if more than nine tables are
joined in a query. This cuts down on processor time during the access
path selection process.
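Predicate generation through transitive closure can be sketched as follows. This is an illustrative Python model, not DB2's implementation; the column names and predicate representation are hypothetical.

```python
# Hypothetical sketch of transitive closure: given an equal predicate
# between two columns and a Boolean-term range predicate on one of them,
# derive the corresponding range predicate on the other column.

def transitive_closure(equal_pairs, range_preds):
    """equal_pairs: list of (col_a, col_b) from predicates like C1 = C2.
    range_preds: {column: (low, high)} from BETWEEN-style predicates."""
    derived = {}
    for a, b in equal_pairs:
        for src, dst in ((a, b), (b, a)):
            # Generate the predicate on the other column of the pair.
            if src in range_preds and dst not in range_preds:
                derived[dst] = range_preds[src]
    return derived

# C1 = C2 together with C1 BETWEEN 3 AND 5 yields C2 BETWEEN 3 AND 5.
print(transitive_closure([("C1", "C2")], {"C1": (3, 5)}))
```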
A join is the process of combining rows of one table with rows of another.
In other words, a join is a query in which data is retrieved from more
than one table.
Composite and New Tables: During the first step in joining, only one
table is accessed. In the second step, rows of the second table are
joined to rows of the first; an interim table is produced that contains
all the qualified rows of the join. A third table can be joined to the
interim table to produce another interim table, and so on. The term
"composite" refers to the first table accessed in the first step, and to
the latest interim table in the later steps. It is also called the outer
table. The composite table might not actually be materialized into a work
file before joining the next table.
The new table is also called the inner table. Rows of the new table are
joined to the composite table rows to form a new composite table or the
result table.
To join two or more tables in a single query or subquery, DB2 chooses the
least expensive join method.
During nested loop join, for each row in the outer table that satisfies
the local predicates, DB2 searches for the matching rows of the inner
table, and, if found, concatenates them with the current row of the
composite table. Stage 1 and stage 2 predicates are applied to eliminate
unqualified rows during the join. Either table can be scanned by any of
the available access methods, including table space scan. The row for the
inner table in PLAN_TABLE will show METHOD '1'.
Nested loop join involves repetitive scanning of the inner table. That
is, the outer table is scanned once, while the inner table is scanned as
many times as the number of qualifying rows in the outer table. Hence,
the nested loop join is most often the most efficient join method when the
join column values passed to the inner table are in sequence and the join
column index of the inner table is clustered, or the number of rows
retrieved in the inner table through the index is small.
-------------------------
 SELECT A, B, X, Y
   FROM OUTER, INNER
   WHERE A = 10 AND B = X;
-------------------------

[Figure 70. Nested Loop Join. The figure shows each qualifying row of the
outer table (A = 10) being matched against the inner table on B = X to
build the composite table; diagram not reproduced.]
* The outer table join column and inner table join column are not in the
same sequence.
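The repetitive inner-table scan can be sketched in Python. This is an illustrative model, not DB2's join code; the tables, local predicate, and join predicate follow the query above Figure 70.

```python
# Hypothetical sketch of nested loop join: the outer table is scanned
# once; for each outer row satisfying the local predicate (A = 10), the
# inner table is scanned for matching join-column values (B = X).

def nested_loop_join(outer, inner):
    composite = []
    for a, b in outer:
        if a != 10:               # local predicate A = 10
            continue
        for x, y in inner:        # inner table rescanned per outer row
            if b == x:            # join predicate B = X
                composite.append((a, b, x, y))
    return composite

outer = [(10, 3), (10, 1), (10, 2)]
inner = [(3, 'B'), (2, 'C'), (1, 'D')]
print(nested_loop_join(outer, inner))
```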
Merge scan join is also known as merge join or sort merge join. For
merge scan join, the PLAN_TABLE shows METHOD '2'. For this method, there
must be one or more predicates of the form TABLE1.COL1 = TABLE2.COL2,
where the two columns have the same data type and length attribute. This
method scans both tables in the order of the merge join columns. If there
are no efficient indexes on the merge join columns to provide the
ordering, a sort of the outer table, the inner table, or both could be
required. The inner table must be put into a work file. The outer table
is placed into a work file only if it must be sorted.
When a row of the outer table matches a row of the inner table, DB2
returns the combined rows. Then another row of the inner table is read
that might match the same row of the outer table. DB2 continues reading
rows of the inner table as long as there is a match. When there is no
longer a match, another row of the outer table is read. If that row has
the same value in the join column, DB2 reads again the matching group of
records from the inner table.
If the outer row has a new value in the join column, DB2 searches ahead in
the inner table until it finds either:
* A new matching inner row. In this case, DB2 starts the process again.
* An inner row with a higher value of the join column. DB2 discards the
unmatched row of the outer table and searches ahead for a match in the
outer table.
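The matching and search-ahead behavior can be sketched as follows. This is an illustrative Python model, not DB2's implementation; both inputs are assumed already ordered on the join column.

```python
# Hypothetical sketch of merge scan join: both inputs are sorted on the
# join column. The matching group of inner rows is re-read for each outer
# row with the same join-column value; a higher value searches ahead.

def merge_scan_join(outer, inner):
    """outer, inner: lists of (join_col, payload), sorted on join_col."""
    result, i = [], 0
    for ov, op in outer:
        # Search ahead for the first inner row that could match.
        while i < len(inner) and inner[i][0] < ov:
            i += 1
        j = i
        while j < len(inner) and inner[j][0] == ov:
            result.append((ov, op, inner[j][1]))
            j += 1
        # i stays at the start of the matching group, so an outer row
        # with the same join value re-reads the group.
    return result

outer = [(1, 'a'), (1, 'b'), (2, 'c')]
inner = [(1, 'X'), (1, 'Y'), (3, 'Z')]
print(merge_scan_join(outer, inner))
```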
-------------------------
 SELECT A, B, X, Y
   FROM OUTER, INNER
   WHERE A = 10 AND B = X;
-------------------------

[Figure: Merge Scan Join; diagram not reproduced.]
Merge Join Features: DB2 can use all join predicates during the join
phase. Called a multiple column merge join, this is generally more
efficient than a single column merge join.
# DB2 can derive extra predicates for the inner table at bind time and apply
# them to the sorted outer table to be used at run time. The predicates can
# reduce the size of the work file needed for the inner table.
--------------------------
 SELECT A, B, X, Y
   FROM OUTER, INNER
   WHERE A = 10 AND X = B;
--------------------------

[Figure 72. Hybrid Join (SORTN_JOIN = 'Y'). The figure shows the outer
table joined to the inner table's index to form a phase-1 intermediate
table of outer data and inner RIDs, the sort that produces the RID list
and the phase-2 intermediate table, and the list prefetch of inner rows
that builds the composite table; diagram not reproduced.]
Hybrid join uses list prefetch more efficiently than nested loop join,
especially if there are indexes on the join predicate with low cluster
ratios. It also processes duplicates more efficiently because the inner
table is searched only once for each set of duplicate values in the join
column of the outer table.
Hybrid join requires an index on the join column of the inner table. For
the hybrid join example in Figure 72 in topic 5.2.8.2, both the outer
table (OUTER) and the inner table (INNER) have indexes on the join
columns.
2. The join occurs, creating the phase-one intermediate table. For every
   outer table row, the index of the inner table is scanned to get the
   corresponding RID.
3. The RIDs and the associated data in the outer table are sorted,
   creating a sorted RID list and the phase-two intermediate table. This
   sort is necessary because the inner table RIDs are accessed by a
   non-clustered index. When this sort step occurs, it is reflected by a
   value of Y in the SORTN_JOIN column of PLAN_TABLE. In the other type
   of hybrid join (SORTN_JOIN = 'N'), the inner table RIDs are retrieved
   by a clustered index, and this sort is skipped.
5. The data from the inner table and the phase-two intermediate table is
   concatenated to create the composite table, the ultimate result of the
   hybrid join.
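The steps above can be sketched as follows. This is an illustrative Python model, not DB2's implementation; the outer rows, index, and RID values are hypothetical stand-ins patterned on Figure 72.

```python
# Hypothetical sketch of hybrid join: probe the inner table's index for
# RIDs, sort the (outer row, RID) pairs on RID so inner pages are touched
# in page order (as list prefetch would), then fetch and concatenate.

def hybrid_join(outer, inner_index, inner_rows):
    # Step 2: join outer rows to the inner index, collecting RIDs
    # (the phase-1 intermediate table of outer data and inner RIDs).
    phase1 = [(row, rid)
              for row in outer
              for rid in inner_index.get(row[1], [])]   # join on column B
    # Step 3: sort on RID, producing the RID list and phase-2 table.
    phase2 = sorted(phase1, key=lambda pair: pair[1])
    # Step 5: fetch inner rows by RID and build the composite table.
    return [row + inner_rows[rid] for row, rid in phase2]

outer = [(10, 1), (10, 2), (10, 1)]
inner_index = {1: ['P5'], 2: ['P2', 'P7']}               # join value -> RIDs
inner_rows = {'P2': (2, 'Jones'), 'P5': (1, 'Davis'), 'P7': (2, 'Smith')}
print(hybrid_join(outer, inner_index, inner_rows))
```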
* If the qualifying rows of the inner and outer table are large, and the
join predicate does not provide much filtering; that is, in a
many-to-many join
* If the tables are large and have no indexes with matching columns
* If fewer columns are selected on inner tables. This is the case when
a DB2 sort is used. The fewer the columns to be sorted, the more
efficient the sort. This factor helps favor merge join.
In general, merge join tends to be used for large joins, hybrid join for
medium-sized joins, and nested loop for the smaller joins. This is a very
rough generalization.
A Cartesian join is a form of nested loop join where there are no join
predicates between the two tables being joined. DB2 usually avoids a
Cartesian join, but sometimes it is the most efficient way to join many
tables, as the following example illustrates. Three tables (T1, T2, T3)
are referenced in the query. T1 has 2 rows, T2 has 3 rows, and T3 has 10
million rows:
T3 has a join predicate with T1 and T2. T1 and T2 do not have a join
predicate with each other.
Assume that 5 million rows qualify for the C3 = 5 predicate. It takes
much processing time if T3 is the outer table of the join and tables T1
and T2 are accessed for each of the 5 million rows.
On the other hand, if all rows from T1 and T2 are joined, even though
there is no join predicate, DB2 has to process the 5 million rows only six
times. It is difficult to say which access path is the most efficient.
DB2 evaluates the different options and could decide to access the tables
in the sequence T1, T2, T3. That operation is known as preferred
Cartesian join. You could find the EXPLAIN output surprising because
there is no predicate to join T1 and T2.
SQL allows you to write the same statement in different ways. There are
many situations when you can write a statement as a join, a non-correlated
subquery, or a correlated subquery and still produce the same correct
results. It is important for your query's performance that you choose the
best method.
3. The subquery searches the PROJ table to check whether there is any
   project, where MAJPROJ is 'MA2100', for which the current WORKDEPT is
   responsible.
4. After the subquery is executed, DB2 stores the input value and
subquery result into the in-memory table. The in-memory table is a
wrap-around table and does not guarantee that all possible redundant
subquery executions will be saved.
DB2 repeats this whole process for each qualified row of the EMP table.
When the In-Memory Table Is Used: The in-memory table is used when the
operator of the predicate that contains the subquery is one of the
following operators:
   =  <>  <  <=  >  >=  EXISTS  NOT EXISTS
For reasons of efficiency, it is also used only when there are fewer than
16 correlated columns in the subquery and the length of the sum of the
correlated columns is less than or equal to 256 bytes. If there is a
unique index on a subset of the correlated columns of a table from the
outer query, then the in-memory table is not used because there would be
no duplicate input values to check.
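The caching behavior can be sketched as follows. This is an illustrative Python model; MAX_ENTRIES and the eviction policy are hypothetical stand-ins for DB2's wrap-around table, whose actual limits are the ones stated above.

```python
# Hypothetical sketch of the in-memory table for correlated subqueries:
# a small cache keyed on the correlated column values, so a repeated
# input value does not re-execute the subquery. A bounded dict with
# oldest-first eviction stands in for DB2's wrap-around table.

MAX_ENTRIES = 4   # illustrative bound, not DB2's actual limit

def make_cached_subquery(subquery):
    cache = {}
    def run(correlated_values):
        if correlated_values in cache:
            return cache[correlated_values]     # redundant execution saved
        result = subquery(correlated_values)
        if len(cache) >= MAX_ENTRIES:           # crude wrap-around eviction
            cache.pop(next(iter(cache)))
        cache[correlated_values] = result
        return result
    return run

calls = []
def subquery(key):
    calls.append(key)                 # count actual subquery executions
    return key[0] in ('D11', 'E21')   # EXISTS-style result (hypothetical)

cached = make_cached_subquery(subquery)
for dept in ['D11', 'E21', 'D11', 'D11', 'E21']:
    cached((dept,))
print(len(calls))   # the subquery body ran only twice
```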
SELECT *
FROM DSN8310.EMP
WHERE JOB = 'DESIGNER'
AND WORKDEPT <= (SELECT MAX(DEPTNO)
FROM DSN8310.PROJ);
When the cursor is opened, the subquery executes. If more than one row is
returned, SQLCODE -811 is issued. The predicate that contains the
subquery is treated the same as a simple predicate with a constant
specified, for example, WORKDEPT <= 'value'.
Some comparison operators imply that a subquery can return more than one
row: for example, comparisons quantified with ANY, ALL, or SOME, and the
IN and EXISTS predicates.
When the subquery result is a character data type and the left-hand side
of the predicate is a datetime data type, then the result is placed in a
work file without sorting.
* (DEPTNO)
* (DIVISION, DEPTNO)
* (DEPTNO, DIVISION).
The following three queries all retrieve the same rows. All three
retrieve data about all designers in departments that are responsible for
projects that are part of major project MA2100. These three queries show
that there are several ways to retrieve a desired result.
It could be the case that columns from both the EMP and PROJ tables need
to be returned as output. If so, there is no choice but to use a join.
# When you bind a static SQL statement that contains host variables, DB2
# uses a default filter factor to determine the best access path for the SQL
# statement. For more information on filter factors, including default
# values, see "Predicate Filter Factor" in topic 5.2.7.2.
# DB2 often chooses an access path that performs well for a query with
# several host variables. However, in a new release or after maintenance
# has been applied, DB2 might choose a new access path that does not
# perform as well as the old access path. In most cases, the change in
# access paths is due to the default filter factors, which might lead DB2
# to optimize the query in a different way.
# Because there are only two different values in the SEX column, 'M' and
# 'F', the COLCARD value of SEX is 2. If the distribution of male and
# female employees is not balanced, the default filter factor of 1/2 will be
# larger or smaller than the actual filter factor, depending on whether :HV1
# is set to 'M' or 'F'.
# The access path that DB2 chooses might not be the most efficient one if
# the default filter factors assumed by DB2 are not close to the actual
# filter factors used when the SQL statement is executed in an application.
# You can influence the access path that DB2 chooses. You can tune an SQL
# statement so that the default filter factors that do not agree with the
# application scenario have less impact on access path selection.
# Example 1:
# SELECT * FROM T1
# WHERE C1 BETWEEN :HV1 AND :HV2
# AND C2 BETWEEN :HV3 AND :HV4;
# In this example:
# If DB2 does not choose T1X1, rewrite the query as follows, so that DB2
# does not choose the index T1X2 on T1(C2):
# SELECT * FROM T1
# WHERE C1 BETWEEN :HV1 AND :HV2
# AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
# Example 2:
# SELECT * FROM T1
# WHERE C1 BETWEEN :HV1 AND :HV2
# AND C2 BETWEEN :HV3 AND :HV4;
# In this example:
# * The application provides both close and wide ranges on C1 and C2,
#   which makes it difficult for a single access path based on default
#   filter factors to work well for all the cases. For example, a small
#   range on C1 will favor index T1X1 on T1(C1), a small range on C2 will
#   favor index T1X2 on T1(C2), and wide ranges on C1 and C2 will favor a
#   table space scan.
# If DB2 does not choose the desired index or access path, you have a few
# options:
# Example 3:
# SELECT * FROM T1
# WHERE C1 BETWEEN :HV1 AND :HV2
# ORDER BY C2;
# In this example:
# * DB2 does not choose index access with a separate sort, which would
# perform well. Instead, DB2 chooses index access without a sort, which
# performs poorly.
# * Or, index access without a sort is chosen, though another index with
# sort would provide better performance.
# The contents of the host variables can cause significantly more or fewer
# rows to qualify than the default filter factors predict. If the number
# of qualifying rows is significantly different from the estimate, the
# access path chosen might not be the optimal one.
# Example 4:
# SELECT * FROM T1 A, T1 B, T1 C
# WHERE A.C1 = B.C1
# AND A.C2 = C.C2
# AND A.C2 BETWEEN :HV1 AND :HV2
# AND A.C3 BETWEEN :HV3 AND :HV4
# AND A.C4 < :HV5
# AND B.C2 BETWEEN :HV6 AND :HV7
# AND B.C3 < :HV8
# AND C.C2 < :HV9;
# In this example:
# To circumvent this problem, reduce the size of any tables that need to be
# accessed early, or modify the join predicates. You can do this by either
# adding predicates to reduce the estimated size of a table or making the
# join predicate non-indexable, thereby disfavoring any index on the
# columns of the join.
# SELECT * FROM T1 A, T1 B, T1 C
# WHERE (A.C1 = B.C1 OR 0=1)
# AND A.C2 = C.C2
# AND A.C2 BETWEEN :HV1 AND :HV2
# AND A.C3 BETWEEN :HV3 AND :HV4
# AND A.C4 < :HV5
# AND B.C2 BETWEEN :HV6 AND :HV7
# AND B.C3 < :HV8
# AND C.C2 < :HV9
There are two methods used to satisfy your queries that reference views:
* View merge
* View materialization
In the view merge process, the statement that references the view is
combined with the subselect that defined the view. This combination
creates a logically equivalent statement. This equivalent statement is
executed against the database. Consider the following statements:
The subselect of the view defining statement can be merged with the view
referencing statement to yield the following logically equivalent
statement:
Merged statement:
1. The view's defining subselect is executed against the database and the
results are placed in a temporary table.
-----------------------------------------------------------------------------
Table 36. Cases when DB2 Performs View Materialization. The "X" indicates a
case of materialization.
-----------------------------------------------------------------------------
SELECT from view                     CREATE VIEW uses...
uses...                  GROUP BY   DISTINCT   Column     Column Function
                                               Function   DISTINCT
-------------------------+----------+----------+----------+-----------------
Join                         X          X          X          X
-------------------------+----------+----------+----------+-----------------
GROUP BY                     X          X          X          X
-------------------------+----------+----------+----------+-----------------
DISTINCT                     -          X          -          X
-------------------------+----------+----------+----------+-----------------
Column Function              X          X          X          X
-------------------------+----------+----------+----------+-----------------
Column Function              X          X          X          X
DISTINCT
-------------------------+----------+----------+----------+-----------------
SELECT Subset of             -          X          -          -
View Columns
-----------------------------------------------------------------------------
Another indication that DB2 chose view materialization is that the view
name appears as a TNAME attribute for rows describing the access path for
the view referencing query block. When DB2 chooses view merge, EXPLAIN
data for the merged statement appears in PLAN_TABLE; only the names of the
base tables on which the view is defined appear.
Because view merge is generally the most efficient method for satisfying a
view reference, DB2 materializes a view only if it cannot merge a view
reference. As described above, view materialization is a two-step process
with the first step resulting in the formation of a temporary result. The
smaller the temporary result, the more efficient the second step. To
reduce the size of the temporary result, DB2 attempts to evaluate certain
predicates from the WHERE clause of the view referencing statement at the
first step of the process rather than at the second step. Only certain
types of predicates qualify. First, the predicate must be a simple
Boolean term predicate. Second, it must have one of the forms shown in
Table 37.
---------------------------------------------------------------------------
Table 37. Predicate Candidates for First-step Evaluation
--------------------------------------------------------------------------
Predicate Example
-------------------------------------------+------------------------------
COL op literal V1.C1 > hv1
-------------------------------------------+------------------------------
COL IS (NOT) NULL V1.C1 IS NOT NULL
-------------------------------------------+------------------------------
COL (NOT) BETWEEN literal AND literal       V1.C1 BETWEEN 1 AND 10
-------------------------------------------+------------------------------
COL (NOT) LIKE constant (ESCAPE constant)   V1.C2 LIKE 'p\%%' ESCAPE '\'
-------------------------------------------------------------------------
Note: Where "op" is =, <>, >, >=, <, or <=, and "literal" is either a
host variable, constant, or special register. The literals in the
BETWEEN predicate need not be identical.
---------------------------------------------------------------------------
There are two general types of sorts that DB2 can use when accessing data.
One is a sort of data rows; the other is a sort of row identifiers (RIDs)
in a RID list.
The only reason DB2 sorts the new table is for join processing, which is
indicated by SORTN_JOIN.
Sorts to Remove Duplicates: This type of sort is used for a query with
SELECT DISTINCT, with a set function such as COUNT(DISTINCT COL1), or to
remove duplicates in UNION processing. It is indicated by SORTC_UNIQ in
PLAN_TABLE.
DB2 sorts RIDs into ascending page number order in order to perform list
prefetch. This sort is a very fast sort which is done totally in memory.
A RID sort is usually not indicated in the PLAN_TABLE, but a RID sort
normally is performed whenever list prefetch is used. The only exception
to this rule is when a hybrid join is performed and a single, highly
clustered index is used on the inner table. In this case SORTN_JOIN is
'N', indicating that the RID list for the inner table was not sorted.
In order to see what access paths your production queries will use, you
should consider updating the catalog statistics on your test system to be
the same as your production system.
When you bind applications on the test system with production statistics,
access paths should be similar to what you see when the same query is
bound on your production system. If the access paths from test to
production are different, there are three possible causes:
There are some situations where the application wishes to retrieve only
the first few rows of a result, even though thousands of rows can qualify.
An example of such a situation is where the user wishes to retrieve the
first row that is greater than or equal to a known value. The SQL
statement could be coded as:
When to Use the OPTIMIZE Clause: Use the OPTIMIZE clause only when your
application is not interested in the total answer set; that is, when the
application fetches only the first n qualifying rows. The access path is
not dynamically changed if the application fetches more than n rows. The
path chosen for efficient partial answer set access can be very
inefficient for accessing the total answer set. In general, specifying a
value of n that is larger than the number of rows that the query returns
has no effect on the access path picked.
5.2.15 Tips for Resolving Access Path Problems
1. Make sure your SQL is efficient and your database design is correct.
Performing this step first will resolve most of your access path
problems. Going over the checklist of general tips in this chapter
might help you to find the problem.
2. Code the query with SQL designed to influence access path selection.
If you have a query that is performing poorly, first execute EXPLAIN for
the query to see what access path DB2 is using to perform the query. If
your query uses host variables and you are doing the EXPLAIN from QMF or
SPUFI, use parameter markers (?) where the host variable would be. Or you
can use the EXPLAIN option of bind and match up the statement number in
the DBRM with the QUERYNO column in the PLAN_TABLE. Then go over the
following checklist to see that you have not overlooked some of the
basics.
Make sure the SQL query is coded as simply and efficiently as possible.
Make sure that no unused columns are selected and that an unneeded ORDER
BY or GROUP BY is not present. Consult "Chapter 2-1. Retrieving Data" in
topic 2.1 for help in writing efficient queries.
Make sure all the predicates that you think should be indexable are coded
so that they can be indexable. Refer to Table 32 in topic 5.2.7.2 to see
which predicates are indexable and which are not. Also try to remove any
predicates which are redundant or not needed; they can slow down
performance.
Make sure that the declared length of any host variable used in the query
is not longer than the length of the data column it is compared to. A
longer host variable makes the predicate stage 2, and the predicate cannot
be a matching predicate for an index scan.
# For example, assume that a host variable and an SQL column are defined as
# follows:
#   MYHOSTV  DS  P'123.123'
# This guarantees the same host variable declaration as the SQL column
# definition.
Make sure that the OPTIMIZE clause has been added to the query if it is
needed. For static applications, if you do not intend to fetch all the
data rows before closing the cursor, you can benefit from using OPTIMIZE
FOR n ROWS. If you are using QMF, you could benefit from using the
OPTIMIZE clause if you will not need to view all the data rows. For
SPUFI, consider adding OPTIMIZE FOR n ROWS to your query if you have
specified the output limit to be lower than the number of lines your query
will retrieve. See "Optimize for N Rows" in topic 5.2.14 for more
information on what kind of queries can benefit from this clause.
5.2.15.1.4 Are There any Subqueries in Your Query?
3. If there are multiple subqueries in any parent query, make sure that
the subqueries are ordered in the most efficient manner.
Consider the following illustration. Assume that there are 1000 rows in
MAIN_TABLE.
Assuming that subquery 1 and subquery 2 are the same type of subquery
(either correlated or non-correlated), DB2 evaluates the subquery
predicates in the order they appear in the WHERE clause. Subquery 1
rejects 10% of the total rows, and subquery 2 rejects 80% of the total
rows.
If you are in doubt, run EXPLAIN on the query with both a correlated and a
non-correlated subquery. By examining the EXPLAIN output and
understanding your data distribution and SQL statements, you should be
able to determine which form is more efficient.
# The following is an excerpt from a large single table. Columns CITY and
# STATE are highly correlated, and columns DEPTNO and SEX are entirely
# independent.
# TABLE CREWINFO
# In this simple example, for every value of column CITY that equals
# 'FRESNO', there is only one value of column STATE that matches.
# Query 1
-------------------------------------------------------------------------
# Table 38. Effects of Column Correlation on Matching Columns
-------------------------------------------------------------------------
#                             INDEX 1                  INDEX 2
# ---------------------------+------------------------+------------------
# Matching Predicates         Predicate1:              Predicate2:
#                             CITY='FRESNO' AND        DEPTNO='A345' AND
#                             STATE='CA'               SEX='F'
# ---------------------------+------------------------+------------------
# Matching Columns            2                        2
# ---------------------------+------------------------+------------------
# DB2 estimate (Filter        column=CITY, colcard=4   column=DEPTNO,
# Factor) for matching        Filter Factor=1/4        colcard=4
# columns                     column=STATE, colcard=3  Filter Factor=1/4
#                             Filter Factor=1/3        column=SEX,
#                                                      colcard=2
#                                                      Filter Factor=1/2
# ---------------------------+------------------------+------------------
# Compound Filter Factor      1/4 * 1/3 = 0.08         1/4 * 1/2 = 0.125
# for matching columns
# ---------------------------+------------------------+------------------
# Qualified leaf pages        0.08 * 10 = 0.8,         0.125 * 10 = 1.25
# based on DB2 estimations    approx. 1; INDEX
#                             CHOSEN (0.8 < 1.25)
# ---------------------------+------------------------+------------------
# Actual filter factor        4/10                     2/10
# based on data
# distribution
# ---------------------------+------------------------+------------------
# Actual number of            4/10 x 10 = 4            2/10 x 10 = 2;
# qualified leaf pages                                 BETTER INDEX
# based on compound                                    CHOICE (2 < 4)
# predicate
-------------------------------------------------------------------------
# The problem is that the filtering of columns CITY and STATE should not
# look as good as DB2 estimates it to be. Column STATE does almost no
# filtering. Because columns DEPTNO and SEX do a better job of filtering
# out rows, DB2 should favor Index 2 over Index 1.
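The effect shown in Table 38 can be reproduced with a small sketch. This is illustrative Python, not DB2 code; the rows are hypothetical, chosen to give COLCARD values of 4 (CITY) and 3 (STATE) as in the table.

```python
# Hypothetical sketch of why column correlation misleads the optimizer:
# the compound filter factor multiplies per-column factors as if the
# columns were independent, but on correlated columns the fraction of
# rows that actually qualify is much larger.

rows = ([('FRESNO', 'CA')] * 4 + [('SANJOSE', 'CA')] * 2 +
        [('MIAMI', 'FL')] * 2 + [('TAMPA', 'IL')] * 2)

def estimated_factor(rows, cols):
    """Independence assumption: multiply 1/colcard for each column."""
    factor = 1.0
    for c in cols:
        colcard = len({r[c] for r in rows})
        factor *= 1.0 / colcard
    return factor

def actual_factor(rows, pred):
    """True fraction of rows satisfying the compound predicate."""
    return sum(1 for r in rows if pred(r)) / len(rows)

est = estimated_factor(rows, cols=[0, 1])                   # CITY, STATE
act = actual_factor(rows, lambda r: r == ('FRESNO', 'CA'))
print(est, act)   # the estimate badly understates the qualifying fraction
```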
# Index 3 (EMPNO,CITY,STATE)
# Index 4 (EMPNO,DEPTNO,SEX)
# TABLE DEPTINFO
# Query 2
# SELECT ... FROM CREWINFO T1, DEPTINFO T2
# WHERE T1.CITY = 'FRESNO' AND T1.STATE = 'CA'     (PREDICATE1)
# AND T1.DEPTNO = T2.DEPT AND T2.DEPTNAME = 'LEGAL';
# Also, due to the smaller estimated size for table CREWINFO, a nested
# loop join might be chosen for the join method. But, if many rows are
# selected from table CREWINFO because Predicate1 does not filter as
# many rows as estimated, then another join method might be better.
1. There must be no sort needed for GROUP BY. You can see this by
looking at the EXPLAIN output.
4. If the query is a join, all set functions must be on the last table
joined. You can check this by looking at the EXPLAIN output.
If your query involves a MAX or MIN, refer to the one-fetch access type
(I1), and see if your query could be changed to take advantage of this.
See "One-Fetch Access" in topic 5.2.3.2.6.
5.2.15.1.8 Do You Need to Add Extra Local Predicates?
In this case, the predicate T2.C1 LIKE 'A%' should be added. Refer to
"Predicates Generated Through Transitive Closure" in topic 5.2.7.3.2 for
more information.
-------------------------------------------------------------------------
You can also use the EXPLAIN option when you bind or rebind a plan or
package. Compare the new plan or package for the statement to the old
one. If the new one has a table space scan or a nonmatching index space
scan, but the old one did not, the problem is probably the statement.
Investigate any changes in access path in the new plan or package; they
could represent performance improvements or degradations. If neither the
accounting report ordered by PLANNAME or PACKAGE nor the EXPLAIN
statement suggests corrective action, use the DB2PM SQL activity reports
for additional information. For more information on using EXPLAIN, see
"Using EXPLAIN" in topic 5.2.1.
There are many special queries and situations in which you might want to
change the access path for a query by changing the SQL of the query
itself. The scenarios are too numerous to mention, and every query is
different. Therefore, rather than trying to list and discuss what you
might want to change by adding to the SQL, this section lists the few
changes that you can make and then describes their possible effects.
You are most likely to influence an access path by using OPTIMIZE FOR 1
ROW. Specifying that clause could have the following effects:
1. The join method could change. Nested loop join is the most likely
choice, because it has low overhead cost and appears to be more
efficient if you only want to retrieve one row.
5. In a join query, the table with the columns in the ORDER BY clause is
likely to be picked as the outer table if there is an index on that
outer table which gives the ordering needed for the ORDER BY clause.
# DB2 might choose a less than optimal index access if, for example, data
# in two or more columns of an index is correlated. To tune your query to
# force DB2 to select a particular index:
# 1. Add an ORDER BY clause, with the leading columns of the desired index
#    as the leading columns of the ORDER BY clause.
# New Query 1
# SELECT ... FROM CREWINFO WHERE
# CITY BETWEEN 'FRESNO' AND 'FRESNO'     (MODIFIED PREDICATE)
# AND STATE = 'CA'
# AND DEPTNO = 'A345' AND SEX = 'F';     (PREDICATE2)
DB2 picks IX2 to access the data, but IX1 would be roughly 10 times
quicker. The problem is that 50% of all parts from center number 3 are
still in Center 3; they have not moved. DB2 does not have any statistic
that indicates this. It assumes that the parts from center number 3 are
evenly distributed among the 50 centers.
You can get the desired access path by changing the query. To discourage
the use of IX2 for this particular query, you can change the third
predicate to be non-indexable.
  SELECT * FROM T
    WHERE
      PART_TYPE = 'BB'
      AND W_FROM = 3
      AND (W_NOW = 3 OR 0=1)   <-- PREDICATE IS MADE NON-INDEXABLE
Now index IX2 is not picked, because it has only one match column. The
preferred index, IX1, is picked. The third predicate is checked as a stage
2 predicate, which is more expensive than a stage 1 predicate. However,
if most of the filtering is already done by the first and second
predicates, having the third predicate as a stage 2 predicate should not
degrade performance significantly.
This technique for discouraging index usage can be used in the same way to
discourage the use of multiple index access. Changing a join predicate
into a stage 2 predicate would prevent it from being used during a join.
-------------------------------------------------------------------------------
 Table statistics            Index statistics           IX1         IX2
 -------------------------   --------------------------------------------------
 CARD         100,000        FIRSTKEYCARD             1,000          50
 NPAGES        10,000        FULLKEYCARD            100,000     100,000
                             CLUSTERRATIO               99%         99%
                             NLEAF                    3,000       2,000
                             NLEVELS                      3           3
-------------------------------------------------------------------------------
 column       cardinality    HIGH2KEY    LOW2KEY
 ---------    -----------    --------    -------
 PART_TYPE       1000          'ZZ'        'AA'
 W_NOW             50          1000          1
 W_FROM            50          1000          1
-------------------------------------------------------------------------------
Q1:
  SELECT * FROM T               -- SELECT ALL PARTS
    WHERE PART_TYPE = 'BB'   P1 -- THAT ARE 'BB' TYPES
    AND W_FROM = 3           P2 -- THAT WERE MADE IN CENTER 3
    AND W_NOW = 3            P3 -- AND ARE STILL IN CENTER 3
-------------------------------------------------------------------------------
Filter factors of these predicates:
  P1 = 1/1000 = .001
  P2 = 1/50   = .02
  P3 = 1/50   = .02
------------------------------------------------------------------------------
        ESTIMATED VALUES                      WHAT REALLY HAPPENS
                 filter    data                        filter    data
 index matchcols factor    rows      index  matchcols  factor    rows
 IX2       2     .02*.02     40      IX2        2      .02*.50   1000
 IX1       1     .001       100      IX1        1      .001       100
There are many ways to make a predicate stage 2. You could add zero to
one side of a numeric value, concatenate the null string to a character
type, or use the SUBSTR function, to name a few. However, the recommended
way is to make the predicate a non-Boolean term by adding an OR predicate
as follows:
  Stage 1              Stage 2
  T1.C1 = T2.C2        (T1.C1 = T2.C2 OR 0=1)
  T1.C1 = 5            (T1.C1 = 5 OR 0=1)
Adding this OR predicate does not affect the result of the query. It is
valid for use with columns of all data types, and causes only a small
amount of overhead.
1. The table with the extra predicates is more likely to be picked as the
outer table. That is because DB2 estimates that fewer rows qualify
from the table if there are more predicates. It is generally more
efficient to have the table with the fewest qualifying rows as the
outer table.
This does not change the result of the query. It is valid for a column of
any data type, and causes a minimal amount of overhead. However, DB2 uses
only the best filter factor for any particular column. So, if TX.CX
already has another equal predicate on it, adding this extra predicate has
no effect. You should add the extra local predicate to a column that is
not involved in a predicate already. If index-only access is possible for
a table, it is generally not a good idea to add a predicate that would
prevent index-only access.
# The result of the count of each distinct column is the value of COLCARD in
# the DB2 catalog table SYSCOLUMNS. Multiply the above two values together
# to get a preliminary result:
# Remember that updates to the DB2 catalog are lost after a RUNSTATS
# operation and might also affect other queries.
In order to prevent the index IX2 from being picked in Q1 above, it might
be necessary to update SYSCOLUMNS to set the column cardinality of the
second key in the IX2 index to a lower value. The following is an example
of how to update SYSCOLUMNS:
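A minimal sketch of such a catalog update, assuming the table in Q1 is
named T and the second key of IX2 is the W_NOW column; the creator ID and
the cardinality value shown are illustrative and must be tuned on a test
system:

   UPDATE SYSIBM.SYSCOLUMNS
     SET COLCARD = 10             -- illustrative value
     WHERE TBNAME = 'T'           -- the table from Q1
       AND TBCREATOR = 'MYID'     -- hypothetical creator; use your own
       AND NAME = 'W_NOW';        -- second key column of IX2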
The column cardinality should be changed enough to get the good access
path, but not so much that the index IX2 is not picked for the queries that
need it. Make certain that your UPDATE statement has the correct
predicates coded so that it only updates the correct row. Model the
table, indexes, and queries on a test system (refer to "Modeling Your
Production System" in topic 5.2.13), and use EXPLAIN to determine if the
SYSCOLUMNS update has the desired effect. After you find an access path
that performs better on the test system, update the SYSCOLUMNS value for
the production system and rebind any static applications that were using
the wrong index. The previous example is a very simple one, but it can
alert you to the type of problems that can happen with correlated data.
Question: How can I provide a unique identifier for a table that has no
unique column?
Answer: Add a column with the data type TIMESTAMP, defined as NOT NULL
WITH DEFAULT. The timestamp is then generated automatically in that
column. Then create a unique index on the new column.
If you add that column to an existing table, the existing rows are given
the same default value. You must provide unique values before you can
create the index.
When the index exists, an insertion can fail if it occurs at almost the
same time as another insertion and receives the same timestamp value.
Your application programs should retry those insertions.
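The answer above can be sketched as follows; the table, column, and index
names are illustrative:

   ALTER TABLE MYTABLE
     ADD ROW_TS TIMESTAMP NOT NULL WITH DEFAULT;
   -- Existing rows all receive the same default value; supply unique
   -- values with UPDATE statements before creating the index.
   CREATE UNIQUE INDEX MYTABLEX
     ON MYTABLE (ROW_TS);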
* Keep a copy of the data that has been fetched and scroll through it by
some programming technique.
QMF chooses the first option listed above: it saves fetched data in
virtual storage. If the data does not fit in virtual storage, QMF writes
it out to a BDAM data set, DSQSPILL. (For more information see Query
Management Facility: Reference Version 3.)
If you use this technique, and if you do not commit your work when the
data has been fetched, your program could hold locks that could prevent
other users from accessing the data. (For locking considerations in your
program, see "Planning for Concurrency" in topic 4.1.2.)
To retrieve the data again from the beginning, merely close the active
cursor and reopen it. That positions the cursor at the beginning of the
result table. But, unless the program holds locks on all the data, it
could have been changed, and what was the first row of the result table
might not be the first row now.
To retrieve data a second time from somewhere in the middle of the result
table, execute a second SELECT statement, and declare a second cursor on
it. For example, suppose the first SELECT statement was:
Suppose also that you now want to return to the rows that start with
DEPTNO 'M95', and fetch sequentially from that point. Run:
But again, unless the data is still locked, rows could have been inserted
or deleted by other users. The row with DEPTNO 'M95' might no longer
exist. Or there could now be 20 rows with DEPTNO between M95 and M99,
where before there were only 16.
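The repositioning technique can be sketched with a second cursor; the
table name is illustrative and the example assumes the first SELECT was
ordered by DEPTNO:

   EXEC SQL DECLARE C2 CURSOR FOR
     SELECT * FROM DSN8310.DEPT
       WHERE DEPTNO >= 'M95'     -- restart in the middle of the result
       ORDER BY DEPTNO;
   EXEC SQL OPEN C2;
   -- FETCH from C2 now proceeds sequentially from DEPTNO 'M95'.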
The Order of Rows in the Second Result Table: The rows of the second
result table might not appear in the same order. DB2 does not consider
the order of rows as significant, unless the SELECT statement uses ORDER
BY. Hence, if there are several rows with the same DEPTNO value, the
second SELECT statement could retrieve them in a different order from the
first SELECT statement. The only guarantee is that they will all be in
order by department number, as demanded by the clause ORDER BY DEPTNO.
If there is only one row for each value of DEPTNO, then the following
statement specifies a unique ordering of rows:
For retrieving rows in reverse order, it can be useful to have two indexes
on the DEPTNO column, one in ascending order and one in descending
order.
Question: How can you scroll backward and update data that had been
retrieved previously?
Answer: Scrolling and updating at the same time can cause unpredictable
results. INSERT, UPDATE and DELETE statements executed while a cursor is
open can affect the result table if they are issued from the same
application process. For example, if cursor C is positioned on a row of
its result table defined as SELECT * FROM T, and you insert a row into T,
the effect of that insert on the result table is not predictable because
its rows are not ordered. A later FETCH C might or might not retrieve the
new row of T.
Question: Are there any special techniques for updating large volumes of
data?
Answer: Yes. When updating large volumes of data using a cursor, you can
minimize the amount of time that locks are held on the data by declaring
the cursor with the HOLD option and by issuing commits frequently.
Question: Are there any special techniques for fetching and displaying
large volumes of data?
Answer: There are no special techniques; but for large numbers of rows,
efficiency considerations often become very important. In particular, you
need to be aware of locking considerations, including the possibilities of
lock escalation.
If your program allows input from a terminal before it commits the data
and thereby releases locks, it is possible that a significant loss of
concurrency results. Review the description of locks in "The ISOLATION
Option" in topic 4.1.2.7.2 while designing your program. Then review the
expected use of tables to predict whether you will have locking problems.
Answer: Generally, you should select only the columns you need because
DB2 is sensitive to the number of columns being selected. Use SELECT *
only when you are sure you want to select all columns. One alternative is
to use views that have been defined with only the necessary columns, and
use SELECT * to access the views. Avoid SELECT * if all the selected
columns participate in a sort operation (SELECT DISTINCT and
SELECT...UNION, for example).
Question: How can I tell DB2 that I want only a few of the thousands of
rows that satisfy a query?
Even with the ORDER BY clause, DB2 might fetch all the data first and sort
it afterwards, which could be wasteful. Instead, code:
The OPTIMIZE FOR clause tells DB2 to select an access path that returns
the first qualifying row quickly.
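A sketch of such a query, assuming (illustratively) that only about a
screenful of ordered rows is wanted:

   SELECT DEPTNO, DEPTNAME
     FROM DSN8310.DEPT
     ORDER BY DEPTNO
     OPTIMIZE FOR 20 ROWS;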
To get the effect of adding data to the "end" of a table, define a unique
index on a TIMESTAMP column in the table definition. Then, when you
retrieve data from the table, use an ORDER BY clause naming that column.
The newest insert appears last.
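A sketch, assuming a table with a TIMESTAMP column named ROW_TS under a
unique index (names illustrative):

   SELECT * FROM MYTABLE
     ORDER BY ROW_TS;   -- rows appear in insertion order; newest last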
Answer: You can save the corresponding SQL statements in a table with a
column having a data type of VARCHAR or LONG VARCHAR. You must save the
source SQL statements, not the prepared versions. That means that you
must retrieve and then prepare each statement before executing the version
stored in the table. In essence, your program prepares an SQL statement
from a character string and executes it dynamically. (For a description
of dynamic SQL, see "Chapter 5-1. Coding Dynamic SQL in Application
Programs" in topic 5.1.)
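The retrieve-prepare-execute cycle can be sketched in embedded SQL; the
table, column, and host variable names are illustrative:

   EXEC SQL SELECT STMTTEXT INTO :STMTBUF
     FROM STMTTABLE WHERE STMTNO = :ID;   -- retrieve the source statement
   EXEC SQL PREPARE S1 FROM :STMTBUF;     -- prepare it from the string
   EXEC SQL EXECUTE S1;                   -- run the prepared statement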
Question: A program allows end users to create new tables, add columns to
them, rearrange the columns, and delete columns. How can that be done with
SQL?
Answer: The SQL CREATE TABLE statement creates a new table, and it can be
executed dynamically. You can add columns with ALTER TABLE, and execute
that dynamically. The added columns initially contain either the null
value or a default value based on the data type. But both statements,
like any data definition statement, are relatively expensive to execute;
consider the effects of locks.
It is not possible to rearrange or delete columns in a table without
dropping the entire table. You can, however, dynamically create a view on
the table, which includes only the columns you want, in the order you
want. This has the same effect as redefining the table.
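Such a view can be sketched as follows, assuming a hypothetical table T
with columns C1, C2, and C3, of which C2 is to be hidden and the rest
reordered:

   CREATE VIEW V1 (C3, C1) AS
     SELECT C3, C1 FROM T;   -- omits C2 and presents C3 before C1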
Question: How can you store a large volume of data that is not defined as
a set of columns in a table?
Answer: The examples below, drawn from the sample applications, show how
to determine which referential constraint has been violated. For
information on possible violations, see SQLCODEs -530 through -542 in
Section 2 of Messages and Codes.
/*********************************************************************/
/* CONSTRAINT NAME FOR DEPARTMENT NUMBER                             */
/*********************************************************************/
.
.
.
IF SQLCODE = -530 THEN
  DO;
    IF SUBSTR(SQLCA.SQLERRM,1,8) = MGRNO_CONSTRAINT THEN
      CALL DSN8MPG (MODULE, '210E', OUTMSG);  /* INVALID MGRNO       */
    ELSE
      CALL DSN8MPG (MODULE, '213E', OUTMSG);  /* INVALID ADMRDEPT    */
                                              /* ADD NOT DONE        */
    GO TO MPEMSG;                             /* PRINT ERROR MESSAGE */
  END;
01 CONSTRAINTS.
   03 PARM-LENGTH     PIC S9(4) COMP-4.
   03 REF-CONSTRAINT  PIC X(08).
   03 FILLER          PIC X(62).
01 MGRNO-CONSTRAINT   PIC X(08) VALUE 'RDE '.
.
.
.
IF SQLCODE NOT = -530 THEN
    GO TO DB-ERROR.
MOVE SQLERRM TO CONSTRAINTS.
*                            **INVALID MGRNO, DEPARTMENT
*                            **NOT ADDED
IF REF-CONSTRAINT = MGRNO-CONSTRAINT THEN
    MOVE '210E' TO MSGCODE
*                            **INVALID ADMRDEPT, DEPARTMENT
*                            **NOT ADDED
ELSE
    MOVE '213E' TO MSGCODE.
GO TO MCEMSG.
There are some restrictions on how you make and break connections to DB2
in any structure. If you use the PGM option of ISPF SELECT, ISPF passes
control to your load module by the LINK macro; if you use CMD, ISPF passes
control by the ATTACH macro.
Figure 77 shows schematically the task control blocks that result from
attaching the DSN command processor below TSO or ISPF.
* All such initiated DSN work units are unrelated, with regard to
isolation (locking) and recovery (commit). As such, a user can
deadlock with him or herself; that is, one unit (DSN) can request a
serialized resource (a data page, for example) that is held
incompatibly by another unit (DSN).
* A COMMIT in one program applies only to that process. There is no
facility for coordinating the processes.
     ---------------------
        TSO or ISPF
     ---------------------
              | ATTACH
              v
     ---------------------
      DSN initialization
         load module
        (ALIAS=DSN)
     ---------------------
              | ATTACH
              v
     ---------------------        ---------------------
       DSN main load       LINK        Ordinary
          module          ------->    application
     ---------------------             program
              | ATTACH            ---------------------
              | (see note 2)
              v
     ---------------------
        Application
          command
         processor
     ---------------------
        (See Note 1)
Notes to Figure
The user first invokes ISPF, which displays the data and selection panels.
When the user selects the program on the selection panel, ISPF invokes a
CLIST that runs the program. A corresponding CLIST might contain:
DSN
RUN PROGRAM(MYPROG) PLAN(MYPLAN)
END
The application has one large load module and one plan.
DSN
RUN PROGRAM(ISPF) PLAN(MYPLAN)
END
You then have to leave ISPF before you can start your application.
You can break a large application into several different functions, each
communicating through a common pool of shared variables controlled by
ISPF. Some functions can be written as separately compiled and loaded
programs; other functions can be coded as EXECs or CLISTs. You can invoke
any of those parts through the ISPF SELECT service, and you can invoke
that from a program, a CLIST, or an ISPF selection panel.
When you use the ISPF SELECT service, you can specify whether ISPF should
create a new ISPF variable pool before invoking the function. You can
also break a large application into several independent parts, each with
its own ISPF variable pool.
You can invoke different parts of the program in different ways. For
example, you can use the PGM option of ISPF SELECT:
CMD(command)
For a part that accesses DB2, the command can name a CLIST that invokes
DSN:
DSN
RUN PROGRAM(PART1) PLAN(PLAN1) PARM(input from panel)
END
Breaking the application into separate modules makes it more flexible and
easier to maintain. Furthermore, not all of it need be dependent on DB2;
parts of the application that do not call DB2 can be selected, and will
run, even if DB2 is not running. A stopped DB2 database will not
interfere with parts of the program that refer only to other databases.
You can use the call attachment facility (CAF) to invoke DB2; for details,
see "Chapter 5-5. Programming for the Call Attachment Facility (CAF)" in
topic 5.5. The ISPF/CAF sample connection manager programs (DSN8SPM and
DSN8SCM) take advantage of the ISPLINK SELECT services, letting each
routine make its own connection to DB2 and establish its own thread and
plan.
With the same modular structure as in the previous example, using CAF is
likely to provide greater efficiency by reducing the number of CLISTs.
This does not mean, however, that any DB2 function executes more quickly.
5.5 Chapter 5-5. Programming for the Call Attachment Facility (CAF)
This chapter is intended to help you to write programs that use the call
attachment facility (CAF). It contains a general-use programming
interface that allows you to write programs using the services of DB2.
It is also possible for IMS batch applications to access DB2 databases
through CAF, though that method does not coordinate the commitment of work
between the IMS and DB2 systems. We highly recommend that you use the DB2
DL/I batch support for IMS batch applications.
CICS application programs must use the CICS attachment facility, and IMS
application programs the IMS attachment facility. Programs running in
TSO foreground or TSO background can use either the DSN command processor
or CAF; each has advantages and disadvantages.
In deciding whether to use the call attachment facility, you will want to
consider the capabilities and restrictions described in the sections that
follow.
* Access DB2 from MVS address spaces where TSO, IMS, or CICS do not
  exist.
* Run when DB2 is down (though it cannot run SQL when DB2 is down).
* Run above or below the 16-megabyte line. (The CAF code itself resides
  below the line.)
* Supply event control blocks (ECBs), for DB2 to post, that signal
start-up or termination.
* Intercept return codes, reason codes, and abend codes from DB2 and
translate them into messages as desired.
Each connected task can run a plan. Multiple tasks in a single address
space can specify the same plan, but each instance of a plan is run
independently from the others. A task can terminate its plan and run a
different plan without fully breaking its connection to DB2.
CAF does not generate task structures, nor does it provide attention
processing exits or functional recovery routines. You can provide
whatever attention handling and functional recovery your application
needs, but you must use ESTAE/ESTAI type recovery routines and not
Enabled Unlocked Task (EUT) FRR routines.
* If you need to use MVS macros (ATTACH, WAIT, POST, and so on), you
  will have to embed them in modules written in assembler language.
CAF uses MVS SVC LOAD to load two modules as part of the initialization
following your first service request. Both modules are loaded into
fetch-protected storage that has the job-step protection key. If your
local environment intercepts and replaces the LOAD SVC, then you must
ensure that your version of LOAD manages the load list element (LLE) and
contents directory entry (CDE) chains like the standard MVS LOAD macro (at
least for call attachment facility applications).
If you use CAF from IMS batch, you must write data to only one system in
any one unit of work. If you write to both systems within the same unit,
a system failure can leave the two databases inconsistent with no
possibility of automatic recovery. To end a unit of work in DB2, execute
the SQL COMMIT statement; to end one in IMS, issue the SYNCPOINT command.
* One attachment facility cannot invoke another. This means your CAF
  application cannot invoke DSN or be invoked by the DSN RUN subcommand.
You can use the DSN command processor implicitly during program
development for functions such as:
* Using SPUFI (SQL processor using file input) to test some of the SQL
functions in the program.
To run a program using DSN, you enter the TSO command DSN and then the RUN
subcommand with the name of a program. The DSN command processor runs
under the control of the TSO terminal monitor program (TMP). Because the
TMP runs in either foreground or background, DSN applications can be run
interactively or as batch jobs.
When using DSN services, your application runs under the control of DSN.
Because TSO attaches DSN to start it running, and DSN attaches a part of
itself, your application gains control two task levels below that of TSO.
Your program is completely dependent upon DSN to manage your connection to
DB2. You cannot change plan names without allowing your application to
terminate. If DB2 terminates, your application also terminates. If DB2
is down, your application cannot begin to run.
For some applications, those limitations are too severe. CAF provides an
alternative. CAF gives applications more control over their run
environment than is possible with the DSN command processor, and
provides
extra flexibility for sophisticated applications that can afford the extra
code required.
When the language interface is available, your program can make use of the
CAF in two ways:
The first element of each option list is a function, which describes the
action you want CAF to take. The available values of function are listed
under "Summary of Connection Functions" in topic 5.5.2.1, with a rough
approximation of their effects. The effect of any function depends in
part on what functions the program has already run. Before using any
function, be sure to read the description of its usage. Also read
"Summary of CAF Behavior" in topic 5.5.2.11, which describes the influence
of previous functions.
  CALL DSNALI ('CONNECT')    ---+
  CALL DSNALI ('OPEN')       ---+---> DSNALI (processes
  CALL DSNALI ('CLOSE')      ---+      connection requests)
  CALL DSNALI ('DISCONNECT') ---+
  CALL DSNHLI (SQL calls) ---------> DSNHLI (a dummy
                                       application EP)
  CALL DSNWLI (IFI calls) ---------> DSNWLI (a dummy
                                       application EP)
CONNECT
   Establishes the task (TCB) as a user of the named DB2 subsystem. When
   the first task within an address space issues a connection request,
   the address space is initialized as a user of DB2 also. See "CONNECT:
   Syntax and Usage" in topic 5.5.2.6.
OPEN
   Allocates a DB2 plan. A plan must be allocated before SQL statements
   can be processed. If no CONNECT function had been requested, OPEN
   implicitly establishes the task, and optionally the address space, as
   a user of DB2. See "OPEN: Syntax and Usage" in topic 5.5.2.7.
CLOSE
   Optionally commits or abends any database changes and deallocates the
   plan. If the CONNECT function was implicitly requested by OPEN, the
   task, and possibly the address space, is removed as a user of DB2.
   See "CLOSE: Syntax and Usage" in topic 5.5.2.8.
DISCONNECT
   Removes the task as a user of DB2 and, if this is the last or only
   task in the address space with a DB2 connection, terminates the
   address space connection to DB2. See "DISCONNECT: Syntax and Usage"
   in topic 5.5.2.9.
TRANSLATE
   Returns an SQLCODE and printable text in the SQLCA that describes a
   DB2 hexadecimal error reason code. See "TRANSLATE: Syntax and Usage"
   in topic 5.5.2.10. You cannot call the TRANSLATE function from the
   FORTRAN language.
Parameter Default
# For implicit connection requests, register 15 contains the return code and
# register 0 contains the reason code. The application program should
# examine the connection codes immediately after the first executable SQL
# statement within the application program. If the implicit connection was
# successful, then the SQLCODE associated with the first and subsequent SQL
# statements can be examined.
Part of the call attachment facility is a DB2 load module, known as the
call attachment facility language interface. It is named DSNALI and has
alias names of DSNHLI2 and DSNWLI2. The module has five entry points:
DSNALI, DSNHLI, DSNHLI2, DSNWLI, and DSNWLI2:
* Entry point DSNALI handles explicit DB2 connection service requests.
You can access the DSNALI module either by explicitly issuing LOAD
requests when your program runs or by including the module in your load
module when your program is link-edited. There are advantages and
disadvantages to each approach.
To load DSNALI, issue MVS LOAD service requests for entry points DSNALI
and DSNHLI2. If IFI services will be used, DSNWLI2 must also be loaded.
The entry point addresses returned by LOAD are saved for later use in the
CALL macro invocations.
If you do not need explicit calls to DSNALI for CAF functions, including
DSNALI in your load module has some advantages. When DSNALI is included
during the link-edit, you need not code the previously described dummy
DSNHLI entry point in your program. Module DSNALI contains an entry point
for DSNHLI, which is identical to DSNHLI2, and an entry point for DSNWLI,
which is identical to DSNWLI2.
In either case, DB2 deallocates the plan, if necessary, and terminates the
task's connection before the task itself is allowed to terminate.
The call attach register and parameter list conventions for assembler
language are described below. The parameters used with any particular
function are described with the syntax and usage of the function.
If you do not specify the return code and reason code parameters in your
CAF calls, CAF puts a return code in register 15 and a reason code in
register 0. CAF also supports high-level languages that cannot
interrogate individual registers. See Figure 79 in topic 5.5.2.5.2 and
the discussion following it for more information.
The contents of registers 2 through 14 are preserved across calls. Users
must conform to the following (standard) calling conventions:
-------------------------------------------------------------------------
Register Usage
----------+-------------------------------------------------------------
R1 Parameter list pointer (for details, see "CALL DSNALI
Parameter List" in topic 5.5.2.5.2)
----------+-------------------------------------------------------------
R13 Address of caller's save area
----------+-------------------------------------------------------------
R14 Caller's return address
----------+-------------------------------------------------------------
R15 CAF entry point address
------------------------------------------------------------------------
5.5.2.5.2 CALL DSNALI Parameter List
# When you code CALL DSNALI statements, you must specify all parameters that
# come before Return Code. You cannot omit any of those parameters by
# coding zeros or blanks. There are no defaults for those parameters for
# explicit connection service requests. Defaults are provided only for
# implicit connections.
# For all languages except assembler language, code zero for a parameter in
# the CALL DSNALI statement when you want to use the default value for that
# parameter but specify subsequent parameters. For example, suppose you are
# coding a CONNECT call in a COBOL program. You want to specify all
# parameters except Return Code. Write the call in this way:
The parameter list, addressed by register 1, contains pointers to the
following areas, in this order:

     function      12-byte function name
     ssnm          DB2 subsystem name
     termecb       Termination event control block (ECB)
     startecb      Startup ECB
     ribptr        CAF puts the address of the release
                   information block (RIB) here
     retcode       Return code
     reascode      Reason code
     srdura        CURRENT DEGREE duration

The 'end of parameters' (high-order) bit can be set in the pointer to
ribptr, retcode, reascode, or srdura, ending the list after that entry.
2. Terminates the parameter list after the return code field, and places
the return code in the parameter list and the reason code in register
0.
3. Terminates the parameter list after the reason code field and places
the return code and the reason code in the parameter list.
Even if you specify that the return code be placed in the parameter list,
it will also be placed in register 15 to accommodate high-level languages
that support special return code processing.
Note: The CONNECT function of the call attachment facility should not be
confused with the DB2 CONNECT statement used to access a remote location
within DB2.
5.5.2.6.1 Syntax
# CALL DSNALI
#   (function,ssnm,termecb,startecb,ribptr[,retcode[,reascode[,srdura]]])
function
12-byte area containing CONNECT followed by five blanks.
ssnm
   A 4-byte DB2 subsystem name to which the connection is made. If your
   ssnm is less than four characters long, pad it on the right with
   blanks to a length of four characters.
termecb
   The application's event control block (ECB) for DB2 termination. This
   ECB is posted by DB2 when the operator enters the -STOP DB2 command or
   when DB2 is undergoing abend. It indicates the type of termination by
   a POST code, as follows:
   -------------------------------
    POST code    Termination type
   -------------+-----------------
        8        QUIESCE
       12        FORCE
       16        ABTERM
   -------------------------------
startecb
   The application's start-up ECB. If DB2 is not yet started when the
   call is issued, the ECB is posted when DB2 successfully completes its
   startup processing. DB2 posts at most one startup ECB per address
   space. The ECB is the one associated with the most recent CONNECT
   call from that address space. Any nonzero CAF/DB2 reason codes must
   be examined prior to issuing a WAIT on this ECB.
ribptr
A 4-byte area in which CAF places the address of the release
information block (RIB) after the call. You can determine what
release level of DB2 you are currently running by examining field
RIBREL. If the RIB is not available (for example, if you name a
subsystem that does not exist), the area is set to zeros.
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not provided, CAF places the return code
in register 15 and the reason code in register 0.
reascode
   A 4-byte area in which CAF places a reason code. If not provided, CAF
   places the reason code in register 0.
# srdura
#    A 10-byte area containing the string 'SRDURA(CD)'. This field is
#    optional. If it is provided, the value in the CURRENT DEGREE special
#    register stays in effect from CONNECT until DISCONNECT. If it is not
#    provided, the value in the CURRENT DEGREE special register stays in
#    effect from OPEN until CLOSE. If you specify this parameter in any
#    language except assembler, you must also specify the return code and
#    reason code parameters. In assembler language, you can omit the
#    return code and reason code parameters by specifying commas as
#    place-holders.
5.5.2.6.2 Usage
The use of a CONNECT call is optional. The first request from a task,
either OPEN, or an SQL or IFI call, causes CAF to issue an implicit
CONNECT request. If a task's connection is established implicitly, it is
terminated by a CLOSE request or when the task terminates.
CONNECT can be run from any or all tasks in the address space, but the
address-space-level initialization is done only once, when the first task
connects.
Practically speaking, you must not mix explicit CONNECT and OPEN requests
with implicitly established connections in the same address space. Either
explicitly specify which DB2 subsystem you want to use, or allow all
requests to be directed to the default subsystem.
# * You need the value of the CURRENT DEGREE special register to last as
#   long as the connection (srdura).
* You need to monitor the DB2 start-up ECB (startecb), the DB2
termination ECB (termecb), or the DB2 release level.
* Multiple tasks in the address space will be opening and closing plans.
* A single task in the address space will be opening and closing plans
more than once.
* That the operator has issued a -STOP DB2 command. When this happens,
  DB2 posts the termination ECB, termecb. Your application can either
  wait on or just look at the ECB.
* That DB2 is undergoing abend. When this happens, DB2 posts the
termination ECB, termecb.
* The current release level of DB2. Access the RIBREL field in the
release information block (RIB).
Do not issue CONNECT requests from a TCB that already has an active DB2
connection. (See "Summary of CAF Behavior" in topic 5.5.2.11 and "Error
Messages and DSNTRACE" in topic 5.5.5 for more information on CAF
errors.)
5.5.2.6.3 Examples
-------------------------------------------------------------------------
Table 39. Examples of CAF CONNECT Calls
-------------------------------------------------------------------------
Language    Call example
-----------+-------------------------------------------------------------
assembler   CALL DSNALI,(CONNFN,SSID,TERMECB,STARTECB,RIBPTR,
            RETCODE,REASCODE,SRDURA)
-----------+-------------------------------------------------------------
C           fnret=dsnali(&connfn[0],&ssid[0],&tecb,&secb,&ribptr,
            &retcode,&reascode,&srdura[0]);
-----------+-------------------------------------------------------------
COBOL       CALL 'DSNALI' USING CONNFN SSID TERMECB STARTECB
            RIBPTR RETCODE REASCODE SRDURA.
-----------+-------------------------------------------------------------
FORTRAN     CALL DSNALI(CONNFN,SSID,TERMECB,STARTECB,RIBPTR,
            RETCODE,REASCODE,SRDURA)
-----------+-------------------------------------------------------------
PL/I        CALL DSNALI(CONNFN,SSID,TERMECB,STARTECB,RIBPTR,
            RETCODE,REASCODE,SRDURA);
-------------------------------------------------------------------------
Note: DSNALI is an assembler language program; therefore, the following
compiler directives must be included in your C and PL/I applications:
5.5.2.7.1 Syntax
function
A 12-byte area containing the word OPEN followed by eight blanks.
ssnm
A 4-byte DB2 subsystem name to which, optionally, a connection will be
made. If your ssnm is less than four characters long, pad it on the
right with blanks to a length of four characters.
plan
An 8-byte DB2 plan name.
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not provided, CAF places the return code
in register 15 and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not provided, CAF
places the reason code in register 0.
5.5.2.7.2 Usage
OPEN allocates DB2 resources needed to run the plan or issue IFI
requests.
If the requesting task does not already have a connection to the named
DB2
subsystem, then OPEN establishes it.
ssnm names the DB2 subsystem where the plan will be allocated. The
parameter, like all others, is required, even though the task could have
issued a CONNECT call. If a task issues CONNECT followed by OPEN,
then
the subsystem names for both calls must be the same.
The use of OPEN is optional. If not used, the action of OPEN occurs on
the first SQL or IFI call from the task, using the defaults listed under
"Implicit Connections" in topic 5.5.2.2.
5.5.2.7.3 Examples
-------------------------------------------------------------------------
Table 40. Examples of CAF OPEN Calls
-------------------------------------------------------------------------
Language    Call example
-----------+-------------------------------------------------------------
assembler   CALL DSNALI,(OPENFN,SSID,PLANNAME,RETCODE,REASCODE)
-----------+-------------------------------------------------------------
C           fnret=dsnali(&openfn[0],&ssid[0],&planname[0],&retcode,
            &reascode);
-----------+-------------------------------------------------------------
COBOL       CALL 'DSNALI' USING OPENFN SSID PLANNAME RETCODE
            REASCODE.
-----------+-------------------------------------------------------------
FORTRAN     CALL DSNALI(OPENFN,SSID,PLANNAME,RETCODE,REASCODE)
-----------+-------------------------------------------------------------
PL/I        CALL DSNALI(OPENFN,SSID,PLANNAME,RETCODE,REASCODE);
-------------------------------------------------------------------------
Note: DSNALI is an assembler language program; therefore, the following
compiler directives must be included in your C and PL/I applications:
CLOSE deallocates the plan and optionally disconnects the task, and
possibly the address space, from DB2.
5.5.2.8.1 Syntax
function
A 12-byte area containing the word CLOSE followed by seven blanks.
termop
A 4-byte termination option, with one of these values:
SYNC   Commit any modified data.
ABRT   Roll back data to the previous commit point.
retcode
A 4-byte area in which CAF will place the return code.
This field is optional. If not provided, CAF places the return code
in register 15 and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not provided, CAF
places the reason code in register 0.
5.5.2.8.2 Usage
CLOSE deallocates the plan that was allocated either explicitly by OPEN or
implicitly at the first SQL call.
If CONNECT has not been issued for the task, CLOSE also deletes the task's
connection to DB2. If no other task in the address space has an active
connection to DB2, DB2 also deletes the control block structures created
for the address space and removes the cross-memory authorization.
Do not use CLOSE when your current task does not have a plan allocated.
Using CLOSE is optional. If you omit it, DB2 performs the same actions
when your task terminates, using the SYNC parameter if termination is
normal and the ABRT parameter if termination is abnormal. (The function
is an implicit CLOSE.) If the objective is to shut down your application,
shutdown performance is improved if CLOSE is requested explicitly before
allowing the task to terminate.
If you want to use a new plan, you must issue an explicit CLOSE, followed
by an OPEN specifying the new plan name.
If DB2 terminates, a task that did not issue CONNECT explicitly must issue
CLOSE, to allow CAF to reset its control blocks so that future connections
can be made. This CLOSE returns the "reset accomplished" return code (+004)
and reason code X'00C10824'. If you omit the CLOSE, then when DB2 is back
online, the task's next connection request fails. You get either the
message "Your TCB does not have a connection," with X'00F30018' in register
0, or CAF error message DSNA201I or DSNA202I, depending on what your
application tried to do. The task must then issue CLOSE before it can
reconnect to DB2.
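The recovery sequence just described can be sketched in C. This is an
illustrative helper, not part of the CAF interface: the function and macro
names are invented for this sketch, and only the numeric codes (+004 with
X'00C10824', and X'00F30018') come from the text above.

```c
/* Codes taken from the text above. */
#define CAF_RC_WARNING   4            /* return code +004               */
#define CAF_REAS_RESET   0x00C10824u  /* "reset accomplished"           */
#define CAF_REAS_NOCONN  0x00F30018u  /* TCB does not have a connection */

/* Sketch: after DB2 terminates, a task that connected implicitly must
 * issue CLOSE so that CAF can reset its control blocks.  This helper
 * recognizes the expected "reset accomplished" indication from that
 * CLOSE; once it is seen, the task may reconnect to DB2. */
int caf_reset_complete(int retcode, unsigned reascode)
{
    return retcode == CAF_RC_WARNING && reascode == CAF_REAS_RESET;
}
```

A task that skips the CLOSE would instead see reason code X'00F30018'
(or message DSNA201I/DSNA202I) on its next connection request.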
5.5.2.8.3 Examples
-------------------------------------------------------------------------
Table 41. Examples of CAF CLOSE Calls
-------------------------------------------------------------------------
Language    Call example
-----------+-------------------------------------------------------------
assembler   CALL DSNALI,(CLOSFN,TERMOP,RETCODE,REASCODE)
-----------+-------------------------------------------------------------
C           fnret=dsnali(&closfn[0],&termop[0],&retcode,&reascode);
-----------+-------------------------------------------------------------
COBOL       CALL 'DSNALI' USING CLOSFN TERMOP RETCODE
            REASCODE.
-----------+-------------------------------------------------------------
FORTRAN     CALL DSNALI(CLOSFN,TERMOP,RETCODE,REASCODE)
-----------+-------------------------------------------------------------
PL/I        CALL DSNALI(CLOSFN,TERMOP,RETCODE,REASCODE);
-------------------------------------------------------------------------
Note: DSNALI is an assembler language program; therefore, the following
compiler directives must be included in your C and PL/I applications:
5.5.2.9.1 Syntax
function
A 12-byte area containing the word DISCONNECT followed by two blanks.
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not provided, CAF places the return code
in register 15 and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not provided, CAF
places the reason code in register 0.
5.5.2.9.2 Usage
Only those tasks that have issued CONNECT explicitly can issue
DISCONNECT.
If CONNECT was not used, then DISCONNECT causes an error.
A task that did not issue CONNECT explicitly must issue CLOSE to reset
the
CAF control blocks when DB2 terminates.
5.5.2.9.3 Examples
-------------------------------------------------------------------------
Table 42. Examples of CAF DISCONNECT Calls
-------------------------------------------------------------------------
Language    Call example
-----------+-------------------------------------------------------------
assembler   CALL DSNALI,(DISCFN,RETCODE,REASCODE)
-----------+-------------------------------------------------------------
C           fnret=dsnali(&discfn[0],&retcode,&reascode);
-----------+-------------------------------------------------------------
COBOL       CALL 'DSNALI' USING DISCFN RETCODE
            REASCODE.
-----------+-------------------------------------------------------------
FORTRAN     CALL DSNALI(DISCFN,RETCODE,REASCODE)
-----------+-------------------------------------------------------------
PL/I        CALL DSNALI(DISCFN,RETCODE,REASCODE);
-------------------------------------------------------------------------
Note: DSNALI is an assembler language program; therefore, the following
compiler directives must be included in your C and PL/I applications:
TRANSLATE is useful only after a failure during OPEN, and then only if an
explicit CONNECT was performed before the OPEN request. The same function
is performed automatically when errors occur during the processing of SQL
or IFI requests.
5.5.2.10.1 Syntax
function
A 12-byte area containing the word TRANSLATE followed by three blanks.
sqlca
The program's SQL communication area (SQLCA).
retcode
A 4-byte area in which CAF places the return code.
This field is optional. If not provided, CAF places the return code
in register 15 and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. If not provided, CAF
places the reason code in register 0.
5.5.2.10.2 Usage
5.5.2.10.3 Examples
-------------------------------------------------------------------------
Table 43. Examples of CAF TRANSLATE Calls
-------------------------------------------------------------------------
Language    Call example
-----------+-------------------------------------------------------------
assembler   CALL DSNALI,(XLATFN,SQLCA,RETCODE,REASCODE)
-----------+-------------------------------------------------------------
C           fnret=dsnali(&xlatfn[0],&sqlca,&retcode,&reascode);
-----------+-------------------------------------------------------------
COBOL       CALL 'DSNALI' USING XLATFN SQLCA RETCODE
            REASCODE.
-----------+-------------------------------------------------------------
PL/I        CALL DSNALI(XLATFN,SQLCA,RETCODE,REASCODE);
-------------------------------------------------------------------------
Note: DSNALI is an assembler language program; therefore, the following
compiler directives must be included in your C and PL/I applications:
* The top row lists the calls that a program can make. Each
  possibility identifies a column of the table.
* The first column lists the task's most recent history of connection
  requests. For example, "CONNECT followed by OPEN" means that the task
  issued CONNECT and then OPEN, with no other CAF calls in between.
* The intersection of a row and column shows the effect of the next call
  if it follows the corresponding connection history. For example, if
  the call is OPEN and the connection history is CONNECT, the effect is
  OPEN: the OPEN function is performed. If the call is SQL and the
  connection history is empty (the SQL call is the first CAF function
  the program issues), the effect is that implicit CONNECT and OPEN
  functions are performed, followed by the SQL function.
---------------------------------------------------------------------------
Table 44. Effects of CAF Calls, as Dependent on Connection History
---------------------------------------------------------------------------
Connection                        Next Call
history     CONNECT   OPEN      SQL          CLOSE     DISCONNECT TRANSLATE
-----------+---------+---------+------------+---------+----------+---------
Empty:      CONNECT   OPEN      CONNECT,     Error 203 Error 204  Error 205
first call                      OPEN,
                                followed by
                                the SQL or
                                IFI call
-----------+---------+---------+------------+---------+----------+---------
CONNECT     Error 201 OPEN      OPEN,        Error 203 DISCONNECT TRANSLATE
                                followed by
                                the SQL or
                                IFI call
-----------+---------+---------+------------+---------+----------+---------
CONNECT     Error 201 Error 202 The SQL or   CLOSE(1)  DISCONNECT TRANSLATE
followed by                     IFI call
OPEN
-----------+---------+---------+------------+---------+----------+---------
CONNECT     Error 201 Error 202 The SQL or   CLOSE(1)  DISCONNECT TRANSLATE
followed by                     IFI call
SQL or IFI
call
-----------+---------+---------+------------+---------+----------+---------
OPEN        Error 201 Error 202 The SQL or   CLOSE(2)  Error 204  TRANSLATE
                                IFI call
-----------+---------+---------+------------+---------+----------+---------
SQL or IFI  Error 201 Error 202 The SQL or   CLOSE(2)  Error 204  TRANSLATE(3)
call                            IFI call
---------------------------------------------------------------------------
Notes:
1. The task and address space connections remain active. If CLOSE fails
   because DB2 was down, then the CAF control blocks are reset, the
   function produces return code 4 and reason code X'00C10824', and CAF
   is ready for more connection requests when DB2 is again online.
CONNECT
OPEN allocate a plan
SQL or IFI call
.
.
.
CLOSE deallocate the current plan
OPEN allocate a new plan
SQL or IFI call
.
.
.
CLOSE
DISCONNECT
A task can have a connection to one and only one DB2 subsystem at any
point in time. A CAF error occurs if the subsystem name on OPEN does not
match the one on CONNECT. To switch to a different subsystem, the
application must disconnect from the current subsystem and then issue a
connect request specifying the new subsystem name.
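The one-subsystem-per-task rule can be illustrated with a small state
tracker in C. This is a sketch, not CAF itself: the structure and function
names are invented for illustration; only the reason code X'00C10208'
(requests for two different DB2 subsystems from the same TCB, Table 45) is
taken from this document.

```c
#include <string.h>

#define CAF_REAS_TWO_SSIDS 0x00C10208u  /* from Table 45 */

/* Hypothetical per-task connection state: a 4-byte subsystem name
 * plus a terminating NUL; an empty string means "not connected". */
struct task_conn {
    char ssid[5];
};

/* Returns 0 on success, or a CAF-style reason code when the OPEN
 * names a different subsystem than the task's existing connection.
 * To switch subsystems, the task must first disconnect. */
unsigned open_check(struct task_conn *t, const char *ssnm)
{
    if (t->ssid[0] == '\0') {            /* no connection yet: OPEN connects */
        strncpy(t->ssid, ssnm, 4);
        t->ssid[4] = '\0';
        return 0;
    }
    if (strncmp(t->ssid, ssnm, 4) != 0)  /* name mismatch: error */
        return CAF_REAS_TWO_SSIDS;
    return 0;                            /* same subsystem: allowed */
}
```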
In this scenario, multiple tasks within the address space are using DB2
services. Each task must explicitly specify the same subsystem name on
either the CONNECT or OPEN function request. Task 1 makes no SQL or IFI
calls. Its purpose is to monitor the DB2 termination and start-up ECBs,
and to check the DB2 release level.
TASK 1 TASK 2 TASK 3 TASK n
CONNECT
OPEN OPEN OPEN
SQL SQL SQL
... ... ...
CLOSE CLOSE CLOSE
OPEN OPEN OPEN
SQL SQL SQL
... ... ...
CLOSE CLOSE CLOSE
DISCONNECT
You can provide exits from your application for the purposes described in
the following text.
5.5.4.1 Attention Exits
The call attachment facility has no attention exits. You can provide your
own if necessary. However, DB2 uses enabled unlocked task (EUT)
functional recovery routines (FRRs), so if attention is requested while
DB2 code is executing, your routine might not get control.
5.5.4.2 Recovery Routines
Your program can provide an abend exit routine. It must use tracking
indicators to determine whether an abend occurred during DB2 processing.
If one does occur while DB2 has control, you have two choices:
This is the only action that can be taken for abends caused by CANCEL
or DETACH. Additional SQL statements are not accepted at this point.
If you attempt to execute another SQL statement from the application
program or its recovery routine, you get a return code of +256 and a
reason code of X'00F30083'.
Standard MVS functional recovery routines (FRRs) can cover only code
running in service request block (SRB) mode. Because DB2 does not support
SRB-mode callers, only enabled unlocked task (EUT) FRRs can be considered.
Note: Do not have an EUT FRR active when invoking CAF, processing SQL
requests, or calling IFI.
With MVS/SP 2.2, an EUT FRR can be active, but it cannot retry failing DB2
requests. An EUT FRR retry bypasses DB2's ESTAE routines. The next DB2
request of any type, including DISCONNECT, fails with a return code of
+256 and a reason code of X'00F30050'.
With MVS/ESA, if you have an active EUT FRR, all DB2 requests, including
the initial CONNECT or OPEN, fail. This is because DB2 always creates an
ARR-type ESTAE, and MVS/ESA does not allow creation of ARR-type ESTAEs
when an FRR is active.
This section explains the return and reason codes that CAF generates. The
codes are returned either to the variables named in the return code and
reason code parameters of a CAF call or, if those parameters are not used,
to registers 15 and 0. Detailed explanations of the reason codes appear in
Messages and Codes.
When the reason code begins with X'00F3' (except for X'00F30006'), you can
invoke the CAF TRANSLATE function to obtain printable error message text.
For SQL calls, CAF returns standard SQL return codes in the SQLCA. See
Section 2 of Messages and Codes for a list of those return codes and their
meanings. CAF returns IFI return and reason codes in the instrumentation
facility communication area (IFCA).
-------------------------------------------------------------------------
Table 45. CAF Return Codes and Reason Codes
-------------------------------------------------------------------------
Return code      Reason code      Explanation
----------------+----------------+---------------------------------------
0                X'00000000'      Successful completion.
----------------+----------------+---------------------------------------
4                X'00C10823'      Release level mismatch between DB2
                                  and the call attachment facility code.
----------------+----------------+---------------------------------------
4                X'00C10824'      CAF reset complete. Ready to make a
                                  new connection.
----------------+----------------+---------------------------------------
200 (note 1)     X'00C10201'      Received a second CONNECT from the
                                  same TCB. The first CONNECT could
                                  have been implicit or explicit.
----------------+----------------+---------------------------------------
200 (note 1)     X'00C10202'      Received a second OPEN from the same
                                  TCB. The first OPEN could have been
                                  implicit or explicit.
----------------+----------------+---------------------------------------
200 (note 1)     X'00C10203'      CLOSE issued when there was no
                                  active OPEN.
----------------+----------------+---------------------------------------
200 (note 1)     X'00C10204'      DISCONNECT issued when there was no
                                  active CONNECT.
----------------+----------------+---------------------------------------
200 (note 1)     X'00C10205'      TRANSLATE issued when there was no
                                  connection to DB2.
----------------+----------------+---------------------------------------
200 (note 1)     X'00C10206'      Wrong number of parameters or the
                                  end-of-list bit was off.
----------------+----------------+---------------------------------------
200 (note 1)     X'00C10207'      Unrecognized function parameter.
----------------+----------------+---------------------------------------
200 (note 1)     X'00C10208'      Received requests to access two
                                  different DB2 subsystems from the
                                  same TCB.
----------------+----------------+---------------------------------------
204 (note 2)                      CAF system error. Probable error in
                                  the attach or DB2.
-------------------------------------------------------------------------
Notes:
1. These reason codes are issued by the subsystem support for allied
   memories, a part of the DB2 subsystem support subcomponent that
   services all DB2 connection and work requests. The codes, along with
   abend and subsystem termination reason codes issued by other parts of
   subsystem support, are described in Section 4 of Messages and Codes.
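Table 45 lends itself to a table-driven lookup. The sketch below
hardcodes the return/reason pairs transcribed from the table; the
function and structure names are illustrative, not part of CAF or its
language interface.

```c
#include <stddef.h>
#include <string.h>

struct caf_code {
    int         retcode;
    unsigned    reascode;
    const char *text;
};

/* Return and reason codes transcribed from Table 45. */
static const struct caf_code caf_codes[] = {
    {   0, 0x00000000u, "Successful completion" },
    {   4, 0x00C10823u, "Release level mismatch between DB2 and CAF" },
    {   4, 0x00C10824u, "CAF reset complete; ready for a new connection" },
    { 200, 0x00C10201u, "Second CONNECT from the same TCB" },
    { 200, 0x00C10202u, "Second OPEN from the same TCB" },
    { 200, 0x00C10203u, "CLOSE with no active OPEN" },
    { 200, 0x00C10204u, "DISCONNECT with no active CONNECT" },
    { 200, 0x00C10205u, "TRANSLATE with no connection to DB2" },
    { 200, 0x00C10206u, "Wrong number of parameters or end-of-list bit off" },
    { 200, 0x00C10207u, "Unrecognized function parameter" },
    { 200, 0x00C10208u, "Two different DB2 subsystems from the same TCB" },
};

/* Look up a return/reason pair; 204 has no fixed reason code in the
 * table, so it is handled as a fallback. */
const char *caf_explain(int retcode, unsigned reascode)
{
    for (size_t i = 0; i < sizeof caf_codes / sizeof caf_codes[0]; i++)
        if (caf_codes[i].retcode == retcode &&
            caf_codes[i].reascode == reascode)
            return caf_codes[i].text;
    return retcode == 204 ? "CAF system error" : "Unknown CAF code";
}
```

A monitoring task might call such a helper after each CAF request to log a
readable explanation alongside the raw register values.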
The following pages contain sample JCL and assembler programs that access
the call attachment facility (CAF).
The sample JCL that follows is a model for invoking CAF in a batch
(non-TSO) environment. The DSNTRACE statement shown in this example is
optional.
.
.
.
//SYSPRINT DD SYSOUT=*
//DSNTRACE DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
The following code segment shows how an application can load entry points
DSNALI and DSNHLI2 for the call attachment language interface. Storing
the entry points in variables LIALI and LISQL ensures that the loads have
to be done only once.
When the module is done with DB2, you should delete the entries.
****************************** CONNECT ********************************
         L     R15,LIALI          Get the Language Interface address
         CALL  (15),(CONNECT,SSID,TECB,SECB,RIBPTR),VL,MF=(E,CAFCALL)
         BAL   R14,CHEKCODE       Check the return and reason codes
         CLC   CONTROL,CONTINUE   Is everything still OK?
         BNE   EXIT               If CONTROL not 'CONTINUE', stop loop
         USING RIB,R8             Prepare to access the RIB
         L     R8,RIBPTR          Access RIB to get DB2 release level
         WRITE 'The current DB2 release level is' RIBREL
****************************** OPEN ***********************************
         L     R15,LIALI          Get the Language Interface address
         CALL  (15),(OPEN,SSID,PLAN),VL,MF=(E,CAFCALL)
         BAL   R14,CHEKCODE       Check the return and reason codes
****************************** SQL ************************************
* Insert your SQL calls here. The DB2 Precompiler
* generates calls to entry point DSNHLI. You should
* code a dummy entry point of that name to intercept
* all SQL calls. A dummy DSNHLI is shown below.
****************************** CLOSE **********************************
         CLC   CONTROL,CONTINUE   Is everything still OK?
         BNE   EXIT               If CONTROL not 'CONTINUE', shutdown
         MVC   TRMOP,ABRT         Assume termination with ABRT parameter
         L     R4,SQLCODE         Put the SQLCODE into a register
         C     R4,CODE0           Examine the SQLCODE
         BZ    SYNCTERM           If zero, then CLOSE with SYNC parameter
         C     R4,CODE100         See if SQLCODE was 100
         BNE   DISC               If not 100, CLOSE with ABRT parameter
SYNCTERM MVC   TRMOP,SYNC         Good code, terminate with SYNC parameter
DISC     DS    0H                 Now build the CAF parmlist
         L     R15,LIALI          Get the Language Interface address
         CALL  (15),(CLOSE,TRMOP),VL,MF=(E,CAFCALL)
         BAL   R14,CHEKCODE       Check the return and reason codes
****************************** DISCONNECT *****************************
         CLC   CONTROL,CONTINUE   Is everything still OK?
         BNE   EXIT               If CONTROL not 'CONTINUE', stop loop
         L     R15,LIALI          Get the Language Interface address
         CALL  (15),(DISCON),VL,MF=(E,CAFCALL)
         BAL   R14,CHEKCODE       Check the return and reason codes
The code does not show a task that waits on the DB2 termination ECB. If
you like, you can code such a task and use the MVS WAIT macro to monitor
the ECB. You probably want this task to detach the sample code if the
termination ECB is posted. That task can also wait on the DB2 start-up
ECB. This sample waits on the start-up ECB at its own task level.
Variable  Usage
LIALI     The entry point that handles DB2 connection service requests.
LISQL     The entry point that handles SQL calls.
SSID      The DB2 subsystem identifier.
TECB      The address of the DB2 termination ECB.
SECB      The address of the DB2 start-up ECB.
RIBPTR    The address of the fullword that CAF sets to contain the
          RIB address.
PLAN      The name of the plan to be used on the OPEN call.
CONTROL   Used to shut down processing because of unsatisfactory return
          or reason codes. Subroutine CHEKCODE sets CONTROL.
CAFCALL   List-form parameter area for the CALL macro.
Figure 81 illustrates a way to check the return codes and the DB2
termination ECB after each connection service request and SQL call. The
routine sets the variable CONTROL to control further processing within
the module. First, a pseudocode version of the subroutine:
***********************************************************************
*                        CHEKCODE PSEUDOCODE                          *
***********************************************************************
*IF TECB is POSTed with the ABTERM or FORCE codes
*  THEN
*    CONTROL = 'SHUTDOWN'
*    WRITE 'DB2 found FORCE or ABTERM, shutting down'
*  ELSE                  /* Termination ECB was not POSTed  */
*    SELECT (RETCODE)    /* Look at the return code         */
*      WHEN (0) ;        /* Do nothing; everything is OK    */
*      WHEN (4) ;        /* Warning                         */
*        SELECT (REASCODE)       /* Look at the reason code */
*          WHEN ('00C10823'X)    /* DB2 / CAF release level mismatch */
*            WRITE 'Found a mismatch between DB2 and CAF release levels'
*          WHEN ('00C10824'X)    /* Ready for another CAF call */
*            CONTROL = 'RESTART' /* Start over, from the top */
*          OTHERWISE
*            WRITE 'Found unexpected R0 when R15 was 4'
*            CONTROL = 'SHUTDOWN'
*        END INNER-SELECT
*      WHEN (8,12)       /* Connection failure              */
*        SELECT (REASCODE)       /* Look at the reason code */
*          WHEN ('00F30002'X,    /* These mean that DB2 is down but */
*                '00F30012'X)    /* will POST SECB when up again    */
*            DO
*              WRITE 'DB2 is unavailable. I''ll tell you when it''s up.'
*              WAIT SECB         /* Wait for DB2 to come up */
*              WRITE 'DB2 is now available.'
*            END
*          /**********************************************************/
*          /* Insert tests for other DB2 connection failures here.   */
*          /* CAF Externals Specification lists other codes you can  */
*          /* receive. Handle them in whatever way is appropriate    */
*          /* for your application.                                  */
*          /**********************************************************/
*          OTHERWISE    /* Found a code we're not ready for */
*            WRITE 'Warning: DB2 connection failure. Cause unknown'
*            CALL DSNALI ('TRANSLATE',SQLCA) /* Fill in SQLCA */
*            WRITE SQLCODE and SQLERRM
*        END INNER-SELECT
*      WHEN (200)
*        WRITE 'CAF found user error. See DSNTRACE data set'
*      WHEN (204)
*        WRITE 'CAF system error. See DSNTRACE data set'
*      OTHERWISE
*        CONTROL = 'SHUTDOWN'
*        WRITE 'Got an unrecognized return code'
*    END MAIN SELECT
*    IF (RETCODE > 4) THEN      /* Was there a connection problem? */
*      CONTROL = 'SHUTDOWN'
* END CHEKCODE
***********************************************************************
* Subroutine CHEKCODE checks return codes from DB2 and Call Attach.   *
* When CHEKCODE receives control, R13 should point to the caller's    *
* save area.                                                          *
***********************************************************************
CHEKCODE DS    0H
         STM   R14,R12,12(R13)    Prolog
         ST    R15,RETCODE        Save the return code
         ST    R0,REASCODE        Save the reason code
         LA    R15,SAVEAREA       Get save area address
         ST    R13,4(,R15)        Chain the save areas
         ST    R15,8(,R13)        Chain the save areas
         LR    R13,R15            Put save area address in R13
*        ********************* HUNT FOR FORCE OR ABTERM ***************
         TM    TECB,POSTBIT       See if TECB was POSTed
         BZ    DOCHECKS           Branch if TECB was not POSTed
         CLC   TECBCODE(3),QUIESCE  Is this "STOP DB2 MODE=QUIESCE"?
         BE    DOCHECKS           If QUIESCE, continue checking codes
         MVC   CONTROL,SHUTDOWN   Shutdown
         WRITE 'Found FORCE or ABTERM, shutting down'
         B     ENDCCODE           Go to the end of CHEKCODE
DOCHECKS DS    0H                 Examine RETCODE and REASCODE
*        ********************* HUNT FOR 0 *****************************
         CLC   RETCODE,ZERO       Was it a zero?
         BE    ENDCCODE           Nothing to do in CHEKCODE for zero
*        ********************* HUNT FOR 4 *****************************
         CLC   RETCODE,FOUR       Was it a 4?
         BNE   HUNT8              If not a 4, hunt eights
         CLC   REASCODE,C10823    Was it a release level mismatch?
         BNE   HUNT824            Branch if not an 823
         WRITE 'Found a mismatch between DB2 and CAF release levels'
         B     ENDCCODE           We are done. Go to end of CHEKCODE
HUNT824  DS    0H                 Now look for 'CAF reset' reason code
         CLC   REASCODE,C10824    Was it 824? Are we ready to restart?
         BNE   UNRECOG            If not 824, got unknown code
         WRITE 'CAF is now ready for more input'
         MVC   CONTROL,RESTART    Indicate that we should re-CONNECT
         B     ENDCCODE           We are done. Go to end of CHEKCODE
UNRECOG  DS    0H
         WRITE 'Got RETCODE 4 and an unrecognized reason code'
         MVC   CONTROL,SHUTDOWN   Shutdown, serious problem
         B     ENDCCODE           We are done. Go to end of CHEKCODE
*        ********************* HUNT FOR 8 *****************************
HUNT8    DS    0H
         CLC   RETCODE,EIGHT      Hunt return code of 8
         BE    GOT8OR12
         CLC   RETCODE,TWELVE     Hunt return code of 12
         BNE   HUNT200
GOT8OR12 DS    0H                 Found return code of 8 or 12
         WRITE 'Found RETCODE of 8 or 12'
         CLC   REASCODE,F30002    Hunt for X'00F30002'
         BE    DB2DOWN
         CLC   REASCODE,F30012    Hunt for X'00F30012'
         BE    DB2DOWN
         WRITE 'DB2 connection failure with an unrecognized REASCODE'
         CLC   SQLCODE,ZERO       See if we need TRANSLATE
         BNE   A4TRANS            If SQLCODE not zero, skip TRANSLATE
*        ********************* TRANSLATE unrecognized RETCODEs ********
         WRITE 'SQLCODE 0 but R15 not, so TRANSLATE to get SQLCODE'
         L     R15,LIALI          Get the Language Interface address
         CALL  (15),(TRANSLAT,SQLCA),VL,MF=(E,CAFCALL)
         C     R0,C10205          Did the TRANSLATE work?
         BNE   A4TRANS            If not C10205, SQLERRM now filled in
         WRITE 'Not able to TRANSLATE the connection failure'
         B     ENDCCODE           Go to end of CHEKCODE
A4TRANS  DS    0H                 SQLERRM must be filled in to get here
* Note: your code should probably remove the X'FF'
* separators and format the SQLERRM feedback area.
* Alternatively, use DB2 Sample Application DSNTIAR
* to format a message.
         WRITE 'SQLERRM is:' SQLERRM
         B     ENDCCODE           We are done. Go to end of CHEKCODE
DB2DOWN  DS    0H                 DB2 is down; wait for start-up
         WRITE 'DB2 is down and I will tell you when it comes up'
         WAIT  ECB=SECB           Wait for DB2 to come up
         WRITE 'DB2 is now available'
         MVC   CONTROL,RESTART    Indicate that we should re-CONNECT
         B     ENDCCODE
*        ********************* HUNT FOR 200 ***************************
HUNT200  DS    0H                 Hunt return code of 200
         CLC   RETCODE,NUM200     Hunt 200
         BNE   HUNT204
         WRITE 'CAF found user error, see DSNTRACE data set'
         B     ENDCCODE           We are done. Go to end of CHEKCODE
*        ********************* HUNT FOR 204 ***************************
HUNT204  DS    0H                 Hunt return code of 204
         CLC   RETCODE,NUM204     Hunt 204
         BNE   WASSAT             If not 204, got strange code
         WRITE 'CAF found system error, see DSNTRACE data set'
         B     ENDCCODE           We are done. Go to end of CHEKCODE
*        ********************* UNRECOGNIZED RETCODE *******************
WASSAT   DS    0H
         WRITE 'Got an unrecognized RETCODE'
         MVC   CONTROL,SHUTDOWN   Shutdown
         B     ENDCCODE           We are done. Go to end of CHEKCODE
ENDCCODE DS    0H                 Should we shutdown?
         L     R4,RETCODE         Get a copy of the RETCODE
         C     R4,FOUR            Have a look at the RETCODE
         BNH   BYEBYE             If RETCODE <= 4 then leave CHEKCODE
         MVC   CONTROL,SHUTDOWN   Shutdown
BYEBYE   DS    0H                 Wrap up and leave CHEKCODE
         L     R13,4(,R13)        Point to caller's save area
         RETURN (14,12)           Return to the caller
Figure 81. Subroutine to Check Return Codes from CAF and DB2, in Assembler
Each of the four DB2 attachment facilities contains an entry point named
DSNHLI. When you use CAF, SQL statements result in BALR instructions to
DSNHLI in your program. The subroutine there merely passes control to
entry point DSNHLI2 in the DSNALI module, which is the same as entry point
DSNHLI in DSNALI. That technique finds the correct DSNHLI entry point
without the need to include DSNALI in your load module.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI                  *
***********************************************************************
         DS    0D
DSNHLI   CSECT                    Begin CSECT
         STM   R14,R12,12(R13)    Prologue
         LA    R15,SAVEHLI        Get save area address
         ST    R13,4(,R15)        Chain the save areas
         ST    R15,8(,R13)        Chain the save areas
         LR    R13,R15            Put save area address in R13
         L     R15,LISQL          Get the address of the real DSNHLI
         BALR  R14,R15            Do an SQL call
         L     R13,4(,R13)        Restore R13 (caller's save area addr)
         L     R14,12(,R13)       Restore R14 (return address)
         RETURN (1,12)            Restore R1-R12, NOT R0 and R15 (codes)
****************************** VARIABLES ******************************
SECB     DS    F                  DB2 start-up ECB
TECB     DS    F                  DB2 termination ECB
LIALI    DS    F                  DSNALI entry point address
LISQL    DS    F                  DSNHLI2 entry point address
SSID     DS    CL4                DB2 subsystem ID. CONNECT parameter
PLAN     DS    CL8                DB2 plan name. OPEN parameter
TRMOP    DS    CL4                CLOSE termination option (SYNC|ABRT)
RIBPTR   DS    F                  DB2 puts Release Info Block addr here
RETCODE  DS    F                  CHEKCODE saves R15 here
REASCODE DS    F                  CHEKCODE saves R0 here
CONTROL  DS    CL8                CONTINUE, SHUTDOWN, or RESTART
SAVEAREA DS    18F                Save area for CHEKCODE
****************************** CONSTANTS ******************************
SHUTDOWN DC    CL8'SHUTDOWN'      CONTROL value: Shutdown execution
RESTART  DC    CL8'RESTART '      CONTROL value: Restart execution
CONTINUE DC    CL8'CONTINUE'      CONTROL value: Everything OK, continue
CODE0    DC    F'0'               SQLCODE of 0
CODE100  DC    F'100'             SQLCODE of 100
QUIESCE  DC    XL3'000008'        TECB postcode: STOP DB2 MODE=QUIESCE
CONNECT  DC    CL12'CONNECT     ' Name of a CAF service. Must be CL12!
OPEN     DC    CL12'OPEN        ' Name of a CAF service. Must be CL12!
CLOSE    DC    CL12'CLOSE       ' Name of a CAF service. Must be CL12!
DISCON   DC    CL12'DISCONNECT  ' Name of a CAF service. Must be CL12!
TRANSLAT DC    CL12'TRANSLATE   ' Name of a CAF service. Must be CL12!
SYNC     DC    CL4'SYNC'          Termination option (COMMIT)
ABRT     DC    CL4'ABRT'          Termination option (ROLLBACK)
****************************** RETURN CODES (R15) FROM CALL ATTACH ****
ZERO     DC    F'0'               0
FOUR     DC    F'4'               4
EIGHT    DC    F'8'               8
TWELVE   DC    F'12'              12 (Call Attach return code in R15)
NUM200   DC    F'200'             200 (User error)
NUM204   DC    F'204'             204 (Call Attach system error)
****************************** REASON CODES (R00) FROM CALL ATTACH ****
C10205   DC    XL4'00C10205'      Call attach could not TRANSLATE
C10823   DC    XL4'00C10823'      Call attach found a release mismatch
C10824   DC    XL4'00C10824'      Call attach ready for more input
F30002   DC    XL4'00F30002'      DB2 subsystem not up
F30011   DC    XL4'00F30011'      DB2 subsystem not up
F30012   DC    XL4'00F30012'      DB2 subsystem not up
F30025   DC    XL4'00F30025'      DB2 is stopping (REASCODE)
*
*   Insert more codes here as necessary for your application
*
****************************** SQLCA and RIB
**************************
EXEC SQL INCLUDE SQLCA
DSNDRIB Get the DB2 Release Information Block
****************************** CALL macro parm list
*******************
CAFCALL CALL ,(*,*,*,*,*),VL,MFL
Most of the examples in this book refer to the tables described in this
appendix. As a group, the tables contain information that describes
employees, departments, projects, and activities, and they make up a
sample application that exemplifies most of the features of DB2. The
sample storage group, databases, table spaces, and tables are created
when you run the installation sample job DSNTEJ1. The CREATE INDEX
statements for the sample tables are not shown here; they, too, are
created by the DSNTEJ1 sample job.
The activity table describes the activities that can be performed during a
project. The table resides in database DSN8D31A and is created with:
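The CREATE statement itself is not reproduced here. The following is an
illustrative sketch of its general shape, reconstructed from the column
descriptions in Table 46; the data types shown are assumptions, not the
authoritative DSNTEJ1 text:

  CREATE TABLE DSN8310.ACT
        (ACTNO    SMALLINT    NOT NULL,
         ACTKWD   CHAR(6)     NOT NULL,
         ACTDESC  VARCHAR(20) NOT NULL,
         PRIMARY KEY (ACTNO))
    IN DATABASE DSN8D31A;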
A.1.1 Content
-------------------------------------------------------------
Table 46. Columns of the Activity Table
-------------------------------------------------------------
Column  Column Name  Description
------+------------+-----------------------------------------
1       ACTNO        Activity ID (the primary key)
2       ACTKWD       Activity keyword (up to six characters)
3       ACTDESC      Activity description
-------------------------------------------------------------
---------------------------------------------
Table 47. Indexes of the Activity Table
---------------------------------------------
Name            On Column  Type of Index
--------------+----------+-------------------
DSN8310.XACT1   ACTNO      Primary, ascending
DSN8310.XACT2   ACTKWD     Unique, ascending
---------------------------------------------
A.2.1 Content
------------------------------------------------------------------------
Table 48. Columns of the Department Table
------------------------------------------------------------------------
Column  Column Name  Description
------+------------+----------------------------------------------------
1       DEPTNO       Department ID, the primary key
2       DEPTNAME     A name describing the general activities of the
                     department
3       MGRNO        Employee number (EMPNO) of the department manager
4       ADMRDEPT     ID of the department to which this department
                     reports; the department at the highest level
                     reports to itself
5       LOCATION     The remote location name
------------------------------------------------------------------------
---------------------------------------------
Table 49. Indexes of the Department Table
---------------------------------------------
Name             On Column  Type of Index
---------------+----------+------------------
DSN8310.XDEPT1   DEPTNO     Primary, ascending
DSN8310.XDEPT2   MGRNO      Ascending
DSN8310.XDEPT3   ADMRDEPT   Ascending
---------------------------------------------
------------------------------------------------------------------
Table 50. DSN8310.DEPT: Department Table
------------------------------------------------------------------
DEPTNO  DEPTNAME                      MGRNO   ADMRDEPT  LOCATION
------+-----------------------------+-------+---------+----------
A00     SPIFFY COMPUTER SERVICE DIV.  000010  A00       --------
B01     PLANNING                      000020  A00       --------
C01     INFORMATION CENTER            000030  A00       --------
D01     DEVELOPMENT CENTER            ------  A00       --------
E01     SUPPORT SERVICES              000050  A00       --------
D11     MANUFACTURING SYSTEMS         000060  D01       --------
D21     ADMINISTRATION SYSTEMS        000070  D01       --------
E11     OPERATIONS                    000090  E01       --------
E21     SOFTWARE SUPPORT              000100  E01       --------
F22     BRANCH OFFICE F2              ------  E01       --------
G22     BRANCH OFFICE G2              ------  E01       --------
H22     BRANCH OFFICE H2              ------  E01       --------
I22     BRANCH OFFICE I2              ------  E01       --------
J22     BRANCH OFFICE J2              ------  E01       --------
------------------------------------------------------------------
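As an illustration of how the sample data can be used (this query is
not part of the original appendix), the departments that report to the
SPIFFY COMPUTER SERVICE DIV. could be listed with:

  SELECT DEPTNO, DEPTNAME
    FROM DSN8310.DEPT
    WHERE ADMRDEPT = 'A00'
      AND DEPTNO <> 'A00';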
The employee table, shown in Table 53 in topic A.3.2 and Table 54 in
topic A.3.2, resides in the partitioned table space
DSN8D31A.DSN8S31E. Because it has a foreign key referencing DEPT, that
table and the index on its primary key must be created first. Then EMP
is created with:
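The CREATE statement itself is not reproduced here. The following
sketch shows its general shape, reconstructed from the column list in
Table 51; the data types and the delete rule are assumptions, not the
authoritative DSNTEJ1 text:

  CREATE TABLE DSN8310.EMP
        (EMPNO     CHAR(6)      NOT NULL,
         FIRSTNME  VARCHAR(12)  NOT NULL,
         MIDINIT   CHAR(1)      NOT NULL,
         LASTNAME  VARCHAR(15)  NOT NULL,
         WORKDEPT  CHAR(3),
         PHONENO   CHAR(4),
         HIREDATE  DATE,
         JOB       CHAR(8),
         EDLEVEL   SMALLINT,
         SEX       CHAR(1),
         BIRTHDATE DATE,
         SALARY    DECIMAL(9,2),
         BONUS     DECIMAL(9,2),
         COMM      DECIMAL(9,2),
         PRIMARY KEY (EMPNO),
         FOREIGN KEY (WORKDEPT) REFERENCES DSN8310.DEPT
           ON DELETE SET NULL)
    IN DSN8D31A.DSN8S31E;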
A.3.1 Content
------------------------------------------------------------------------
Table 51. Columns of the Employee Table
------------------------------------------------------------------------
Column  Column Name  Description
------+------------+----------------------------------------------------
1       EMPNO        Employee number (the primary key)
2       FIRSTNME     First name of employee
3       MIDINIT      Middle initial of employee
4       LASTNAME     Last name of employee
5       WORKDEPT     ID of department in which the employee works
6       PHONENO      Employee telephone number
7       HIREDATE     Date of hire
8       JOB          Job held by the employee
9       EDLEVEL      Number of years of formal education
10      SEX          Sex of the employee (M or F)
11      BIRTHDATE    Date of birth
12      SALARY       Yearly salary in dollars
13      BONUS        Yearly bonus in dollars
14      COMM         Yearly commission in dollars
------------------------------------------------------------------------
-------------------------------------------------------
Table 52. Indexes of the Employee Table
-------------------------------------------------------
Name            On Column  Type of Index
--------------+----------+----------------------------
DSN8310.XEMP1   EMPNO      Primary, partitioned,
                           ascending
DSN8310.XEMP2   WORKDEPT   Ascending
-------------------------------------------------------
---------------------------------------------------------------------
Table 53. Left Half of DSN8310.EMP: Employee Table. Note that a blank
in the MIDINIT column is an actual value of ' ' rather than null.
---------------------------------------------------------------------
EMPNO   FIRSTNME   MIDINIT  LASTNAME    WORKDEPT  PHONENO  HIREDATE
------+----------+--------+-----------+---------+--------+----------
000010  CHRISTINE  I        HAAS        A00       3978     1965-01-01
000020  MICHAEL    L        THOMPSON    B01       3476     1973-10-10
000030  SALLY      A        KWAN        C01       4738     1975-04-05
000050  JOHN       B        GEYER       E01       6789     1949-08-17
000060  IRVING     F        STERN       D11       6423     1973-09-14
000070  EVA        D        PULASKI     D21       7831     1980-09-30
000090  EILEEN     W        HENDERSON   E11       5498     1970-08-15
000100  THEODORE   Q        SPENSER     E21       0972     1980-06-19
000110  VINCENZO   G        LUCCHESSI   A00       3490     1958-05-16
000120  SEAN                O'CONNELL   A00       2167     1963-12-05
000130  DOLORES    M        QUINTANA    C01       4578     1971-07-28
000140  HEATHER    A        NICHOLLS    C01       1793     1976-12-15
000150  BRUCE               ADAMSON     D11       4510     1972-02-12
000160  ELIZABETH  R        PIANKA      D11       3782     1977-10-11
000170  MASATOSHI  J        YOSHIMURA   D11       2890     1978-09-15
000180  MARILYN    S        SCOUTTEN    D11       1682     1973-07-07
000190  JAMES      H        WALKER      D11       2986     1974-07-26
000200  DAVID               BROWN       D11       4501     1966-03-03
000210  WILLIAM    T        JONES       D11       0942     1979-04-11
000220  JENNIFER   K        LUTZ        D11       0672     1968-08-29
000230  JAMES      J        JEFFERSON   D21       2094     1966-11-21
000240  SALVATORE  M        MARINO      D21       3780     1979-12-05
000250  DANIEL     S        SMITH       D21       0961     1969-10-30
000260  SYBIL      P        JOHNSON     D21       8953     1975-09-11
000270  MARIA      L        PEREZ       D21       9001     1980-09-30
000280  ETHEL      R        SCHNEIDER   E11       8997     1967-03-24
000290  JOHN       R        PARKER      E11       4502     1980-05-30
000300  PHILIP     X        SMITH       E11       2095     1972-06-19
000310  MAUDE      F        SETRIGHT    E11       3332     1964-09-12
000320  RAMLAL     V        MEHTA       E21       9990     1965-07-07
000330  WING                LEE         E21       2103     1976-02-23
000340  JASON      R        GOUNOT      E21       5698     1947-05-05
200010  DIAN       J        HEMMINGER   A00       3978     1965-01-01
200120  GREG                ORLANDO     A00       2167     1972-05-05
200140  KIM        N        NATZ        C01       1793     1976-12-15
200170  KIYOSHI             YAMAMOTO    D11       2890     1978-09-15
200220  REBA       K        JOHN        D11       0672     1968-08-29
200240  ROBERT     M        MONTEVERDE  D21       3780     1979-12-05
200280  EILEEN     R        SCHWARTZ    E11       8997     1967-03-24
200310  MICHELLE   F        SPRINGER    E11       3332     1964-09-12
200330  HELENA              WONG        E21       2103     1976-02-23
200340  ROY        R        ALONZO      E21       5698     1947-05-05
---------------------------------------------------------------------
------------------------------------------------------------------------
Table 54. Right Half of DSN8310.EMP: Employee Table
------------------------------------------------------------------------
(EMPNO)   JOB       EDLEVEL  SEX  BIRTHDATE   SALARY    BONUS    COMM
--------+---------+--------+----+-----------+---------+--------+--------
(000010)  PRES      18       F    1933-08-14  52750.00  1000.00  4220.00
(000020)  MANAGER   18       M    1948-02-02  41250.00  800.00   3300.00
(000030)  MANAGER   20       F    1941-05-11  38250.00  800.00   3060.00
(000050)  MANAGER   16       M    1925-09-15  40175.00  800.00   3214.00
(000060)  MANAGER   16       M    1945-07-07  32250.00  600.00   2580.00
(000070)  MANAGER   16       F    1953-05-26  36170.00  700.00   2893.00
(000090)  MANAGER   16       F    1941-05-15  29750.00  600.00   2380.00
(000100)  MANAGER   14       M    1956-12-18  26150.00  500.00   2092.00
(000110)  SALESREP  19       M    1929-11-05  46500.00  900.00   3720.00
(000120)  CLERK     14       M    1942-10-18  29250.00  600.00   2340.00
(000130)  ANALYST   16       F    1925-09-15  23800.00  500.00   1904.00
(000140)  ANALYST   18       F    1946-01-19  28420.00  600.00   2274.00
(000150)  DESIGNER  16       M    1947-05-17  25280.00  500.00   2022.00
(000160)  DESIGNER  17       F    1955-04-12  22250.00  400.00   1780.00
(000170)  DESIGNER  16       M    1951-01-05  24680.00  500.00   1974.00
(000180)  DESIGNER  17       F    1949-02-21  21340.00  500.00   1707.00
(000190)  DESIGNER  16       M    1952-06-25  20450.00  400.00   1636.00
(000200)  DESIGNER  16       M    1941-05-29  27740.00  600.00   2217.00
(000210)  DESIGNER  17       M    1953-02-23  18270.00  400.00   1462.00
(000220)  DESIGNER  18       F    1948-03-19  29840.00  600.00   2387.00
(000230)  CLERK     14       M    1935-05-30  22180.00  400.00   1774.00
(000240)  CLERK     17       M    1954-03-31  28760.00  600.00   2301.00
(000250)  CLERK     15       M    1939-11-12  19180.00  400.00   1534.00
(000260)  CLERK     16       F    1936-10-05  17250.00  300.00   1380.00
(000270)  CLERK     15       F    1953-05-26  27380.00  500.00   2190.00
(000280)  OPERATOR  17       F    1936-03-28  26250.00  500.00   2100.00
(000290)  OPERATOR  12       M    1946-07-09  15340.00  300.00   1227.00
(000300)  OPERATOR  14       M    1936-10-27  17750.00  400.00   1420.00
(000310)  OPERATOR  12       F    1931-04-21  15900.00  300.00   1272.00
(000320)  FIELDREP  16       M    1932-08-11  19950.00  400.00   1596.00
(000330)  FIELDREP  14       M    1941-07-18  25370.00  500.00   2030.00
(000340)  FIELDREP  16       M    1926-05-17  23840.00  500.00   1907.00
(200010)  SALESREP  18       F    1933-08-14  46500.00  1000.00  4220.00
(200120)  CLERK     14       M    1942-10-18  29250.00  600.00   2340.00
(200140)  ANALYST   18       F    1946-01-19  28420.00  600.00   2274.00
(200170)  DESIGNER  16       M    1951-01-05  24680.00  500.00   1974.00
(200220)  DESIGNER  18       F    1948-03-19  29840.00  600.00   2387.00
(200240)  CLERK     17       M    1954-03-31  28760.00  600.00   2301.00
(200280)  OPERATOR  17       F    1936-03-28  26250.00  500.00   2100.00
(200310)  OPERATOR  12       F    1931-04-21  15900.00  300.00   1272.00
(200330)  FIELDREP  14       F    1941-07-18  25370.00  500.00   2030.00
(200340)  FIELDREP  16       M    1926-05-17  23840.00  500.00   1907.00
------------------------------------------------------------------------
The project table describes each project that the business is currently
undertaking. Data contained in each row include the project number,
name, person responsible, and schedule dates.
Because the table is self-referencing, the foreign key for that
constraint must be added later with:
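The ALTER statement itself is not reproduced here. A sketch of its
likely shape, based on the MAJPROJ column described in Table 55 (the
constraint's delete rule is an assumption):

  ALTER TABLE DSN8310.PROJ
    ADD FOREIGN KEY (MAJPROJ)
        REFERENCES DSN8310.PROJ
        ON DELETE CASCADE;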
A.4.1 Content
The following table shows the content of the project table columns:
------------------------------------------------------------------------
Table 55. Columns of the Project Table
------------------------------------------------------------------------
Column  Column Name  Description
------+------------+----------------------------------------------------
1       PROJNO       Project ID (the primary key)
2       PROJNAME     Project name
3       DEPTNO       ID of department responsible for the project
4       RESPEMP      ID of employee responsible for the project
5       PRSTAFF      Estimated mean project staffing (mean number of
                     persons) needed between PRSTDATE and PRENDATE to
                     achieve the whole project, including any
                     subprojects
6       PRSTDATE     Estimated project start date
7       PRENDATE     Estimated project end date
8       MAJPROJ      ID of any major project of which the subject
                     project may be a part
------------------------------------------------------------------------
----------------------------------------------
Table 56. Indexes of the Project Table
----------------------------------------------
Name             On Column  Type of Index
---------------+----------+-------------------
DSN8310.XPROJ1   PROJNO     Primary, ascending
DSN8310.XPROJ2   RESPEMP    Ascending
----------------------------------------------
A.5.1 Content
------------------------------------------------------------------------
Table 57. Columns of the Project Activity Table
------------------------------------------------------------------------
Column  Column Name  Description
------+------------+----------------------------------------------------
1       PROJNO       Project ID
2       ACTNO        Activity ID
3       ACSTAFF      Estimated mean number of employees needed to staff
                     the activity
4       ACSTDATE     Estimated activity start date
5       ACENDATE     Estimated activity completion date
------------------------------------------------------------------------
-----------------------------------------------
Table 58. Index of the Project Activity Table
-----------------------------------------------
Name               On Columns  Type of Index
-----------------+-----------+-----------------
DSN8310.XPROJAC1   PROJNO,     Primary,
                   ACTNO,      ascending
                   ACSTDATE
-----------------------------------------------
A.6.1 Content
The following table shows the content of the employee to project activity
table's columns:
------------------------------------------------------------------------
Table 59. Columns of the Employee to Project Activity Table
------------------------------------------------------------------------
Column  Column Name  Description
------+------------+----------------------------------------------------
1       EMPNO        Employee ID number
2       PROJNO       PROJNO of the project to which the employee is
                     assigned
3       ACTNO        ID of an activity within a project to which an
                     employee is assigned
4       EMPTIME      A proportion of the employee's full time (between
                     0.00 and 1.00) to be spent on the project from
                     EMSTDATE to EMENDATE
5       EMSTDATE     Date the activity starts
6       EMENDATE     Completion date of the activity
------------------------------------------------------------------------
-----------------------------------------------------
Table 60. Indexes of the Employee to Project
Activity Table
-----------------------------------------------------
Name                   On Columns  Type of Index
---------------------+-----------+------------------
DSN8310.XEMPPROJACT1   PROJNO,     Unique,
                       ACTNO,      ascending
                       EMSTDATE,
                       EMPNO
DSN8310.XEMPPROJACT2   EMPNO       Ascending
-----------------------------------------------------
Diagram: referential structure of the sample application, showing the
delete rules among the tables as far as they can be recovered. DEPT
references itself on ADMRDEPT (CASCADE) and references EMP on MGRNO
(SET NULL); EMP references DEPT on WORKDEPT (SET NULL); PROJ references
DEPT and EMP (RESTRICT) and references itself on MAJPROJ (CASCADE);
PROJACT references PROJ and ACT (RESTRICT); EMPPROJACT references EMP
and PROJACT (RESTRICT).
Figure 84 shows how the sample tables are related to databases and
storage groups. Two databases are used to illustrate that related
tables can be kept in separate databases; normally, related data is
stored in the same database.
Figure 84. Relationship of Sample Tables to Databases and Storage
Groups. Storage group DSN8G310 contains two databases: DSN8D31A, for
the application data tables, with table spaces DSN8S31D (department
table), DSN8S31E (employee table), and separate spaces for the other
application tables; and DSN8D31P, for the programming tables, with
table space DSN8S31C, which is common for programming tables.
In addition to the storage group and databases shown in Figure 84, the
storage group DSN8G31U and database DSN8D31U are created when
you run
DSNTEJ2A.
A.8.2 Databases
The default database, created when DB2 is installed, is not used to store
the sample application data. Two databases are used: one for tables
related to applications, the other for tables related to programs. They
are defined by the following statements:
CREATE DATABASE DSN8D31A
STOGROUP DSN8G310
BUFFERPOOL BP0;
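Only the statement for DSN8D31A appears above. The companion statement
for the programming database presumably follows the same pattern; a
sketch, assuming the same storage group and buffer pool:

CREATE DATABASE DSN8D31P
  STOGROUP DSN8G310
  BUFFERPOOL BP0;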
The following table spaces are explicitly defined by the statements shown
below. The table spaces not explicitly defined are created implicitly in
the DSN8D31A database, using the default space attributes.
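As an illustration of what such a definition looks like, the department
table space might be defined as follows; the space quantities, lock
size, and close option shown here are assumptions, not the
authoritative DSNTEJ1 text:

  CREATE TABLESPACE DSN8S31D
    IN DSN8D31A
    USING STOGROUP DSN8G310
      PRIQTY 20
      SECQTY 20
    LOCKSIZE PAGE
    BUFFERPOOL BP0
    CLOSE NO;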
You can examine the source code for the sample application programs in
the
online sample library included with the DB2 product. The name of this
sample library is DSN310.SDSNSAMP.
* Project structures
* Project activity listings
* Individual project processing
* Individual project activity estimate processing
* Individual project staffing processing.
--------------------------------------------------------------------------
Table 61. Application Languages and Environments
--------------------------------------------------------------------------
Programs       ISPF/TSO       IMS         CICS        Batch       SPUFI
-------------+--------------+-----------+-----------+-----------+--------
Dynamic SQL    Assembler
Programs       PL/I
-------------+--------------+-----------+-----------+-----------+--------
Exit Routines  Assembler      Assembler   Assembler   Assembler   Assembler
-------------+--------------+-----------+-----------+-----------+--------
Organization   COBOL(1)       COBOL       COBOL
                              PL/I        PL/I
-------------+--------------+-----------+-----------+-----------+--------
Phone          COBOL          PL/I        PL/I        COBOL
               PL/I                                   FORTRAN
               Assembler(2)                           PL/I
                                                      C
-------------+--------------+-----------+-----------+-----------+--------
Project                       PL/I        PL/I
-------------+--------------+-----------+-----------+-----------+--------
SQLCA          Assembler      Assembler   Assembler   Assembler
Formatting
Routines
--------------------------------------------------------------------------
Note:
B.1.1 TSO
--------------------------------------------------------------------------
Table 62. Sample DB2 Applications for TSO
--------------------------------------------------------------------------
                         Preparation
              Program    JCL Member   Attachment
Application   Name       Name         Facility    Description
------------+----------+------------+-----------+------------------------
Phone         DSN8BC3    DSNTEJ2C     DSNELI      This COBOL batch
                                                  program lists employee
                                                  telephone numbers and
                                                  updates them if
                                                  requested
------------+----------+------------+-----------+------------------------
Phone         DSN8BD3    DSNTEJ2D     DSNELI      This C program lists
                                                  employee telephone
                                                  numbers and updates
                                                  them if requested
------------+----------+------------+-----------+------------------------
Phone         DSN8BP3    DSNTEJ2P     DSNELI      This PL/I batch program
                                                  lists employee
                                                  telephone numbers and
                                                  updates them if
                                                  requested
------------+----------+------------+-----------+------------------------
Phone         DSN8BF3    DSNTEJ2F     DSNELI      This FORTRAN program
                                                  lists employee
                                                  telephone numbers and
                                                  updates them if
                                                  requested
------------+----------+------------+-----------+------------------------
Organization  DSN8HC3    DSNTEJ3C     DSNALI      This COBOL ISPF program
                         DSNTEJ6                  displays and updates
                                                  information about a
                                                  local department. It
                                                  can also display and
                                                  update information
                                                  about an employee at a
                                                  local or remote
                                                  location.
------------+----------+------------+-----------+------------------------
Phone         DSN8SC3    DSNTEJ3C     DSNALI      This COBOL ISPF program
                                                  lists employee
                                                  telephone numbers and
                                                  updates them if
                                                  requested.
------------+----------+------------+-----------+------------------------
Phone         DSN8SP3    DSNTEJ3P     DSNALI      This PL/I ISPF program
                                                  lists employee
                                                  telephone numbers and
                                                  updates them if
                                                  requested
------------+----------+------------+-----------+------------------------
UNLOAD        DSNTIAUL   DSNTEJ2A     DSNELI      This assembler language
                                                  program allows users to
                                                  unload the data from a
                                                  table or view and to
                                                  produce LOAD utility
                                                  control statements for
                                                  the data
------------+----------+------------+-----------+------------------------
Dynamic SQL   DSNTIAD    DSNTIJTM     DSNELI      This assembler language
                                                  program dynamically
                                                  executes non-SELECT
                                                  statements read in from
                                                  SYSIN; that is, it uses
                                                  dynamic SQL to execute
                                                  non-SELECT SQL
                                                  statements
------------+----------+------------+-----------+------------------------
Dynamic SQL   DSNTEP2    DSNTEJ1P     DSNELI      This PL/I program
                                                  dynamically executes
                                                  SQL statements read in
                                                  from SYSIN. Unlike
                                                  DSNTIAD, this
                                                  application can also
                                                  execute SELECT
                                                  statements
--------------------------------------------------------------------------
B.1.2 IMS
-------------------------------------------------------------------------
Table 63. Sample DB2 Applications for IMS
------------------------------------------------------------------------
Program JCL Member
Application Name Name Description
-----------------+------------------+-----------------+-----------------
Organization DSN8IC0 DSNTEJ4C IMS COBOL
DSN8IC1 Organization
DSN8IC2 Application
-----------------+------------------+-----------------+-----------------
Organization DSN8IP0 DSNTEJ4P IMS PL/I
DSN8IP1 Organization
DSN8IP2 Application
-----------------+------------------+-----------------+-----------------
Project DSN8IP6 DSNTEJ4P IMS PL/I
DSN8IP7 Project
DSN8IP8 Application
-----------------+------------------+-----------------+-----------------
Phone DSN8IP3 DSNTEJ4P IMS PL/I Phone
Application.
This program
lists employee
telephone
numbers and
updates them if
requested
----------------------------------------------------------------------
B.1.3 CICS
-------------------------------------------------------------------------
Table 64. Sample DB2 Applications for CICS
------------------------------------------------------------------------
Program JCL Member
Application Name Name Description
-----------------+------------------+-----------------+-----------------
Organization DSN8CC0 DSNTEJ5C CICS COBOL
DSN8CC1 Organization
DSN8CC2 Application
-----------------+------------------+-----------------+-----------------
Organization DSN8CP0 DSNTEJ5P CICS PL/I
DSN8CP1 Organization
DSN8CP2 Application
-----------------+------------------+-----------------+-----------------
Project DSN8CP6 DSNTEJ5P CICS PL/I
DSN8CP7 Project
DSN8CP8 Application
-----------------+------------------+-----------------+-----------------
Phone DSN8CP3 DSNTEJ5P CICS PL/I Phone
Application.
This program
lists employee
telephone
numbers and
updates them if
requested
----------------------------------------------------------------------
(1) In this case, the number of columns returned and their data types are
known when the program is written.
(2) In this case, the number of columns returned and their data types are
not known when the program is written.
*****************************************************
* SQL DESCRIPTOR AREA IN VS COBOL II
*****************************************************
01 SQLDA.
02 SQLDAID PIC X(8) VALUE 'SQLDA '.
02 SQLDABC PIC S9(8) COMPUTATIONAL VALUE 33016.
02 SQLN PIC S9(4) COMPUTATIONAL VALUE 750.
02 SQLD PIC S9(4) COMPUTATIONAL VALUE 0.
02 SQLVAR OCCURS 1 TO 750 TIMES
DEPENDING ON SQLN.
03 SQLTYPE PIC S9(4) COMPUTATIONAL.
03 SQLLEN PIC S9(4) COMPUTATIONAL.
03 SQLDATA POINTER.
03 SQLIND POINTER.
03 SQLNAME.
49 SQLNAMEL PIC S9(4) COMPUTATIONAL.
49 SQLNAMEC PIC X(30).
The pointers for data and indicators are represented directly as pointers
for SQLDATA and SQLIND. The varying-length character string for column
names is provided as a structure for SQLNAME.
The SET statement sets a pointer from the address of an area in the
linkage section or another pointer; the statement can also set the address
of an area in the linkage section. These uses of the SET statement are
provided in Figure 88 in topic C.1.4. The SET statement does not permit
the use of an address in the WORKING-STORAGE section.
C.1.4 Example
* *
* *PSEUDOCODE* *
* *
* PROCEDURE *
* CALL DSN8BCU2. *
* END. *
* *
*---------------------------------------------------------------*
/
IDENTIFICATION DIVISION.
*-----------------------
PROGRAM-ID. DSN8BCU1.
*
ENVIRONMENT DIVISION.
*
CONFIGURATION SECTION.
DATA DIVISION.
*
WORKING-STORAGE SECTION.
*
01 WORKAREA-IND.
02 WORKIND PIC S9(4) COMP OCCURS 750 TIMES.
01 RECWORK.
02 RECWORK-LEN PIC S9(8) COMP VALUE 32700.
02 RECWORK-CHAR PIC X(1) OCCURS 32700 TIMES.
*
PROCEDURE DIVISION.
*
CALL 'DSN8BCU2' USING WORKAREA-IND RECWORK.
GOBACK.
Figure 88. Called Program that does Pointer Manipulation (Part 1 of 10)
* OUTPUT *
* *
* SYMBOLIC LABEL/NAME SYSPRINT *
* DESCRIPTION PRINTED RESULTS *
* *
* SYMBOLIC LABEL/NAME SYSREC01 *
* DESCRIPTION UNLOADED TABLE DATA *
* *
* EXIT-NORMAL RETURN CODE 0 NORMAL COMPLETION *
* *
* EXIT-ERROR *
* *
* RETURN CODE NONE *
* *
* ABEND CODES NONE *
* *
* ERROR-MESSAGES *
* DSNT490I SAMPLE COBOL DATA UNLOAD PROGRAM RELEASE 3.0 *
* - THIS IS THE HEADER, INDICATING A NORMAL *
* - START FOR THIS PROGRAM. *
* DSNT493I SQL ERROR, SQLCODE NNNNNNNN *
* - AN SQL ERROR OR WARNING WAS ENCOUNTERED *
* - ADDITIONAL INFORMATION FROM DSNTIAR *
* - FOLLOWS THIS MESSAGE. *
* DSNT495I SUCCESSFUL UNLOAD XXXXXXXX ROWS OF *
* TABLE TTTTTTTT *
* - THE UNLOAD WAS SUCCESSFUL. XXXXXXXX IS *
* - THE NUMBER OF ROWS UNLOADED. TTTTTTTT *
* - IS THE NAME OF THE TABLE OR VIEW FROM *
* - WHICH IT WAS UNLOADED. *
* DSNT496I UNRECOGNIZED DATA TYPE CODE OF NNNNN *
* - THE PREPARE RETURNED AN INVALID DATA *
* - TYPE CODE. NNNNN IS THE CODE, PRINTED *
* - IN DECIMAL. USUALLY AN ERROR IN *
* - THIS ROUTINE OR A NEW DATA TYPE. *
* DSNT497I RETURN CODE FROM MESSAGE ROUTINE DSNTIAR *
* - THE MESSAGE FORMATTING ROUTINE DETECTED *
* - AN ERROR. SEE THAT ROUTINE FOR RETURN *
* - CODE INFORMATION. USUALLY AN ERROR IN *
* - THIS ROUTINE. *
* DSNT498I ERROR, NO VALID COLUMNS FOUND *
* - THE PREPARE RETURNED DATA WHICH DID NOT *
* - PRODUCE A VALID OUTPUT RECORD. *
* - USUALLY AN ERROR IN THIS ROUTINE. *
* DSNT499I NO ROWS FOUND IN TABLE OR VIEW *
* - THE CHOSEN TABLE OR VIEWS DID NOT *
* - RETURN ANY ROWS. *
* ERROR MESSAGES FROM MODULE DSNTIAR *
* - WHEN AN ERROR OCCURS, THIS MODULE *
* - PRODUCES CORRESPONDING MESSAGES. *
* *
* EXTERNAL REFERENCES *
* ROUTINES/SERVICES *
* DSNTIAR - TRANSLATE SQLCA INTO MESSAGES *
* *
* DATA-AREAS NONE *
* *
* CONTROL-BLOCKS *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES NONE *
* *
* CHANGE-ACTIVITY NONE *
* *
Figure 89. Called Program that does Pointer Manipulation (Part 2 of 10)
* *PSEUDOCODE* *
* *
* PROCEDURE *
* EXEC SQL DECLARE DT CURSOR FOR SEL END-EXEC. *
* EXEC SQL DECLARE SEL STATEMENT END-EXEC. *
* INITIALIZE THE DATA, OPEN FILES. *
* OBTAIN STORAGE FOR THE SQLDA AND THE DATA RECORDS. *
* READ A TABLE NAME. *
* OPEN SYSREC01. *
* BUILD THE SQL STATEMENT TO BE EXECUTED *
* EXEC SQL PREPARE SQL STATEMENT INTO SQLDA END-EXEC. *
* SET UP ADDRESSES IN THE SQLDA FOR DATA. *
* INITIALIZE DATA RECORD COUNTER TO 0. *
* EXEC SQL OPEN DT END-EXEC. *
* DO WHILE SQLCODE IS 0. *
* EXEC SQL FETCH DT USING DESCRIPTOR SQLDA END-EXEC. *
* ADD IN MARKERS TO DENOTE NULLS. *
* WRITE THE DATA TO SYSREC01. *
* INCREMENT DATA RECORD COUNTER. *
* END. *
* EXEC SQL CLOSE DT END-EXEC. *
* INDICATE THE RESULTS OF THE UNLOAD OPERATION. *
* CLOSE THE SYSIN, SYSPRINT, AND SYSREC01 FILES. *
* END. *
*---------------------------------------------------------------*
/
IDENTIFICATION DIVISION.
*-----------------------
PROGRAM-ID. DSN8BCU2.
*
ENVIRONMENT DIVISION.
*--------------------
CONFIGURATION SECTION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT SYSIN
ASSIGN TO DA-S-SYSIN.
SELECT SYSPRINT
ASSIGN TO UT-S-SYSPRINT.
SELECT SYSREC01
ASSIGN TO DA-S-SYSREC01.
*
DATA DIVISION.
*-------------
*
FILE SECTION.
FD SYSIN
RECORD CONTAINS 80 CHARACTERS
BLOCK CONTAINS 0 RECORDS
LABEL RECORDS ARE OMITTED
RECORDING MODE IS F.
01 CARDREC PIC X(80).
*
FD SYSPRINT
RECORD CONTAINS 120 CHARACTERS
LABEL RECORDS ARE OMITTED
DATA RECORD IS MSGREC
RECORDING MODE IS F.
01 MSGREC PIC X(120).
*
Figure 90. Called Program that does Pointer Manipulation (Part 3 of 10)
FD SYSREC01
RECORD CONTAINS 5 TO 32704 CHARACTERS
LABEL RECORDS ARE OMITTED
DATA RECORD IS REC01
RECORDING MODE IS V.
01 REC01.
02 REC01-LEN PIC S9(8) COMP.
02 REC01-CHAR PIC X(1) OCCURS 1 TO 32700 TIMES
DEPENDING ON REC01-LEN.
/
WORKING-STORAGE SECTION.
*
*****************************************************
* STRUCTURE FOR INPUT *
*****************************************************
01 IOAREA.
02 TNAME PIC X(72).
02 FILLER PIC X(08).
01 STMTBUF.
49 STMTLEN PIC S9(4) COMP VALUE 92.
49 STMTCHAR PIC X(92).
01 STMTBLD.
02 FILLER PIC X(20) VALUE 'SELECT * FROM'.
02 STMTTAB PIC X(72).
*
*****************************************************
* REPORT HEADER STRUCTURE *
*****************************************************
01 HEADER.
02 FILLER PIC X(35)
VALUE ' DSNT490I SAMPLE COBOL DATA UNLOAD '.
02 FILLER PIC X(85) VALUE 'PROGRAM RELEASE 3.0'.
01 MSG-SQLERR.
02 FILLER PIC X(31)
VALUE ' DSNT493I SQL ERROR, SQLCODE '.
02 MSG-MINUS PIC X(1).
02 MSG-PRINT-CODE PIC 9(8).
02 FILLER PIC X(81) VALUE ' '.
01 UNLOADED.
02 FILLER PIC X(28)
VALUE ' DSNT495I SUCCESSFUL UNLOAD '.
02 ROWS PIC 9(8).
02 FILLER PIC X(15) VALUE ' ROWS OF TABLE '.
02 TABLENAM PIC X(72) VALUE ' '.
01 BADTYPE.
02 FILLER PIC X(42)
VALUE ' DSNT496I UNRECOGNIZED DATA TYPE CODE OF '.
02 TYPCOD PIC 9(8).
02 FILLER PIC X(71) VALUE ' '.
01 MSGRETCD.
02 FILLER PIC X(42)
VALUE ' DSNT497I RETURN CODE FROM MESSAGE ROUTINE'.
02 FILLER PIC X(9) VALUE 'DSNTIAR '.
02 RETCODE PIC 9(8).
02 FILLER PIC X(62) VALUE ' '.
01 MSGNOCOL.
02 FILLER PIC X(120)
VALUE ' DSNT498I ERROR, NO VALID COLUMNS FOUND'.
01 MSG-NOROW.
02 FILLER PIC X(120)
VALUE ' DSNT499I NO ROWS FOUND IN TABLE OR VIEW'.
Figure 91. Called Program that does Pointer Manipulation (Part 4 of 10)
*****************************************************
* WORKAREAS *
*****************************************************
77 NOT-FOUND PIC S9(8) COMP VALUE +100.
*****************************************************
* VARIABLES FOR ERROR-MESSAGE FORMATTING *
*****************************************************
01 ERROR-MESSAGE.
02 ERROR-LEN PIC S9(4) COMP VALUE +960.
02 ERROR-TEXT PIC X(120) OCCURS 8 TIMES
INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(8) COMP VALUE +120.
*****************************************************
* SQL DESCRIPTOR AREA *
*****************************************************
01 SQLDA.
02 SQLDAID PIC X(8) VALUE 'SQLDA '.
02 SQLDABC PIC S9(8) COMPUTATIONAL VALUE 33016.
02 SQLN PIC S9(4) COMPUTATIONAL VALUE 750.
02 SQLD PIC S9(4) COMPUTATIONAL VALUE 0.
02 SQLVAR OCCURS 1 TO 750 TIMES
DEPENDING ON SQLN.
03 SQLTYPE PIC S9(4) COMPUTATIONAL.
03 SQLLEN PIC S9(4) COMPUTATIONAL.
03 SQLDATA POINTER.
03 SQLIND POINTER.
03 SQLNAME.
49 SQLNAMEL PIC S9(4) COMPUTATIONAL.
49 SQLNAMEC PIC X(30).
*
* DATA TYPES FOUND IN SQLTYPE, AFTER REMOVING THE NULL BIT
*
77 VARCTYPE PIC S9(4) COMP VALUE +448.
77 CHARTYPE PIC S9(4) COMP VALUE +452.
77 VARLTYPE PIC S9(4) COMP VALUE +456.
77 VARGTYPE PIC S9(4) COMP VALUE +464.
77 GTYPE PIC S9(4) COMP VALUE +468.
77 LVARGTYP PIC S9(4) COMP VALUE +472.
77 FLOATYPE PIC S9(4) COMP VALUE +480.
77 DECTYPE PIC S9(4) COMP VALUE +484.
77 INTTYPE PIC S9(4) COMP VALUE +496.
77 HWTYPE PIC S9(4) COMP VALUE +500.
77 DATETYP PIC S9(4) COMP VALUE +384.
77 TIMETYP PIC S9(4) COMP VALUE +388.
77 TIMESTMP PIC S9(4) COMP VALUE +392.
*
01 RECPTR POINTER.
01 RECNUM REDEFINES RECPTR PICTURE S9(8) COMPUTATIONAL.
01 IRECPTR POINTER.
01 IRECNUM REDEFINES IRECPTR PICTURE S9(8) COMPUTATIONAL.
01 I PICTURE S9(4) COMPUTATIONAL.
01 J PICTURE S9(4) COMPUTATIONAL.
01 DUMMY PICTURE S9(4) COMPUTATIONAL.
01 MYTYPE PICTURE S9(4) COMPUTATIONAL.
01 COLUMN-IND PICTURE S9(4) COMPUTATIONAL.
01 COLUMN-LEN PICTURE S9(4) COMPUTATIONAL.
01 COLUMN-PREC PICTURE S9(4) COMPUTATIONAL.
01 COLUMN-SCALE PICTURE S9(4) COMPUTATIONAL.
01 INDCOUNT PIC S9(4) COMPUTATIONAL.
01 ROWCOUNT PIC S9(4) COMPUTATIONAL.
01 WORKAREA2.
02 WORKINDPTR POINTER OCCURS 750 TIMES.
Figure 92. Called Program that does Pointer Manipulation (Part 5 of 10)
*****************************************************
* DECLARE CURSOR AND STATEMENT FOR DYNAMIC SQL
*****************************************************
*
EXEC SQL DECLARE DT CURSOR FOR SEL END-EXEC.
EXEC SQL DECLARE SEL STATEMENT END-EXEC.
*
*****************************************************
* SQL INCLUDE FOR SQLCA *
*****************************************************
EXEC SQL INCLUDE SQLCA END-EXEC.
*
77 ONE PIC S9(4) COMP VALUE +1.
77 TWO PIC S9(4) COMP VALUE +2.
77 FOUR PIC S9(4) COMP VALUE +4.
77 QMARK PIC X(1) VALUE '?'.
*
LINKAGE SECTION.
01 LINKAREA-IND.
02 IND PIC S9(4) COMP OCCURS 750 TIMES.
01 LINKAREA-REC.
02 REC1-LEN PIC S9(8) COMP.
02 REC1-CHAR PIC X(1) OCCURS 1 TO 32700 TIMES
DEPENDING ON REC1-LEN.
01 LINKAREA-QMARK.
02 INDREC PIC X(1).
/
PROCEDURE DIVISION USING LINKAREA-IND LINKAREA-REC.
*
*****************************************************
* SQL RETURN CODE HANDLING *
*****************************************************
EXEC SQL WHENEVER SQLERROR GOTO DBERROR END-EXEC.
EXEC SQL WHENEVER SQLWARNING GOTO DBERROR END-EXEC.
EXEC SQL WHENEVER NOT FOUND CONTINUE END-EXEC.
*
*****************************************************
* MAIN PROGRAM ROUTINE *
*****************************************************
SET IRECPTR TO ADDRESS OF REC1-CHAR(1).
* **OPEN FILES
OPEN INPUT SYSIN
OUTPUT SYSPRINT
OUTPUT SYSREC01.
* **WRITE HEADER
WRITE MSGREC FROM HEADER
AFTER ADVANCING 2 LINES.
* **GET FIRST INPUT
READ SYSIN RECORD INTO IOAREA.
* **MAIN ROUTINE
PERFORM PROCESS-INPUT THROUGH IND-RESULT.
*
PROG-END.
* **CLOSE FILES
CLOSE SYSIN
SYSPRINT
SYSREC01.
GOBACK.
/
Figure 93. Called Program that does Pointer Manipulation (Part 6 of 10)
***************************************************************
* *
* PERFORMED SECTION: *
* PROCESSING FOR THE TABLE OR VIEW JUST READ *
* *
***************************************************************
PROCESS-INPUT.
*
MOVE TNAME TO STMTTAB.
MOVE STMTBLD TO STMTCHAR.
EXEC SQL PREPARE SEL INTO :SQLDA FROM :STMTBUF END-EXEC.
***************************************************************
* *
* SET UP ADDRESSES IN THE SQLDA FOR DATA. *
* *
***************************************************************
IF SQLD = ZERO THEN
WRITE MSGREC FROM MSGNOCOL
AFTER ADVANCING 2 LINES
GO TO IND-RESULT.
MOVE ZERO TO ROWCOUNT.
MOVE ZERO TO REC1-LEN.
SET RECPTR TO IRECPTR.
MOVE ONE TO I.
PERFORM COLADDR UNTIL I > SQLD.
***************************************************************
* *
* SET LENGTH OF OUTPUT RECORD. *
* EXEC SQL OPEN DT END-EXEC. *
* DO WHILE SQLCODE IS 0. *
* EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC. *
* ADD IN MARKERS TO DENOTE NULLS. *
* WRITE THE DATA TO SYSREC01. *
* INCREMENT DATA RECORD COUNTER. *
* END. *
* *
***************************************************************
* **OPEN CURSOR
EXEC SQL OPEN DT END-EXEC.
PERFORM BLANK-REC.
EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC.
* **NO ROWS FOUND
* **PRINT ERROR MESSAGE
IF SQLCODE = NOT-FOUND
WRITE MSGREC FROM MSG-NOROW
AFTER ADVANCING 2 LINES
ELSE
* **WRITE ROW AND
* **CONTINUE UNTIL
* **NO MORE ROWS
PERFORM WRITE-AND-FETCH
UNTIL SQLCODE IS NOT EQUAL TO ZERO.
*
EXEC SQL WHENEVER NOT FOUND GOTO CLOSEDT END-EXEC.
*
CLOSEDT.
EXEC SQL CLOSE DT END-EXEC.
*
Figure 94. Called Program that does Pointer Manipulation (Part 7 of 10)
***************************************************************
* *
* INDICATE THE RESULTS OF THE UNLOAD OPERATION. *
* *
***************************************************************
IND-RESULT.
MOVE TNAME TO TABLENAM.
MOVE ROWCOUNT TO ROWS.
WRITE MSGREC FROM UNLOADED
AFTER ADVANCING 2 LINES.
GO TO PROG-END.
*
WRITE-AND-FETCH.
* ADD IN MARKERS TO DENOTE NULLS.
MOVE ONE TO INDCOUNT.
PERFORM NULLCHK UNTIL INDCOUNT > SQLD.
MOVE REC1-LEN TO REC01-LEN.
WRITE REC01 FROM LINKAREA-REC.
ADD ONE TO ROWCOUNT.
PERFORM BLANK-REC.
EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC.
*
NULLCHK.
IF IND(INDCOUNT) < 0 THEN
SET ADDRESS OF LINKAREA-QMARK TO
WORKINDPTR(INDCOUNT)
MOVE QMARK TO INDREC.
ADD ONE TO INDCOUNT.
*****************************************************
* BLANK OUT RECORD TEXT FIRST *
*****************************************************
BLANK-REC.
MOVE ONE TO J.
PERFORM BLANK-MORE UNTIL J > REC1-LEN.
BLANK-MORE.
MOVE ' ' TO REC1-CHAR(J).
ADD ONE TO J.
*
COLADDR.
SET SQLDATA(I) TO RECPTR.
***************************************************************
* *
* DETERMINE THE LENGTH OF THIS COLUMN (COLUMN-LEN). *
* THIS DEPENDS UPON THE DATA TYPE. MOST DATA TYPES HAVE *
* THE LENGTH SET, BUT VARCHAR, GRAPHIC, VARGRAPHIC, AND *
* DECIMAL DATA NEED TO HAVE THE BYTES CALCULATED. *
* THE NULL ATTRIBUTE MUST BE SEPARATED TO SIMPLIFY MATTERS. *
* *
***************************************************************
MOVE SQLLEN(I) TO COLUMN-LEN.
* COLUMN-IND IS 0 FOR NO NULLS AND 1 FOR NULLS
DIVIDE SQLTYPE(I) BY TWO GIVING DUMMY REMAINDER
COLUMN-IND.
* MYTYPE IS JUST THE SQLTYPE WITHOUT THE NULL BIT
MOVE SQLTYPE(I) TO MYTYPE.
SUBTRACT COLUMN-IND FROM MYTYPE.
* SET THE COLUMN LENGTH, DEPENDENT UPON DATA TYPE
EVALUATE MYTYPE
WHEN CHARTYPE CONTINUE,
WHEN DATETYP CONTINUE,
WHEN TIMETYP CONTINUE,
WHEN TIMESTMP CONTINUE,
WHEN FLOATYPE CONTINUE,
Figure 95. Called Program that does Pointer Manipulation (Part 8 of 10)
WHEN VARCTYPE
ADD TWO TO COLUMN-LEN,
WHEN VARLTYPE
ADD TWO TO COLUMN-LEN,
WHEN GTYPE
MULTIPLY COLUMN-LEN BY TWO GIVING COLUMN-LEN,
WHEN VARGTYPE
PERFORM CALC-VARG-LEN,
WHEN LVARGTYP
PERFORM CALC-VARG-LEN,
WHEN HWTYPE
MOVE TWO TO COLUMN-LEN,
WHEN INTTYPE
MOVE FOUR TO COLUMN-LEN,
WHEN DECTYPE
PERFORM CALC-DECIMAL-LEN,
WHEN OTHER
PERFORM UNRECOGNIZED-ERROR,
END-EVALUATE.
ADD COLUMN-LEN TO RECNUM.
ADD COLUMN-LEN TO REC1-LEN.
***************************************************************
* *
* IF THIS COLUMN CAN BE NULL, AN INDICATOR VARIABLE IS *
* NEEDED. WE ALSO RESERVE SPACE IN THE OUTPUT RECORD TO *
* NOTE THAT THE VALUE IS NULL. *
* *
***************************************************************
MOVE ZERO TO IND(I).
IF COLUMN-IND = ONE THEN
SET SQLIND(I) TO ADDRESS OF IND(I)
SET WORKINDPTR(I) TO RECPTR
ADD ONE TO RECNUM
ADD ONE TO REC1-LEN.
*
ADD ONE TO I.
* PERFORMED PARAGRAPH TO CALCULATE COLUMN LENGTH
* FOR A DECIMAL DATA TYPE COLUMN
CALC-DECIMAL-LEN.
DIVIDE COLUMN-LEN BY 256 GIVING COLUMN-PREC
REMAINDER COLUMN-SCALE.
MOVE COLUMN-PREC TO COLUMN-LEN.
ADD ONE TO COLUMN-LEN.
DIVIDE COLUMN-LEN BY TWO GIVING COLUMN-LEN.
* PERFORMED PARAGRAPH TO CALCULATE COLUMN LENGTH
* FOR A VARGRAPHIC DATA TYPE COLUMN
CALC-VARG-LEN.
MULTIPLY COLUMN-LEN BY TWO GIVING COLUMN-LEN.
ADD TWO TO COLUMN-LEN.
* PERFORMED PARAGRAPH TO NOTE AN UNRECOGNIZED
* DATA TYPE COLUMN
UNRECOGNIZED-ERROR.
*
* ERROR MESSAGE FOR UNRECOGNIZED DATA TYPE
*
MOVE SQLTYPE(I) TO TYPCOD.
WRITE MSGREC FROM BADTYPE
AFTER ADVANCING 2 LINES.
GO TO IND-RESULT.
Figure 96. Called Program that does Pointer Manipulation (Part 9 of 10)
*****************************************************
* SQL ERROR OCCURRED - GET MESSAGE *
*****************************************************
DBERROR.
* **SQL ERROR
MOVE SQLCODE TO MSG-PRINT-CODE.
IF SQLCODE < 0 THEN MOVE '-' TO MSG-MINUS.
WRITE MSGREC FROM MSG-SQLERR
AFTER ADVANCING 2 LINES.
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE
ERROR-TEXT-LEN.
IF RETURN-CODE = ZERO
PERFORM ERROR-PRINT VARYING ERROR-INDEX
FROM 1 BY 1 UNTIL ERROR-INDEX GREATER THAN 8
ELSE
* **ERROR FOUND IN DSNTIAR
* **PRINT ERROR MESSAGE
MOVE RETURN-CODE TO RETCODE
WRITE MSGREC FROM MSGRETCD
AFTER ADVANCING 2 LINES.
GO TO PROG-END.
*
*****************************************************
* PRINT MESSAGE TEXT *
*****************************************************
ERROR-PRINT.
WRITE MSGREC FROM ERROR-TEXT (ERROR-INDEX)
AFTER ADVANCING 1 LINE.
Figure 97. Called Program that does Pointer Manipulation (Part 10 of 10)
/******************************************************************/
/* Descriptive name Dynamic SQL sample using C language */
/* */
/* STATUS VERSION 2 RELEASE 2, LEVEL 0 */
/* */
/* Function To show examples of the use of dynamic and static */
/* SQL. */
/* */
/* Notes This example assumes that the EMP and DEPT tables are */
/* defined. They need not be the same as the DB2 Sample */
/* tables. */
/* */
/* Module type C program */
/* Processor DB2 precompiler, C compiler */
/* Module size see link edit */
/* Attributes not reentrant or reusable */
/* */
/* Input */
/* */
/* symbolic label/name DEPT */
/* description arbitrary table */
/* symbolic label/name EMP */
/* description arbitrary table */
/* */
/* Output */
/* */
/* symbolic label/name SYSPRINT */
/* description print results via printf */
/* */
/* Exit-normal return code 0 normal completion */
/* */
/* Exit-error */
/* */
/* Return code SQLCA */
/* */
/* Abend codes none */
/* */
/* External references none */
/* */
/* Control-blocks */
/* SQLCA - sql communication area */
/* */
/* Logic specification: */
/* */
/* There are four SQL sections. */
/* */
/* 1) STATIC SQL 1: using static cursor with a SELECT statement. */
/* Two output host variables. */
/* 2) Dynamic SQL 2: Fixed-list SELECT, using same SELECT statement */
/* used in SQL 1 to show the difference. The prepared string */
/* :iptstr can be assigned with other dynamic-able SQL statements. */
/* 3) Dynamic SQL 3: Insert with parameter markers. */
/* Using four parameter markers which represent four input host */
/* variables within a host structure. */
/* 4) Dynamic SQL 4: EXECUTE IMMEDIATE */
/* A GRANT statement is executed immediately by passing it to DB2 */
/* via a varying string host variable. The example shows how to */
/* set up the host variable before passing it. */
/* */
/******************************************************************/
#include "stdio.h"
#include "stdefs.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL INCLUDE SQLDA;
EXEC SQL BEGIN DECLARE SECTION;
short edlevel;
struct { short len;
char x1[56];
} stmtbf1, stmtbf2, inpstr;
struct { short len;
char x1[15];
} lname;
short hv1;
struct { char deptno[4];
struct { short len;
char x[36];
} deptname;
char mgrno[7];
char admrdept[4];
} hv2;
short ind[4];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE EMP TABLE
(EMPNO CHAR(6) ,
FIRSTNAME VARCHAR(12) ,
MIDINIT CHAR(1) ,
LASTNAME VARCHAR(15) ,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) ,
HIREDATE DECIMAL(6) ,
JOBCODE DECIMAL(3) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DECIMAL(6) ,
SALARY DECIMAL(8,2) ,
FORFNAME VARGRAPHIC(12) ,
FORMNAME GRAPHIC(1) ,
FORLNAME VARGRAPHIC(15) ,
FORADDR LONG VARGRAPHIC ) ;
/******************************************************************/
/* Dynamic SQL 3: PREPARE with parameter markers */
/* Insert into with four values. */
/******************************************************************/
sprintf (stmtbf1.x1,
"INSERT INTO DEPT VALUES (?,?,?,?)");
stmtbf1.len = strlen(stmtbf1.x1);
printf("??/n*** begin prepare ***");
EXEC SQL PREPARE s1 FROM :stmtbf1;
printf("??/n*** begin execute ***");
EXEC SQL EXECUTE s1 USING :hv2:ind;
printf("??/n*** following are expected insert results ***");
printf("??/n hv2.deptno %s",hv2.deptno);
printf("??/n hv2.deptname.len %d",hv2.deptname.len);
printf("??/n hv2.deptname.x %s",hv2.deptname.x);
printf("??/n hv2.mgrno %s",hv2.mgrno);
printf("??/n hv2.admrdept %s",hv2.admrdept);
EXEC SQL COMMIT;
/******************************************************************/
/* Dynamic SQL 4: EXECUTE IMMEDIATE */
/* Grant select */
/******************************************************************/
sprintf (stmtbf2.x1,
"GRANT SELECT ON EMP TO USERX");
stmtbf2.len = strlen(stmtbf2.x1);
printf("??/n*** begin execute immediate ***");
EXEC SQL EXECUTE IMMEDIATE :stmtbf2;
printf("??/n*** end of program ***");
goto progend;
HANDWARN: HANDLERR: NOTFOUND: ;
printf("??/n SQLCODE %d",SQLCODE);
printf("??/n SQLWARN0 %c",SQLWARN0);
printf("??/n SQLWARN1 %c",SQLWARN1);
printf("??/n SQLWARN2 %c",SQLWARN2);
printf("??/n SQLWARN3 %c",SQLWARN3);
printf("??/n SQLWARN4 %c",SQLWARN4);
printf("??/n SQLWARN5 %c",SQLWARN5);
printf("??/n SQLWARN6 %c",SQLWARN6);
printf("??/n SQLWARN7 %c",SQLWARN7);
progend: ;
}
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
*****************************************************************
* *
* MODULE NAME TWOPHASE *
* *
* DESCRIPTIVE NAME DB2 SAMPLE APPLICATION USING *
* TWO PHASE COMMIT AND THE APPLICATION- *
* DIRECTED DISTRIBUTED ACCESS METHOD *
* *
* COPYRIGHT 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 *
* *
* STATUS VERSION 3 RELEASE 1, LEVEL 0 *
* *
* FUNCTION THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
* USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
* FROM ONE LOCATION TO ANOTHER. *
* *
* NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
* TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
* STLEC2. *
* *
* MODULE TYPE COBOL PROGRAM *
* PROCESSOR DB2 PRECOMPILER, VS COBOL II *
* MODULE SIZE SEE LINK EDIT *
* ATTRIBUTES NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT *
* PURPOSE TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE INVOKE FROM DSN RUN *
* INPUT NONE *
* OUTPUT *
* SYMBOLIC LABEL/NAME SYSPRINT *
* DESCRIPTION PRINT OUT THE DESCRIPTION OF EACH *
* STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL RETURN CODE 0 FROM NORMAL COMPLETION *
* *
* EXIT ERROR NONE *
* *
* EXTERNAL REFERENCES *
* ROUTINE SERVICES NONE *
* DATA-AREAS NONE *
* CONTROL-BLOCKS *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES NONE *
* *
* CHANGE-ACTIVITY NONE *
* *
* *
* *
* PSEUDOCODE *
* *
* MAINLINE. *
* Perform CONNECT-TO-SITE-1 to establish *
* a connection to the local connection. *
* If the previous operation was successful Then *
* Do. *
* Perform PROCESS-CURSOR-SITE-1 to obtain the *
* information about an employee that is *
* transferring to another location. *
* If the information about the employee was obtained *
* successfully Then *
* Do. *
* Perform UPDATE-ADDRESS to update the information *
* to contain current information about the *
* employee. *
* Perform CONNECT-TO-SITE-2 to establish *
* a connection to the site where the employee is *
* transferring to. *
* If the connection is established successfully *
* Then *
* Do. *
* Perform PROCESS-SITE-2 to insert the *
* employee information at the location *
* where the employee is transferring to. *
* End if the connection was established *
* successfully. *
* End if the employee information was obtained *
* successfully. *
* End if the previous operation was successful. *
* Perform COMMIT-WORK to COMMIT the changes made to
STLEC1 *
* and STLEC2. *
* *
* PROG-END. *
* Close the printer. *
* Return. *
* *
* CONNECT-TO-SITE-1. *
* Provide a text description of the following step. *
* Establish a connection to the location where the *
* employee is transferring from. *
* Print the SQLCA out. *
* *
* PROCESS-CURSOR-SITE-1. *
* Provide a text description of the following step. *
* Open a cursor that will be used to retrieve information *
* about the transferring employee from this site. *
* Print the SQLCA out. *
* If the cursor was opened successfully Then *
* Do. *
* Perform FETCH-DELETE-SITE-1 to retrieve and *
* delete the information about the transferring *
* employee from this site. *
* Perform CLOSE-CURSOR-SITE-1 to close the cursor. *
* End if the cursor was opened successfully. *
* *
* FETCH-DELETE-SITE-1. *
* Provide a text description of the following step. *
* Fetch information about the transferring employee. *
* Print the SQLCA out. *
* If the information was retrieved successfully Then *
* Do. *
* Perform DELETE-SITE-1 to delete the employee *
* at this site. *
* End if the information was retrieved successfully. *
* *
* DELETE-SITE-1. *
* Provide a text description of the following step. *
* Delete the information about the transferring employee *
* from this site. *
* Print the SQLCA out. *
* *
* CLOSE-CURSOR-SITE-1. *
* Provide a text description of the following step. *
* Close the cursor used to retrieve information about *
* the transferring employee. *
* Print the SQLCA out. *
* *
* UPDATE-ADDRESS. *
* Update the address of the employee. *
* Update the city of the employee. *
* Update the location of the employee. *
* *
* CONNECT-TO-SITE-2. *
* Provide a text description of the following step. *
* Establish a connection to the location where the *
* employee is transferring to. *
* Print the SQLCA out. *
* *
* PROCESS-SITE-2. *
* Provide a text description of the following step. *
* Insert the employee information at the location where *
* the employee is being transferred to. *
* Print the SQLCA out. *
* *
* COMMIT-WORK. *
* COMMIT all the changes made to STLEC1 and STLEC2. *
* *
*****************************************************************
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 120 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
01 PRT-TC-RESULTS.
03 PRT-BLANK PIC X(120).
WORKING-STORAGE SECTION.
*****************************************************************
* Variable declarations *
*****************************************************************
01 H-EMPTBL.
05 H-EMPNO PIC X(6).
05 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
05 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
05 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
05 H-EMPLOC PIC X(4).
05 H-SSNO PIC X(11).
05 H-BORN PIC X(10).
05 H-SEX PIC X(1).
05 H-HIRED PIC X(10).
05 H-DEPTNO PIC X(3).
05 H-JOBCODE PIC S9(3)V COMP-3.
05 H-SRATE PIC S9(5) COMP.
05 H-EDUC PIC S9(5) COMP.
05 H-SAL PIC S9(6)V9(2) COMP-3.
05 H-VALIDCHK PIC S9(6)V COMP-3.
01 H-EMPTBL-IND-TABLE.
02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
*****************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
*****************************************************************
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
*****************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
*****************************************************************
PROCEDURE DIVISION.
A101-HOUSE-KEEPING.
OPEN OUTPUT PRINTER.
*****************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
*****************************************************************
MAINLINE.
PERFORM CONNECT-TO-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM CONNECT-TO-SITE-2
IF SQLCODE IS EQUAL TO 0
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
PROG-END.
CLOSE PRINTER.
GOBACK.
*****************************************************************
* Establish a connection to STLEC1 *
*****************************************************************
CONNECT-TO-SITE-1.
*****************************************************************
* Once a connection has been established successfully at STLEC1,*
* open the cursor that will be used to retrieve information *
* about the transferring employee. *
*****************************************************************
PROCESS-CURSOR-SITE-1.
******************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
******************************************************************
FETCH-DELETE-SITE-1.
******************************************************************
* Delete the employee from STLEC1. *
******************************************************************
DELETE-SITE-1.
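The DELETE statement itself is elided here. Because the employee row was fetched through cursor C1, a positioned DELETE is the natural form; a sketch (SYSADM.EMP is the table named in the program header, the STNAME text is illustrative):

```cobol
       DELETE-SITE-1.
           MOVE 'DELETE EMPLOYEE ' TO STNAME
           WRITE PRT-TC-RESULTS FROM STNAME
           EXEC SQL
               DELETE FROM SYSADM.EMP
                 WHERE CURRENT OF C1
           END-EXEC.
           PERFORM PTSQLCA.
```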
******************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
******************************************************************
CLOSE-CURSOR-SITE-1.
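The elided body is presumably a single CLOSE of cursor C1; a sketch:

```cobol
       CLOSE-CURSOR-SITE-1.
           EXEC SQL
               CLOSE C1
           END-EXEC.
           PERFORM PTSQLCA.
```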
******************************************************************
* Update certain employee information in order to make it *
* current. *
******************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '1500 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 97804' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
******************************************************************
* Establish a connection to STLEC2 *
******************************************************************
CONNECT-TO-SITE-2.
******************************************************************
* Using the employee information that was retrieved from STLEC1 *
* and updated above, insert the employee at STLEC2. *
******************************************************************
PROCESS-SITE-2.
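The INSERT is elided from the listing. Using the H-EMPTBL host structure and H-EMPTBL-IND indicator array declared in WORKING-STORAGE (the same host-variable notation the FETCH uses), a sketch:

```cobol
       PROCESS-SITE-2.
           MOVE 'INSERT EMPLOYEE ' TO STNAME
           WRITE PRT-TC-RESULTS FROM STNAME
           EXEC SQL
               INSERT INTO SYSADM.EMP
                 VALUES (:H-EMPTBL:H-EMPTBL-IND)
           END-EXEC.
           PERFORM PTSQLCA.
```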
******************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
******************************************************************
COMMIT-WORK.
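The body is elided. A single COMMIT ends the unit of work; DB2 coordinates the commit of the changes at both STLEC1 and STLEC2. A sketch:

```cobol
       COMMIT-WORK.
           MOVE 'COMMIT WORK ' TO STNAME
           WRITE PRT-TC-RESULTS FROM STNAME
           EXEC SQL
               COMMIT
           END-EXEC.
           PERFORM PTSQLCA.
```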
******************************************************************
* Include COBOL standard language procedures *
******************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
IDENTIFICATION DIVISION.
PROGRAM-ID. TWOPHASE.
AUTHOR.
REMARKS.
******************************************************************
* *
* MODULE NAME TWOPHASE *
* *
* DESCRIPTIVE NAME DB2 SAMPLE APPLICATION USING *
*                  TWO PHASE COMMIT AND THE SYSTEM-DIRECTED *
*                  DISTRIBUTED ACCESS METHOD *
* *
* COPYRIGHT 5665-DB2 (C) COPYRIGHT IBM CORP 1982, 1989 *
* REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 *
* *
* STATUS VERSION 3 RELEASE 1, LEVEL 0 *
* *
* FUNCTION THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS *
*          USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE *
*          FROM ONE LOCATION TO ANOTHER. *
* *
*          NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE *
*                TABLE SYSADM.EMP AT LOCATIONS STLEC1 AND *
*                STLEC2. *
* *
* MODULE TYPE COBOL PROGRAM *
* PROCESSOR DB2 PRECOMPILER, VS COBOL II *
* MODULE SIZE SEE LINK EDIT *
* ATTRIBUTES NOT REENTRANT OR REUSABLE *
* *
* ENTRY POINT *
* PURPOSE TO ILLUSTRATE 2 PHASE COMMIT *
* LINKAGE INVOKE FROM DSN RUN *
* INPUT NONE *
* OUTPUT *
* SYMBOLIC LABEL/NAME SYSPRINT *
* DESCRIPTION PRINT OUT THE DESCRIPTION OF EACH *
*             STEP AND THE RESULTANT SQLCA *
* *
* EXIT NORMAL RETURN CODE 0 FROM NORMAL COMPLETION *
* *
* EXIT ERROR NONE *
* *
* EXTERNAL REFERENCES *
* ROUTINE SERVICES NONE *
* DATA-AREAS NONE *
* CONTROL-BLOCKS *
* SQLCA - SQL COMMUNICATION AREA *
* *
* TABLES NONE *
* *
* CHANGE-ACTIVITY NONE *
* *
* *
* *
* PSEUDOCODE *
* *
* MAINLINE. *
* Perform PROCESS-CURSOR-SITE-1 to obtain the information *
* about an employee that is transferring to another *
* location. *
* If the information about the employee was obtained *
* successfully Then *
* Do. *
* Perform UPDATE-ADDRESS to update the information to *
* contain current information about the employee. *
* Perform PROCESS-SITE-2 to insert the employee *
* information at the location where the employee is *
* transferring to. *
* End if the employee information was obtained *
* successfully. *
* Perform COMMIT-WORK to COMMIT the changes made to STLEC1 *
* and STLEC2. *
* *
* PROG-END. *
* Close the printer. *
* Return. *
* *
* PROCESS-CURSOR-SITE-1. *
* Provide a text description of the following step. *
* Open a cursor that will be used to retrieve information *
* about the transferring employee from this site. *
* Print the SQLCA out. *
* If the cursor was opened successfully Then *
* Do. *
* Perform FETCH-DELETE-SITE-1 to retrieve and *
* delete the information about the transferring *
* employee from this site. *
* Perform CLOSE-CURSOR-SITE-1 to close the cursor. *
* End if the cursor was opened successfully. *
* *
* FETCH-DELETE-SITE-1. *
* Provide a text description of the following step. *
* Fetch information about the transferring employee. *
* Print the SQLCA out. *
* If the information was retrieved successfully Then *
* Do. *
* Perform DELETE-SITE-1 to delete the employee *
* at this site. *
* End if the information was retrieved successfully. *
* *
* DELETE-SITE-1. *
* Provide a text description of the following step. *
* Delete the information about the transferring employee *
* from this site. *
* Print the SQLCA out. *
* *
* CLOSE-CURSOR-SITE-1. *
* Provide a text description of the following step. *
* Close the cursor used to retrieve information about *
* the transferring employee. *
* Print the SQLCA out. *
* *
* UPDATE-ADDRESS. *
* Update the address of the employee. *
* Update the city of the employee. *
* Update the location of the employee. *
* *
* PROCESS-SITE-2. *
* Provide a text description of the following step. *
* Insert the employee information at the location where *
* the employee is being transferred to. *
* Print the SQLCA out. *
* *
* COMMIT-WORK. *
* COMMIT all the changes made to STLEC1 and STLEC2. *
* *
******************************************************************
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT PRINTER, ASSIGN TO S-OUT1.
DATA DIVISION.
FILE SECTION.
FD PRINTER
RECORD CONTAINS 120 CHARACTERS
DATA RECORD IS PRT-TC-RESULTS
LABEL RECORD IS OMITTED.
01 PRT-TC-RESULTS.
03 PRT-BLANK PIC X(120).
WORKING-STORAGE SECTION.
******************************************************************
* Variable declarations *
******************************************************************
01 H-EMPTBL.
05 H-EMPNO PIC X(6).
05 H-NAME.
49 H-NAME-LN PIC S9(4) COMP-4.
49 H-NAME-DA PIC X(32).
05 H-ADDRESS.
49 H-ADDRESS-LN PIC S9(4) COMP-4.
49 H-ADDRESS-DA PIC X(36).
05 H-CITY.
49 H-CITY-LN PIC S9(4) COMP-4.
49 H-CITY-DA PIC X(36).
05 H-EMPLOC PIC X(4).
05 H-SSNO PIC X(11).
05 H-BORN PIC X(10).
05 H-SEX PIC X(1).
05 H-HIRED PIC X(10).
05 H-DEPTNO PIC X(3).
05 H-JOBCODE PIC S9(3)V COMP-3.
05 H-SRATE PIC S9(5) COMP.
05 H-EDUC PIC S9(5) COMP.
05 H-SAL PIC S9(6)V9(2) COMP-3.
05 H-VALIDCHK PIC S9(6)V COMP-3.
01 H-EMPTBL-IND-TABLE.
02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES.
******************************************************************
* Includes for the variables used in the COBOL standard *
* language procedures and the SQLCA. *
******************************************************************
******************************************************************
* Declaration for the table that contains employee information *
******************************************************************
******************************************************************
* Declaration of the cursor that will be used to retrieve *
* information about a transferring employee *
******************************************************************
PROCEDURE DIVISION.
A101-HOUSE-KEEPING.
OPEN OUTPUT PRINTER.
******************************************************************
* An employee is transferring from location STLEC1 to STLEC2. *
* Retrieve information about the employee from STLEC1, delete *
* the employee from STLEC1 and insert the employee at STLEC2 *
* using the information obtained from STLEC1. *
******************************************************************
MAINLINE.
PERFORM PROCESS-CURSOR-SITE-1
IF SQLCODE IS EQUAL TO 0
PERFORM UPDATE-ADDRESS
PERFORM PROCESS-SITE-2.
PERFORM COMMIT-WORK.
PROG-END.
CLOSE PRINTER.
GOBACK.
******************************************************************
* Open the cursor that will be used to retrieve information *
* about the transferring employee. *
******************************************************************
PROCESS-CURSOR-SITE-1.
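Per the pseudocode in the module header, this paragraph opens cursor C1 and, on success, performs FETCH-DELETE-SITE-1 and CLOSE-CURSOR-SITE-1. A sketch of the elided body (the STNAME text is illustrative):

```cobol
       PROCESS-CURSOR-SITE-1.
           MOVE 'OPEN C1 ' TO STNAME
           WRITE PRT-TC-RESULTS FROM STNAME
           EXEC SQL
               OPEN C1
           END-EXEC.
           PERFORM PTSQLCA.
           IF SQLCODE IS EQUAL TO ZERO
               PERFORM FETCH-DELETE-SITE-1
               PERFORM CLOSE-CURSOR-SITE-1.
```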
******************************************************************
* Retrieve information about the transferring employee. *
* Provided that the employee exists, perform DELETE-SITE-1 to *
* delete the employee from STLEC1. *
******************************************************************
FETCH-DELETE-SITE-1.
MOVE 'FETCH C1 ' TO STNAME
WRITE PRT-TC-RESULTS FROM STNAME
EXEC SQL
FETCH C1 INTO :H-EMPTBL:H-EMPTBL-IND
END-EXEC.
PERFORM PTSQLCA.
IF SQLCODE IS EQUAL TO ZERO
PERFORM DELETE-SITE-1.
******************************************************************
* Delete the employee from STLEC1. *
******************************************************************
DELETE-SITE-1.
******************************************************************
* Close the cursor used to retrieve information about the *
* transferring employee. *
******************************************************************
CLOSE-CURSOR-SITE-1.
******************************************************************
* Update certain employee information in order to make it *
* current. *
******************************************************************
UPDATE-ADDRESS.
MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN.
MOVE '1500 NEW STREET' TO H-ADDRESS-DA.
MOVE TEMP-CITY-LN TO H-CITY-LN.
MOVE 'NEW CITY, CA 97804' TO H-CITY-DA.
MOVE 'SJCA' TO H-EMPLOC.
******************************************************************
* Using the employee information that was retrieved from STLEC1 *
* and updated above, insert the employee at STLEC2. *
******************************************************************
PROCESS-SITE-2.
******************************************************************
* COMMIT any changes that were made at STLEC1 and STLEC2. *
******************************************************************
COMMIT-WORK.
******************************************************************
* Include COBOL standard language procedures *
******************************************************************
INCLUDE-SUBS.
EXEC SQL INCLUDE COBSSUB END-EXEC.
------------------------------------------------------------------------------------------
Table 65. Actions Allowed on DB2 SQL Statements
------------------------------------------------------------------------------------------
                                    Interactively               Processed by
                                    or Dynamically   -------------------------------------
SQL Statement           Executable  Prepared         Requesting   Server    Precompiler
                                                     System
------------------------+----------+----------------+-----------+---------+---------------
ALTER                       Y           Y                            Y
BEGIN DECLARE SECTION                                                           Y
CLOSE                       Y                                        Y
COMMENT                     Y           Y                            Y
COMMIT                      Y           Y                Y
CONNECT (Type 1             Y                            Y
  and Type 2)
CREATE                      Y           Y                            Y
DECLARE CURSOR                                                                  Y
DECLARE STATEMENT                                                               Y
DECLARE TABLE                                                                   Y
DELETE                      Y           Y                            Y
DESCRIBE                    Y                                        Y
DROP                        Y           Y                            Y
END DECLARE SECTION                                                             Y
EXECUTE                     Y                                        Y
EXECUTE IMMEDIATE           Y                                        Y
EXPLAIN                     Y           Y                            Y
FETCH                       Y                                        Y
GRANT                       Y           Y                            Y
INCLUDE                                                                         Y
INSERT                      Y           Y                            Y
LABEL                       Y           Y                            Y
LOCK TABLE                  Y           Y                            Y
OPEN                        Y                                        Y
PREPARE                     Y                                        Y
RELEASE                     Y                            Y
REVOKE                      Y           Y                            Y
ROLLBACK                    Y           Y                Y
SELECT INTO                 Y                                        Y
SET CONNECTION              Y                            Y
SET CURRENT DEGREE          Y           Y                            Y
SET CURRENT                 Y                            Y
  PACKAGESET
SET CURRENT SQLID           Y           Y                Y
SET host-variable           Y                                        Y
  CURRENT DATE
SET host-variable           Y                                        Y
  CURRENT DEGREE
SET host-variable           Y                            Y
  CURRENT PACKAGESET
SET host-variable           Y                            Y
  CURRENT SERVER
SET host-variable           Y                                        Y
  CURRENT SQLID
SET host-variable           Y                                        Y
  CURRENT TIME
SET host-variable           Y                                        Y
  CURRENT TIMESTAMP
SET host-variable           Y                                        Y
  CURRENT TIMEZONE
UPDATE                      Y           Y                            Y
WHENEVER                                                                        Y
------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------
Table 66. Program Preparation Options for Packages
------------------------------------------------------------------------------------------
Generic option            Equivalent for Requesting      Bind or      DB2 Server Support
description               DB2                            Precompile
                                                         Option
--------------------------+------------------------------+------------+-------------------
Name of remote database    CURRENTSERVER(location name)   B            Supported as a BIND
                                                                       PLAN option
Package name               MEMBER                         B            Supported
Package version            VERSION                        P            Supported
Consistency token          LEVEL                          P            Supported
Default qualifier          QUALIFIER                      B            Supported
Existence checking:        VALIDATE(BIND)                 B            Supported
  full
Existence checking:        VALIDATE(RUN)                  B            Supported
  deferred
Creation control:          (Not available)                             Supported
  create no package
Creation control:          SQLERROR(NOPACKAGE)            B            Supported
  create no package
  if there are errors
Creation control:          SQLERROR(CONTINUE)             B            Supported
  create a package
  despite errors
Package replacement:       ACTION(REPLACE)                B            Supported
  replace existing
  packages
Package replacement:       ACTION(ADD)                    B            Supported
  protect existing
  packages
Package replacement:       ACTION(REPLACE                 B            Supported
  version name               REPLVER(version-id))
Privilege inheritance:     (Default)                                   Supported
  retain
Privilege inheritance:     (Not available)                             Not supported
  revoke
Package isolation          ISOLATION(RR)                  B            Supported
  level: RR
Package isolation          (Not available)                             Processed as
  level: RS                                                            ISOLATION(RR)
Package isolation          ISOLATION(CS)                  B            Supported
  level: CS
Package isolation          (Not available)                             Processed as
  level: UR                                                            ISOLATION(CS)
Package owner              OWNER                          B            Supported
Explain option             EXPLAIN                        B            Supported
Application-directed       CONNECT(1)                     P            Supported
  access: SQL CONNECT
  (Type 1)
Application-directed       CONNECT(2)                     P            Supported
  access: SQL CONNECT
  (Type 2)
Lock release option        RELEASE                        B            Supported
Block protocol:            (Not available)                             Not supported
  Never block data
Block protocol: Do         CURRENTDATA(YES)               B            Supported
  not block data for
  an ambiguous cursor
Block protocol:            CURRENTDATA(NO)                B            Supported
  Block data when
  possible
Default character          (Not available)                             Supported
  subtype: system
  default
Default character          (Not available)                             Not supported
  subtype: BIT
Default character          (Not available)                             Not supported
  subtype: SBCS
Default character          (Not available)                             Not supported
  subtype: DBCS
Default character          (Not available)                             Not supported
  CCSID: SBCS
Default character          (Not available)                             Not supported
  CCSID: Mixed
Default character          (Not available)                             Not supported
  CCSID: Graphic
Statement string           APOSTSQL/QUOTESQL              P            Supported
  delimiter
Statement decimal          PERIOD/COMMA                   P            Supported
  delimiter
Date format of             DATE                           P            Supported
  statement
Time format of             TIME                           P            Supported
  statement
Maximum decimal            DEC(15)                        P            Supported
  precision: 15
Maximum decimal            DEC(31)                        P            Supported
  precision: 31
Package label              (Not available)                             Ignored when
                                                                       received; does not
                                                                       return an error
------------------------------------------------------------------------------------------
DSNCPRMA  Maps the parameter list for the dynamic plan allocation exit.
DSNCPRMC  Maps the parameter list for the dynamic plan allocation exit.
DSNCPRMP  Maps the parameter list for the dynamic plan allocation exit.
DSNDDTXP  Maps the exit parameter list for date and time routines.
DSNDEXPL  Maps the exit parameter list for connection and sign-on
          routines.
DSNDFPPB  Maps the field procedure parameter list and the areas it
          points to.
DSNDQWAS  Maps the SMF common header for SMF type 101 trace records
          (IFCID 0003).
DSNDQWA0  Maps the DB2 self-defining section for IFCID 0003 (SMF type
          101, subtype 000 record).
DSNDQWHU  Maps the data for the central processing unit (CPU) trace
          header.
DSNDQWSA  Maps the statistics data regarding CPU timer values for each
          resource manager and control address space.
DSNDQWSP  Maps the SMF common header for SMF type 102 trace records.
DSNDQWST  Maps the SMF common header for SMF type 100 trace records
          (IFCIDs 0001-0002).
DSNDQWT0  Maps the self-defining section for SMF type 102 records.
DSNDQW00  Maps the trace records for IFC events (IFCIDs 0004-0057).
DSNDQW01  Maps the trace records for IFC events (IFCIDs 0058-0129).
DSNDQW02  Maps the trace records for IFC events (IFCIDs 0140-0200).
DSNDQW03  Maps the trace records for IFC events (IFCIDs 0201 and over).
DSNDQ3ST  Maps the statistics block for the SSSS and SSAM resource
          managers.
DSNDRIB   Maps the DB2 release, change level, and product code
          information.
DSNDSLRB  Maps the request block for stand-alone log read services.
DSNDSLRF  Maps the feedback area for stand-alone log read services.
GLOSSARY Glossary
The following terms and abbreviations are defined as they are used in the
DB2 library. If you do not find the term you are looking for, refer to
the index or to Dictionary of Computing.
----
A
----
access path. The path used to get to data specified in SQL statements.
An access path can involve an index or a sequential search.
allied thread. A thread originating at the local DB2 subsystem that may
access data at a remote DB2 subsystem.
application plan. The control structure produced during the bind process
and used by DB2 to process SQL statements encountered during statement
execution.
application process. The unit to which resources and locks are allocated.
An application process involves the execution of one or more programs.
authorization ID. A string that can be verified for connection to DB2 and
to which a set of privileges are allowed. It can represent an individual,
an organizational group, or a function, but DB2 does not determine this
representation.
----
B
----
base table. A table created by the SQL CREATE TABLE statement that is
used to hold persistent data. Contrast with result table.
binary integer. A basic data type that can be further classified as small
integer or large integer.
bind. The process by which the output from the DB2 precompiler is
converted to a usable control structure called a package or an application
plan. During the process, access paths to the data are selected and some
authorization checking is performed.
static bind. A process by which SQL statements are bound after they
have been precompiled. All static SQL statements are prepared for
execution at the same time. Contrast with dynamic bind.
bit data. Data that is not associated with a coded character set.
----
C
----
claim class. A specific type of object access which can be one of the
following:
cold start. A process by which DB2 restarts without processing any log
records. Contrast with warm start.
commit. The operation that ends a unit of work by releasing locks so that
the database changes made by that unit of work can be perceived by other
processes.
constraint. A rule that limits the values that can be inserted, deleted,
or updated in a table. See referential constraint and uniqueness
constraint.
----
D
----
DATABASE 2 Interactive (DB2I). The DB2 facility that provides for the
execution of SQL statements, DB2 (operator) commands, programmer
commands, and utility invocation.
data currency. The state in which data retrieved into a host variable in
your program is a copy of data in the base table.
data definition name (DD name). The name of a data definition (DD)
statement that corresponds to a data control block containing the same
name.
data language 1 (DL/I). A database access language used under CICS and
IMS.
DB2I Kanji Feature. The tape that contains the panels and jobs that allow
a site to display DB2I panels in Kanji.
DSN. (1) The default DB2 subsystem name. (2) The name of the TSO command
processor of DB2. (3) The first three characters of DB2 module and macro
names.
duration. A number that represents an interval of time. See date
duration, labeled duration, and time duration.
dynamic SQL. SQL statements that are prepared and executed within an
application program while the program is executing. In dynamic SQL, the
SQL source is contained in host language variables rather than being
coded into the application program. The SQL statement can change several
times during the application program's execution.
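For example, a COBOL program might build and run a statement at execution time (STMTBUF and S1 are illustrative names; STMTBUF would be declared as a varying-length character host variable):

```cobol
           MOVE 'DELETE FROM SYSADM.EMP WHERE EMPNO = ?'
               TO STMTBUF-TEXT
           EXEC SQL PREPARE S1 FROM :STMTBUF END-EXEC.
           EXEC SQL EXECUTE S1 USING :H-EMPNO END-EXEC.
```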
----
E
----
----
F
----
----
G
----
----
H
----
----
I
----
index key. The set of columns in a table used to determine the order of
index entries.
index space. A page set used to store the entries of one index.
isolation level. The degree to which a unit of work is isolated from the
updating operations of other units of work. See also cursor stability and
repeatable read.
----
J
----
----
K
----
----
L
----
load module. A program unit that is suitable for loading into main
storage for execution. The output of a linkage editor.
lock promotion. The process of changing the size or mode of a DB2 lock
to a higher level.
lock size. The amount of data controlled by a DB2 lock on table data; the
value can be a page, a table, or a table space.
logical index partition. The set of all keys that reference the same data
partition.
LU name. From logical unit name, the name by which VTAM refers to a node
in a network. Contrast with location name.
----
M
----
mixed data string. A character string that can contain both single-byte
and double-byte characters.
----
N
----
----
P
----
page set. A table space or index space consisting of pages that are
either 4KB or 32KB in size. Each page set is made from a collection of
VSAM data sets.
parent table. A table whose primary key is referenced by the foreign key
of a dependent table.
parent table space. A table space that contains a parent table. A table
space containing a dependent of that table is a dependent table space.
partitioned table space. A table space subdivided into parts (based upon
index key range), each of which may be processed by utilities
independently.
PPT. (1) Processing program table (CICS). (2) Program properties table
(MVS).
precompilation. A processing of application programs containing SQL
statements that takes place before compilation. SQL statements are
replaced with statements that are recognized by the host language
compiler. Output from this precompilation includes source code that can
be submitted to the compiler and the database request module (DBRM) that
is input to the bind process.
----
Q
----
----
R
----
remote subsystem. Any RDBMS, except the local subsystem, with which the
user or application can communicate. The subsystem need not be remote in
any physical sense, and may even operate on the same processor under the
same MVS system.
----
S
----
sequential data set. A non-DB2 data set whose records are organized on
the basis of their successive physical positions, such as on magnetic
tape. Several of the DB2 database utilities require sequential data sets.
server. Also application server (AS). The target of a request from a
remote RDBMS; that is, the RDBMS that provides the data.
storage group. A named set of DASD volumes on which DB2 data can be
stored.
subselect. That form of a query that does not include an ORDER BY
clause, an UPDATE clause, or UNION operators.
----
T
----
table space. A page set used to store the records in one or more tables.
----
U
----
UNION. An SQL operation that combines the results of two select
statements. UNION is often used to merge lists of values obtained from
several tables.
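For instance, a query of this form merges the employee numbers found in two tables, eliminating duplicate values (the table names are illustrative):

```sql
-- illustrative table names
SELECT EMPNO FROM SYSADM.EMP
UNION
SELECT EMPNO FROM SYSADM.EMP_ARCHIVE
```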
unique index. An index that ensures that no identical key values are
stored in a table.
----
V
----
----
W
----
warm start. The normal DB2 restart process which involves reading and
processing log records so that data under the control of DB2 is
consistent. Contrast with cold start.
BIBLIOGRAPHY Bibliography
BIBLIOGRAPHY.1 IBM DATABASE 2
Volume I
- Planning and Installing DB2
- Communicating with Other Systems
Volume II
- Designing a Database
- Security and Auditing
- Operation and Recovery
Volume III
- Performance Monitoring and Tuning
- Appendixes
- Commands
- Utilities
* IBM DATABASE 2 Version 3 Diagnosis Guide and Reference, LY27-9603**
- Functional Descriptions
- Keyword Descriptions
- Diagnostic Aids and Techniques
- Data Management
- Physical Formats and Diagrams
- Data Areas
- Trace Codes
- DB2 Concepts
- Language Elements
- Functions
- Queries
- Statements
* Program Directory
* Available on Transaction Processing and Data Collection Kit CD-ROM,
SK2T-0730
BIBLIOGRAPHY.2 Ada/370
BIBLIOGRAPHY.3 APL2
BIBLIOGRAPHY.4 AS/400
BIBLIOGRAPHY.5 Assembler H
BIBLIOGRAPHY.6 BASIC
BIBLIOGRAPHY.7 C/370
BIBLIOGRAPHY.8 CICS/ESA
BIBLIOGRAPHY.9 CICS/MVS
BIBLIOGRAPHY.10 CICS/OS/VS
* CICS/OS/VS Application Programmer's Reference Manual (Command Level),
SC33-0241
* CICS/OS/VS Facilities and Planning Guide, SC33-0202
BIBLIOGRAPHY.11 COBOL/370
BIBLIOGRAPHY.14 Education
BIBLIOGRAPHY.15 IMS/ESA
BIBLIOGRAPHY.16 IMS/VS
BIBLIOGRAPHY.17 ISPF
BIBLIOGRAPHY.18 LE/370
BIBLIOGRAPHY.19 MVS/ESA
BIBLIOGRAPHY.20 MVS/XA
BIBLIOGRAPHY.21 OS/2
BIBLIOGRAPHY.22 OS/VS2
* OS/VS2 TSO Command Language Reference, GC28-0646
BIBLIOGRAPHY.23 PL/I
BIBLIOGRAPHY.24 Prolog
BIBLIOGRAPHY.26 TSO
BIBLIOGRAPHY.27 VS COBOL II
* VS COBOL II Application Programming Guide for MVS and CMS, SC26-4045
* VS COBOL II Application Programming: Language Reference, GC26-4047
BIBLIOGRAPHY.28 VS FORTRAN
BIBLIOGRAPHY.29 VTAM
INDEX Index
---------------------
Special Characters
---------------------
_ (underscore)
assembler host variable 3.4.1.3
character 2.1.5.5.2
LIKE predicate 2.1.5.5.2
: (colon)
assembler host variable 3.4.1.4
C host variable 3.4.2.4
C program 3.4.2.7.1
COBOL 3.4.3.4
COBOL host variable 3.4.3.4
FORTRAN 3.4.4.4
FORTRAN host variable 3.4.4.4
PL/I host variable 3.4.5.4
preceding a host variable 3.1.4.1
? (question mark) 5.1.2.2.1
% (percent sign)
LIKE predicate 2.1.5.5.1
|| (vertical bars) 2.1.7
----
A
----
abend
DB2 5.5.2.4.2
effect on cursor position 3.2.3
exits 5.5.4.2
for synchronization calls 4.4.2.7
IMS
U0102 4.4.5.2
U0775 4.1.3.3.4
U0778 4.1.3.4.1
multiple-mode program 4.1.3.3
program 4.1.3.1
reason codes 5.5.6
return code posted to CAF CONNECT 5.5.2.6
single-mode program 4.1.3.3
system
X'04E' 4.4.1.1
ABRT parameter of CAF (call attachment facility) 5.5.2.8, 5.5.7.4
access path
affects lock attributes 4.1.2.9
description 1.2
index access 5.2.3.4
index access only 5.2.2.4
low cluster ratio
effects of 5.2.3.4
suggests table space scan 5.2.3.1
with list prefetch 5.2.6.1
multiple index access
description 5.2.3.2.5
PLAN_TABLE 5.2.2.2
resolving problems 5.2.15
selection
influencing with SQL 5.2.15.2
problems 5.2.15.1
queries containing host variables 5.2.10
table space scan 5.2.3.1
unique index with matching value 5.2.3.2.8
ACQUIRE
option of BIND PLAN subcommand
locking tables and table spaces 4.1.2.7.1
activity sample table A.1
Ada language 3.1
address space
initialization
CAF CONNECT command 5.5.2.6.2
CAF OPEN command 5.5.2.7
sample scenarios 5.5.3
separate tasks 5.5.1.1.1
termination
CAF CLOSE command 5.5.2.8.2
CAF DISCONNECT command 5.5.2.9.2
alias
authority to create 2.1.17.3
considerations for using 2.1.17.2
creating 2.1.17.3
description 2.1.17.2
referencing another DB2 4.1.4.2
ALL
quantified predicate 2.3.2.2
ambiguous cursor 4.1.4.6.2
AMODE link-edit option 4.2.5, 4.2.7.1.14
AND
operator of WHERE clause 2.1.6.1
ANSI/ISO SQL standard of 1992 compliance
SET CONNECTION statement 4.1.4.5.3
ANY
quantified predicate 2.3.2.2
APL2
applications 4.2.2
language 3.1
APOST option
precompiler 4.2.2.4
apostrophe string delimiter 4.2.2.4
APOSTSQL option
precompiler 4.2.2.4
application plan
binding 4.2.4
creating 4.2.3
description 1.2.14
dynamic plan selection for CICS applications
using packages with 4.2.4.4
listing packages 4.2.4.2
rebinding 4.1.1.3.3
using packages 4.1.1.2.1
application program
application-directed access
CONNECT precompiler option 4.1.4.5.1
precompiler options 4.1.4.5.1
coding SQL statements
assembler 3.4.1
COBOL 3.4.3
data declarations 3.3
data entry 2.2.2.1
description 3.1
dynamic SQL 5.1
FORTRAN 3.4.4.1
host variables 3.1.4
PL/I 3.4.5.1
selecting rows using a cursor 3.2
design considerations
checkpoint 4.4.2.6
coding 4.1
description 4.1
IMS calls 4.4.2.3
planning for changes 4.1.1.3
programming for DL/I batch 4.4.2
SQL statements 4.4.2.3
structure 5.4.1
symbolic checkpoint 4.4.2.5
synchronization call abends 4.4.2.7
using ISPF (interactive system productivity facility) 4.2.7.1
XRST call 4.4.2.6
error handling 4.1.3.3.1
preparation
application-directed access 4.1.4.5
assembling 4.2.5
binding 4.1.1.2, 4.2.3
compiling 4.2.5
DB2 precompiler option defaults 4.2.2.4
defining to CICS 4.2.2.4
example 4.2.7.1.4
link-editing 4.2.5
precompiler option defaults 4.2.2
preparing for running 4.2
process 4.1.1.1
program preparation panel 4.2.7.1
using DB2I (DB2 Interactive) 4.2.7.1
rebinding 4.1.1.4
running
CAF (call attachment facility) 5.5.1.3.3
CICS 4.2.6.3, 4.3.1
IMS 4.2.6.3
program synchronization in DL/I batch 4.4.2.5
TSO 4.2.6, 4.2.6.3
suspension
controlled by IRLM 4.1.2.6
description 4.1.2.6
latch 4.1.2.6
test environment 4.3.1
testing 4.3.1
application programming
design considerations
CAF 5.5.1.1.1
application-directed access
bind options 4.1.4.5.2
coding an application 4.1.4.4
interoperability 4.1.4.3
planning 4.1.4
precompiler options 4.1.4.5.1
sample program C.3
update restrictions 4.1.4.3
using 4.1.4.4.2
writing and preparing programs 4.1.4.5
arithmetic expressions in UPDATE statement 2.2.2.2
ascending order of results 2.1.10.1
assembler application program
assembling 4.2.5
character host variables 3.4.1.5
coding SQL statements 3.4.1
data type compatibility 3.4.1.8
DB2 mapping macros 3.4.1.11, G.0
fixed-length character string 3.4.1.5
graphic host variables 3.4.1.5
host variable 3.4.1.3, 3.4.1.4
indicator variable 3.4.1.9
naming convention 3.4.1.3
variable declaration 3.4.1.7
varying-length character string 3.4.1.5
assignment rules
numeric
FORTRAN 3.4.4.6.1
attachment facility
See CAF (call attachment facility)
attention processing 5.5.1.1.1, 5.5.4.1
audit trace
description 1.2.15
auditing
data 1.2.15
authority
creating test tables 4.3.1.1.2
identifier authorization ID 4.2.6.2
SYSIBM.SYSTABAUTH table 2.4.6.1
authorization ID
authority to use tables 2.2.1.1.2
description 1.2.15
AUTOCOMMIT field of SPUFI panel 2.4.1
automatic
rebind
EXPLAIN processing 5.2.1
packages and plans 4.1.1.4
AVG function 2.1.9.1
----
B
----
C application program
character host variables 3.4.2.7.1
NUL-terminated 3.4.2.5
single-character 3.4.2.5
VARCHAR-structured 3.4.2.5
coding SQL statements 3.4.2, 3.4.2.9
constants
syntax 3.4.2.7.2
dynamic SQL examples C.2
static SQL example C.2
fixed-length string 3.4.2.8
indicator variables 3.4.2.9
naming convention 3.4.2.3
precompiler option defaults 4.2.2.4
sample application B.0
variable declaration 3.4.2.7.1
varying-length string 3.4.2.8
C++ application program
special considerations 3.4.2.11
C++ program
preparing 4.2.7.3
CACHESIZE
option of BIND PLAN subcommand 4.2.4.5
option of REBIND subcommand 4.2.4.5
CAF (call attachment facility)
application program
examples 5.5.7.1
preparation 5.5.1.1.4
connecting to DB2 5.5.7.4
description 5.5
function descriptions 5.5.2.5.1
load module structure 5.5.2
parameters 5.5.2.5.2
programming language 5.5.1.1.2
register conventions 5.5.2.5.1
restrictions 5.5.1
return codes
checking 5.5.7.5
CLOSE 5.5.2.8
CONNECT 5.5.2.6
DISCONNECT 5.5.2.9
OPEN 5.5.2.7
TRANSLATE 5.5.2.10
run environment 5.5.1.2.4
running an application program 5.5.1.3.3
calculated values
displaying 2.1.8
groups with conditions 2.1.13
results 2.1.8
summarizing group values 2.1.12
call attachment facility (CAF)
See CAF (call attachment facility)
CALL DSNALI statement 5.5.2.5 to 5.5.2.11
Cartesian join 5.2.8.4.2
catalog
statistics
influencing access paths 5.2.15.2.5
production system 5.2.13
catalog tables
accessing 2.4.6
description 1.2.11
SYSINDEXES
access path selection 5.2.3.4
catalog, DB2
description 1.2.11
CCSID (coded character set identifier)
considerations for COBOL 3.4.3.3
SQLDA 5.1.4.2.9
CD-ROM, books on 1.1.5
CDSSRDEF subsystem parameter 5.2.5
character data
comparative operators 2.1.5.3
LIKE predicate of WHERE clause 2.1.5.5
string literals 3.1.1
width of column in results 2.4.2, 2.4.5.1
character host variables
assembler 3.4.1.5
C 3.4.2.5, 3.4.2.7.1
COBOL 3.4.3.5
FORTRAN 3.4.4.5
PL/I 3.4.5.5
character string
mixed data 2.1.2
checkpoint
calls 4.1.3.3, 4.1.3.3.2
frequency 4.1.3.3.5
CHKP call 4.1.3.2
CICS
connecting to DB2
authorization 4.3.1
DSNTIAC subroutine
assembler 3.4.1.10
C 3.4.2.10
COBOL 3.4.3.10
PL/I 3.4.5.10
facilities
command language translator 4.2.2.4
control areas 4.3.1
EDF (execution diagnostic facility) 4.3.3.3.2
language interface module (DSNCLI)
use in link-editing an application 4.2.5
logical unit of work 4.1.3.2
operating
running a program 4.3.1
system failure 4.1.3.2
planning
environment 4.2.6.3
programming
DFHEIENT macro 3.4.1.3
sample applications B.1, B.1.3
SYNCPOINT command 4.1.3.2
storage handling
assembler 3.4.1.10
C 3.4.2.10
COBOL 3.4.3.10
PL/I 3.4.5.10
unit of work 4.1.3.2
CLOSE
connection function of CAF
description 5.5.2.1
program example 5.5.7.4
syntax 5.5.2.8
usage 5.5.2.8
statement
description 3.2.2.7
WHENEVER NOT FOUND clause 5.1.3.1.5, 5.1.4.3.3
cluster ratio
description 5.2.3.4
effect on table space scan 5.2.3.1
effects 5.2.3.4
with list prefetch 5.2.6.1
COBOL application program
character host variables
fixed-length strings 3.4.3.5
varying-length strings 3.4.3.5
coding SQL statements 3.1, 3.4.3
compiling 4.2.5
data declarations 3.3
data type compatibility 3.4.3.8
DB2 precompiler option defaults 4.2.2.4
DECLARE statement 3.4.3.3
declaring a variable 3.4.3.7.1
dynamic SQL in VS COBOL II 5.1.5 to 5.1.5.2
FILLER entry name 3.4.3.7.1
host structure 3.1.4.2
host variable
use of hyphens 3.4.3.3
indicator variables 3.4.3.9
naming convention 3.4.3.3
null values 3.1.4.1.4
options
DYNAM 3.4.3.3
NODYNAM 3.4.3.3
preparation 4.2.5
record description from DCLGEN 3.3.2
resetting SQL-INIT-FLAG 3.4.3.3
sample program C.1
variables in SQL 3.1.4
WHENEVER statement 3.4.3.3
with object-oriented extensions
special considerations 3.4.3.11
coding
SQL statements
assembler 3.4.1
C 3.4.2
COBOL 3.4.3
dynamic 5.1
FORTRAN 3.4.4
PL/I 3.4.5
collection, package
identifying 4.2.4.3.2
SET CURRENT PACKAGESET statement 4.2.4.3.2
colon
assembler host variable 3.4.1.4
C host variable 3.4.2.4
COBOL host variable 3.4.3.4
FORTRAN host variable 3.4.4.4
PL/I host variable 3.4.5.4
preceding a host variable 3.1.4.1
column
data types 2.1.2
description 1.2.5
displaying
column names 2.4
list of columns 2.4.6.2
expressions 2.1.8
heading created by SPUFI 2.1.8.1, 2.4.5.1
labels 3.3.1, 5.1.4.2.11
name
UPDATE statement 2.2.2.2
ordering 2.1.10.1
retrieving
by SELECT 2.1.3
specified in CREATE TABLE 2.2.1
unnamed 2.1.8.1
values, absence 2.1.5.1
width of results 2.4.2, 2.4.5.1
combining SELECT statements 2.1.15
COMMA
precompiler 4.2.2.4
commands
concurrency 4.1.2
commit
rollback coordination 4.1.3.4.1
COMMIT
commit point
description 4.1.3.1
IMS unit of work 4.1.3.3
lock releasing
DB2 4.1.3.3
IMS 4.1.3.3
COMMIT statement
description 2.4.1
ending unit of work 4.1.3.1
when to issue 4.1.3.1
comparison
compatibility rules 2.1.2
operator subqueries 2.3.2.1
compatibility
data types 2.1.2
locks 4.1.2.5
rules 2.1.2
composite table
See join operation
concatenation
data in multiple columns 2.1.7
keyword (CONCAT) 2.1.7
operator 2.1.7
concurrency
commands 4.1.2
control by locks 4.1.2
description 4.1.2
effect of
lock escalation 4.1.2.6
utilities 4.1.2
CONNECT
connection function of CAF (call attachment facility)
description 5.5.2.1
program example 5.5.7.4
syntax 5.5.2.6 to 5.5.2.6.3
usage 5.5.2.6
option of precompiler 4.2.2.4
statement
SPUFI 2.4.1
CONNECT LOCATION field of SPUFI panel 2.4.1
connection
DB2
connecting from tasks 5.4.1.1
DB2 private 4.1.4.2
function of CAF
CLOSE 5.5.2.8, 5.5.7.4
CONNECT 5.5.2.6 to 5.5.2.6.3, 5.5.7.4
description 5.5.2
DISCONNECT 5.5.2.9, 5.5.7.4
OPEN 5.5.2.7, 5.5.7.4
sample scenarios 5.5.3 to 5.5.3.3
summary of behavior 5.5.2.11
TRANSLATE 5.5.2.10, 5.5.7.5
constants
assembler 3.4.1.3
COBOL 3.4.3.3
syntax
C 3.4.2.7.2
FORTRAN 3.4.4.6.2
CONTINUE
clause of WHENEVER statement 3.1.5.2
description 3.1.5.2
copying objects from remote locations 4.1.4.6.3
correlated reference
correlation name
example 2.3.3.1
correlated subqueries
See subquery
COUNT function 2.1.9.1
CREATE TABLE statement 2.2.1
CREATE VIEW statement
use 1.2.12
CS (cursor stability)
compared with RR (figure) 4.1.2.7.2
differences from RR 4.1.2.7.2
distributed environment 4.1.2.7.2
effect on locking 4.1.2.7.2
page lock and release example 4.1.2.7.2
page locking 4.1.2.7.2
current data 4.1.4.6.2
CURRENT DATE special register 2.1.18
CURRENT DEGREE 2.1.18
CURRENT DEGREE special register 2.1.18
changing subsystem default 5.2.5
CURRENT PACKAGESET special register 2.1.18, 4.2.4.3.2, 4.2.4.4
CURRENT SERVER 2.1.18
CURRENT SERVER special register 2.1.18, 4.2.4.3.1
CURRENT SQLID special register 2.1.18
CURRENT TIME 2.1.18
description 2.2.2.2
use in test 4.3.1.1
CURRENT TIME special register 2.1.18
CURRENT TIMESTAMP 2.1.18
CURRENT TIMESTAMP special register 2.1.18
CURRENT TIMEZONE 2.1.18
CURRENT TIMEZONE special register 2.1.18
USER 2.1.18
CURRENTDATA option
BIND subcommand 4.1.4.6.2
CURRENTSERVER
CURRENT SQLID 2.1.18
option of BIND PLAN 2.1.18
cursor
ambiguous 4.1.4.6.2
closing 3.2.2.7
declaring 3.2.2.1
deleting a current row 3.2.2.6
description 3.2, 3.2.1
effect of abend on position 3.2.3
end of data 3.2.2.3
example 3.2.2
isolation levels 4.1.2.7.2
maintaining position 3.2.3
opening 3.2.2.2
retrieving a row of data 3.2.2.4
retrieving limited rows 5.3.8
updating a current row 3.2.2.5
cursor stability (CS)
See CS (cursor stability)
----
D
----
data
access control
control 1.2.15
adding to the end of a table 5.3.9
associated with WHERE clause 2.1.5
currency 4.1.4.6.2
improving access 5.2
indoubt state 4.1.3.3
integrity 4.1.2
item declaring 3.1.4
nontabular storing 5.3.12
retrieval using SELECT * 5.3.7
retrieving a set of rows 3.2.2.4
retrieving large volumes 5.3.6
scrolling backward through 5.3.2
security and integrity 4.1.3.2
structures 3.3
understanding access 5.2
updating during retrieval 5.3.4
updating previously retrieved data 5.3.3
data security and integrity 4.1.3.2
data type
compatibility
assembler 3.4.1.7
assembler application program 3.4.1.8
C 3.4.2.7
COBOL 3.4.3.7, 3.4.3.8
FORTRAN 3.4.4.6.1, 3.4.4.7
PL/I 3.4.5.8
equivalent
FORTRAN 3.4.4.6
PL/I 3.4.5.7
database
description 1.2.10
sample application A.8.2
DATE
option of precompiler 4.2.2.4
datetime arithmetic operations 2.1.8
DB2 abend 5.5.2.4.2
DB2 application
design considerations 4.1
DB2 books on line 1.1.5
DB2 commit point 4.1.3.3
DB2 precompiler
See precompiler
DB2I (DB2 Interactive)
interrupting 2.4.4.1
menu 2.4.1, 3.3.2
panels
BIND PACKAGE 4.2.7.1.8
BIND PLAN 4.2.7.1.11
Current SPUFI Defaults 2.4.1
DB2 Program Preparation 4.2.7.1.4
DB2I Primary Option Menu 2.4.1, 4.2.7.1.2
DCLGEN 3.3.1, 3.3.4.2
Defaults for BIND PLAN 4.2.7.1.12
Package List for BIND PLAN 4.2.7.1.13
Precompile 4.2.7.1.7
Program Prep.. 4.2.7.1.14
SPUFI 2.4.1
System Connection Types for BIND.. 4.2.7.1.10
preparing programs 4.2.7.1
program preparation example 4.2.7.1.4
selecting
DCLGEN (declarations generator) 3.3.2
SPUFI 2.4.1
DBCS (double-byte character set)
constants 3.4.5.3
inserting into tables 2.2.2.1.3
table names 3.3
translation in CICS 4.2.2.4
use of labels with DCLGEN 3.3.1
DBRM (database request module)
deciding how to bind 4.1.1.2.1
description 1.2.14, 4.2.2.3
DCLGEN subcommand of DSN
building data declarations 3.3
example 3.3.4
identifying tables 3.3
INCLUDE statement 3.3.2
including declarations in a program 3.3.2
invoking 3.3
using 3.3
DDITV02 input data set 4.4.3.1
DDOTV02 output data set 4.4.3.2
deadlock
description 4.1.2.6
example 4.1.2.6
indications
in CICS 4.1.2.6
in IMS 4.1.2.6
in TSO 4.1.2.6
IRLM
controlled by 4.1.2.6
debugging application programs 4.3.3
DEC precompiler option 4.2.2.4
DECIMAL
data type
C compatibility 3.4.2.7.1
declaration
generator (DCLGEN) 3.3
in an application program 3.3.2
variables in CAF program examples 5.5.7.7
DECLARE CURSOR statement
description 3.2.1, 3.2.2.1
prepared statement 5.1.3.1.1, 5.1.4.2.3
WITH HOLD option 3.2.3
DECLARE statement in COBOL 3.4.3.3
DECLARE TABLE statement
description 3.1.3
table description 3.3
DEGREE
CURRENT PACKAGESET 2.1.18
option of BIND PACKAGE subcommand 2.1.18
option of BIND PLAN subcommand 2.1.18
DELETE 2.2.2.4.1
statement
checking return codes 3.1.4.2.2
correlated subqueries 2.3.3.4
description 2.2.2.4
rules 2.2.2.4.1
subqueries 2.3.1.4
when to avoid 2.2.2.4.2
WHERE CURRENT clause 3.2.2.6
DELETE CASCADE statement 2.2.2.4.1
deleting
current rows 3.2.2.6
data 2.2.2.4
every row from a table 2.2.2.4.2
primary key 2.2.2.4.1
rows from a table 2.2.2.4
delimiter
SQL statements 3.1.2
string 4.2.2.4
department sample table
creating 2.2.1.1.1
description A.2
dependent
table 2.2.2.3
descending order of results 2.1.10.1
DESCRIBE statement
column labels 5.1.4.2.11
INTO clauses 5.1.4.2.4, 5.1.4.2.8
DFHEIENT macro 3.4.1.3
DFSLI000 (IMS language interface module) 4.2.5
DISABLE option
BIND PACKAGE subcommand 4.2.3.4
DISCONNECT
connection function of CAF
description 5.5.2.1
program example 5.5.7.4
syntax 5.5.2.9
usage 5.5.2.9
displaying
calculated values 2.1.8
column names 2.4
lists
table columns 2.4.6.2
tables 2.4.6.1
output 2.4.5
results from GROUP BY in order 2.1.12
DISTINCT
clause of SELECT statement 2.1.11
distributed data
application-directed access
comparing with system-directed access 4.1.4.1
description 4.1.4.3
mixed environment E.0
operating
performance considerations 5.1.3.2
performance considerations 4.2.4.8
program preparation 4.1.4.5.2
programming
coding with application-directed access 4.1.4.4
coding with system-directed access 4.1.4.4
considerations for design 4.1.4.6
programming considerations 4.1.4.6
system-directed access
description 4.1.4.2
mixed environment E.0
Distributed Relational Database Architecture (DRDA)
See DRDA (Distributed Relational Database Architecture)
division by zero 3.1.5.3
DL/I
batch
application programming 4.4.2
checkpoint ID 4.4.5.3
DB2 requirements 4.4.1.2
DDITV02 input data set 4.4.3.1
description 4.4
DSNMTV01 module 4.4.4.4
features 4.4.1.1
SSM parameter 4.4.4.4
submitting an application 4.4.4.4
TERM call 4.1.3.2
drain lock
description 4.1.2
DRDA (distributed relational database architecture)
application-directed access 4.1.4.3
DROP
statement 2.2.3
dropping
tables 2.2.3
DSN applications, running with CAF 5.5.1.3.3
DSN command of TSO
command processor
services lost under CAF 5.5.1.3.3
subcommands
See also individual subcommands
RUN 4.2.6
DSN8BC3 sample program
calls DSNTIAR subroutine 3.4.3.10
DSN8BD3 sample program
calls DSNTIAR subroutine 3.4.2.10
DSN8BF3 sample program
calls DSNTIAR subroutine 3.4.4.9
DSN8BP3 sample program
calls DSNTIAR subroutine 3.4.5.10
DSNALI (CAF language interface module)
deleting 5.5.7.1
loading 5.5.7.1
DSNCLI (CICS language interface module)
include in link-edit 4.2.5
DSNELI (TSO language interface module) 5.5.1.3.3
DSNH command of TSO
See also precompiler
obtaining SYSTERM output 4.3.4.2
DSNHASM precompiler procedure 4.2.7.2
DSNHASMH precompiler procedure 4.2.7.2
DSNHC precompiler procedure 4.2.7.2
DSNHCOB precompiler procedure 4.2.7.2
DSNHCOB2 precompiler procedure 4.2.7.2
DSNHFOR precompiler procedure 4.2.7.2
DSNHLI entry point to DSNALI
implicit calls 5.5.2.2
program example 5.5.7.6
DSNHLI2 entry point to DSNALI 5.5.7.5
DSNHPLI precompiler procedure 4.2.7.2
DSNMTV01 module 4.4.4.4
DSNTIAC subroutine
assembler 3.4.1.10
C 3.4.2.10
calling from Assembler program 3.4.1.10
calling from C program 3.4.2.10
calling from COBOL program 3.4.3.10
calling from PL/I program 3.4.5.10
COBOL 3.4.3.10
PL/I 3.4.5.10
use in CICS application 3.4.1.10, 3.4.2.10, 3.4.3.10, 3.4.5.10
DSNTIAD sample program
calls DSNTIAR subroutine 3.4.1.10
DSNTIAR subroutine
assembler 3.4.1.10
C 3.4.2.10
COBOL 3.4.3.10
description 3.1.5.4
FORTRAN 3.4.4.9
PL/I 3.4.5.10
return codes 3.1.5.4.4
DSNTIAUL sample program 4.3.1.2
DSNTIR subroutine 3.4.4.9
DSNTRACE data set 5.5.5
duration of locks
controlling 4.1.2.7.1
description 4.1.2.3
DYNAM option of COBOL 3.4.3.3
dynamic plan selection
restrictions with CURRENT PACKAGESET special register 4.2.4.4
using packages with 4.2.4.4
dynamic SQL
advantages and disadvantages 5.1.1
assembler program 5.1.4.2.1
C program 5.1.4.2.1
COBOL program 3.4.3.3
description 1.2.3, 5.1
effects of WITH HOLD cursor 5.1.2.2.3
EXECUTE IMMEDIATE statement 5.1.2.1
executing using application-directed access 4.1.4.6.1
fixed-list SELECT statements 5.1.3 to 5.1.3.1.5
FORTRAN program 3.4.4.3
host languages 5.1.1.6
non-SELECT statements 5.1.2 to 5.1.2.2.5
PL/I 5.1.4.2.1
PREPARE and EXECUTE 5.1.2.2 to 5.1.2.2.5
programming 5.1 to 5.1.5.2
requirements 5.1.1.4
restrictions 5.1.1.3
sample C program C.2
varying-list SELECT statements 5.1.4 to 5.1.4.4.3
VS COBOL II 5.1.5 to 5.1.5.2
----
E
----
FETCH statement
host variables 5.1.3.1.4
scrolling through data 5.3.2
USING DESCRIPTOR clause 5.1.4.3.2
field procedure
changing collating sequence 2.1.10.1
fixed-length character string
assembler 3.4.1.5
C 3.4.2.8
fixed-length string 2.2.1
FLAG
option
BIND PACKAGE subcommand 4.2.3.1
BIND PLAN subcommand 4.2.4.2
FLAG option
precompiler 4.2.2.4
flags, resetting 3.4.3.3
FOR FETCH ONLY clause
DECLARE CURSOR statement 4.1.4.6.2
FOR UPDATE OF clause
used to update a column 3.2.2.1
foreground
running with PARMS clause 4.2.6.1
using a CLIST 4.2.6.3
foreign key 1.2.7.2
format
SELECT statement results 2.4.5.1
SQL in input data set 2.4.3
FORTRAN application program
@PROCESS statement 3.4.4.3
assignment rules, numeric 3.4.4.6.1
byte data type 3.4.4.3
character host variable
declaration 3.4.4.4
syntax 3.4.4.5
coding SQL statements 3.4.4
comment lines 3.4.4.3
constants, syntax differences 3.4.4.6.2
continuing SQL statements 3.4.4.3
data types
compatibility 3.4.4.6.1
equivalent 3.4.4.6
declaring
tables 3.4.4.3
variables 3.4.4.6.1
views 3.4.4.3
description of SQLCA 3.4.4.1
host variable
declaring 3.4.4.5
numeric 3.4.4.5
using 3.4.4.4
including code 3.4.4.3
indicator variables 3.4.4.8
margins for SQL statements 3.4.4.3
naming convention 3.4.4.3
parallel option 3.4.4.3
precompiler option defaults 4.2.2.4
sequence numbers 3.4.4.3
SQL error return codes 3.4.4.9
SQL INCLUDE statement 3.4.4.3
statement labels 3.4.4.3
FROM clause
joining tables 2.1.14
SELECT statement 2.1.3
FRR (functional recovery routine) 5.5.4.2
function
column
AVG 2.1.9.1
COUNT 2.1.9.1
descriptions 2.1.9.1
MAX 2.1.9.1
MIN 2.1.9.1
nesting 2.1.9.2.2
SUM 2.1.9.1
scalar
example 2.1.9.2
nesting 2.1.9.2.1
functional recovery routine (FRR) 5.5.4.2
----
G
----
I/O processing
parallel
description 5.2.5
enabling 5.2.5
monitoring 5.2.5.1
queries 5.2.5
related PLAN_TABLE columns 5.2.2.7
tuning 5.2.5.2
IBM COBOL for MVS & VM program
preparing 4.2.7.3
IFCID (instrumentation facility component identifier)
identifiers by number
0221 5.2.5.1
IKJEFT01 terminal monitor program in TSO 4.2.6.2
IMS
application programs 4.1.3.3.3
batch 4.1.3.4.2
checkpoint calls 4.1.3.3
commit point 4.1.3.3
connecting to DB2
attachment facility 5.5
error handling 4.1.3.3.1
operating
language interface module (DFSLI000) 4.2.5
planning
environment 4.2.6.3
recovery 4.1.3.2
SYNC call 4.1.3.3
restrictions 4.1.3.3
ROLB call 4.1.3.4.1
ROLL call 4.1.3.4.1
unit of work 4.1.3.3
IMS Resource Lock Manager (IRLM)
See IRLM (IMS Resource Lock Manager)
IN
clause in subqueries 2.3.2.3
predicate 2.1.6.3
INCLUDE statement
description 3.3.2
SQLCA
See SQLCA (SQL communication area)
SQLDA
See SQLDA (SQL descriptor area)
index
access methods
access path selection 5.2.3.2
access path type 5.2.15.1.5
by nonmatching index 5.2.3.2.3
IN-list index scan 5.2.3.2.4
matching index columns 5.2.2.3
matching index description 5.2.3.2.1
multiple 5.2.3.2.5
one-fetch index scan 5.2.3.2.6
description 1.2.6
locking 4.1.2.4
space
description 1.2.8
indicator variable
assembler application program 3.4.1.9
C 3.4.2.9
COBOL 3.4.3.9
description 3.1.4.1.4
FORTRAN 3.4.4.8
PL/I 3.4.1.11, 3.4.5.9
setting null values in a COBOL program 3.1.4.1.4
structures in a COBOL program 3.1.4.2.2
input data set DDITV02 4.4.3.1
INSERT statement
description 2.2.2.1
several rows 2.2.2.1.3
subqueries 2.3.1.4
VALUES clause 2.2.2.1
inserting
graphic data into tables 2.2.2.1.3
INTENT EXCLUSIVE lock mode 4.1.2.4
INTENT SHARE lock mode 4.1.2.4
interactive SQL 1.2.4
Interactive System Productivity Facility (ISPF)
See ISPF (Interactive System Productivity Facility)
internal resource lock manager (IRLM)
See IRLM (internal resource lock manager)
interoperability in distributed data access 4.1.4
IRLM (IMS Resource Lock Manager)
deadlock 4.1.2.6
function 4.1.2.6
lock suspension 4.1.2.6
performance 4.1.2.6
timeout 4.1.2.6
IRLM (internal resource lock manager)
description 4.4.4.4.1
ISOLATION
option of BIND PACKAGE subcommand 4.2.3.1
option of BIND PLAN subcommand 4.2.4.2
effects on locks 4.1.2.7.2
isolation level
CS (cursor stability) 4.1.2.7.2
effect on duration of page locks 4.1.2.7.2
effect on open cursors 4.1.2.7.2
RR (repeatable read) 4.1.2.7.2
set by BIND or REBIND 4.1.2.7.2
ISPF (Interactive System Productivity Facility)
browse 2.4.1, 2.4.5
DB2 uses dialog management 2.4
DB2I Menu 4.2.7.1.2
precompiling under 4.2.7.1.1
preparation
Program Preparation panel 4.2.7.1.4
programming 5.4 to 5.4.1.4
scroll command 2.4.5.1
ISPLINK SELECT services 5.4.1.4
----
J
----
key
composite 1.2.7
description 1.2.7
foreign
description 1.2.7.2
primary
description 1.2.7.1
unique 1.2.7, 5.3.1
keywords, reserved D.0
----
L
----
mapping macro
assembler applications 3.4.1.11, G.0
list of G.0
MARGINS option of precompiler 4.2.2.4
mass insert 2.2.2.1.3
MAX function 2.1.9.1
MEMBER option
BIND PACKAGE subcommand 4.2.3.1
BIND PLAN subcommand 4.2.4.2
message
analyzing 4.3.4.1
CAF errors 5.5.2.11
obtaining text 3.1.5.4
assembler 3.4.1.10
C 3.4.2.10
COBOL 3.4.3.10
FORTRAN 3.4.4.9
PL/I 3.4.5.10
message by identifier
DSNBB440I 5.2.5.1
MIN function 2.1.9.1
mixed data
description 2.1.2
inserting into tables 2.2.2.1.3
mode of a lock 4.1.2.4
multiple-mode IMS programs 4.1.3.3.3
MVS
31-bit addressing 4.2.5, 4.2.7.1.14
----
N
----
names
COBOL 3.4.3.3
naming convention
aliases 2.1.17.2
assembler 3.4.1.3
C 3.4.2.3
COBOL 3.4.3.3
FORTRAN 3.4.4.3
PL/I 3.4.5.3
tables you create 2.2.1.1
three-part names 2.1.17.1
new table
See join operation
NODYNAM option of COBOL 3.4.3.3
NOFOR option
precompiler 4.2.2.4
NOGRAPHIC option of precompiler 4.2.2.4
non-correlated subqueries
See subquery
nonsegmented table space
scan 5.2.3.1
nontabular data storage 5.3.12
NOOPTIONS option of precompiler 4.2.2.4
NOSOURCE option of precompiler 4.2.2.4
NOT FOUND clause of WHENEVER statement 3.1.5.2
NOT NULL clause of CREATE TABLE statement 2.2.1
NOT operator of WHERE clause 2.1.5.4
NOXREF option of precompiler 4.2.2.4
NUL character in C 3.4.2.3
NULL
attribute of UPDATE statement 2.2.2.2
option of WHERE clause 2.1.5.1
pointer in C 3.4.2.3
null value
COBOL programs 3.1.4.1.4
description 2.1.5.1
numeric
assignments 3.4.4.6.1
data
cannot be used with LIKE 2.1.5.5
width of column in results 2.4.2, 2.4.5.1
----
O
----
object
copying from remote locations 4.1.4.6.3
manipulating
local 4.1.4.6.3
remote 4.1.4.6.3
object of a lock 4.1.2.1
ONEPASS option of precompiler 4.2.2.4
online books 1.1.5
OPEN
connection function of CAF
description 5.5.2.1
program example 5.5.7.4
syntax 5.5.2.7
usage 5.5.2.7
statement
opening a cursor 3.2.2.2
prepared SELECT 5.1.3.1.3
USING DESCRIPTOR clause 5.1.4.4.3
without parameter markers 5.1.4.3.1
open cursor 3.2.3
OPTIMIZE FOR n ROWS
access path selection
influencing 5.2.15.2.1
usage guidelines 5.2.14
clause of SELECT statement 5.3.8
description 5.2.14
OPTIONS option of precompiler 4.2.2.4
OR operator of WHERE clause 2.1.6.1
ORDER BY clause
effect on OPTIMIZE clause 5.2.14
SELECT statement 2.1.10.1
organization application
examples B.0
overflow 3.1.5.3
ownership
objects 1.2.15
----
P
----
package
advantages 4.1.1.2.1
binding
DBRM to a package 4.2.3
EXPLAIN option for remote 5.2.1
PLAN_TABLE 5.2.1.1
remotely 4.2.3.2
to plans 4.2.4.2
deciding how to use 4.1.1.2.1
description 1.2.14
identifying at run time 4.2.4.3
list
plan binding 4.2.4.2
location
CONNECT statement 4.2.4.3.1
CURRENTSERVER option 4.2.4.3.1
identifying 4.2.4.3.1
rebinding 4.1.1.3.2
selecting 4.2.4.3, 4.2.4.3.2
subcommand keywords and parameters 4.2.3.1
version, identifying 4.2.4.3.5
page
locks
description 4.1.2.2
page set 1.2.8
panel
BROWSE (for output data set) 2.4.5
Current SPUFI Defaults 2.4.1
DB2I Menu 2.4.1
DB2I Primary Option Menu 2.4.1
DCLGEN 3.3.1, 3.3.4.2
DSNEDP01 3.3.1, 3.3.4.2
DSNEPRI 2.4.1
DSNESP01 2.4.1
DSNESP02 2.4.1
EDIT (for input data set) 2.4.3
SPUFI 2.4.1
SPUFI Main 2.4.1
parallel I/O processing 5.2.5 to 5.2.5.2
parameter marker
dynamic SQL 5.1.2.2.1
more than one 5.1.2.2.5
values provided by OPEN 5.1.3.1.3
with arbitrary statements 5.1.4.4 to 5.1.4.4.3
parent table 2.2.2.3
partitioned table space
locking 4.1.2.2
PCT (program control table) 4.3.1
PDS (partitioned data set) 3.3
percent sign 2.1.5.5.1
performance
affected by
application structure 5.4.1.4
DEFER(PREPARE) 4.2.4.8
NODEFER (PREPARE) 4.2.4.8
remote queries 4.1.4.6.2, 4.2.4.8, 5.1.3.2
IRLM 4.1.2.6
monitoring
with EXPLAIN 5.2.1
PERIOD
option of precompiler 4.2.2.4
phone application
description B.0
PL/I application program
character host variables 3.4.5.5
coding SQL statements 3.4.5
comments 3.4.5.3
considerations 3.4.5.3
data types 3.4.5.7, 3.4.5.8
declaring tables 3.4.5.3
declaring views 3.4.5.3
graphic host variables 3.4.5.5
host variable
declaring 3.4.5.5
numeric 3.4.5.5
using 3.4.5.4
indicator variables 3.4.5.9
naming convention 3.4.5.3
sequence numbers 3.4.5.3
SQLCA, defining 3.4.5.1
SQLDA, defining 3.4.5.2
statement labels 3.4.5.3
variable, declaration 3.4.5.7.1
WHENEVER statement 3.4.5.3
plan element 1.2.14
PLAN_TABLE table
column descriptions 5.2.1.1
planning
accessing distributed data 4.1.4
binding 4.1.1, 4.1.1.2
concurrency 4.1.1.4
recovery 4.1.3
POINTER type in VS COBOL II 5.1.5.2
PPT (processing program table) 4.3.1
precompiler
binding on another system 4.2.2.3
description 4.2.2
diagnostics 4.2.2.3
functions 4.2.2
input 4.2.2.2
invoking
dynamically 4.2.7.2.2
JCL for procedures 4.2.7.2
option descriptions 4.2.2.4
options
application-directed access 4.1.4.5.1
CONNECT(1) 4.1.4.5.1
CONNECT(2) 4.1.4.5.1
defaults 4.2.2.4
output
database request modules 4.2.2.3
listing 4.2.2.3
terminal diagnostics 4.2.2.3
translated source statements 4.2.2.3
precompiling programs 4.2.2
preparation 4.1.1.1
SQL escape character 4.2.2.4
submitting jobs
DB2I panels 4.2.7.1.7
ISPF panels 4.2.7.1.2, 4.2.7.1.4
predicate
description 5.2.7
effect on access paths 5.2.7
filter factor 5.2.7.2
general rules 2.1.5, 5.2.7.1.8
generation 5.2.7.3
impact on access paths 5.2.9.4
indexable 5.2.7.1.4
join 5.2.7.1.3
local 5.2.7.1.3
modification 5.2.7.3
processing 5.2.7.1.10
properties 5.2.7.1
quantified 2.3
stage 1 (sargable) 5.2.7.1.5
stage 2
evaluated 5.2.7.1.5
influencing creation 5.2.15.2.3
subquery 5.2.7.1.2
types 5.2.7.1.10
PRELINK utility 4.2.7.1.4
PREPARE
statement
dynamic execution 5.1.2.2.2
host variable 5.1.3.1.2
INTO clause 5.1.4.2.4
primary index 1.2.7.1, 2.2.1.2
primary key 1.2.7.1
problem determination
guidelines 4.3.4
processing
SQL statements 2.4.4
processing program table (PPT)
See PPT (processing program table)
program control table (PCT) 4.3.1
program preparation
See application program preparation
program problems checklist
documenting error situations 4.3.3
error messages 4.3.3.1.2, 4.3.3.2
project activity sample table A.5
project sample table A.4
----
Q
----
sample application
application-directed access C.3
call attachment facility 5.5.1.1.2
dynamic SQL C.2
environments B.1
languages B.1
organization B.0
phone B.0
print B.2
programs B.1
project B.0
static SQL C.2
structure of A.8
system-directed access C.4
use B.1
sample program
DSN8BC3 3.4.3.10
DSN8BD3 3.4.2.10
DSN8BF3 3.4.4.9
DSN8BP3 3.4.5.10
DSNTIAD 3.4.1.10
sample table
description 1.2.5
DSN8310.ACT (activity) A.1
DSN8310.DEPT (department) A.2
DSN8310.EMP (employee) A.3
DSN8310.EMPPROJACT (employee to project activity) A.6
DSN8310.PROJ (project) A.4
DSN8310.PROJACT (project activity) A.5
scalar function
description 2.1.9.2
nesting 2.1.9.2.1
scope of a lock 4.1.2.2
scrolling
backward through data 5.3.2 to 5.3.2.2
ISPF (interactive system productivity facility) 2.4.5.1
search condition
comparison operators 2.1.5
SELECT statements 2.3
using WHERE clause 2.1.5
security
SQL statements 1.2.15
segmented table space
locking 4.1.2.2
scan 5.2.3.1
SELECT statement
changing result format 2.4.5.1
clauses
DISTINCT 2.1.11
FROM 2.1.3
GROUP BY 2.1.12
HAVING 2.1.13
ORDER BY 2.1.10.1
UNION 2.1.15
WHERE 2.1.4, 2.1.5
fixed-list 5.1.3 to 5.1.3.1.5
parameter markers 5.1.4.4
search condition 2.3
selecting a set of rows 3.2
subqueries 2.3
using with
* (to select all columns) 2.1.3
column-name list 2.1.3
DECLARE CURSOR statement 3.2.2.1
varying-list 5.1.4 to 5.1.4.4.3
selecting
all columns 2.1.3
all rows except 2.1.5.4
more than one row 3.1.4.1.1
on conditions 2.1.5.5.1
rows 2.1.4
some columns 2.1.3
sequence numbers
COBOL program 3.4.3.3
FORTRAN 3.4.4.3
PL/I 3.4.5.3
sequential detection 5.2.4.2 to 5.2.4.2.2
sequential prefetch
bind time 5.2.4.1
description 5.2.4
server
accessible 4.1.4.3
current 4.1.4.2
SET
clause of UPDATE statement 2.2.2.2
statement in VS COBOL II 5.1.5.2
SET CURRENT DEGREE statement 5.2.5
SET CURRENT PACKAGESET statement 4.2.4.3.2
SHARE
INTENT EXCLUSIVE lock mode 4.1.2.4
lock mode
page 4.1.2.4
table and table space 4.1.2.4
simple table space
locking 4.1.2.2
single-mode IMS programs 4.1.3.3.3
softcopy publications 1.1.5
SOME quantified predicate 2.3.2.2
sort
program
data 5.2.12.1
data rows 5.2.12
RIDs (record identifiers) 5.2.12.2
when performed 5.2.12.3
SOURCE option
precompiler 4.2.2.4
special register
CURRENT DATE 2.1.18
CURRENT DEGREE 5.2.5
definition 2.1.18
description 2.1.18
setting
by connection 2.1.18
by sign-on 2.1.18
SPUFI
changed column widths 2.4.5.1
CONNECT LOCATION field 2.4.1
created column heading 2.1.8.1, 2.4.5.1
default values 2.4.2
end SQL statements 2.4.3
panels
allocates RESULT data set 2.4.1
create an input data set 2.4.6.1
filling in the panel 2.4.1
format and display output 2.4.5
previous values displayed on panel 2.4.1
selecting on DB2I menu 2.4.1
processing SQL statements 2.4, 2.4.4
remote use 4.1.4.4.2
SQLCODE returned 2.4.5
USER value 2.4.6.1
SQL
option of precompiler 4.2.2.4
SQL (Structured Query Language)
coding
assembler 3.4.1
basics 3.1
C 3.4.2
COBOL 3.4.3
dynamic 5.1.5.2
FORTRAN program 3.4.4.3
PL/I 3.4.5.1
cursors 3.2
description 1.2, 2.1.1
dynamic
coding 5.1
description 1.2.3
sample C program C.2
using application-directed access 4.1.4.6.1
escape character 4.2.2.4
host variables 3.1.4
interactive 1.2.4
return codes
See also SQLCODE
checking 3.1.4.2.2
handling 3.4.2.10
SQLSTATEs
See SQLSTATE
static
description 1.2.2
sample C program C.2
string delimiter 4.2.7.1.5
structures 3.1.4
syntax checking 4.1.4.5.3
varying-list 5.1.4 to 5.1.4.4.3
SQL communication area (SQLCA) 3.1.5
See also SQLCA (SQL communication area)
SQL keywords, reserved D.0
SQL return code
See SQLCODE
SQL statements
associated with two-phase commit 4.1.4.5.3
CLOSE 3.2.2.7, 5.1.3.1.5
CONNECT (Type 1) 4.1.4.4.2, 4.1.4.5.3
CONNECT (Type 2) 4.1.4.5.3
CONNECT RESET 4.1.4.5.3
continuation
COBOL 3.4.3.3
DECLARE CURSOR
description 3.2.2.1
example 5.1.3.1.1, 5.1.4.2.3
DECLARE TABLE 3.1.3, 3.3
DELETE
description 3.2.2.6
example 2.2.2.4
DESCRIBE 5.1.4.2.4
DROP 2.2.3
embedded 4.2.2.2
error return codes 3.1.5.4
EXECUTE 5.1.2.2.3
EXECUTE IMMEDIATE 5.1.2.1
EXPLAIN 5.2.1
FETCH
description 3.2.2.4
example 5.1.3.1.4
INCLUDE 3.3.2
INCLUDE SQLDA 5.1.4.2.1
INSERT 2.2.2.1
OPEN
description 3.2.2.2
example 5.1.3.1.3
PREPARE 5.1.2.2.2
RELEASE 4.1.4.5.3
SELECT
%, _ 2.1.5.5
AND operator 2.1.6.1
BETWEEN predicate 2.1.6.2
description 2.1.5
IN predicate 2.1.6.3
joining a table to itself 2.1.14.2
joining tables 2.1.14
joining two tables 2.1.14.1
LIKE predicate 2.1.5.5
multiple conditions 2.1.6.1
NOT operator 2.1.5.4
OR operator 2.1.6.1
outer join using UNION clause 2.1.14.3
parentheses 2.1.6.1
SET CURRENT DEGREE 5.2.5
set symbols 3.4.1.3
UPDATE
description 3.2.2.5
example 2.2.2.2
WHENEVER 3.1.5.2
SQL syntax checking 4.1.4.5.3
SQL-INIT-FLAG, resetting 3.4.3.3
SQL(ALL) option of precompiler 4.1.4.6.3
SQLCA (SQL communication area) 3.1.5
assembler 3.4.1
C 3.4.2.1
COBOL 3.4.3
DSNTIAC subroutine
assembler 3.4.1.10
C 3.4.2.10
COBOL 3.4.3.10
PL/I 3.4.5.10
DSNTIAR subroutine 3.1.5.4
assembler 3.4.1.10
C 3.4.2.10
COBOL 3.4.3.10
FORTRAN 3.4.4.9
PL/I 3.4.5.10
FORTRAN 3.4.4.1
PL/I 3.4.5.1
sample C program C.2
SQLCODE
-811 5.2.9.2.1
-911 3.1.5
-923 4.4.3.1
-924 5.5.2.10.2
-925 4.1.3.4.1, 4.4.1.1
-926 4.1.3.4.1, 4.4.1.1
+004 5.5.2.8.2, 5.5.2.9.2
+100 3.1.5.2
+256 5.5.4.2
+802 3.1.5.3
SQLDA (SQL descriptor area)
allocating storage 5.1.4.2.7
assembler 3.4.1.2
assembler program 5.1.4.2.1
C 3.4.2.2, 5.1.4.2.1
COBOL 3.4.3.2
dynamic SELECT example 5.1.4.2.9
FORTRAN 3.4.4.2
no occurrences of SQLVAR 5.1.4.2.4
OPEN statement 5.1.3.1.3
parameter in CAF TRANSLATE 5.5.2.10
parameter markers 5.1.4.4.2
PL/I 3.4.5.2, 5.1.4.2.1
requires storage addresses 5.1.4.2.10
varying-list SELECT statement 5.1.4.2.1
VS COBOL II 5.1.5.1
SQLERROR
clause of WHENEVER statement 3.1.5.2
SQLFLAG option of precompiler 4.2.2.4
SQLN field of SQLDA
DESCRIBE 5.1.4.2.5
SQLSTATE
'01519' 3.1.5.3
'56021' 4.1.3.4.1, 4.4.1.1
'57015' 4.4.3.1
'58006' 5.5.2.10.2
SQLVAR field of SQLDA
fetching rows 5.1.4.2.9
SQLWARNING clause
WHENEVER statement in COBOL program 3.1.5.2
SSID (subsystem identifier), specifying 4.2.7.1.5
SSN (subsystem name)
CALL DSNALI parameter list 5.5.2.5.1
parameter in CAF CONNECT function 5.5.2.6
parameter in CAF OPEN function 5.5.2.7
SQL calls to CAF (call attachment facility) 5.5.2.2
state
of a lock 4.1.2.4
statement
descriptions
See SQL statements
labels
FORTRAN 3.4.4.3
PL/I 3.4.5.3
operational form 1.2
preparation 1.2
source form 1.2
static SQL
description 1.2.2, 5.1
host variables 5.1.1.1
sample C program C.2
STDSQL option
precompiler 4.2.2.4
STOP DATABASE command
timeout 4.1.2.6
storage
acquiring
retrieved row 5.1.4.2.9
SQLDA 5.1.4.2.7
addresses in SQLDA 5.1.4.2.10
storage group
DB2
description 1.2.9
sample application A.8.1
storage structure 1.2.8
string
delimiter
apostrophe 4.2.2.4
fixed-length
assembler 3.4.1.5
C 3.4.2.8
COBOL 3.4.3.5
PL/I 3.4.5.8
value in CREATE TABLE statement 2.2.1
varying-length
assembler 3.4.1.5
C 3.4.2.8
COBOL 3.4.3.5
PL/I 3.4.5.8
subquery
correlated
DELETE statement 2.3.3.4
example 2.3.3
subquery 2.3.3
tuning 5.2.9.1
UPDATE statement 2.3.3.3
DELETE statement 2.3.3.4
description 2.3
join transformation 5.2.9.3
non-correlated 5.2.9.2
referential constraints 2.3.3.4
restrictions with DELETE 2.3.3.4
tuning 5.2.9
tuning examples 5.2.9.4
UPDATE statement 2.3.3.3
use with UPDATE, DELETE, and INSERT 2.3.1.4
subselect
INSERT statement 2.2.2.1.2
subsystem identifier (SSID), specifying 4.2.7.1.5
subsystem name (SSN)
See SSN (subsystem name)
SUM function 2.1.9.1
summarizing group values 2.1.12
SYNC call 4.1.3.2
SYNC parameter of CAF (call attachment facility) 5.5.2.8, 5.5.7.4
synchronization call abends 4.4.2.7
SYNCPOINT statement of CICS 4.1.3.2
syntax diagrams, how to read 1.1.3
SYSLIB data sets 4.2.7.2.1
SYSPRINT
precompiler output
options section 4.3.4.3
source statements section, example 4.3.4.3
summary section, example 4.3.4.3
symbol cross-reference section 4.3.4.3
used to analyze errors 4.3.4.3
system-directed access
coding an application 4.1.4.4
interoperability 4.1.4.2
planning 4.1.4
sample program C.4
update restrictions 4.1.4.2
using 4.1.4.4.1
SYSTERM output to analyze errors 4.3.4.2
----
T
----
table
access requirements in COBOL program 3.4.3.3
altering
changing definitions 5.3.11
base table 1.2.5
declaring 3.1.3, 3.3
deleting rows 2.2.2.4
dependent
updating 2.2.2.3
description 1.2.5
displaying
list of columns 2.4.6.2
list of tables 2.4.6.1
dropping 2.2.3
empty table 1.2.5
locks
description 4.1.2.2
parent
updating 2.2.2.3
populating
filling with test data 4.3.1.2
inserting rows 2.2.2.1
protection 1.2.15
qualified name 2.2.1.1.2
requirements for access 3.1.3
result table 1.2.5
retrieving
using a cursor 3.2
updating rows 2.2.2.2
table space
description 1.2.8
for sample application A.8.3
locks
description 4.1.2.2
scans
access path 5.2.3.1
determined by EXPLAIN 5.2.1
tables used in examples
DSN8310.ACT (activity) A.1
DSN8310.DEPT (department) A.2
DSN8310.EMP (employee) A.3
DSN8310.EMPPROJACT (employee to project activity) A.6
DSN8310.PROJ (project) A.4
DSN8310.PROJACT (project activity) A.5
task control block (TCB)
See TCB (task control block)
TCB (task control block)
capabilities with CAF 5.5.1.1.1
issuing CAF CLOSE 5.5.2.8.2
issuing CAF OPEN 5.5.2.7.2
TERM call
DL/I 4.1.3.2
terminal monitor program (TMP)
See TMP (terminal monitor program)
terminating
plan using CAF CLOSE function 5.5.2.8.2
TEST command of TSO 4.3.3.1.2
test environment, designing 4.3.1
thread
creation
OPEN function 5.5.2.1
termination
CLOSE function 5.5.2.1
three-part name
description 2.1.17.1, 4.1.4.2
TIME option
precompiler 4.2.2.4
timeout
controlled by IRLM 4.1.2.6
description 4.1.2.6
indications in IMS 4.1.2.6
timestamp
inserting in table 5.3.1
TMP (terminal monitor program)
DSN command processor 5.5.1.3.1
running under TSO 4.2.6.2
transaction lock
description 4.1.2
transaction-oriented BMP, checkpoints in 4.1.3.3.3
TRANSLATE
connection function of CAF
description 5.5.2.1
TRANSLATE connection function of CAF
program example 5.5.7.5
syntax 5.5.2.10
usage 5.5.2.10
translated compiler source statements 4.2.2.3
translating
requests from end users into SQL 5.3.10
TSO
CLISTs
invoking application programs 4.2.6.3
connections
to DB2 5.5
DSNALI language interface module 5.5.1.3.3
TEST command 4.3.3.1.2
unit of work, completion 4.1.3.2
tuning
DB2
queries containing host variables 5.2.10
two-phase commit
ANSI/ISO SQL standard of 1992 compliance 4.1.4.5.2
bind options 4.1.4.5.2
TWOPASS
option of precompiler 4.2.2.4
----
U
----
UNION clause
effect on OPTIMIZE clause 5.2.14
SELECT statement 2.1.15
when sort required 5.2.2.8, 5.2.12.1
unique index
creating using timestamp 5.3.1
description 1.2.7
unit of recovery
indoubt 4.1.3.2, 4.1.3.3
unit of work
beginning 4.1.3.1
CICS description 4.1.3.2
completion
commit 4.1.3.1
open cursors 3.2.3
rollback 4.1.3.1
TSO 4.1.3.1, 4.1.3.2
description 4.1.3
DL/I batch 4.1.3.4
duration 4.1.3
IMS
batch 4.1.3.4
commit point 4.1.3.3
ending 4.1.3.3
starting point 4.1.3.3
prevention of data access by other users 4.1.3
TSO
completion 4.1.3.1
ROLLBACK statement 4.1.3.1
unknown characters 2.1.5.5.1
unlock 5.5.1.1.1
UPDATE
lock mode
page 4.1.2.4
table and table space 4.1.2.4
statement
correlated subqueries 2.3.3.3
description 2.2.2.2
SET clause 2.2.2.2
subqueries 2.3.1.4
WHERE CURRENT clause 3.2.2.5
update restrictions
application-directed access 4.1.4.3
system-directed access 4.1.4.2
updating
during retrieval 5.3.4
large volumes 5.3.5
values from host variables 3.1.4.1.2
USER
CURRENT PACKAGESET 2.2.2.2
CURRENT SERVER 2.2.2.2
CURRENT SQLID 2.2.2.2
CURRENT TIME 2.2.2.2
CURRENT TIMESTAMP 2.2.2.2
CURRENT TIMEZONE 2.2.2.2
special register 2.2.2.2
USER 2.2.2.2
value in UPDATE statement 2.2.2.2
USER special register 2.1.18
USING DESCRIPTOR clause
EXECUTE statement 5.1.4.4.3
FETCH statement 5.1.4.3.2
OPEN statement 5.1.4.4.3
utilities
concurrency 4.1.2
----
V
----
validity checking 4.2.3.5, 4.2.4.7
value
composite 1.2.7
list 2.1.6.3
null 2.1.5.1
retrieving in a range 2.1.6.2
similar to a character string 2.1.5.5
VALUES
clause of INSERT statement 2.2.2.1
variable
declaration
assembler application program 3.4.1.7
C 3.4.2.7.1
COBOL 3.4.3.7.1
FORTRAN 3.4.4.6.1
PL/I 3.4.5.7.1
host
assembler 3.4.1.5
C 3.4.2.5
COBOL 3.4.3.5
FORTRAN 3.4.4.5
PL/I 3.4.5.5
varying-length character string
assembler 3.4.1.5
C 3.4.2.8
COBOL 3.4.3.5
VERSION
option of precompiler 4.2.2.4
version of a package 4.2.4.3.5
view
contents 2.2.1.4
creating
declaring a view in COBOL 3.4.3.3
description 3.1.3
description 1.2.12, 2.2.1.3
EXPLAIN 5.2.11.3
performance 5.2.11.4
processing
view materialization 5.2.2.5, 5.2.11.2.1
view merge 5.2.11
summary data 2.2.1.3
using
deleting rows 2.2.2.4
inserting rows 2.2.2.1
selecting rows using a cursor 3.2
updating rows 2.2.2.2
----
W
----
WHENEVER statement
assembler 3.4.1.3
C 3.4.2.3
COBOL 3.4.3.3
FORTRAN 3.4.4.3
PL/I 3.4.5.3
SQL error codes 3.1.5.2
WHERE clause
NULL option 2.1.5.1
SELECT statement
%, _ 2.1.5.5
AND operator 2.1.6.1
BETWEEN ... AND predicate 2.1.6.2
description 2.1.5
IN (...) predicate 2.1.6.3
joining a table to itself 2.1.14.2
joining tables 2.1.14
LIKE predicate 2.1.5.5
NOT operator 2.1.5.4
OR operator 2.1.6.1
outer join using UNION clause 2.1.14.3
parentheses 2.1.6.1
WITH HOLD clause of DECLARE CURSOR statement 3.2.3
WITH HOLD cursor 5.1.2.2.3
----
X
----
XREF option
precompiler 4.2.2.4
XRST call, IMS application program 4.1.3.3.1
----
Y
----
YDEPT table
creating 2.2.1.1.1
Please use this form to tell us what you think about the accuracy,
clarity, organization, and appearance of this manual. Any suggestions you
have for its improvement will be welcome.

Note: Do not use this form to request IBM publications. Please direct
any requests for copies of publications, or for assistance in using your
IBM system, to your IBM representative or to the IBM branch office
serving your locality.

If you prefer to send comments by fax, use this U.S. number: (408) 463-4393

If you would like a reply, be sure to print your name, and your address or
phone number, below.
Name . . . . . . . . .
_______________________________________________
Company or Organization
_______________________________________________
Address . . . . . . . .
_______________________________________________
_______________________________________________
_______________________________________________
Phone No. . . . . . . .
_______________________________________________